Sample records for dose deposition kernel

  1. Investigation of various energy deposition kernel refinements for the convolution/superposition method

    PubMed Central

    Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.

    2013-01-01

    Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). 
For our spatially variant polyenergetic kernels, we found that depth was the most dominant factor affecting the pattern of energy deposition; however, the effects of field size and off-axis distance were not negligible. For the material-specific kernels, we found that as the density of the material increased, more energy was deposited laterally by charged particles, as opposed to in the forward direction. Thus, density scaling of water kernels becomes a worse approximation as the density and the effective atomic number of the material differ more from water. Implementation of spatially variant, polyenergetic kernels increased the percent depth dose value at 25 cm depth by 2.1%–5.8% depending on the field size, while implementation of titanium kernels gave 4.9% higher dose upstream of the metal cavity (i.e., higher backscatter dose) and 8.2% lower dose downstream of the cavity. Conclusions: Of the various kernel refinements investigated, inclusion of depth-dependent and metal-specific kernels into the C/S method has the greatest potential to improve dose calculation accuracy. Implementation of spatially variant polyenergetic kernels resulted in a harder depth dose curve and thus has the potential to affect beam modeling parameters obtained in the commissioning process. For metal implants, the C/S algorithms generally underestimate the dose upstream and overestimate the dose downstream of the implant. Implementation of a metal-specific kernel mitigated both of these errors. PMID:24320507
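The convolution/superposition idea this record refines can be sketched in one dimension: dose is the TERMA (total energy released per unit mass) convolved with an energy deposition kernel. Everything numeric below is invented for illustration; the attenuation coefficient, kernel shape, and bin sizes are stand-ins, not the study's clinical 6 MV data or Monte Carlo kernels.

```python
import numpy as np

# Minimal 1D sketch of convolution/superposition:
# dose = TERMA convolved with an energy deposition kernel (EDK).

depth = np.arange(0, 300)            # depth bins, 1 mm each
mu = 0.005                           # assumed effective attenuation per mm
terma = np.exp(-mu * depth)          # primary energy released per unit mass

# Toy forward-peaked EDK with a small backscatter tail (r < 0).
r = np.arange(-50, 51)
kernel = np.where(r >= 0, np.exp(-r / 15.0), 0.2 * np.exp(r / 5.0))
kernel /= kernel.sum()               # normalize so energy is conserved

dose = np.convolve(terma, kernel, mode="same")

# Build-up appears because the kernel spreads energy downstream:
print(np.argmax(dose) > 0)           # dose maximum sits below the surface
```

Density scaling, the approximation the study tests, would rescale `r` by the local density instead of swapping in a material-specific kernel, which is why it degrades as the medium departs from water.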

  2. SU-E-T-510: Calculation of High Resolution and Material-Specific Photon Energy Deposition Kernels.

    PubMed

    Huang, J; Childress, N; Kry, S

    2012-06-01

    To calculate photon energy deposition kernels (EDKs) used for convolution/superposition dose calculation at a higher resolution than the original Mackie et al. 1988 kernels and to calculate material-specific kernels that describe how energy is transported and deposited by secondary particles when the incident photon interacts in a material other than water. The high resolution EDKs for various incident photon energies were generated using the EGSnrc user-code EDKnrc, which forces incident photons to interact at the center of a 60 cm radius sphere of water. The simulation geometry is essentially the same as the original Mackie calculation but with a greater number of scoring voxels (48 radial, 144 angular bins). For the material-specific EDKs, incident photons were forced to interact at the center of a 1 mm radius sphere of material (lung, cortical bone, silver, or titanium) surrounded by a 60 cm radius water sphere, using the original scoring voxel geometry implemented by Mackie et al. 1988 (24 radial, 48 angular bins). Our Monte Carlo-calculated high resolution EDKs showed excellent agreement with the Mackie kernels, with our kernels providing more information about energy deposition close to the interaction site. Furthermore, our EDKs resulted in smoother dose deposition functions due to the finer resolution and greater number of simulation histories. The material-specific EDK results show that the angular distribution of energy deposition is different for incident photons interacting in different materials. Calculated from the angular dose distribution for 300 keV incident photons, the expected polar angle for dose deposition is 28.6° for water, 33.3° for lung, 36.0° for cortical bone, 44.6° for titanium, and 58.1° for silver, showing a dependence on the material in which the primary photon interacts.
These high resolution and material-specific EDKs have implications for convolution/superposition dose calculations in heterogeneous patient geometries, especially at material interfaces. © 2012 American Association of Physicists in Medicine.
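The "expected polar angle for dose deposition" quoted above is a dose-weighted mean angle over the kernel's angular bins. A minimal sketch follows; the 48-bin geometry mirrors the record, but the angular dose values are a made-up forward-peaked distribution, not the EDKnrc results.

```python
import numpy as np

# Dose-weighted expected polar angle from an angular dose distribution.
theta = np.linspace(1.875, 178.125, 48)   # 48 angular bin centres, degrees
dose = np.exp(-theta / 30.0)              # toy forward-peaked angular dose

expected_angle = np.sum(theta * dose) / np.sum(dose)
print(expected_angle)
```

A more laterally spread distribution (e.g. for silver) would weight larger angles more heavily and push this mean upward, matching the trend from 28.6° (water) to 58.1° (silver) reported above.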

  3. SU-E-T-329: Dosimetric Impact of Implementing Metal Artifact Reduction Methods and Metal Energy Deposition Kernels for Photon Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, J; Followill, D; Howell, R

    2015-06-15

    Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline, uncorrected images and metal artifact reduction methods: Philips O-MAR, GE’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals.
    Thus, these two strategies do have the potential to improve accuracy for patients with metal implants in certain scenarios. This work was supported by Public Health Service grants CA 180803 and CA 10953 awarded by the National Cancer Institute, United States Department of Health and Human Services, and in part by Mobius Medical Systems.

  4. Generation of a novel phase-space-based cylindrical dose kernel for IMRT optimization.

    PubMed

    Zhong, Hualiang; Chetty, Indrin J

    2012-05-01

    Improving dose calculation accuracy is crucial in intensity-modulated radiation therapy (IMRT). We have developed a method for generating a phase-space-based dose kernel for IMRT planning of lung cancer patients. Particle transport in the linear accelerator treatment head of a 21EX, 6 MV photon beam (Varian Medical Systems, Palo Alto, CA) was simulated using the EGSnrc/BEAMnrc code system. The phase space information was recorded under the secondary jaws. Each particle in the phase space file was associated with a beamlet whose index was calculated and saved in the particle's LATCH variable. The DOSXYZnrc code was modified to accumulate the energy deposited by each particle based on its beamlet index. Furthermore, the central axis of each beamlet was calculated from the orientation of all the particles in this beamlet. A cylinder was then defined around the central axis so that only the energy deposited within the cylinder was counted. A look-up table was established for each cylinder during the tallying process. The efficiency and accuracy of the cylindrical beamlet energy deposition approach were evaluated using a treatment plan developed on a simulated lung phantom. Profile and percentage depth doses computed in a water phantom for an open, square field size were within 1.5% of measurements. Dose optimized with the cylindrical dose kernel was found to be within 0.6% of that computed with the nontruncated 3D kernel. The cylindrical truncation reduced optimization time by approximately 80%. A method for generating a phase-space-based dose kernel, using a truncated cylinder for scoring dose, in beamlet-based optimization of lung treatment planning was developed and found to be in good agreement with the standard, nontruncated scoring approach. Compared to previous techniques, our method significantly reduces computational time and memory requirements, which may be useful for Monte-Carlo-based 4D IMRT or IMAT treatment planning.
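The cylindrical truncation above amounts to scoring a particle's energy only if it lies within some radius of its beamlet's central axis. A minimal sketch of that membership test follows; the function name `within_cylinder`, the axis, and the test points are all invented for illustration, not taken from the modified DOSXYZnrc code.

```python
import numpy as np

def within_cylinder(points, axis_origin, axis_dir, radius):
    """Mask of points whose perpendicular distance to the beamlet axis
    (a line through axis_origin along axis_dir) is <= radius."""
    d = axis_dir / np.linalg.norm(axis_dir)
    rel = points - axis_origin
    # perpendicular component = rel minus its projection onto the axis
    perp = rel - np.outer(rel @ d, d)
    return np.linalg.norm(perp, axis=1) <= radius

pts = np.array([[0.1, 0.0, 5.0],   # near the axis -> scored
                [3.0, 0.0, 2.0]])  # far from the axis -> truncated
mask = within_cylinder(pts, np.zeros(3), np.array([0.0, 0.0, 1.0]), 0.5)
print(mask)
```

Discarding the far-field deposition is what buys the reported ~80% optimization speed-up, at the cost of the small (0.6%) deviation from the nontruncated kernel.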

  5. Effect of Acrocomia aculeata Kernel Oil on Adiposity in Type 2 Diabetic Rats.

    PubMed

    Nunes, Ângela A; Buccini, Danieli F; Jaques, Jeandre A S; Portugal, Luciane C; Guimarães, Rita C A; Favaro, Simone P; Caldas, Ruy A; Carvalho, Cristiano M E

    2018-03-01

    The macauba palm (Acrocomia aculeata) is native to tropical America and is found mostly in the Cerrado and Pantanal biomes. The fruits provide an oily pulp, rich in long-chain fatty acids, and a kernel that encompasses more than 50% lipids rich in medium-chain fatty acids (MCFAs). Based on biochemical and nutritional evidence, MCFAs are readily catabolized and can reduce body fat accumulation. In this study, an animal model was employed to evaluate the effect of Acrocomia aculeata kernel oil (AKO) on the blood glucose level and the fatty acid deposits in the epididymal adipose tissue. The A. aculeata kernel oil obtained by cold pressing presented suitable quality as an edible oil. Its fatty acid profile indicates a high concentration of MCFAs, mainly lauric, capric, and caprylic acids. Type 2 diabetic rats fed that kernel oil showed a reduction in blood glucose level in comparison with the diabetic control group. Acrocomia aculeata kernel oil showed a hypoglycemic effect. A small fraction of the total dietary medium-chain fatty acids accumulated in the epididymal adipose tissue of rats fed AKO at both low and high doses, and caprylic acid did not deposit at all.

  6. Technical Note: Dose gradients and prescription isodose in orthovoltage stereotactic radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fagerstrom, Jessica M., E-mail: fagerstrom@wisc.edu; Bender, Edward T.; Culberson, Wesley S.

    Purpose: The purpose of this work is to examine the trade-off between prescription isodose and dose gradients in orthovoltage stereotactic radiosurgery. Methods: Point energy deposition kernels (EDKs) describing photon and electron transport were calculated using Monte Carlo methods. EDKs were generated from 10 to 250 keV, in 10 keV increments. The EDKs were converted to pencil beam kernels and used to calculate dose profiles through isocenter from a 4π isotropic delivery from all angles of circularly collimated beams. Monoenergetic beams and an orthovoltage polyenergetic spectrum were analyzed. The dose gradient index (DGI) is the ratio of the 50% prescription isodose volume to the 100% prescription isodose volume and represents a metric by which dose gradients in stereotactic radiosurgery (SRS) may be evaluated. Results: Using the 4π dose profiles calculated with pencil beam kernels, the relationship between DGI and prescription isodose was examined for circular cones ranging from 4 to 18 mm in diameter and monoenergetic photon beams with energies ranging from 20 to 250 keV. Values were found to exist for prescription isodose that optimize DGI. Conclusions: The relationship between DGI and prescription isodose was found to be dependent on both field size and energy. Examining this trade-off is an important consideration for designing optimal SRS systems.
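The dose gradient index used in this record is just a volume ratio, which makes it easy to compute on any dose grid. The sketch below uses a synthetic, spherically symmetric exponential dose fall-off, not a real SRS plan; the prescription level and fall-off constant are assumptions.

```python
import numpy as np

def dose_gradient_index(dose, prescription):
    """DGI = (volume at >= 50% of Rx) / (volume at >= 100% of Rx),
    counted in voxels on a uniform grid."""
    v100 = np.count_nonzero(dose >= prescription)
    v50 = np.count_nonzero(dose >= 0.5 * prescription)
    return v50 / v100

# Toy dose grid: falls off exponentially with distance from the centre.
x = np.arange(-20, 21)
xx, yy, zz = np.meshgrid(x, x, x, indexing="ij")
r = np.sqrt(xx**2 + yy**2 + zz**2)
dose = 100.0 * np.exp(-r / 5.0)

dgi = dose_gradient_index(dose, prescription=50.0)
print(dgi)   # sharper dose fall-off gives a DGI closer to 1
```

For this exponential fall-off the 50% surface sits at twice the radius of the 100% surface, so the ratio lands near 2³ = 8; a steeper gradient would shrink it toward 1, which is the sense in which DGI scores gradient quality.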

  7. SU-E-T-423: Fast Photon Convolution Calculation with a 3D-Ideal Kernel On the GPU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriya, S; Sato, M; Tachibana, H

    Purpose: The calculation time is a trade-off for improving the accuracy of convolution dose calculation with fine calculation spacing of the KERMA kernel. We investigated accelerating the convolution calculation using an ideal kernel on the graphics processing unit (GPU). Methods: The calculation was performed on AMD Dual FirePro D700 graphics hardware, and our algorithm was implemented using Aparapi, which converts Java bytecode to OpenCL. The dose calculation process was separated into TERMA and KERMA steps. The dose deposited at the coordinate (x, y, z) was determined in the process. In the dose calculation running on the central processing unit (CPU), an Intel Xeon E5, the calculation loops were performed for all calculation points. In the GPU computation, all of the calculation processes for the points were sent to the GPU and multi-threaded computation was done. In this study, the dose calculation was performed in a water-equivalent homogeneous phantom with 150³ voxels (2 mm calculation grid), and the calculation speed on the GPU relative to that on the CPU and the accuracy of the PDD were compared. Results: The calculation times for the GPU and the CPU were 3.3 s and 4.4 h, respectively. The calculation speed on the GPU was 4800 times faster than that on the CPU. The PDD curve for the GPU matched that for the CPU perfectly. Conclusion: The convolution calculation with the ideal kernel on the GPU was clinically acceptable in terms of calculation time and may be more accurate in inhomogeneous regions. Intensity modulated arc therapy needs dose calculations for different gantry angles at many control points. Thus, it would be more practical for the kernel to use a coarser spacing technique if the calculation is faster while keeping accuracy similar to a current treatment planning system.

  8. Spatial frequency performance limitations of radiation dose optimization and beam positioning

    NASA Astrophysics Data System (ADS)

    Stewart, James M. P.; Stapleton, Shawn; Chaudary, Naz; Lindsay, Patricia E.; Jaffray, David A.

    2018-06-01

    The flexibility and sophistication of modern radiotherapy treatment planning and delivery methods have advanced techniques to improve the therapeutic ratio. Contemporary dose optimization and calculation algorithms facilitate radiotherapy plans which closely conform the three-dimensional dose distribution to the target, with beam shaping devices and image guided field targeting ensuring the fidelity and accuracy of treatment delivery. Ultimately, dose distribution conformity is limited by the maximum deliverable dose gradient; shallow dose gradients challenge techniques to deliver a tumoricidal radiation dose while minimizing dose to surrounding tissue. In this work, this ‘dose delivery resolution’ observation is rigorously formalized for a general dose delivery model based on the superposition of dose kernel primitives. It is proven that the spatial resolution of a delivered dose is bounded by the spatial frequency content of the underlying dose kernel, which in turn defines a lower bound in the minimization of a dose optimization objective function. In addition, it is shown that this optimization is penalized by a dose deposition strategy which enforces a constant relative phase (or constant spacing) between individual radiation beams. These results are further refined to provide a direct, analytic method to estimate the dose distribution arising from the minimization of such an optimization function. The efficacy of the overall framework is demonstrated on an image guided small animal microirradiator for a set of two-dimensional hypoxia guided dose prescriptions.
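The band-limit argument in this record can be made concrete numerically: convolving with a dose kernel multiplies the prescription's spectrum by the kernel's spectrum, so frequencies where the kernel's spectrum is negligible cannot be delivered by any superposition of beams. The Gaussian kernel and grid below are illustrative assumptions, not the paper's microirradiator kernel.

```python
import numpy as np

# Spectrum of a 1D Gaussian dose kernel: high spatial frequencies are
# strongly attenuated, bounding the deliverable dose resolution.
n = 256
sigma = 4.0
kernel = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / sigma**2)
kernel /= kernel.sum()

# ifftshift aligns the kernel centre with index 0 before the FFT.
spectrum = np.abs(np.fft.fft(np.fft.ifftshift(kernel)))

# DC (index 0) passes unchanged; the Nyquist component (index n//2) is
# crushed, so a prescription with content there is unattainable.
print(spectrum[0], spectrum[n // 2])
```

This is the sense in which the optimization objective has a lower bound: residual error at frequencies beyond the kernel's support cannot be removed by beam weighting or positioning.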

  9. Magnetic field influences on the lateral dose response functions of photon-beam detectors: MC study of wall-less water-filled detectors with various densities.

    PubMed

    Looe, Hui Khee; Delfs, Björn; Poppinga, Daniela; Harder, Dietrich; Poppe, Björn

    2017-06-21

    The distortion of detector reading profiles across photon beams in the presence of magnetic fields is a developing subject of clinical photon-beam dosimetry. The underlying modification by the Lorentz force of a detector's lateral dose response function (the convolution kernel transforming the true cross-beam dose profile in water into the detector reading profile) is here studied for the first time. The three basic convolution kernels, the photon fluence response function, the dose deposition kernel, and the lateral dose response function, of wall-less cylindrical detectors filled with water of low, normal and enhanced density are shown by Monte Carlo simulation to be distorted in the prevailing direction of the Lorentz force. The asymmetric shape changes of these convolution kernels in a water medium and in magnetic fields of up to 1.5 T are confined to the lower millimetre range, and they depend on the photon beam quality, the magnetic flux density and the detector's density. The impact of this distortion on detector reading profiles is demonstrated using a narrow photon beam profile. For clinical applications it appears favourable that the magnetic flux density dependent distortion of the lateral dose response function, as far as secondary electron transport is concerned, vanishes in the case of water-equivalent detectors of normal water density. By means of secondary electron history backtracing, the spatial distribution of the photon interactions giving rise either directly to secondary electrons or to scattered photons further downstream producing secondary electrons which contribute to the detector's signal, and their lateral shift due to the Lorentz force, is elucidated. Electron history backtracing also serves to illustrate the correct treatment of the influences of the Lorentz force in the EGSnrc Monte Carlo code applied in this study.
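The lateral dose response function described above acts as a convolution kernel: the detector reading profile is the true dose profile convolved with it, and a Lorentz-force-skewed kernel shifts the reading. The asymmetric kernel shape below is a toy stand-in for that skew, not an MC-derived response function; the beam width and decay constants are assumptions.

```python
import numpy as np

x = np.linspace(-20, 20, 401)                        # lateral position, mm
true_profile = ((x > -5) & (x < 5)).astype(float)    # narrow beam profile

# Toy asymmetric response function: longer tail toward +x, mimicking a
# Lorentz-force skew (invented shape, for illustration only).
k = np.where(x >= 0, np.exp(-x / 1.0), np.exp(x / 0.3))
k /= k.sum()

reading = np.convolve(true_profile, k, mode="same")

# The reading profile's centroid is shifted relative to the true profile:
shift = np.sum(x * reading) / np.sum(reading)
print(shift > 0)
```

A symmetric kernel would leave the centroid at zero, which is why the paper's finding that the distortion vanishes for water-equivalent detectors of normal density matters clinically.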

  10. Tumour control probability derived from dose distribution in homogeneous and heterogeneous models: assuming similar pharmacokinetics, 125Sn-177Lu is superior to 90Y-177Lu in peptide receptor radiotherapy

    NASA Astrophysics Data System (ADS)

    Walrand, Stephan; Hanin, François-Xavier; Pauwels, Stanislas; Jamar, François

    2012-07-01

    Clinical trials on 177Lu-90Y therapy used empirical activity ratios. Radionuclides (RN) with a larger maximal beta range could favourably replace 90Y. Our aim is to provide RN dose-deposition kernels and to compare the tumour control probability (TCP) of RN combinations. Dose kernels were derived by integration of the mono-energetic beta-ray dose distributions (computed using Monte Carlo) weighted by their respective beta spectra. Nine homogeneous spherical tumours (1-25 mm in diameter) and four spherical tumours including a lattice of cold, but alive, spheres (1, 3, 5, 7 mm in diameter) were modelled. The TCPs for 93Y, 90Y and 125Sn in combination with 177Lu in variable proportions (keeping the renal cortex biological effective dose constant) were derived by 3D dose kernel convolution. For a mean tumour-absorbed dose of 180 Gy, 2 mm homogeneous tumours and tumours including 3 mm diameter cold alive spheres were both well controlled (TCP > 0.9) using a 75-25% combination of 177Lu and 90Y activity. However, 125Sn-177Lu achieved a significantly better result by controlling 1 mm homogeneous tumours simultaneously with tumours including 5 mm diameter cold alive spheres. Clinical trials using RN combinations should use RN proportions tuned to the patient dosimetry. 125Sn production and its coupling to somatostatin analogues appear feasible. Assuming similar pharmacokinetics, 125Sn is the best RN for combination with 177Lu in peptide receptor radiotherapy, justifying pharmacokinetic studies in rodents of 125Sn-labelled somatostatin analogues.
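The kernel construction described above, integrating monoenergetic beta-ray dose distributions weighted by the beta spectrum, can be sketched as a discrete weighted sum. Both the monoenergetic kernel model (a crude inverse-square falloff with an energy-dependent cutoff range) and the spectrum weights below are invented stand-ins for the Monte Carlo data.

```python
import numpy as np

r = np.linspace(0.01, 12.0, 200)            # radial distance, mm

def mono_kernel(r, energy_mev):
    # Toy monoenergetic beta kernel: range grows with energy (not MC data).
    rng = 5.0 * energy_mev                  # crude range stand-in, mm
    return np.where(r < rng, 1.0 / r**2, 0.0)

energies = np.array([0.5, 1.0, 2.0])        # MeV grid (assumed)
weights = np.array([0.5, 0.3, 0.2])         # assumed beta spectrum weights

# Radionuclide kernel = spectrum-weighted sum of monoenergetic kernels.
kernel = sum(w * mono_kernel(r, e) for w, e in zip(weights, energies))
print(kernel[-1] == 0.0)   # no dose beyond the maximal beta range
```

The longer reach of a high-endpoint spectrum is exactly why 90Y or 125Sn can control cold-but-alive regions that the short-range 177Lu betas cannot cross-irradiate.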

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Liu, B; Liang, B

    Purpose: The current CyberKnife treatment planning system (TPS) provides two dose calculation algorithms: Ray-tracing and Monte Carlo. The Ray-tracing algorithm is fast but less accurate, and it cannot handle the irregular fields made possible by the multi-leaf collimator recently introduced with the CyberKnife M6 system. The Monte Carlo method has well-known accuracy, but the current version still takes a long time to finish dose calculations. The purpose of this paper is to develop a GPU-based fast C/S dose engine for the CyberKnife system that achieves both accuracy and efficiency. Methods: The TERMA distribution from a poly-energetic source was calculated in a beam's-eye-view coordinate system, which is GPU friendly and has linear complexity. The dose distribution was then computed by inversely collecting the energy depositions from all TERMA points along 192 collapsed-cone directions. EGSnrc user code was used to pre-calculate energy deposition kernels (EDKs) for a series of mono-energetic photons. The energy spectrum was reconstructed based on the measured tissue maximum ratio (TMR) curve, and the TERMA-averaged cumulative kernels were then calculated. Beam hardening parameters and intensity profiles were optimized based on measurement data from the CyberKnife system. Results: The differences between measured and calculated TMR are less than 1% for all collimators except in the build-up regions. The calculated profiles also showed good agreement with the measured doses, within 1% except in the penumbra regions. The developed C/S dose engine was also used to evaluate four clinical CyberKnife treatment plans; compared against the Monte Carlo method, it showed better dose calculation accuracy than the Ray-tracing algorithm for heterogeneous cases. The dose calculation takes a few seconds per beam, depending on collimator size and the dose calculation grid.
    Conclusion: A GPU-based C/S dose engine has been developed for the CyberKnife system; it was shown to be efficient and accurate for clinical purposes and can be easily implemented in a TPS.

  12. Effect of ultra-low doses, ASIR and MBIR on density and noise levels of MDCT images of dental implant sites.

    PubMed

    Widmann, Gerlig; Al-Shawaf, Reema; Schullian, Peter; Al-Sadhan, Ra'ed; Hörmann, Romed; Al-Ekrish, Asma'a A

    2017-05-01

    Differences in noise and density values in MDCT images obtained using ultra-low doses with FBP, ASIR, and MBIR may possibly affect implant site density analysis. The aim of this study was to compare density and noise measurements recorded from dental implant sites using ultra-low doses combined with FBP, ASIR, and MBIR. Cadavers were scanned using a standard protocol and four low-dose protocols. Scans were reconstructed using FBP, ASIR-50, ASIR-100, and MBIR, and either a bone or standard reconstruction kernel. Density (mean Hounsfield units [HUs]) of alveolar bone and noise levels (mean standard deviation of HUs) were recorded from all datasets, and measurements were compared by paired t-tests and two-way ANOVA with repeated measures. Significant differences in density and noise were found between the reference dose/FBP protocol and almost all test combinations. Maximum mean differences in HU were 178.35 (bone kernel) and 273.74 (standard kernel), and in noise, were 243.73 (bone kernel) and 153.88 (standard kernel). Decreasing radiation dose increased density and noise regardless of reconstruction technique and kernel. The effect of reconstruction technique on density and noise depends on the reconstruction kernel used. • Ultra-low-dose MDCT protocols allowed more than 90 % reductions in dose. • Decreasing the dose generally increased density and noise. • Effect of IRT on density and noise varies with reconstruction kernel. • Accuracy of low-dose protocols for interpretation of bony anatomy not known. • Effect of low doses on accuracy of computer-aided design models unknown.

  13. Approaches to reducing photon dose calculation errors near metal implants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Jessie Y.; Followill, David S.; Howell, Rebecca M.

    Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%).
    Of the commercial CT artifact reduction methods investigated, the authors found that O-MAR was the most consistent method, resulting in either improved dose calculation accuracy (dental case) or little impact on calculation accuracy (spine case). GSI was unsuccessful at reducing the severe artifacts caused by dental fillings and had very little impact on calculation accuracy. GSI with MARS on the other hand gave mixed results, sometimes introducing metal distortion and increasing calculation errors (titanium rectangular implant and titanium spinal hardware) but other times very successfully reducing artifacts (Cerrobend rectangular implant and dental fillings). Conclusions: Though successful at improving dose calculation accuracy upstream of metal implants, metal kernels were not found to substantially improve accuracy for clinical cases. Of the commercial artifact reduction methods investigated, O-MAR was found to be the most consistent candidate for all-purpose CT simulation imaging. The MARS algorithm for GSI should be used with caution for titanium implants, larger implants, and implants located near heterogeneities as it can distort the size and shape of implants and increase calculation errors.

  14. TH-C-BRD-04: Beam Modeling and Validation with Triple and Double Gaussian Dose Kernel for Spot Scanning Proton Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, S; Takayanagi, T; Fujii, Y

    2014-06-15

    Purpose: To present the validity of our beam modeling with double and triple Gaussian dose kernels for spot scanning proton beams at the Nagoya Proton Therapy Center. This study investigates the conformance between the measurements and calculation results in absolute dose with the two types of beam kernel. Methods: A dose kernel is one of the important input data required for the treatment planning software. The dose kernel is the 3D dose distribution of an infinitesimal pencil beam of protons in water and consists of integral depth doses and lateral distributions. We have adopted double and triple Gaussian models as the lateral distribution in order to take account of the large-angle scattering due to nuclear reactions, by fitting the simulated in-water lateral dose profile for a needle proton beam at various depths. The fitted parameters were interpolated as a function of depth in water and were stored as a separate look-up table for each beam energy. The process of beam modeling is based on the method of MDACC [X.R. Zhu 2013]. Results: From the comparison between the absolute doses calculated by the double Gaussian model and those measured at the center of the SOBP, the difference increases up to 3.5% in the high-energy region because the large-angle scattering due to nuclear reactions is not sufficiently considered at intermediate depths in the double Gaussian model. When employing the triple Gaussian dose kernels, the measured absolute dose at the center of the SOBP agrees with calculation within ±1% regardless of the SOBP width and maximum range. Conclusion: We have demonstrated the beam modeling results of dose distributions employing double and triple Gaussian dose kernels. The treatment planning system with the triple Gaussian dose kernel has been successfully verified and applied to patient treatment with the spot scanning technique at the Nagoya Proton Therapy Center.

  15. Evaluation of the influence of double and triple Gaussian proton kernel models on accuracy of dose calculations for spot scanning technique.

    PubMed

    Hirayama, Shusuke; Takayanagi, Taisuke; Fujii, Yusuke; Fujimoto, Rintaro; Fujitaka, Shinichiro; Umezawa, Masumi; Nagamine, Yoshihiko; Hosaka, Masahiro; Yasui, Keisuke; Omachi, Chihiro; Toshito, Toshiyuki

    2016-03-01

    The main purpose in this study was to present the results of beam modeling and how the authors systematically investigated the influence of double and triple Gaussian proton kernel models on the accuracy of dose calculations for spot scanning technique. The accuracy of calculations was important for treatment planning software (TPS) because the energy, spot position, and absolute dose had to be determined by TPS for the spot scanning technique. The dose distribution was calculated by convolving in-air fluence with the dose kernel. The dose kernel was the in-water 3D dose distribution of an infinitesimal pencil beam and consisted of an integral depth dose (IDD) and a lateral distribution. Accurate modeling of the low-dose region was important for spot scanning technique because the dose distribution was formed by cumulating hundreds or thousands of delivered beams. The authors employed a double Gaussian function as the in-air fluence model of an individual beam. Double and triple Gaussian kernel models were also prepared for comparison. The parameters of the kernel lateral model were derived by fitting a simulated in-water lateral dose profile induced by an infinitesimal proton beam, whose emittance was zero, at various depths using Monte Carlo (MC) simulation. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table. These stored parameters for each energy and depth in water were acquired from the look-up table when incorporating them into the TPS. The modeling process for the in-air fluence and IDD was based on the method proposed in the literature. These were derived using MC simulation and measured data. The authors compared the measured and calculated absolute doses at the center of the spread-out Bragg peak (SOBP) under various volumetric irradiation conditions to systematically investigate the influence of the two types of kernel models on the dose calculations. 
The authors investigated the difference between the double and triple Gaussian kernel models. The difference between the two kernel models appeared at mid-depths, where the accuracy of the double Gaussian model's prediction deteriorated at a low-dose bump. When the authors employed the double Gaussian kernel model, the accuracy of the calculated absolute dose at the center of the SOBP varied with the irradiation conditions, with a maximum difference of 3.4%. In contrast, calculations with the triple Gaussian kernel model agreed with the measurements within ±1.1%, regardless of the irradiation conditions. The difference between the two kernel models was most distinct in the high-energy region. The accuracy of calculations with the double Gaussian kernel model varied with field size and SOBP width because that model could not adequately predict the low-dose bump. The evaluation was only qualitative and covered limited volumetric irradiation conditions; further accumulation of measured data would be needed to quantify the influence of the double and triple Gaussian kernel models on the accuracy of dose calculations.
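
As a sketch of how such a lateral kernel model can be constructed, the following snippet fits a triple Gaussian to a lateral profile. This is an illustration only, not the authors' implementation: the profile is a synthetic stand-in for an MC-simulated lateral dose profile at one depth, and all parameter values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def triple_gaussian(r, w1, s1, w2, s2, s3):
    """Sum of three zero-centred Gaussians; the third weight is fixed
    by normalisation at r = 0 (w3 = 1 - w1 - w2)."""
    w3 = 1.0 - w1 - w2
    return (w1 * np.exp(-0.5 * (r / s1) ** 2)
            + w2 * np.exp(-0.5 * (r / s2) ** 2)
            + w3 * np.exp(-0.5 * (r / s3) ** 2))

# Synthetic stand-in for an MC-simulated lateral profile at one depth:
# a narrow core, an intermediate halo, and a broad low-dose tail
r = np.linspace(0.0, 50.0, 200)   # off-axis distance, mm
profile = (0.7 * np.exp(-0.5 * (r / 3.0) ** 2)
           + 0.2 * np.exp(-0.5 * (r / 8.0) ** 2)
           + 0.1 * np.exp(-0.5 * (r / 20.0) ** 2))

popt, _ = curve_fit(triple_gaussian, r, profile,
                    p0=[0.6, 2.5, 0.25, 6.0, 15.0], maxfev=20000)
```

In the study, such fitted parameters would be tabulated per depth and energy in the look-up table rather than refitted at plan time.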

  16. The effect of CT technical factors on quantification of lung fissure integrity

    NASA Astrophysics Data System (ADS)

    Chong, D.; Brown, M. S.; Ochs, R.; Abtin, F.; Brown, M.; Ordookhani, A.; Shaw, G.; Kim, H. J.; Gjertson, D.; Goldin, J. G.

    2009-02-01

    A new emphysema treatment uses endobronchial valves to perform lobar volume reduction. The degree of fissure completeness may predict treatment efficacy. This study investigated the behavior of a semiautomated algorithm for quantifying lung fissure integrity in CT with respect to reconstruction kernel and dose. Raw CT data were obtained for six asymptomatic patients from a population at high risk for lung cancer. The patients were scanned on either a Siemens Sensation 16 or 64, using a low-dose protocol of 120 kVp, 25 mAs. Images were reconstructed using kernels ranging from smooth to sharp (B10f, B30f, B50f, B70f). Research software was used to simulate an even lower-dose acquisition of 15 mAs, and images were generated with the same kernels, resulting in 8 series per patient. The left major fissure was manually contoured axially at regular intervals, yielding 37 contours across all patients. These contours were read into an image analysis and pattern classification system, which computed a Fissure Integrity Score (FIS) for each kernel and dose. FIS values were analyzed using a mixed-effects model with kernel and dose as fixed effects and patient as a random effect to test for differences due to kernel and dose. Analysis revealed no difference in FIS between the smooth kernels (B10f, B30f) or between the sharp kernels (B50f, B70f), but there was a significant difference between the sharp and smooth groups (p = 0.020). There was no significant difference in FIS between the two low-dose reconstructions (p = 0.882). Using a cutoff of 90%, the number of incomplete fissures increased from 5 to 10 when the imaging protocol changed from B50f to B30f. Reconstruction kernel has a significant effect on quantification of fissure integrity in CT. This has potential implications when selecting patients for endobronchial valve therapy.

  17. SU-E-CAMPUS-I-06: Y90 PET/CT for the Instantaneous Determination of Both Target and Non-Target Absorbed Doses Following Hepatic Radioembolization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasciak, A; Kao, J

    2014-06-15

    Purpose: The process of converting Yttrium-90 (Y90) PET/CT images into 3D absorbed dose maps will be explained. The simple methods presented will allow the medical physicist to analyze Y90 PET images following radioembolization and determine the absorbed dose to tumor, normal liver parenchyma, and other areas of interest, without application of Monte Carlo radiation transport or dose-point-kernel (DPK) convolution. Methods: Absorbed dose can be computed from Y90 PET/CT images based on the premise that radioembolization is a permanent implant with a constant relative activity distribution after infusion. Many Y90 PET/CT publications have used DPK convolution to obtain 3D absorbed dose maps. However, this method requires specialized software, limiting clinical utility. The Local Deposition method, an alternative to DPK convolution, can be used to obtain absorbed dose and requires no additional computer processing. Pixel values from regions of interest drawn on Y90 PET/CT images can be converted to absorbed dose (Gy) by multiplication with a scalar constant. Results: There is evidence suggesting that the Local Deposition method may be more accurate than DPK convolution, and it has been used successfully in a recent Y90 PET/CT publication. We analytically compared dose-volume histograms (DVH) for phantom hot-spheres to determine the difference between the DPK and Local Deposition methods as a function of the PET scanner point-spread function for Y90. We found that for PET/CT systems with a FWHM greater than 3.0 mm when imaging Y90, the Local Deposition method provides a more accurate representation of the DVH than DPK convolution, regardless of target size. Conclusion: Using the Local Deposition method, post-radioembolization Y90 PET/CT images can be transformed into 3D absorbed dose maps of the liver. An interventional radiologist or a medical physicist can perform this transformation in a clinical setting, allowing rapid prediction of treatment efficacy by comparison to published tumoricidal thresholds.
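
The Local Deposition method amounts to a single multiplicative constant per voxel. The sketch below derives that constant from nominal Y90 decay data (64.1 h half-life, ~0.927 MeV mean beta energy) and an assumed soft-tissue density; the exact scalar used clinically should be taken from the literature, not from this illustration.

```python
import numpy as np

T_HALF_S = 64.1 * 3600           # Y90 half-life in seconds (nominal)
E_MEAN_J = 0.9267 * 1.602e-13    # mean beta energy per decay, joules (nominal)
TISSUE_KG_PER_ML = 1.03e-3       # assumed soft-tissue density, kg/mL

def local_deposition_dose(activity_conc_bq_per_ml):
    """Absorbed dose (Gy) from a Y90 activity concentration (Bq/mL),
    assuming local energy deposition and a permanent implant, so the
    total number of decays per initial Bq is T_half / ln(2)."""
    decays_per_bq = T_HALF_S / np.log(2.0)
    joules_per_ml = activity_conc_bq_per_ml * decays_per_bq * E_MEAN_J
    return joules_per_ml / TISSUE_KG_PER_ML   # Gy = J/kg

dose_gy = local_deposition_dose(2.0e6)        # e.g. a 2 MBq/mL voxel
```

For reference, these nominal constants work out to roughly 49 Gy per GBq of Y90 per kg of tissue, close to commonly quoted values.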

  18. Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate

    NASA Astrophysics Data System (ADS)

    Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.

    2008-08-01

    The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. 
For this reason, we compare the results from heterogeneous optical data with those obtained from average homogeneous optical properties. The optimized treatment plans are also compared with the reference clinical plan, defined as the plan with sources of equal strength, distributed regularly in space, which delivers a mean value of prescribed fluence at detector locations within the treatment region. The study suggests that comprehensive optimization of source parameters (i.e. strengths, lengths and locations) is feasible, thus allowing acceptable dose coverage in a heterogeneous prostate PDT within the time constraints of the PDT procedure.
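
The Cimmino feasibility step referred to above can be sketched as a simultaneous projection onto dose-constraint half-spaces. The kernel matrix and prescription below are toy values, not patient data, and the equal weighting over constraints is one common choice among several.

```python
import numpy as np

def cimmino(A, b_low, b_high, n_iter=2000, relax=1.5):
    """Cimmino-style simultaneous projection for the feasibility problem
    b_low <= A @ w <= b_high with w >= 0, where A[i, j] is the fluence
    kernel: dose at point i per unit strength of source j."""
    m, _ = A.shape
    w = np.zeros(A.shape[1])
    row_norm2 = np.sum(A ** 2, axis=1)
    for _ in range(n_iter):
        d = A @ w
        # signed violation of each dose constraint (zero when satisfied)
        r = np.where(d < b_low, b_low - d,
                     np.where(d > b_high, b_high - d, 0.0))
        # equally weighted sum of projections onto the violated half-spaces
        w = w + relax * (A.T @ (r / row_norm2)) / m
        w = np.maximum(w, 0.0)   # source strengths must be non-negative
    return w

# Toy example: 6 dose points, 3 sources; the prescription band is built
# around known strengths so the iteration has a feasible point to find.
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(6, 3))
d_target = A @ np.array([1.0, 0.5, 0.8])
w = cimmino(A, 0.95 * d_target, 1.05 * d_target)
```

In the clinical setting the rows of `A` would come from the patient-specific light-fluence kernel rather than random numbers.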

  19. Absorbed dose kernel and self-shielding calculations for a novel radiopaque glass microsphere for transarterial radioembolization.

    PubMed

    Church, Cody; Mawko, George; Archambault, John Paul; Lewandowski, Robert; Liu, David; Kehoe, Sharon; Boyd, Daniel; Abraham, Robert; Syme, Alasdair

    2018-02-01

    Radiopaque microspheres may provide intraprocedural and postprocedural feedback during transarterial radioembolization (TARE). Furthermore, the potential to use higher resolution x-ray imaging techniques as opposed to nuclear medicine imaging suggests that significant improvements in the accuracy and precision of radiation dosimetry calculations could be realized for this type of therapy. This study investigates the absorbed dose kernel for novel radiopaque microspheres, including contributions of both short- and long-lived contaminant radionuclides, while concurrently quantifying the self-shielding of the glass network. Monte Carlo simulations using EGSnrc were performed to determine the dose kernels for all monoenergetic electron emissions and all beta spectra for radionuclides reported in a neutron activation study of the microspheres. Simulations were benchmarked against an accepted 90Y dose point kernel. Self-shielding was quantified for the microspheres by simulating an isotropically emitting, uniformly distributed source, in glass and in water. The ratio of the absorbed doses was scored as a function of distance from a microsphere. The absorbed dose kernel for the microspheres was calculated for (a) two bead formulations following (b) two different durations of neutron activation, at (c) various time points following activation. Self-shielding varies with time postremoval from the reactor. At early time points, it is less pronounced due to the higher energies of the emissions: it is on the order of 0.4-2.8% at a radial distance of 5.43 mm, increasing with microsphere diameter from 10 to 50 μm, during the time that the microspheres would be administered to a patient. At long time points, self-shielding is more pronounced and can reach values in excess of 20% near the end of the range of the emissions. 
Absorbed dose kernels for 90Y, 90mY, 85mSr, 85Sr, 87mSr, 89Sr, 70Ga, 72Ga, and 31Si are presented and used to determine an overall kernel for the microspheres based on weighted activities. The shapes of the absorbed dose kernels are dominated at short times postactivation by the contributions of 70Ga and 72Ga. Following decay of the short-lived contaminants, the absorbed dose kernel is effectively that of 90Y. After approximately 1000 h postactivation, the contributions of 85Sr and 89Sr become increasingly dominant, though the absorbed dose-rate around the beads drops by roughly four orders of magnitude. The introduction of high atomic number elements for the purpose of increasing radiopacity necessarily leads to the production of radionuclides other than 90Y in the microspheres. Most of the radionuclides in this study are short-lived and are likely not of any significant concern for this therapeutic agent. The presence of small quantities of longer lived radionuclides will change the shape of the absorbed dose kernel around a microsphere at long time points postadministration when activity levels are significantly reduced. © 2017 American Association of Physicists in Medicine.
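
The activity-weighted combination of per-nuclide kernels can be sketched as below. The kernel shapes and relative activities are invented placeholders (only three of the listed nuclides are kept for brevity); the half-lives are nominal values.

```python
import numpy as np

r = np.linspace(0.05, 12.0, 240)   # radial grid, mm
kernels = {                        # invented per-nuclide kernel shapes
    "Y90":  np.exp(-r / 3.0),
    "Ga72": np.exp(-r / 5.0),
    "Sr89": np.exp(-r / 2.0),
}
a0 = {"Y90": 1.0, "Ga72": 0.5, "Sr89": 1e-3}            # relative activities at activation
t_half_h = {"Y90": 64.1, "Ga72": 14.1, "Sr89": 1213.0}  # nominal half-lives, hours

def combined_kernel(t_hours):
    """Overall kernel at time t postactivation: each nuclide's kernel is
    weighted by its decayed activity and the mixture is renormalised."""
    w = {n: a0[n] * 2.0 ** (-t_hours / t_half_h[n]) for n in kernels}
    total = sum(w.values())
    return sum(w[n] * kernels[n] for n in kernels) / total
```

With these placeholder numbers the mixture reproduces the qualitative behaviour described above: at early times the short-lived gallium contribution matters, while at ~1000 h the long-lived strontium term dominates the (much weaker) kernel.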

  20. Investigating the Impact of Aerosol Deposition on Snow Melt over the Greenland Ice Sheet Using a New Kernel

    NASA Astrophysics Data System (ADS)

    Li, Y.; Flanner, M.

    2017-12-01

    Accelerating surface melt on the Greenland Ice Sheet (GrIS) has led to a doubling of Greenland's contribution to global sea level rise during recent decades. The darkening effect due to black carbon (BC), dust, and other light absorbing impurities (LAI) enhances snow melt by boosting its absorption of solar energy. It is therefore important for coupled aerosol-climate and ice sheet models to include snow darkening effects from LAI, and yet most do not. In this study, we develop an aerosol-deposition-to-snow-melt kernel based on the Community Earth System Model (CESM) to investigate changes in melt flux due to variations in the amount and timing of aerosol deposition on the GrIS. The Community Land Model (CLM) component of CESM is driven with a large range of aerosol deposition fluxes to determine non-linear relationships between melt perturbation and deposition amount occurring in different months and locations (thereby capturing variations in base state associated with elevation and latitude). The kernel product will include climatological-mean effects and standard deviations associated with interannual variability. Finally, the kernel will allow aerosol deposition fluxes from any global or regional aerosol model to be translated into surface melt perturbations of the GrIS, thus extending the utility of state-of-the-art aerosol models.

  1. 2D convolution kernels of ionization chambers used for photon-beam dosimetry in magnetic fields: the advantage of small over large chamber dimensions

    NASA Astrophysics Data System (ADS)

    Khee Looe, Hui; Delfs, Björn; Poppinga, Daniela; Harder, Dietrich; Poppe, Björn

    2018-04-01

    This study aims at developing an optimization strategy for photon-beam dosimetry in magnetic fields using ionization chambers. Similar to the familiar case in the absence of a magnetic field, detectors should be selected under the criterion that their measured 2D signal profiles M(x,y) approximate the absorbed dose to water profiles D(x,y) as closely as possible. Since the conversion of D(x,y) into M(x,y) is known as the convolution with the ‘lateral dose response function’ K(x-ξ, y-η) of the detector, the ideal detector would be characterized by a vanishing magnetic field dependence of this convolution kernel (Looe et al 2017b Phys. Med. Biol. 62 5131–48). The idea of the present study is to find out, by Monte Carlo simulation of two commercial ionization chambers of different size, whether the smaller chamber dimensions would be instrumental to approach this aim. As typical examples, the lateral dose response functions in the presence and absence of a magnetic field have been Monte-Carlo modeled for the new commercial ionization chambers PTW 31021 (‘Semiflex 3D’, internal radius 2.4 mm) and PTW 31022 (‘PinPoint 3D’, internal radius 1.45 mm), which are both available with calibration factors. The Monte-Carlo model of the ionization chambers has been adjusted to account for the presence of the non-collecting part of the air volume near the guard ring. The Monte-Carlo results allow a comparison between the widths of the magnetic field dependent photon fluence response function K_M(x-ξ, y-η) and of the lateral dose response function K(x-ξ, y-η) of the two chambers with the width of the dose deposition kernel K_D(x-ξ, y-η). The simulated dose and chamber signal profiles show that in small photon fields and in the presence of a 1.5 T field the distortion of the chamber signal profile compared with the true dose profile is weakest for the smaller chamber. 
The dose responses of both chambers at large field size are shown to be altered by not more than 2% in magnetic fields up to 1.5 T for all three investigated chamber orientations.
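
The convolution relation M = K ⊗ D can be illustrated in one dimension. The chamber response functions below are simplified Gaussians whose widths are set to the two chambers' internal radii; real lateral dose response functions (and their magnetic-field dependence) must come from MC modeling as in the study.

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 601)   # lateral position, mm (0.1 mm grid)

def gaussian_kernel(sigma):
    """Unit-area lateral dose response function, modelled here (as a
    simplification) as a Gaussian of width ~ the chamber radius."""
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def penumbra_20_80(profile):
    """Left-edge 20%-80% penumbra width of a normalised profile, in mm."""
    p = profile / profile.max()
    return x[np.argmax(p >= 0.8)] - x[np.argmax(p >= 0.2)]

# "True" lateral dose profile D(x): a 10 mm top-hat small field
dose = (np.abs(x) <= 5.0).astype(float)

# Measured signal profiles M(x) = (K * D)(x) for the two chamber radii
m_pinpoint = np.convolve(dose, gaussian_kernel(1.45), mode='same')
m_semiflex = np.convolve(dose, gaussian_kernel(2.4), mode='same')
```

The broader kernel visibly widens the penumbra of the measured profile, which is the distortion mechanism the study quantifies.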

  2. SU-F-T-672: A Novel Kernel-Based Dose Engine for KeV Photon Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reinhart, M; Fast, M F; Nill, S

    2016-06-15

    Purpose: Mimicking state-of-the-art patient radiotherapy with high-precision irradiators for small animals allows advanced dose-effect studies and radiobiological investigations. One example is the implementation of pre-clinical IMRT-like irradiations, which requires the development of inverse planning for keV photon beams. As a first step, we present a novel kernel-based dose calculation engine for keV x-rays with explicit consideration of energy and material dependencies. Methods: We follow a superposition-convolution approach adapted to keV x-rays, based on previously published work on micro-beam therapy. In small animal radiotherapy, we assume local energy deposition at the photon interaction point, since the electron ranges in tissue are of the same order of magnitude as the voxel size. This allows us to use photon-only kernel sets generated by MC simulations, which are pre-calculated for six energy windows and ten base materials. We validate our stand-alone dose engine against Geant4 MC simulations for various beam configurations in water, slab phantoms with bone and lung inserts, and on a mouse CT with (0.275 mm)³ voxels. Results: We observe good agreement for all cases. For field sizes of 1 mm² to 1 cm² in water, the depth dose curves agree within 1% (mean), with the largest deviations in the first voxel (4%) and at depths > 5 cm (<2.5%). The out-of-field doses at 1 cm depth agree within 8% (mean) for all but the smallest field size. In slab geometries, the mean agreement was within 3%, with maximum deviations of 8% at water-bone interfaces. The γ-index (1mm/1%) passing rate for a single-field mouse irradiation is 71%. Conclusion: The presented dose engine yields an accurate representation of keV-photon doses suitable for inverse treatment planning for IMRT. It has the potential to become a significantly faster yet sufficiently accurate alternative to full MC simulations. Further investigations will focus on energy sampling as well as calculation times. Research at ICR is also supported by Cancer Research UK under Programme C33589/A19727 and NHS funding to the NIHR Biomedical Research Centre at RMH and ICR. MFF is supported by Cancer Research UK under Programme C33589/A19908.

  3. SU-E-T-02: 90Y Microspheres Dosimetry Calculation with Voxel-S-Value Method: A Simple Use in the Clinic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maneru, F; Gracia, M; Gallardo, N

    2015-06-15

    Purpose: To present a simple and feasible method of voxel-S-value (VSV) dosimetry calculation for daily clinical use in radioembolization (RE) with 90Y microspheres. Dose distributions are obtained and visualized over CT images. Methods: Spatial dose distributions and doses in liver and tumor are calculated for RE patients treated with Sirtex Medical microspheres at our center. Data obtained from the previous simulation of treatment were the basis for the calculations: a Tc-99m macroaggregated albumin SPECT-CT study in a gamma camera (Infinia, General Electric Healthcare). Attenuation correction and an ordered-subsets expectation maximization (OSEM) algorithm were applied. For the VSV calculations, both SPECT and CT were exported from the gamma camera workstation and registered with the radiotherapy treatment planning system (Eclipse, Varian Medical Systems). Convolution of the activity matrix with the local dose deposition kernel (S values) was implemented with in-house software based on Python code. The kernel was downloaded from www.medphys.it. The final dose distribution was evaluated with the free software Dicompyler. Results: Liver mean dose is consistent with Partition method calculations (accepted as a good standard). Tumor dose has not been evaluated due to the high dependence on its contouring: small lesion size, hot spots in healthy tissue, and blurred limits can strongly affect the dose distribution in tumors. Extra work includes exporting and importing images and other DICOM files, creating and calculating a dummy external radiotherapy plan, performing the convolution calculation, and evaluating the dose distribution with Dicompyler. The total time spent is less than 2 hours. Conclusion: VSV calculations do not require any extra appointment or any uncomfortable process for the patient. The total process is short enough to carry out on the same day as the simulation and to contribute to prescription decisions prior to treatment. Three-dimensional dose knowledge provides much more information than other methods of dose calculation usually applied in the clinic.
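
The core of the VSV method is a 3D convolution of the cumulated-activity map with the S-value kernel. A minimal sketch with an invented kernel follows; a real implementation would load a published Y90 VSV table matched to the SPECT voxel size rather than this placeholder fall-off.

```python
import numpy as np
from scipy.signal import fftconvolve

# Hypothetical 7x7x7 voxel S-value kernel (Gy per decay, arbitrary
# values), peaked at the centre voxel and falling off with distance
z, y, x = np.mgrid[-3:4, -3:4, -3:4]
r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
s_kernel = 1e-11 / (1.0 + r) ** 3

# Cumulated activity map (total decays per voxel): a hot "tumour" cube
a_map = np.zeros((32, 32, 32))
a_map[12:20, 12:20, 12:20] = 5e9

dose = fftconvolve(a_map, s_kernel, mode='same')   # absorbed dose, Gy per voxel
```

The resulting `dose` array can then be overlaid on the registered CT and summarized with DVH tools, as the abstract describes.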

  4. SU-F-T-147: An Alternative Parameterization of Scatter Behavior Allows Significant Reduction of Beam Characterization for Pencil Beam Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van den Heuvel, F; Fiorini, F; George, B

    2016-06-15

    Purpose: 1) To describe the characteristics of pencil beam proton dose deposition kernels in a homogeneous medium using a novel parameterization. 2) To propose a method utilizing this novel parameterization to reduce the measurements and pre-computation required in commissioning a pencil beam proton therapy system. Methods: Using beam data from a clinical pencil beam proton therapy center, Monte Carlo simulations were performed to characterize the dose depositions at a range of energies from 100.32 to 226.08 MeV in 3.6 MeV steps. At each energy, the beam is defined at the surface of the phantom by a two-dimensional Normal distribution. Using FLUKA, the in-medium dose distribution is calculated in a 200×200×350 mm volume with 1 mm³ tally volumes. The calculated dose distribution in each 200×200 slice perpendicular to the beam axis is then characterized using a symmetric alpha-stable distribution centered on the beam axis. This results in two parameters, α and γ, that completely describe the shape of the distribution. In addition, the total dose deposited on each slice is calculated. The alpha-stable parameters are plotted as a function of the depth in-medium, providing a representation of dose deposition along the pencil beam. We observed that these curves are isometric: a scaling of both abscissa and ordinate maps one curve onto another. Results: By interpolating the scaling factors of two source curves representative of different beam energies, we predicted the parameters of a third curve at an intermediate energy. The errors were quantified by the maximal difference and provide a better fit than previous methods. The maximal energy difference between source curves generating identical curves was 21.14 MeV. Conclusion: We have introduced a novel method to parameterize the in-phantom properties of pencil beam proton dose depositions. For the case of the Knoxville IBA system, no more than nine pencil beams have to be fully characterized.
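
A symmetric alpha-stable distribution has no closed-form density in general, but it can be evaluated numerically by inverting its characteristic function exp(-|γt|^α). The sketch below assumes this standard parameterization: α = 2 recovers a Gaussian, while smaller α produces the heavy lateral tails that motivate the fit.

```python
import numpy as np

def sym_alpha_stable_pdf(x, alpha, gamma):
    """Numerical pdf of a symmetric alpha-stable law via its
    characteristic function phi(t) = exp(-|gamma*t|**alpha):
    pdf(x) = (1/pi) * integral_0^inf cos(t*x) * phi(t) dt."""
    t = np.linspace(0.0, 200.0 / gamma, 20001)
    phi = np.exp(-(gamma * t) ** alpha)
    vals = np.cos(np.outer(np.atleast_1d(x), t)) * phi
    dt = t[1] - t[0]
    # trapezoidal rule along t
    return (vals.sum(axis=1) - 0.5 * (vals[:, 0] + vals[:, -1])) * dt / np.pi

core = sym_alpha_stable_pdf(0.0, 2.0, 1.0)[0]         # Gaussian case (alpha = 2)
tail_gauss = sym_alpha_stable_pdf(10.0, 2.0, 1.0)[0]  # far off-axis, alpha = 2
tail_heavy = sym_alpha_stable_pdf(10.0, 1.5, 1.0)[0]  # far off-axis, alpha = 1.5
```

For α = 2 and γ = 1 the density at zero equals 1/(2√π), which provides a quick sanity check on the numerical inversion.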

  5. SU-F-SPS-09: Parallel MC Kernel Calculations for VMAT Plan Improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamberlain, S; Roswell Park Cancer Institute, Buffalo, NY; French, S

    Purpose: Adding kernels (small perturbations in leaf positions) to the existing apertures of VMAT control points may improve plan quality. We investigate the calculation of kernel doses using a parallelized Monte Carlo (MC) method. Methods: A clinical prostate VMAT DICOM plan was exported from Eclipse. An arbitrary control point and leaf were chosen, and a modified MLC file was created, corresponding to the leaf position offset by 0.5 cm. The additional dose produced by this 0.5 cm × 0.5 cm kernel was calculated using the DOSXYZnrc component module of BEAMnrc. A range of particle history counts was run (varying from 3 × 10⁶ to 3 × 10⁷); each job was split among 1, 10, or 100 parallel processes. A particle count of 3 × 10⁶ was established as the lower end of the range because it provided the minimal accuracy level. Results: As expected, an increase in particle counts linearly increases run time. For the lowest particle count, the time varied from 30 hours for the single-processor run to 0.30 hours for the 100-processor run. Conclusion: Parallel processing of MC calculations in the EGS framework significantly decreases the time necessary for each kernel dose calculation. Particle counts lower than 1 × 10⁶ have too large an error to output an accurate dose for a Monte Carlo kernel calculation. Future work will investigate increasing the number of parallel processes and optimizing run times for multiple kernel calculations.
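
Job splitting of this kind can be sketched with Python multiprocessing: divide the history count across workers with independent seeds and sum the partial tallies. The "simulation" below is a toy exponential-attenuation depth tally standing in for DOSXYZnrc, and all constants are invented.

```python
import numpy as np
from multiprocessing import Pool

N_BINS, SLAB_CM, MU = 50, 10.0, 0.3   # tally bins, phantom depth (cm), mu (1/cm)

def run_batch(args):
    """One parallel job: simulate n primary photon histories and tally
    first-interaction depths (a toy stand-in for a dose tally)."""
    n, seed = args
    rng = np.random.default_rng(seed)          # independent seed per job
    depth = rng.exponential(1.0 / MU, size=n)  # sampled free path lengths, cm
    tally, _ = np.histogram(depth, bins=N_BINS, range=(0.0, SLAB_CM))
    return tally

def parallel_kernel_tally(n_histories, n_proc):
    """Split the history count across n_proc processes and sum the partial
    tallies; the combined statistics match a single full-size run."""
    batches = [(n_histories // n_proc, 1000 + i) for i in range(n_proc)]
    with Pool(n_proc) as pool:
        return np.sum(pool.map(run_batch, batches), axis=0)
```

A call such as `parallel_kernel_tally(3 * 10**6, 100)` would then mirror the 100-process splitting used in the abstract.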

  6. The nonuniformity of antibody distribution in the kidney and its influence on dosimetry.

    PubMed

    Flynn, Aiden A; Pedley, R Barbara; Green, Alan J; Dearling, Jason L; El-Emir, Ethaar; Boxer, Geoffrey M; Boden, Robert; Begent, Richard H J

    2003-02-01

    The therapeutic efficacy of radiolabeled antibody fragments can be limited by nephrotoxicity, particularly when the kidney is the major route of extraction from the circulation. Conventional dose estimates in kidney assume uniform dose deposition, but we have shown increased antibody localization in the cortex after glomerular filtration. The purpose of this study was to measure the radioactivity in cortex relative to medulla for a range of antibodies and to assess the validity of the assumption of uniformity of dose deposition in the whole kidney and in the cortex for these antibodies with a range of radionuclides. Storage phosphor plate technology (radioluminography) was used to acquire images of the distributions of a range of antibodies of various sizes, labeled with 125I, in kidney sections. This allowed the calculation of the antibody concentration in the cortex relative to the medulla. Beta-particle point dose kernels were then used to generate the dose-rate distributions from 14C, 131I, 186Re, 32P and 90Y. The correlation between the actual dose-rate distribution and the corresponding distribution calculated assuming uniform antibody distribution throughout the kidney was used to test the validity of estimating dose by assuming uniformity in the kidney and in the cortex. There was a strong inverse relationship between the ratio of the radioactivity in the cortex relative to that in the medulla and the antibody size. The nonuniformity of dose deposition was greatest with the smallest antibody fragments but became more uniform as the range of the emissions from the radionuclide increased. Furthermore, there was a strong correlation between the actual dose-rate distribution and the distribution when assuming a uniform source in the kidney for intact antibodies along with medium- to long-range radionuclides, but there was no correlation for small antibody fragments with any radioisotope or for short-range radionuclides with any antibody. 
However, when the cortex was separated from the whole kidney, the correlation between the actual dose-rate distribution and the assumed dose-rate distribution, if the source was uniform, increased significantly. During radioimmunotherapy, the extent of nonuniformity of dose deposition in the kidney depends on the properties of the antibody and radionuclide. For dosimetry estimates, the cortex should be taken as a separate source region when the radiopharmaceutical is small enough to be filtered by the glomerulus.

  7. SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Tian, Z; Song, T

    Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to the dose calculation accuracy and hence the plan quality. Yet due to the large number of beamlets, FSPB commissioning can be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MATLAB to facilitate the commissioning. Methods: A FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and 2D scaling factors accounting for the longitudinal and off-axis corrections. The former were fitted to the penumbra of a reference broad beam's dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernels of each beamlet, calculated with the fitted profile parameters and scaled using the scaling factors, these factors can be determined by solving an optimization problem which minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned a FSPB algorithm for three linac photon beams (6 MV, 15 MV and 6 MV FFF). Doses for four field sizes (6×6 cm², 10×10 cm², 15×15 cm² and 20×20 cm²) were calculated and compared with the reference dose exported from the Eclipse TPS. For depth dose curves, the differences are less than 1% of maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of central dose in inner-beam regions. The differences in the output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms to facilitate the commissioning, providing sufficient accuracy of beamlet dose calculation for IMRT optimization.
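
The penumbra fit of the profile parameters can be sketched with a top-hat-convolved-with-Gaussian edge model. The reference profile below is generated from the model itself with invented values (10 cm field, 0.4 cm kernel spread), so the fit simply recovers them; in practice the reference would be a TPS-exported profile.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def broad_beam_profile(x, sigma, field_width):
    """Lateral profile of a broad beam assembled from FSPB kernels with
    Gaussian lateral spread sigma: a top-hat convolved with the kernel."""
    half = field_width / 2.0
    s = np.sqrt(2.0) * sigma
    return 0.5 * (erf((x + half) / s) - erf((x - half) / s))

# Stand-in for the reference broad-beam profile (positions in cm)
x = np.linspace(-8.0, 8.0, 321)
reference = broad_beam_profile(x, 0.4, 10.0)

popt, _ = curve_fit(broad_beam_profile, x, reference, p0=[0.2, 9.0])
sigma_fit, width_fit = popt
```

Note that `curve_fit` uses a Levenberg-Marquardt-type least-squares solver for unconstrained problems, matching the fitting approach named in the abstract.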

  8. Monte Carlo calculations of energy deposition distributions of electrons below 20 keV in protein.

    PubMed

    Tan, Zhenyu; Liu, Wei

    2014-05-01

    The distributions of energy depositions of electrons in semi-infinite bulk protein and the radial dose distributions of point-isotropic mono-energetic electron sources [i.e., the so-called dose point kernel (DPK)] in protein have been systematically calculated in the energy range below 20 keV, based on Monte Carlo methods. The ranges of electrons have been evaluated by extrapolating two calculated distributions, respectively, and the evaluated ranges of electrons are compared with the electron mean path length in protein which has been calculated by using electron inelastic cross sections described in this work in the continuous-slowing-down approximation. It has been found that for a given energy, the electron mean path length is smaller than the electron range evaluated from DPK, but it is large compared to the electron range obtained from the energy deposition distributions of electrons in semi-infinite bulk protein. The energy dependences of the extrapolated electron ranges based on the two investigated distributions are given, respectively, in a power-law form. In addition, the DPK in protein has also been compared with that in liquid water. An evident difference between the two DPKs is observed. The calculations presented in this work may be useful in studies of radiation effects on proteins.
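
A power-law energy dependence R = aE^b is conveniently fitted as a straight line in log-log space. The data below are invented stand-ins for the extrapolated ranges; the exponent 1.7 and prefactor are assumed values for illustration, not results from the paper.

```python
import numpy as np

# Hypothetical extrapolated-range data: energy in keV, range in um,
# generated from R = a * E**b with a = 0.04, b = 1.7 plus 2% jitter
energy = np.array([1.0, 2.0, 5.0, 10.0, 15.0, 20.0])
rng = np.random.default_rng(1)
r_um = 0.04 * energy ** 1.7 * rng.normal(1.0, 0.02, size=energy.size)

# The power law is linear in log-log space: ln R = ln a + b * ln E
b_fit, ln_a = np.polyfit(np.log(energy), np.log(r_um), 1)
a_fit = np.exp(ln_a)
```

The same two-parameter form can be fitted separately to the ranges derived from each of the two distributions studied, giving one (a, b) pair per definition of range.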

  9. Comparison of GATE/GEANT4 with EGSnrc and MCNP for electron dose calculations at energies between 15 keV and 20 MeV.

    PubMed

    Maigne, L; Perrot, Y; Schaart, D R; Donnarieix, D; Breton, V

    2011-02-07

    The GATE Monte Carlo simulation platform based on the GEANT4 toolkit has come into widespread use for simulating positron emission tomography (PET) and single photon emission computed tomography (SPECT) imaging devices. Here, we explore its use for calculating electron dose distributions in water. Mono-energetic electron dose point kernels and pencil beam kernels in water are calculated for different energies between 15 keV and 20 MeV by means of GATE 6.0, which makes use of the GEANT4 version 9.2 Standard Electromagnetic Physics Package. The results are compared to the well-validated codes EGSnrc and MCNP4C. It is shown that recent improvements made to the GEANT4/GATE software result in significantly better agreement with the other codes. We furthermore illustrate several issues of general interest to GATE and GEANT4 users who wish to perform accurate simulations involving electrons. Provided that the electron step size is sufficiently restricted, GATE 6.0 and EGSnrc dose point kernels are shown to agree to within less than 3% of the maximum dose between 50 keV and 4 MeV, while pencil beam kernels are found to agree to within less than 4% of the maximum dose between 15 keV and 20 MeV.

  10. Ultralow dose dentomaxillofacial CT imaging and iterative reconstruction techniques: variability of Hounsfield units and contrast-to-noise ratio

    PubMed Central

    Bischel, Alexander; Stratis, Andreas; Kakar, Apoorv; Bosmans, Hilde; Jacobs, Reinhilde; Gassner, Eva-Maria; Puelacher, Wolfgang; Pauwels, Ruben

    2016-01-01

    Objective: The aim of this study was to evaluate whether the application of ultralow dose protocols and iterative reconstruction technology (IRT) influences quantitative Hounsfield units (HUs) and the contrast-to-noise ratio (CNR) in dentomaxillofacial CT imaging. Methods: A phantom with inserts of five types of materials was scanned using protocols for (a) a clinical reference for navigated surgery (CT dose index volume 36.58 mGy), (b) low-dose sinus imaging (18.28 mGy) and (c) four ultralow dose imaging protocols (4.14, 2.63, 0.99 and 0.53 mGy). All images were reconstructed using: (i) filtered back projection (FBP); (ii) IRT: adaptive statistical iterative reconstruction-50 (ASIR-50), ASIR-100 and model-based iterative reconstruction (MBIR); and (iii) standard (std) and bone kernels. Mean HU, CNR and average HU error after recalibration were determined. Each combination of protocols was compared using Friedman analysis of variance, followed by Dunn's multiple comparison test. Results: Pearson's sample correlation coefficients were all >0.99. Ultralow dose protocols using FBP showed errors of up to 273 HU. Std kernels had less HU variability than bone kernels. MBIR reduced the error value for the lowest dose protocol to 138 HU and retained the highest relative CNR. ASIR could not demonstrate significant advantages over FBP. Conclusions: Considering a potential dose reduction to as low as 1.5% of a std protocol, ultralow dose protocols and IRT should be further tested for clinical dentomaxillofacial CT imaging. Advances in knowledge: HU as a surrogate for bone density may vary significantly in ultralow dose CT imaging. However, use of std kernels and MBIR technology reduces HU error values and may retain the highest CNR. PMID:26859336
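The contrast-to-noise ratio evaluated above can be computed from an insert ROI and a background ROI; a minimal sketch using one common CNR definition (the abstract does not state the study's exact formula):

```python
import numpy as np

def cnr(insert_roi, background_roi):
    """Contrast-to-noise ratio: insert/background contrast divided by
    background noise (one common definition; an assumption here)."""
    insert_roi = np.asarray(insert_roi, float)
    background_roi = np.asarray(background_roi, float)
    contrast = abs(insert_roi.mean() - background_roi.mean())
    return contrast / background_roi.std(ddof=1)
```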

  11. A point kernel algorithm for microbeam radiation therapy

    NASA Astrophysics Data System (ADS)

    Debus, Charlotte; Oelfke, Uwe; Bartzsch, Stefan

    2017-11-01

    Microbeam radiation therapy (MRT) is a treatment approach in radiation therapy in which the treatment field is spatially fractionated into arrays of planar beams a few tens of micrometres wide, with unusually high peak doses, separated by low-dose regions several hundred micrometres wide. In preclinical studies, this treatment approach has proven to spare normal tissue more effectively than conventional radiation therapy, while being equally efficient in tumour control. So far, dose calculations in MRT, a prerequisite for future clinical applications, are based on Monte Carlo simulations. However, these are computationally expensive, since the scoring volumes have to be small. In this article a kernel-based dose calculation algorithm is presented that splits the calculation into photon- and electron-mediated energy transport, and performs the calculation of peak and valley doses in typical MRT treatment fields within a few minutes. Kernels are calculated analytically depending on the energy spectrum and material composition. Peak doses, valley doses and microbeam profiles are calculated in various homogeneous materials and compared to Monte Carlo simulations. For a microbeam exposure of an anthropomorphic head phantom, calculated dose values are compared to measurements and Monte Carlo calculations. Except for regions close to material interfaces, calculated peak dose values match Monte Carlo results within 4% and valley dose values within 8% deviation. No significant differences are observed between profiles calculated by the kernel algorithm and by Monte Carlo simulations. Measurements in the head phantom agree within 4% in the peak and within 10% in the valley region. The presented algorithm is attached to the treatment planning platform VIRTUOS. It was and is used for dose calculations in preclinical and pet-clinical trials at the biomedical beamline ID17 of the European Synchrotron Radiation Facility in Grenoble, France.
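The peak and valley doses central to MRT are usually summarized as a peak-to-valley dose ratio (PVDR); a minimal sketch, assuming the microbeam positions within the lateral profile are known from the collimator geometry:

```python
import numpy as np

def pvdr(profile, in_beam):
    """Peak-to-valley dose ratio of a lateral microbeam dose profile.
    `in_beam` marks samples inside the microbeam planes (assumed known)."""
    profile = np.asarray(profile, float)
    in_beam = np.asarray(in_beam, bool)
    return profile[in_beam].max() / profile[~in_beam].min()
```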

  12. Use of convolution/superposition-based treatment planning system for dose calculations in the kilovoltage energy range

    NASA Astrophysics Data System (ADS)

    Alaei, Parham

    2000-11-01

    A number of procedures in diagnostic radiology and cardiology involve long exposures to x rays from fluoroscopy units. Adverse effects of these long exposure times on patients' skin have been documented in recent years. These include epilation, erythema, and, in severe cases, moist desquamation and tissue necrosis. Potential biological effects of these exposures on other organs include radiation-induced cataracts and pneumonitis. Although there have been numerous studies to measure or calculate the dose to skin from these procedures, only a handful of studies have determined the dose to other organs. There is therefore a need for accurate methods to measure the dose in tissues and organs other than the skin. This research concentrated on devising a method to determine the radiation dose to these tissues and organs accurately. The work was performed in several stages. First, a three-dimensional (3D) treatment planning system used in radiation oncology was modified and complemented to make it usable with the low x-ray energies used in diagnostic radiology. Using the system at low energies required the generation of energy deposition kernels using Monte Carlo methods. These kernels were generated using the EGS4 Monte Carlo system of codes and added to the treatment planning system. Following modification, the treatment planning system was evaluated for its accuracy of calculation at low energies within homogeneous and heterogeneous media. A study of the effects of lungs and bones on the dose distribution was also performed. The next step was the calculation of dose distributions in humanoid phantoms using this modified system. The system was used to calculate organ doses in these phantoms, and the results were compared to those obtained from other methods. These dose distributions can subsequently be used to create dose-volume histograms (DVHs) for internal organs irradiated by these beams. Using these data and the concept of normal tissue complication probability (NTCP) developed for radiation oncology, the risk of future complications in a particular organ can be estimated.
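The NTCP estimate mentioned above is commonly computed with the Lyman model, i.e. the normal CDF of t = (EUD - TD50)/(m * TD50); a minimal sketch (the parameter values are organ-specific fits from the literature, not from this work):

```python
import math

def lyman_ntcp(eud, td50, m):
    """Lyman NTCP model: normal CDF of t = (EUD - TD50) / (m * TD50).
    td50 and m are organ-specific fit parameters (hypothetical here)."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

At EUD = TD50 the model returns 0.5 by construction, and the complication probability rises toward 1 as EUD exceeds TD50.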

  13. Dosimetric verification of radiation therapy including intensity modulated treatments, using an amorphous-silicon electronic portal imaging device

    NASA Astrophysics Data System (ADS)

    Chytyk-Praznik, Krista Joy

    Radiation therapy is continuously increasing in complexity due to technological innovation in delivery techniques, necessitating thorough dosimetric verification. Comparing accurately predicted portal dose images to measured images obtained during patient treatment can determine if a particular treatment was delivered correctly. The goal of this thesis was to create a method to predict portal dose images that was versatile and accurate enough to use in a clinical setting. All measured images in this work were obtained with an amorphous silicon electronic portal imaging device (a-Si EPID), but the technique is applicable to any planar imager. A detailed, physics-motivated fluence model was developed to characterize fluence exiting the linear accelerator head. The model was further refined using results from Monte Carlo simulations and schematics of the linear accelerator. The fluence incident on the EPID was converted to a portal dose image through a superposition of Monte Carlo-generated, monoenergetic dose kernels specific to the a-Si EPID. Predictions of clinical IMRT fields with no patient present agreed with measured portal dose images within 3% and 3 mm. The dose kernels were applied ignoring the geometrically divergent nature of incident fluence on the EPID. A computational investigation into this parallel dose kernel assumption determined its validity under clinically relevant situations. Introducing a patient or phantom into the beam required the portal image prediction algorithm to account for patient scatter and attenuation. Primary fluence was calculated by attenuating raylines cast through the patient CT dataset, while scatter fluence was determined through the superposition of pre-calculated scatter fluence kernels. Total dose in the EPID was calculated by convolving the total predicted incident fluence with the EPID-specific dose kernels. The algorithm was tested on water slabs with square fields, agreeing with measurement within 3% and 3 mm. 
The method was then applied to five prostate and six head-and-neck IMRT treatment courses (~1900 clinical images). Deviations between the predicted and measured images were quantified. The portal dose image prediction model developed in this thesis work has been shown to be accurate, and it was demonstrated to be able to verify patients' delivered radiation treatments.

  14. Absorbed dose evaluation of Auger electron-emitting radionuclides: impact of input decay spectra on dose point kernels and S-values

    NASA Astrophysics Data System (ADS)

    Falzone, Nadia; Lee, Boon Q.; Fernández-Varea, José M.; Kartsonaki, Christiana; Stuchbery, Andrew E.; Kibédi, Tibor; Vallis, Katherine A.

    2017-03-01

    The aim of this study was to investigate the impact of decay data provided by the newly developed stochastic atomic relaxation model BrIccEmis on dose point kernels (DPKs - radial dose distribution around a unit point source) and S-values (absorbed dose per unit cumulated activity) of 14 Auger electron (AE) emitting radionuclides, namely 67Ga, 80mBr, 89Zr, 90Nb, 99mTc, 111In, 117mSn, 119Sb, 123I, 124I, 125I, 135La, 195mPt and 201Tl. Radiation spectra were based on the nuclear decay data from the medical internal radiation dose (MIRD) RADTABS program and the BrIccEmis code, assuming both an isolated-atom and condensed-phase approach. DPKs were simulated with the PENELOPE Monte Carlo (MC) code using event-by-event electron and photon transport. S-values for concentric spherical cells of various sizes were derived from these DPKs using appropriate geometric reduction factors. The number of Auger and Coster-Kronig (CK) electrons and x-ray photons released per nuclear decay (yield) from MIRD-RADTABS were consistently higher than those calculated using BrIccEmis. DPKs for the electron spectra from BrIccEmis were considerably different from MIRD-RADTABS in the first few hundred nanometres from a point source where most of the Auger electrons are stopped. S-values were, however, not significantly impacted as the differences in DPKs in the sub-micrometre dimension were quickly diminished in larger dimensions. Overestimation in the total AE energy output by MIRD-RADTABS leads to higher predicted energy deposition by AE emitting radionuclides, especially in the immediate vicinity of the decaying radionuclides. This should be taken into account when MIRD-RADTABS data are used to simulate biological damage at nanoscale dimensions.

  15. A new concept of pencil beam dose calculation for 40-200 keV photons using analytical dose kernels.

    PubMed

    Bartzsch, Stefan; Oelfke, Uwe

    2013-11-01

    The advent of widespread kV cone-beam computed tomography in image-guided radiation therapy and special therapeutic applications of keV photons, e.g., in microbeam radiation therapy (MRT), require accurate and fast dose calculations for photon beams with energies between 40 and 200 keV. Multiple photon scattering originating from Compton scattering and the strong dependence of the photoelectric cross section on the atomic number of the interacting tissue render these dose calculations far more challenging than those established for corresponding MeV beams. This is why the analytical models of kV photon dose calculation developed so far fail to provide the required accuracy, and one has to rely on time-consuming Monte Carlo simulation techniques. In this paper, the authors introduce a novel analytical approach for kV photon dose calculations with an accuracy almost comparable to that of Monte Carlo simulations. First, analytical point dose and pencil beam kernels are derived for homogeneous media and compared to Monte Carlo simulations performed with the Geant4 toolkit. The dose contributions are systematically separated into contributions from the relevant orders of multiple photon scattering. Moreover, approximate scaling laws for the extension of the algorithm to inhomogeneous media are derived. The analytically derived dose kernels in water show excellent agreement with the Monte Carlo method: calculated values deviate less than 5% from Monte Carlo derived dose values for doses above 1% of the maximum dose. The analytical structure of the kernels allows adaptation to arbitrary materials and photon spectra in the given energy range of 40-200 keV. The presented analytical methods can be employed in a fast treatment planning system for MRT. In convolution-based algorithms, dose calculation times can be reduced to a few minutes.

  16. Experimental pencil beam kernels derivation for 3D dose calculation in flattening filter free modulated fields

    NASA Astrophysics Data System (ADS)

    Diego Azcona, Juan; Barbés, Benigno; Wang, Lilie; Burguete, Javier

    2016-01-01

    This paper presents a method to obtain the pencil-beam kernels that characterize a megavoltage photon beam generated in a flattening filter free (FFF) linear accelerator (linac) by deconvolution from experimental measurements at different depths. The formalism is applied to perform independent dose calculations in modulated fields. In our previous work, a formalism was developed for ideal flat fluences exiting the linac's head. That framework could not deal with spatially varying energy fluences, so any deviation from the ideal flat fluence was treated as a perturbation. The present work addresses the necessity of implementing an exact analysis in which any spatially varying fluence can be used, such as those encountered in FFF beams. A major improvement introduced here is to handle the actual fluence in the deconvolution procedure. We studied the uncertainties associated with the kernel derivation by this method. Several Kodak EDR2 radiographic films were irradiated with a 10 MV FFF photon beam from two linacs from different vendors, at depths of 5, 10, 15, and 20 cm in polystyrene (RW3 water-equivalent phantom, PTW Freiburg, Germany). The irradiation field was a 50 mm diameter circular field, collimated with a lead block. The 3D kernel for an FFF beam was obtained by deconvolution using the Hankel transform. A correction to the low-dose part of the kernel was performed to reproduce the experimental output factors accurately. Uncertainty in the kernel derivation procedure was estimated to be within 0.2%. Eighteen modulated fields used clinically at different treatment sites were irradiated at four measurement depths (fifty-four film measurements in total). Comparison through the gamma index to their corresponding calculated absolute dose distributions showed a number of passing points (3%, 3 mm) mostly above 99%. This new procedure is more reliable and robust than the previous one, and its ability to perform accurate independent dose calculations was demonstrated.
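The gamma-index comparison used above combines a dose tolerance with a distance-to-agreement tolerance; a simplified global 1D version (3%/3 mm) of the standard metric, not the authors' implementation:

```python
import numpy as np

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose,
             dose_tol=0.03, dist_tol=3.0):
    """Simplified global 1D gamma index: for each reference point, the
    minimum combined dose/distance disagreement over all evaluated
    points. Dose tolerance is a fraction of the reference maximum."""
    ref_pos, ref_dose = np.asarray(ref_pos, float), np.asarray(ref_dose, float)
    eval_pos, eval_dose = np.asarray(eval_pos, float), np.asarray(eval_dose, float)
    d_max = ref_dose.max()
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        dd = (eval_dose - rd) / (dose_tol * d_max)  # dose-difference term
        dr = (eval_pos - rp) / dist_tol             # distance term (mm)
        gammas.append(np.sqrt(dd**2 + dr**2).min())
    return np.array(gammas)
```

A point passes when its gamma value is at most 1; identical distributions give gamma 0 everywhere.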

  18. Triso coating development progress for uranium nitride kernels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jolly, Brian C.; Lindemer, Terrence; Terrani, Kurt A.

    2015-08-01

    In support of fully ceramic matrix (FCM) fuel development [1-2], coating development work is ongoing at the Oak Ridge National Laboratory (ORNL) to produce tri-structural isotropic (TRISO) coated fuel particles with UN kernels [3]. The nitride kernels are used to increase fissile density in these SiC-matrix fuel pellets, with details described elsewhere [4]. The advanced gas reactor (AGR) program at ORNL used fluidized bed chemical vapor deposition (FBCVD) techniques for TRISO coating of UCO (two-phase mixture of UO2 and UCx) kernels [5]. Similar techniques were employed for coating of the UN kernels; however, significant changes in processing conditions were required to maintain acceptable coating properties due to physical property and dimensional differences between the UCO and UN kernels (Table 1).

  19. New adaptive statistical iterative reconstruction ASiR-V: Assessment of noise performance in comparison to ASiR.

    PubMed

    De Marco, Paolo; Origgi, Daniela

    2018-03-01

    To assess the noise characteristics of the new adaptive statistical iterative reconstruction (ASiR-V) in comparison to ASiR. A water phantom was acquired with common clinical scanning parameters at five different levels of CTDIvol. Images were reconstructed with different kernels (STD, SOFT, and BONE), different IR levels (40%, 60%, and 100%), and different slice thicknesses (ST) (0.625 and 2.5 mm), both for ASiR-V and ASiR. Noise properties were investigated and the noise power spectrum (NPS) was evaluated. ASiR-V significantly reduced noise relative to FBP: noise reduction was in the range 23%-60% for the 0.625 mm ST and 12%-64% for the 2.5 mm ST. Above 2 mGy, noise reduction for ASiR-V showed no dependence on dose. Noise reduction for ASiR-V depends on ST, being greater for the STD and SOFT kernels at 2.5 mm. For the STD kernel, ASiR-V achieved greater noise reduction than ASiR at both STs. For the SOFT kernel, results vary with dose and ST, while for the BONE kernel ASiR-V shows less noise reduction. The NPS for the CT Revolution is dose dependent at lower doses. The NPS for ASiR-V and ASiR is similar, showing a shift toward lower frequencies as the IR level increases for the STD and SOFT kernels. The NPS differs between ASiR-V and ASiR with the BONE kernel. The NPS for ASiR-V appears to be ST dependent, shifting toward lower frequencies for the 2.5 mm ST. ASiR-V showed greater noise reduction than ASiR for the STD and SOFT kernels, while keeping the same NPS. For the BONE kernel, ASiR-V presents completely different behavior, with less noise reduction and a modified NPS. Noise properties of ASiR-V depend on the reconstruction slice thickness. These noise properties suggest the need for further measurements and efforts to establish new CT protocols to optimize clinical imaging. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
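The noise power spectrum evaluated above is typically estimated from mean-subtracted uniform-phantom ROIs; a minimal 2D sketch (normalization conventions vary between papers, so the scaling here is one common choice, not necessarily the study's):

```python
import numpy as np

def nps_2d(noise_rois, pixel_size):
    """2D noise power spectrum: ensemble average of |DFT|^2 of
    mean-subtracted ROIs, scaled by pixel area over ROI size."""
    nx, ny = noise_rois[0].shape
    acc = np.zeros((nx, ny))
    for roi in noise_rois:
        centred = roi - roi.mean()          # remove the DC component
        acc += np.abs(np.fft.fft2(centred)) ** 2
    return acc / len(noise_rois) * pixel_size**2 / (nx * ny)
```

A perfectly uniform ROI has zero noise power at every frequency, which makes a convenient sanity check.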

  20. PET/MRI of Hepatic 90Y Microsphere Deposition Determines Individual Tumor Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fowler, Kathryn J.; Maughan, Nichole M.; Laforest, Richard

    Purpose: The purpose of our study is to determine if there is a relationship between dose deposition measured by PET/MRI and individual lesion response to yttrium-90 (90Y) microsphere radioembolization. Materials and Methods: 26 patients undergoing lobar treatment with 90Y microspheres underwent PET/MRI within 66 h of treatment and had follow-up imaging available. Adequate visualization of tumor was available in 24 patients, and contours were drawn on simultaneously acquired PET/MRI data. Dose volume histograms (DVHs) were extracted from dose maps, which were generated using a voxelized dose kernel. Similar contours to capture dimensional and volumetric change of tumors were drawn on follow-up imaging. Response was analyzed using both RECIST and volumetric RECIST (vRECIST) criteria. Results: A total of 8 hepatocellular carcinoma (HCC), 4 neuroendocrine tumor (NET), and 9 colorectal metastases (CRC) patients, and 3 patients with other metastatic disease met inclusion criteria. Average dose was useful in predicting response between responders and non-responders for all lesion types and for CRC lesions alone using both response criteria (p < 0.05). D70 (minimum dose to 70% of volume) was also useful in predicting response when using vRECIST. No significant trend was seen in the other tumor types. For CRC lesions, an average dose of 29.8 Gy offered 76.9% sensitivity and 75.9% specificity for response. Conclusions: PET/MRI of 90Y microsphere distribution showed significantly higher DVH values for responders than non-responders in patients with CRC. DVH analysis of 90Y microsphere distribution following treatment may be an important predictor of response and could be used to guide future adaptive therapy trials.

  1. Fast dose kernel interpolation using Fourier transform with application to permanent prostate brachytherapy dosimetry.

    PubMed

    Liu, Derek; Sloboda, Ron S

    2014-05-01

    Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as Boyer's method. A FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
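The core of the Boyer-Mok FT approach extended above is that the total dose is a circular convolution of a seed-occupancy grid with the single-seed dose kernel, evaluated as a product in the Fourier domain; a minimal sketch (the fractional-shift interpolation described in the abstract is omitted):

```python
import numpy as np

def ft_dose(seed_grid, kernel):
    """Total dose as the circular convolution of a seed-occupancy grid
    with the single-seed dose kernel, via a Fourier-domain product.
    Cost is independent of the number of seeds."""
    return np.real(np.fft.ifft2(np.fft.fft2(seed_grid) * np.fft.fft2(kernel)))

# One seed at the grid origin reproduces the kernel itself
seeds = np.zeros((4, 4))
seeds[0, 0] = 1.0
kern = np.arange(16.0).reshape(4, 4)
dose = ft_dose(seeds, kern)
```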

  2. Low-energy electron dose-point kernel simulations using new physics models implemented in Geant4-DNA

    NASA Astrophysics Data System (ADS)

    Bordes, Julien; Incerti, Sébastien; Lampe, Nathanael; Bardiès, Manuel; Bordage, Marie-Claude

    2017-05-01

    When low-energy electrons, such as Auger electrons, interact with liquid water, they induce highly localized ionizing energy depositions over ranges comparable to cell diameters. Monte Carlo track structure (MCTS) codes are suitable tools for performing dosimetry at this level. One of the main MCTS codes, Geant4-DNA, is equipped with only two sets of cross section models for low-energy electron interactions in liquid water ("option 2" and its improved version, "option 4"). To provide Geant4-DNA users with new alternative physics models, a set of cross sections extracted from the CPA100 MCTS code has been added to Geant4-DNA. This new version is hereafter referred to as "Geant4-DNA-CPA100". In this study, "Geant4-DNA-CPA100" was used to calculate low-energy electron dose-point kernels (DPKs) between 1 keV and 200 keV. Such kernels represent the radial energy deposited by an isotropic point source, a parameter that is useful for dosimetry calculations in nuclear medicine. In order to assess the influence of different physics models on DPK calculations, DPKs were calculated using the existing Geant4-DNA models ("option 2" and "option 4"), the newly integrated CPA100 models, and the PENELOPE Monte Carlo code used in step-by-step mode for monoenergetic electrons. Additionally, a comparison was performed of two sets of DPKs simulated with "Geant4-DNA-CPA100": the first set using Geant4's default settings, and the second using the original CPA100 code's default settings. A maximum difference of 9.4% was found between the Geant4-DNA-CPA100 and PENELOPE DPKs. Between the two existing Geant4-DNA models, slight differences were observed between 1 keV and 10 keV. It was highlighted that the DPKs simulated with the two existing Geant4-DNA models were always broader than those generated with "Geant4-DNA-CPA100". The discrepancies observed between the DPKs generated using Geant4-DNA's existing models and "Geant4-DNA-CPA100" were caused solely by their different cross sections. The different scoring and interpolation methods used in CPA100 and Geant4 to calculate DPKs showed differences close to 3.0% near the source.
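DPK scoring itself amounts to binning Monte Carlo energy deposits into spherical shells around the point source and dividing by shell volume; a minimal sketch (units and the normalization to absorbed dose per decay are left generic):

```python
import numpy as np

def dose_point_kernel(radii, energies, n_bins, r_max):
    """Bin energy deposits by radial distance from a point source and
    divide by the spherical-shell volume of each bin."""
    radii = np.asarray(radii, float)
    energies = np.asarray(energies, float)
    edges = np.linspace(0.0, r_max, n_bins + 1)
    e_shell, _ = np.histogram(radii, bins=edges, weights=energies)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    return edges, e_shell / shell_vol
```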

  3. Suitability of point kernel dose calculation techniques in brachytherapy treatment planning

    PubMed Central

    Lakshminarayanan, Thilagam; Subbaiah, K. V.; Thayalan, K.; Kannan, S. E.

    2010-01-01

    Brachytherapy treatment planning systems (TPS) are necessary to estimate the dose to the target volume and organs at risk (OAR). A TPS is always recommended to account for the effect of the tissue, applicator, and shielding material heterogeneities that exist in applicators. However, most brachytherapy TPS software packages estimate the absorbed dose at a point from the contributions of the individual sources and the source distribution alone, neglecting the dose perturbations arising from the applicator design and construction. This leaves some degree of uncertainty in dose rate estimates under realistic clinical conditions. In this regard, an attempt is made to explore the suitability of point kernels for brachytherapy dose rate calculations and to develop a new interactive brachytherapy package, named BrachyTPS, to suit clinical conditions. BrachyTPS is an interactive point-kernel code package developed to perform independent dose rate calculations that take the effect of these heterogeneities into account, using the two-region buildup factors proposed by Kalos. The primary aim of this study is to validate the developed point-kernel code package, integrated with treatment planning computational systems, against Monte Carlo (MC) results. In the present work, three brachytherapy applicators commonly used in the treatment of uterine cervical carcinoma, namely (i) the Board of Radiation Isotope and Technology (BRIT) low dose rate (LDR) applicator, (ii) the Fletcher Green type LDR applicator, and (iii) the Fletcher-Williamson high dose rate (HDR) applicator, are studied to test the accuracy of the software. Dose rates computed using the developed code are compared with the relevant results of the MC simulations. Further, attempts are also made to study the dose rate distribution around the commercially available shielded vaginal applicator set (Nucletron).
The percentage deviations of BrachyTPS-computed dose rate values from the MC results are within ±5.5% for the BRIT LDR applicator, vary from 2.6% to 5.1% for the Fletcher Green type LDR applicator, and are up to -4.7% for the Fletcher-Williamson HDR applicator. The isodose distribution plots also show good agreement with previously published results. The isodose distributions around the shielded vaginal cylinder computed using the BrachyTPS code show better agreement with MC results (deviations below 2%) in the unshielded region than in the shielded region, where deviations of up to 5% are observed. The present study implies that accurate and fast validation of complicated treatment planning calculations is possible with the point-kernel code package. PMID:20589118

  4. A dose assessment method for arbitrary geometries with virtual reality in the nuclear facilities decommissioning

    NASA Astrophysics Data System (ADS)

    Chao, Nan; Liu, Yong-kuo; Xia, Hong; Ayodeji, Abiodun; Bai, Lu

    2018-03-01

    During the decommissioning of nuclear facilities, a large number of cutting and demolition activities are performed, which results in frequent changes to the structure and produces many irregular objects. In order to assess dose rates during the cutting and demolition process, a flexible dose assessment method for arbitrary geometries and radiation sources was proposed based on virtual reality technology and the Point-Kernel method. The initial geometry is designed with three-dimensional computer-aided design tools. An approximate model is built automatically during geometric modeling via three procedures, namely space division, rough modeling of the body, and fine modeling of the surface, in combination with the collision detection of virtual reality technology. Point kernels are then generated by sampling within the approximate model, and once the material and radiometric attributes are supplied, dose rates can be calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression fitting formula. The effectiveness and accuracy of the proposed method were verified through simulations of different geometries, and the dose rate results were compared with those derived from the CIDEC code, the MCNP code and experimental measurements.
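The Point-Kernel evaluation referred to above combines inverse-square geometric spreading, exponential attenuation, and a buildup factor; a minimal sketch of the kernel for one sampled source point (the buildup factor B would itself come from a Geometric-Progression fit, omitted here):

```python
import math

def point_kernel_dose_rate(source_strength, mu, r, buildup=1.0):
    """Point-kernel shape: inverse-square law with exponential
    attenuation exp(-mu*r), times a buildup factor B for scatter.
    Units are left generic; a flux-to-dose conversion would follow."""
    return source_strength * buildup * math.exp(-mu * r) / (4.0 * math.pi * r**2)
```

The total dose rate at a detector point is then the sum of this kernel over all point kernels sampled inside the approximate model.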

  5. Impact of deep learning on the normalization of reconstruction kernel effects in imaging biomarker quantification: a pilot study in CT emphysema

    NASA Astrophysics Data System (ADS)

    Jin, Hyeongmin; Heo, Changyong; Kim, Jong Hyo

    2018-02-01

    Differing reconstruction kernels are known to strongly affect the variability of imaging biomarkers and thus remain as a barrier in translating the computer aided quantification techniques into clinical practice. This study presents a deep learning application to CT kernel conversion which converts a CT image of sharp kernel to that of standard kernel and evaluates its impact on variability reduction of a pulmonary imaging biomarker, the emphysema index (EI). Forty cases of low-dose chest CT exams obtained with 120kVp, 40mAs, 1mm thickness, of 2 reconstruction kernels (B30f, B50f) were selected from the low dose lung cancer screening database of our institution. A Fully convolutional network was implemented with Keras deep learning library. The model consisted of symmetric layers to capture the context and fine structure characteristics of CT images from the standard and sharp reconstruction kernels. Pairs of the full-resolution CT data set were fed to input and output nodes to train the convolutional network to learn the appropriate filter kernels for converting the CT images of sharp kernel to standard kernel with a criterion of measuring the mean squared error between the input and target images. EIs (RA950 and Perc15) were measured with a software package (ImagePrism Pulmo, Seoul, South Korea) and compared for the data sets of B50f, B30f, and the converted B50f. The effect of kernel conversion was evaluated with the mean and standard deviation of pair-wise differences in EI. The population mean of RA950 was 27.65 +/- 7.28% for B50f data set, 10.82 +/- 6.71% for the B30f data set, and 8.87 +/- 6.20% for the converted B50f data set. The mean of pair-wise absolute differences in RA950 between B30f and B50f is reduced from 16.83% to 1.95% using kernel conversion. Our study demonstrates the feasibility of applying the deep learning technique for CT kernel conversion and reducing the kernel-induced variability of EI quantification. 
The deep learning model has the potential to improve the reliability of imaging biomarkers, especially in evaluating longitudinal changes of EI when the patient CT scans were performed with different kernels.
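The abstract's converter is a Keras fully convolutional network trained on paired images. As a much-simplified, hypothetical illustration of the same idea, one can learn a fixed linear convolution filter that maps sharp-kernel data back to standard-kernel data by least squares; all signals, kernel shapes, and tap counts below are synthetic assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
standard = rng.normal(size=2048)                     # stand-in for standard-kernel data
sharpen = np.array([-0.25, 1.5, -0.25])              # toy "sharp kernel" response
sharp = np.convolve(standard, sharpen, mode="same")  # stand-in for sharp-kernel data

# Fit a 5-tap conversion filter by least squares: each row of X holds the
# sharp-signal neighbourhood of one sample (a linear analogue of the FCN).
taps = 5
pad = np.pad(sharp, taps // 2)
X = np.stack([pad[i:i + len(sharp)] for i in range(taps)], axis=1)
w, *_ = np.linalg.lstsq(X, standard, rcond=None)

converted = X @ w
mse_before = np.mean((sharp - standard) ** 2)
mse_after = np.mean((converted - standard) ** 2)
```

The learned filter approximately inverts the sharpening response, so the mean squared error against the standard-kernel signal drops by orders of magnitude; the real network additionally learns non-linear, spatially contextual corrections.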

  6. Upgrade to iterative image reconstruction (IR) in MDCT imaging: a clinical study for detailed parameter optimization beyond vendor recommendations using the adaptive statistical iterative reconstruction environment (ASIR) Part2: The chest.

    PubMed

    Mueck, F G; Michael, L; Deak, Z; Scherr, M K; Maxien, D; Geyer, L L; Reiser, M; Wirth, S

    2013-07-01

    To compare the image quality in dose-reduced 64-row CT of the chest at different levels of adaptive statistical iterative reconstruction (ASIR) with full-dose baseline examinations reconstructed solely with filtered back projection (FBP) in a realistic upgrade scenario. A waiver of consent was granted by the institutional review board (IRB). The noise index (NI) relates to the standard deviation of Hounsfield units in a water phantom. Baseline exams of the chest (NI = 29; LightSpeed VCT XT, GE Healthcare) were intra-individually compared to follow-up studies on a CT with ASIR after system upgrade (NI = 45; Discovery HD750, GE Healthcare), n = 46. Images were calculated in slice and volume mode with ASIR levels of 0-100 % in the standard and lung kernels. Three radiologists independently compared the image quality to the corresponding full-dose baseline examinations (-2: diagnostically inferior, -1: inferior, 0: equal, +1: superior, +2: diagnostically superior). Statistical analysis used Wilcoxon's test, the Mann-Whitney U test and the intraclass correlation coefficient (ICC). The mean CTDIvol decreased by 53 % from the FBP baseline to 8.0 ± 2.3 mGy for ASIR follow-ups; p < 0.001. The ICC was 0.70. Regarding the standard kernel, the image quality in dose-reduced studies was comparable to the baseline at ASIR 70 % in volume mode (-0.07 ± 0.29, p = 0.29). Concerning the lung kernel, every ASIR level outperformed the baseline image quality (p < 0.001), with ASIR 30 % rated best (slice: 0.70 ± 0.6, volume: 0.74 ± 0.61). The vendor's recommendation of 50 % ASIR is fair. Specifically, ASIR 70 % in volume mode for the standard kernel and ASIR 30 % for the lung kernel performed best, allowing for a dose reduction of approximately 50 %. © Georg Thieme Verlag KG Stuttgart · New York.

  7. MO-G-17A-06: Kernel Based Dosimetry for 90Y Microsphere Liver Therapy Using 90Y Bremsstrahlung SPECT/CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikell, J; Siman, W; Kappadath, S

    2014-06-15

    Purpose: 90Y microsphere therapy in liver presents a situation where beta transport is dominant and the tissue is relatively homogeneous. We compare voxel-based absorbed doses from a 90Y kernel to Monte Carlo (MC) using quantitative 90Y bremsstrahlung SPECT/CT as the source distribution. Methods: Liver, normal liver, and tumors were delineated by an interventional radiologist using contrast-enhanced CT registered with 90Y SPECT/CT scans for 14 therapies. The right lung was segmented via region growing. The kernel was generated with 1.04 g/cc soft tissue for 4.8 mm voxels matching the SPECT. MC simulation materials included air, lung, soft tissue, and bone with varying densities. We report the percent difference between kernel and MC (%Δ(K,MC)) for mean absorbed dose, D70, and V20Gy in total liver, normal liver, tumors, and right lung. We also report %Δ(K,MC) for heterogeneity metrics: coefficient of variation (COV) and D10/D90. The impact of spatial resolution (0, 10, 20 mm FWHM) and lung shunt fraction (LSF) (1, 5, 10, 20%) on the accuracy of MC and kernel doses near the liver-lung interface was modeled in 1D. We report the distance from the interface where errors become <10% of unblurred MC as d10(side of interface, dose calculation, FWHM blurring, LSF). Results: The %Δ(K,MC) for mean, D70, and V20Gy in tumor and liver was <7%, while right lung differences varied from 60-90%. The %Δ(K,MC) for COV was <4.8% for tumor and liver and <54% for the right lung. The %Δ(K,MC) for D10/D90 was <5% for 22/23 tumors. d10(liver,MC,10,1-20) were <9 mm and d10(liver,MC,20,1-20) were <15 mm; both agreed within 3 mm with the kernel. d10(lung,MC,10,20), d10(lung,MC,10,1), d10(lung,MC,20,20), and d10(lung,MC,20,1) were 6, 25, 15, and 34 mm, respectively. Kernel calculations on blurred distributions in lung had errors >10%. Conclusions: Liver and tumor voxel doses with the 90Y kernel and MC agree within 7%. Large differences exist between the two methods in the right lung. 
Research reported in this publication was supported by the National Cancer Institute of the National Institutes of Health under Award Number R01CA138986. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
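The voxel-kernel dose calculation that the abstract compares against MC can be sketched as a convolution of the SPECT-derived activity map with a normalised dose kernel. The following is a minimal sketch with a toy activity map and a hypothetical exponential kernel shape (not the actual 90Y kernel):

```python
import numpy as np

# Toy 3D activity map (arbitrary units) on a 4.8 mm grid, as in the abstract.
rng = np.random.default_rng(1)
activity = np.zeros((32, 32, 32))
activity[12:20, 12:20, 12:20] = rng.uniform(1.0, 2.0, (8, 8, 8))  # "tumour" uptake

# Hypothetical isotropic dose kernel: sharply peaked at the source voxel
# (beta range of a few voxels), normalised to unit integral so that the
# total deposited dose equals the total activity-weighted energy.
r = np.linalg.norm(np.mgrid[-3:4, -3:4, -3:4], axis=0)
kernel = np.exp(-2.0 * r)
kernel /= kernel.sum()

# Voxel dose = activity convolved with the kernel (zero-padded FFT convolution).
shape = [a + k - 1 for a, k in zip(activity.shape, kernel.shape)]
D = np.fft.irfftn(np.fft.rfftn(activity, shape) * np.fft.rfftn(kernel, shape), shape)
crop = tuple(slice(3, 3 + n) for n in activity.shape)
dose = D[crop]
```

Because the kernel is density-scaled for uniform soft tissue, this kind of convolution is fast but cannot model the liver-lung interface, which is where the abstract reports the large kernel-versus-MC discrepancies.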

  8. The effects of food irradiation on quality of pine nut kernels

    NASA Astrophysics Data System (ADS)

    Gölge, Evren; Ova, Gülden

    2008-03-01

    Pine nuts (Pinus pinae) underwent a gamma irradiation process at doses of 0.5, 1.0, 3.0, and 5.0 kGy. Changes in chemical, physical, and sensory attributes were observed over the following 3 months of storage. The data obtained from the experiments showed that the peroxide values of the pine nut kernels increased in proportion to the dose. On the contrary, the irradiation process had no effect on physical quality attributes such as texture and color, fatty acid composition, or sensory attributes.

  9. SU-G-TeP3-13: The Role of Nanoscale Energy Deposition in the Development of Gold Nanoparticle-Enhanced Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirkby, C; The University of Calgary, Calgary, AB; Koger, B

    2016-06-15

    Purpose: Gold nanoparticles (GNPs) can enhance radiotherapy effects. The high photoelectric cross section of gold relative to tissue, particularly at lower energies, leads to localized dose enhancement. However, in a clinical context, photon energies must also be sufficient to reach a target volume at a given depth. These properties must be balanced to optimize such a therapy. Given that nanoscale energy deposition patterns around GNPs play a role in determining biological outcomes, in this work we seek to establish their role in this optimization process. Methods: The PENELOPE Monte Carlo code was used to generate spherical dose deposition kernels in 1000 nm diameter spheres around 50 nm diameter GNPs in response to monoenergetic photons incident on the GNP. Induced “lesions” were estimated by either a local effect model (LEM) or a mean dose model (MDM). The ratio of these estimates was examined for a range of photon energies (10 keV to 2 MeV) and for three sets of linear-quadratic parameters. Results: The models produce distinct differences in expected lesion values; the lower the alpha-beta ratio, the greater the difference. The ratio of expected lesion values remained constant within 5% for energies of 40 keV and above across all parameter sets, and rose to a difference of 35% at lower energies only for the lowest alpha-beta ratio. Conclusion: Consistent with other work, these calculations suggest that nanoscale energy deposition patterns matter in predicting biological response to GNP-enhanced radiotherapy. However, the ratio of expected lesions between the different models is largely independent of energy, indicating that GNP-enhanced radiotherapy scenarios can be optimized in photon energy without consideration of the nanoscale patterns. Special attention may be warranted for energies of 20 keV or below and low alpha-beta ratios.
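The LEM-versus-MDM comparison can be illustrated with a toy lethal-event calculation: the local effect model averages the linear-quadratic response over the local dose distribution, whereas the mean dose model applies it to the volume-averaged dose. The dose values and LQ parameters below are illustrative assumptions, not the paper's kernels:

```python
import numpy as np

def lesions_lem(dose, alpha, beta):
    # Local effect model: LQ lethal-event density evaluated voxel-by-voxel.
    return np.mean(alpha * dose + beta * dose ** 2)

def lesions_mdm(dose, alpha, beta):
    # Mean dose model: LQ response of the volume-averaged dose.
    d = dose.mean()
    return alpha * d + beta * d ** 2

# Hypothetical nanoscale dose distribution around a GNP: a uniform background
# with a few strongly enhanced voxels near the particle surface (Gy, illustrative).
dose = np.full(1000, 2.0)
dose[:10] = 100.0

alpha = 0.2                                   # Gy^-1, illustrative
ratios = {}
for ab in (2.0, 10.0):                        # alpha/beta ratios in Gy
    beta = alpha / ab                         # Gy^-2
    ratios[ab] = lesions_lem(dose, alpha, beta) / lesions_mdm(dose, alpha, beta)
```

Because the quadratic term weights hot spots disproportionately, the LEM/MDM ratio grows as the alpha/beta ratio falls, consistent with the abstract's observation that the models differ most for the lowest alpha-beta ratio.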

  10. SU-F-SPS-06: Implementation of a Back-Projection Algorithm for 2D in Vivo Dosimetry with An EPID System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez Reyes, B; Rodriguez Perez, E; Sosa Aquino, M

    Purpose: To implement a back-projection algorithm for 2D dose reconstruction for in vivo dosimetry in radiation therapy using an Electronic Portal Imaging Device (EPID) based on amorphous silicon. Methods: An EPID system was used to determine the dose-response function, pixel sensitivity map, exponential scatter kernels, and beam hardening correction for the back-projection algorithm. All measurements were done with a 6 MV beam. A 2D dose reconstruction for an irradiated water phantom (30×30×30 cm^3) was done to verify the algorithm implementation. A gamma index evaluation between the 2D reconstructed dose and that calculated with a treatment planning system (TPS) was done. Results: A linear fit was found for the dose-response function. The pixel sensitivity map has a radial symmetry and was calculated with a profile of the pixel sensitivity variation. The parameters for the scatter kernels were determined only for a 6 MV beam. The primary dose was estimated by applying the scatter kernel within the EPID and the scatter kernel within the patient. The beam hardening coefficient is σBH = 3.788×10^-4 cm^2 and the effective linear attenuation coefficient is µAC = 0.06084 cm^-1. 95% of the evaluated points had γ values no larger than unity, with gamma criteria of ΔD = 3% and Δd = 3 mm, within the 50% isodose surface. Conclusion: The use of EPID systems proved to be a fast tool for in vivo dosimetry, but the implementation is more complex than that for pre-treatment dose verification; therefore, a simpler method should be investigated. The accuracy of this method should be improved by modifying the algorithm in order to compare lower isodose curves.
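The gamma index evaluation used above (ΔD = 3%, Δd = 3 mm) can be sketched in 1D: for each reference point, take the minimum over all evaluated points of the combined dose-difference/distance metric. The dose profiles below are synthetic:

```python
import numpy as np

def gamma_index(ref, eval_, dx, dose_tol=0.03, dist_tol=3.0):
    """1D global gamma: for each reference point, minimise the combined
    dose-difference/distance-to-agreement metric over all evaluated points."""
    x = np.arange(len(ref)) * dx                          # positions in mm
    dd = (eval_[None, :] - ref[:, None]) / (dose_tol * ref.max())
    dr = (x[None, :] - x[:, None]) / dist_tol
    return np.sqrt(dd ** 2 + dr ** 2).min(axis=1)

# Toy profiles: evaluated dose shifted by 1 mm and scaled by 1% relative to
# the reference -- well within 3%/3 mm, so gamma should pass everywhere.
x = np.arange(100) * 1.0                                  # 1 mm grid
ref = np.exp(-((x - 50.0) / 15.0) ** 2)
ev = 1.01 * np.exp(-((x - 51.0) / 15.0) ** 2)
gamma = gamma_index(ref, ev, dx=1.0)
pass_rate = np.mean(gamma <= 1.0)
```

A clinical implementation would additionally restrict the analysis to a dose threshold (the abstract uses the 50% isodose surface) and interpolate the evaluated distribution between grid points.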

  11. Hybrid dose calculation: a dose calculation algorithm for microbeam radiation therapy

    NASA Astrophysics Data System (ADS)

    Donzelli, Mattia; Bräuer-Krisch, Elke; Oelfke, Uwe; Wilkens, Jan J.; Bartzsch, Stefan

    2018-02-01

    Microbeam radiation therapy (MRT) is still a preclinical approach in radiation oncology that uses planar, micrometre-wide beamlets with extremely high peak doses, separated by a few hundred micrometre wide low-dose regions. Abundant preclinical evidence demonstrates that MRT spares normal tissue more effectively than conventional radiation therapy, at equivalent tumour control. In order to launch the first clinical trials, accurate and efficient dose calculation methods are an essential prerequisite. In this work a hybrid dose calculation approach is presented that is based on a combination of Monte Carlo and kernel-based dose calculation. In various examples the performance of the algorithm is compared to purely Monte Carlo and purely kernel-based dose calculations. The accuracy of the developed algorithm is comparable to conventional pure Monte Carlo calculations. In particular for inhomogeneous materials, the hybrid dose calculation algorithm outperforms purely convolution-based dose calculation approaches. It is demonstrated that the hybrid algorithm can efficiently calculate even complicated pencil beam and cross-firing beam geometries. The required calculation times are substantially lower than for pure Monte Carlo calculations.

  12. Modulation of antioxidant potential in liver of mice by kernel oil of cashew nut (Anacardium occidentale) and its lack of tumour promoting ability in DMBA induced skin papillomagenesis.

    PubMed

    Singh, Bimala; Kale, R K; Rao, A R

    2004-04-01

    Cashew nut shell oil has been reported to possess tumour promoting properties. Therefore, an attempt was made to study the modulatory effect of cashew nut (Anacardium occidentale) kernel oil on the antioxidant potential in the liver of Swiss albino mice, and also to see whether it has tumour promoting ability like the shell oil. The animals were treated orally with two doses (50 and 100 microl/animal/day) of cashew nut kernel oil for 10 days. The kernel oil was found to enhance the specific activities of SOD, catalase, GST and methylglyoxalase I, and the levels of GSH. These results suggested that cashew nut kernel oil has the ability to increase the antioxidant status of animals. The decreased level of lipid peroxidation supported this possibility. The tumour promoting property of the kernel oil was also examined, and it was found that cashew nut kernel oil did not exhibit any carcinogenic activity on its own.

  13. Fred: a GPU-accelerated fast-Monte Carlo code for rapid treatment plan recalculation in ion beam therapy

    NASA Astrophysics Data System (ADS)

    Schiavi, A.; Senzacqua, M.; Pioli, S.; Mairani, A.; Magro, G.; Molinelli, S.; Ciocca, M.; Battistoni, G.; Patera, V.

    2017-09-01

    Ion beam therapy is a rapidly growing technique for tumor radiation therapy. Ions allow for a high dose deposition in the tumor region, while sparing the surrounding healthy tissue. For this reason, the highest possible accuracy in the calculation of dose and its spatial distribution is required in treatment planning. On one hand, commonly used treatment planning software solutions adopt a simplified beam-body interaction model by remapping pre-calculated dose distributions into a 3D water-equivalent representation of the patient morphology. On the other hand, Monte Carlo (MC) simulations, which explicitly take into account all the details in the interaction of particles with human tissues, are considered to be the most reliable tool to address the complexity of mixed field irradiation in a heterogeneous environment. However, full MC calculations are not routinely used in clinical practice because they typically demand substantial computational resources. Therefore MC simulations are usually only used to check treatment plans for a restricted number of difficult cases. The advent of general-purpose programming GPU cards prompted the development of trimmed-down MC-based dose engines which can significantly reduce the time needed to recalculate a treatment plan with respect to standard MC codes in CPU hardware. In this work, we report on the development of fred, a new MC simulation platform for treatment planning in ion beam therapy. The code can transport particles through a 3D voxel grid using a class II MC algorithm. Both primary and secondary particles are tracked and their energy deposition is scored along the trajectory. Effective models for particle-medium interaction have been implemented, balancing accuracy in dose deposition with computational cost. 
Currently, the most refined module is the transport of proton beams in water: single pencil-beam dose-depth distributions obtained with fred agree with those produced by standard MC codes within 1-2% of the Bragg peak in the therapeutic energy range. A comparison with measurements taken at the CNAO treatment center shows that the lateral dose tails are reproduced within 2% in the field size factor test up to 20 cm. The tracing kernel can run on GPU hardware, achieving 10 million primaries per second on a single card. This performance allows one to recalculate a proton treatment plan with 1% of the total particles in just a few minutes.

  14. SU-F-J-133: Adaptive Radiation Therapy with a Four-Dimensional Dose Calculation Algorithm That Optimizes Dose Distribution Considering Breathing Motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, I; Algan, O; Ahmad, S

    Purpose: To model patient motion and produce four-dimensional (4D) optimized dose distributions that account for motion artifacts in the dose calculation during the treatment planning process. Methods: An algorithm for dose calculation was developed in which patient motion is considered at the treatment planning stage. First, optimal dose distributions are calculated for the stationary target volume, where the dose distributions are optimized considering intensity-modulated radiation therapy (IMRT). Second, a convolution kernel is produced from the best-fitting curve that matches the motion trajectory of the patient. Third, the motion kernel is deconvolved with the initial dose distribution optimized for the stationary target to produce a dose distribution that is optimized in four dimensions. This algorithm was tested against measured doses using a mobile phantom that moves with controlled motion patterns. Results: A motion-optimized dose distribution is obtained from the initial dose distribution of the stationary target by deconvolution with the motion kernel of the mobile target. This motion-optimized dose distribution is equivalent to that optimized for the stationary target using IMRT. The motion-optimized and measured dose distributions were tested with the gamma index, with a passing rate of >95% considering 3% dose difference and 3 mm distance-to-agreement. If the dose delivery per beam takes place over several respiratory cycles, then the spread-out of the dose distributions depends only on the motion amplitude and is not affected by motion frequency and phase. This algorithm is limited to motion amplitudes that are smaller than the length of the target along the direction of motion. Conclusion: An algorithm was developed to optimize dose in 4D. Besides IMRT, which provides optimal dose coverage for a stationary target, it extends dose optimization to 4D by considering target motion. 
This algorithm provides an alternative to motion management techniques such as beam gating or breath-holding and has potential applications in adaptive radiation therapy.
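The deconvolution step described above can be sketched in 1D: delivery under motion blurs the dose by the motion kernel, so pre-compensating the plan by regularised deconvolution makes the delivered dose approximately match the static plan. The Gaussian motion kernel and box-shaped target dose below are illustrative assumptions (the abstract derives the kernel from the fitted motion trajectory):

```python
import numpy as np

n = 256
x = np.arange(n)
static_plan = ((x >= 100) & (x < 156)).astype(float)   # idealised 1D target dose

# Hypothetical motion kernel: Gaussian probability density of target
# displacement accumulated over many breathing cycles, sigma = 3 voxels.
sigma = 3.0
k = np.exp(-0.5 * ((np.arange(n) - n // 2) / sigma) ** 2)
k /= k.sum()
K = np.fft.rfft(np.roll(k, -n // 2))                   # kernel centred at index 0

# Regularised (Wiener-style) deconvolution: pre-compensate the plan so that
# blurring by the motion kernel approximately restores the static plan.
eps = 1e-3
P = np.fft.rfft(static_plan)
compensated = np.fft.irfft(P * np.conj(K) / (np.abs(K) ** 2 + eps), n)
delivered = np.fft.irfft(np.fft.rfft(compensated) * K, n)

err_comp = np.linalg.norm(delivered - static_plan)     # with pre-compensation
err_unc = np.linalg.norm(np.fft.irfft(P * K, n) - static_plan)  # without
```

The regularisation term eps is needed because motion blurring suppresses high spatial frequencies, so a naive inverse filter would amplify them without bound; this mirrors the abstract's restriction to motion amplitudes smaller than the target length.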

  15. Patient-specific Monte Carlo-based dose-kernel approach for inverse planning in afterloading brachytherapy.

    PubMed

    D'Amours, Michel; Pouliot, Jean; Dagnault, Anne; Verhaegen, Frank; Beaulieu, Luc

    2011-12-01

    Brachytherapy planning software relies on the Task Group report 43 dosimetry formalism. This formalism, based on a water approximation, neglects various heterogeneous materials present during treatment. Various studies have suggested that these heterogeneities should be taken into account to improve the treatment quality. The present study sought to demonstrate the feasibility of incorporating Monte Carlo (MC) dosimetry within an inverse planning algorithm to improve the dose conformity and increase the treatment quality. The method was based on precalculated dose kernels in full patient geometries, representing the dose distribution of a brachytherapy source at a single dwell position using MC simulations and the Geant4 toolkit. These dose kernels are used by the inverse planning by simulated annealing tool to produce a fast MC-based plan. A test was performed for an interstitial brachytherapy breast treatment using two different high-dose-rate brachytherapy sources: the microSelectron iridium-192 source and the electronic brachytherapy source Axxent operating at 50 kVp. A research version of the inverse planning by simulated annealing algorithm was combined with MC to provide a method to fully account for the heterogeneities in dose optimization, using the MC method. The effect of the water approximation was found to depend on photon energy, with greater dose attenuation for the lower energies of the Axxent source compared with iridium-192. For the latter, an underdosage of 5.1% for the dose received by 90% of the clinical target volume was found. A new method to optimize afterloading brachytherapy plans that uses MC dosimetric information was developed. Including computed tomography-based information in MC dosimetry in the inverse planning process was shown to take into account the full range of scatter and heterogeneity conditions. This led to significant dose differences compared with the Task Group report 43 approach for the Axxent source. 
Copyright © 2011 Elsevier Inc. All rights reserved.
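The inverse-planning step, optimising dwell times over precalculated per-dwell-position dose kernels with simulated annealing, can be sketched with random stand-in kernels; the real kernels come from Geant4 MC simulations in the full patient geometry, and all numbers below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical precomputed MC dose kernels: dose to each of 200 voxels per
# unit dwell time at each of 12 dwell positions (arbitrary units).
kernels = rng.uniform(0.0, 1.0, (12, 200)) ** 3
target = np.full(200, 10.0)                           # prescribed voxel doses

def cost(t):
    # Quadratic deviation of the composite dose from the prescription.
    return np.sum((kernels.T @ t - target) ** 2)

# Plain simulated annealing over non-negative dwell times.
t = np.ones(12)
best, best_cost = t.copy(), cost(t)
temp = 1.0
for step in range(5000):
    trial = np.clip(t + rng.normal(0.0, 0.5, 12), 0.0, None)
    dc = cost(trial) - cost(t)
    if dc < 0 or rng.random() < np.exp(-dc / max(temp, 1e-9)):
        t = trial
        if cost(t) < best_cost:
            best, best_cost = t.copy(), cost(t)
    temp *= 0.999                                      # geometric cooling
```

Because the dose kernels are precomputed once per dwell position, each annealing step is just a matrix-vector product, which is what makes MC-based inverse planning tractable.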

  16. MO-FG-CAMPUS-TeP1-05: Rapid and Efficient 3D Dosimetry for End-To-End Patient-Specific QA of Rotational SBRT Deliveries Using a High-Resolution EPID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Han, B; Xing, L

    2016-06-15

    Purpose: EPID-based patient-specific quality assurance provides verification of the planning setup and delivery process that phantomless QA and log-file-based virtual dosimetry methods cannot achieve. We present a method for EPID-based QA utilizing spatially-variant EPID response kernels that allows for direct calculation of the entrance fluence and 3D phantom dose. Methods: An EPID dosimetry system was utilized for 3D dose reconstruction in a cylindrical phantom for the purposes of end-to-end QA. Monte Carlo (MC) methods were used to generate pixel-specific point-spread functions (PSFs) characterizing the spatially non-uniform EPID portal response in the presence of phantom scatter. The spatially-variant PSFs were decomposed into spatially-invariant basis PSFs, with the symmetric central-axis kernel as the primary basis kernel and off-axis kernels representing orthogonal perturbations in pixel-space. This compact and accurate characterization enables the use of a modified Richardson-Lucy deconvolution algorithm to directly reconstruct entrance fluence from EPID images without iterative scatter subtraction. High-resolution phantom dose kernels were co-generated in MC with the PSFs, enabling direct recalculation of the resulting phantom dose by rapid forward convolution once the entrance fluence was calculated. A Delta4 QA phantom was used to validate the dose reconstructed in this approach. Results: The spatially-invariant representation of the EPID response accurately reproduced the entrance fluence with >99.5% fidelity with a simultaneous reduction of >60% in computational overhead. 3D dose for 10^6 voxels was reconstructed for the entire phantom geometry. A 3D global gamma analysis demonstrated a >95% pass rate at 3%/3 mm. Conclusion: Our approach demonstrates the capabilities of an EPID-based end-to-end QA methodology that is more efficient than traditional EPID dosimetry methods. 
Displacing the point of measurement external to the QA phantom reduces the necessary complexity of the phantom itself while offering a method that is highly scalable and inherently generalizable to rotational and trajectory-based deliveries. This research was partially supported by Varian.
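A minimal sketch of the Richardson-Lucy deconvolution used to recover entrance fluence, here in 1D with a single spatially-invariant Gaussian PSF rather than the paper's spatially-variant basis PSFs (all shapes and widths below are synthetic):

```python
import numpy as np

def richardson_lucy(image, psf, iters=50):
    """Richardson-Lucy deconvolution (1D, circular convolution for brevity)."""
    psf_hat = np.fft.rfft(psf)
    conv = lambda f, H: np.fft.irfft(np.fft.rfft(f) * H, len(f))
    est = np.full_like(image, image.mean())            # flat positive start
    for _ in range(iters):
        blurred = conv(est, psf_hat)
        ratio = image / np.maximum(blurred, 1e-12)
        est *= conv(ratio, np.conj(psf_hat))           # correlate with the PSF
    return est

# Toy entrance fluence: a flat field with sharp edges, blurred by a
# hypothetical EPID response PSF to form the measured portal image.
n = 128
fluence = np.zeros(n)
fluence[40:88] = 1.0
psf = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)
psf = np.roll(psf / psf.sum(), -n // 2)                # centred at index 0
epid_image = np.fft.irfft(np.fft.rfft(fluence) * np.fft.rfft(psf), n)

restored = richardson_lucy(epid_image, psf)
```

The multiplicative update keeps the estimate non-negative, which is why Richardson-Lucy is a natural fit for fluence reconstruction; the paper's modification additionally handles the spatially-variant response via the basis-kernel decomposition.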

  17. Metabolites Identified during Varied Doses of Aspergillus Species in Zea mays Grains, and Their Correlation with Aflatoxin Levels

    PubMed Central

    Chrysanthopoulos, Panagiotis K.; Hodson, Mark P.; Darnell, Ross; Korie, Sam

    2018-01-01

    Aflatoxin contamination is associated with the development of aflatoxigenic fungi such as Aspergillus flavus and A. parasiticus on food grains. This study was aimed at investigating metabolites produced during fungal development on maize and their correlation with aflatoxin levels. Maize cobs were harvested at R3 (milk), R4 (dough), and R5 (dent) stages of maturity. Individual kernels were inoculated in petri dishes with four doses of fungal spores. Fungal colonisation, metabolite profile, and aflatoxin levels were examined. Grain colonisation decreased with kernel maturity: milk-, dough-, and dent-stage kernels by approximately 100%, 60%, and 30%, respectively. Aflatoxin levels increased with dose at dough and dent stages. Polar metabolites including alanine, proline, serine, valine, inositol, iso-leucine, sucrose, fructose, trehalose, turanose, mannitol, glycerol, arabitol, myo-inositol, and some intermediates of the tricarboxylic acid (TCA) cycle (also known as the citric acid or Krebs cycle) were important for dose classification. Important non-polar metabolites included arachidic, palmitic, stearic, 3,4-xylylic, and margaric acids. Aflatoxin levels correlated with levels of several polar metabolites. The strongest positive and negative correlations were with arabitol (R = 0.48) and turanose (R = −0.53), respectively. Several metabolites were interconnected with the TCA cycle; these interconnections varied depending upon the grain maturity. PMID:29735944

  18. Metabolites Identified during Varied Doses of Aspergillus Species in Zea mays Grains, and Their Correlation with Aflatoxin Levels.

    PubMed

    Falade, Titilayo D O; Chrysanthopoulos, Panagiotis K; Hodson, Mark P; Sultanbawa, Yasmina; Fletcher, Mary; Darnell, Ross; Korie, Sam; Fox, Glen

    2018-05-07

    Aflatoxin contamination is associated with the development of aflatoxigenic fungi such as Aspergillus flavus and A. parasiticus on food grains. This study was aimed at investigating metabolites produced during fungal development on maize and their correlation with aflatoxin levels. Maize cobs were harvested at R3 (milk), R4 (dough), and R5 (dent) stages of maturity. Individual kernels were inoculated in petri dishes with four doses of fungal spores. Fungal colonisation, metabolite profile, and aflatoxin levels were examined. Grain colonisation decreased with kernel maturity: milk-, dough-, and dent-stage kernels by approximately 100%, 60%, and 30%, respectively. Aflatoxin levels increased with dose at dough and dent stages. Polar metabolites including alanine, proline, serine, valine, inositol, iso-leucine, sucrose, fructose, trehalose, turanose, mannitol, glycerol, arabitol, myo-inositol, and some intermediates of the tricarboxylic acid (TCA) cycle (also known as the citric acid or Krebs cycle) were important for dose classification. Important non-polar metabolites included arachidic, palmitic, stearic, 3,4-xylylic, and margaric acids. Aflatoxin levels correlated with levels of several polar metabolites. The strongest positive and negative correlations were with arabitol (R = 0.48) and turanose (R = −0.53), respectively. Several metabolites were interconnected with the TCA cycle; these interconnections varied depending upon the grain maturity.

  19. Validity of linear measurements of the jaws using ultralow-dose MDCT and the iterative techniques of ASIR and MBIR.

    PubMed

    Al-Ekrish, Asma'a A; Al-Shawaf, Reema; Schullian, Peter; Al-Sadhan, Ra'ed; Hörmann, Romed; Widmann, Gerlig

    2016-10-01

    To assess the comparability of linear measurements of dental implant sites recorded from multidetector computed tomography (MDCT) images obtained using the standard-dose filtered backprojection (FBP) technique with those from various ultralow doses combined with FBP, adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques. The results of the study may contribute to MDCT dose optimization for dental implant site imaging. MDCT scans of two cadavers were acquired using a standard reference protocol and four ultralow-dose test protocols (TP). The volume CT dose index of the different dose protocols ranged from a maximum of 30.48-36.71 mGy to a minimum of 0.44-0.53 mGy. All scans were reconstructed using FBP, ASIR-50, ASIR-100, and MBIR, and either a bone or standard reconstruction kernel. Linear measurements were recorded from standardized images of the jaws by two examiners. Intra- and inter-examiner reliability of the measurements were analyzed using Cronbach's alpha and inter-item correlation. Agreement between the measurements obtained with the reference-dose/FBP protocol and each of the test protocols was determined with Bland-Altman plots and linear regression. Statistical significance was set at a P-value of 0.05. No systematic variation was found between the linear measurements obtained with the reference protocol and the other imaging protocols. The only exceptions were TP3/ASIR-50 (bone kernel) and TP4/ASIR-100 (bone and standard kernels). The mean measurement differences between these three protocols and the reference protocol were within ±0.1 mm, with the 95 % confidence interval limits being within the range of ±1.15 mm. An approximately 97.5 % reduction in dose did not significantly affect the height and width measurements of edentulous jaws, regardless of the reconstruction algorithm used.

  20. An analytical dose-averaged LET calculation algorithm considering the off-axis LET enhancement by secondary protons for spot-scanning proton therapy.

    PubMed

    Hirayama, Shusuke; Matsuura, Taeko; Ueda, Hideaki; Fujii, Yusuke; Fujii, Takaaki; Takao, Seishin; Miyamoto, Naoki; Shimizu, Shinichi; Fujimoto, Rintaro; Umegaki, Kikuo; Shirato, Hiroki

    2018-05-22

    To evaluate the biological effects of proton beams as part of the daily clinical routine, fast and accurate calculation of dose-averaged linear energy transfer (LET_d) is required. In this study, we have developed an analytical LET_d calculation method based on the pencil-beam algorithm (PBA) that considers the off-axis enhancement by secondary protons. This algorithm (PBA-dLET) was then validated using Monte Carlo simulation (MCS) results. In PBA-dLET, LET values were assigned separately to each individual dose kernel of the PBA. For the dose kernel, we employed a triple Gaussian model consisting of a primary component (protons that undergo multiple Coulomb scattering) and a halo component (protons that undergo inelastic, nonelastic, and elastic nuclear reactions); the primary and halo components were represented by a single Gaussian and the sum of two Gaussian distributions, respectively. Although previous analytical approaches assumed a constant LET_d value across the lateral distribution of a pencil beam, the actual LET_d increases away from the beam axis, because there are more scattered, and therefore lower energy, protons with higher stopping powers off-axis. To reflect this behavior, we have assumed that the LETs of the primary and halo components can take different values (LET_p and LET_halo), which vary only along the depth direction. The values of the dual-LET kernels were determined such that PBA-dLET reproduced the MCS-generated LET_d distribution in both small and large fields. These values were generated at intervals of 1 mm in depth for 96 energies from 70.2 to 220 MeV and collected in a look-up table. Finally, we compared the LET_d distributions and mean LET_d (LET_d,mean) values of targets and organs at risk between PBA-dLET and MCS. Both homogeneous phantom and patient geometries (prostate, liver, and lung cases) were used to validate the present method. 
In the homogeneous phantom, the LET_d profiles obtained with the dual-LET kernels agree well with the MCS results except in the low-dose region of the lateral penumbra, where the actual dose was below 10% of the maximum dose. In the patient geometries, the LET_d profiles calculated with the developed method reproduce the MCS with similar accuracy as in the homogeneous phantom. The maximum differences in LET_d,mean for each structure between PBA-dLET and MCS were 0.06 keV/μm in homogeneous phantoms and 0.08 keV/μm in patient geometries under all tested conditions. We confirmed that the dual-LET-kernel model reproduces the MCS well, not only in the homogeneous phantom but also in complex patient geometries. The accuracy of LET_d was largely improved over the single-LET-kernel model, especially in the lateral penumbra. The model is expected to be useful, especially for proper recognition of the risk of side effects when the target is next to critical organs. © 2018 American Association of Physicists in Medicine.
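The dual-LET kernel idea, assigning separate LET values to the primary and halo dose components and taking the dose-weighted mean per voxel, can be sketched as follows; the Gaussian widths, weights, and LET values are illustrative assumptions, not the published look-up table:

```python
import numpy as np

# Hypothetical lateral dose profiles (arbitrary units) at one depth for the
# primary (single Gaussian) and halo (sum of two Gaussians) components.
r = np.linspace(0.0, 30.0, 301)                        # off-axis distance, mm
d_primary = np.exp(-0.5 * (r / 3.0) ** 2)
d_halo = (0.05 * np.exp(-0.5 * (r / 8.0) ** 2)
          + 0.01 * np.exp(-0.5 * (r / 15.0) ** 2))

# Depth-dependent LET values assigned per component (keV/um, illustrative).
let_p, let_halo = 2.5, 6.0

# Dose-averaged LET per position: dose-weighted mean of the component LETs.
let_d = (d_primary * let_p + d_halo * let_halo) / (d_primary + d_halo)
```

Because the broader halo component dominates far off-axis, let_d rises from approximately LET_p on the axis toward LET_halo in the lateral penumbra, which is the off-axis LET enhancement the abstract describes.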

  1. Safety Testing of AGR-2 UCO Compacts 5-2-2, 2-2-2, and 5-4-1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunn, John D.; Morris, Robert Noel; Baldwin, Charles A.

    2016-08-01

    Post-irradiation examination (PIE) is being performed on tristructural-isotropic (TRISO) coated-particle fuel compacts from the Advanced Gas Reactor (AGR) Fuel Development and Qualification Program second irradiation experiment (AGR-2). This effort builds upon the understanding acquired throughout the AGR-1 PIE campaign, and is establishing a database for the different AGR-2 fuel designs. The AGR-2 irradiation experiment included TRISO fuel particles coated at BWX Technologies (BWXT) with a 150-mm-diameter engineering-scale coater. Two coating batches were tested in the AGR-2 irradiation experiment. Batch 93085 had 508-μm-diameter uranium dioxide (UO2) kernels. Batch 93073 had 427-μm-diameter UCO kernels, a kernel design in which some of the uranium oxide is converted to uranium carbide during fabrication to provide a getter for oxygen liberated during fission and limit CO production. Fabrication and property data for the AGR-2 coating batches have been compiled and compared to those for AGR-1. The AGR-2 TRISO coatings were most like the AGR-1 Variant 3 TRISO deposited in the 50-mm-diameter ORNL lab-scale coater. In both cases, argon dilution of the hydrogen and methyltrichlorosilane coating-gas mixture employed to deposit the SiC was used to produce a finer-grain, more equiaxed SiC microstructure. In addition to the fact that AGR-1 fuel had smaller, 350-μm-diameter UCO kernels, notable differences in the TRISO particle properties included the pyrocarbon anisotropy, which was slightly higher in the particles coated in the engineering-scale coater, and the exposed kernel defect fraction, which was higher for AGR-2 fuel due to the detected presence of particles with impact damage introduced during TRISO particle handling.

  2. Application of SWIR hyperspectral imaging and chemometrics for identification of aflatoxin B1 contaminated maize kernels

    NASA Astrophysics Data System (ADS)

Kimuli, Daniel; Wang, Wei; Jiang, Hongzhe; Zhao, Xin; Chu, Xuan

    2018-03-01

A short-wave infrared (SWIR) hyperspectral imaging system (1000-2500 nm) combined with chemometric data analysis was used to detect aflatoxin B1 (AFB1) on the surfaces of 600 kernels of four yellow maize varieties from different states of the USA (Georgia, Illinois, Indiana and Nebraska). For each variety, four AFB1 solutions (10, 20, 100 and 500 ppb) were artificially deposited on kernels, and a control group was generated from kernels treated with methanol solution. Principal component analysis (PCA), partial least squares discriminant analysis (PLSDA) and factorial discriminant analysis (FDA) were applied to explore and classify maize kernels according to AFB1 contamination. PCA results revealed partial separation of control kernels from AFB1-contaminated kernels for each variety, while no pattern of separation was observed among pooled samples. A combination of standard normal variate and first derivative pre-treatments produced the best PLSDA classification model, with accuracies of 100% and 96% in calibration and validation, respectively, for the Illinois variety. The best AFB1 classification results came from FDA on raw spectra, with accuracy of 100% in both calibration and validation for the Illinois and Nebraska varieties. However, for both PLSDA and FDA models, poorer AFB1 classification results were obtained for pooled samples than for individual varieties. SWIR spectra combined with chemometrics and spectral pre-treatments showed the possibility of detecting maize kernels of different varieties coated with AFB1. The study further suggests that increases in maize kernel constituents such as water, protein, starch and lipid in a pooled sample may influence the detection accuracy of AFB1 contamination.
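The exploratory PCA step described above can be sketched in a few lines: mean-center the spectra and project them onto the leading singular vectors. The synthetic "spectra" below are random stand-ins (a baseline-shifted treated group), not the study's SWIR measurements.

```python
import numpy as np

# Minimal PCA-scores sketch (via SVD) of the kind used to explore
# separation of control vs. AFB1-treated kernel spectra.
# All data here are synthetic placeholders.

def pca_scores(X, n_components=2):
    """Return the first n_components PCA scores of row-wise spectra X."""
    Xc = X - X.mean(axis=0)            # mean-center each wavelength band
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T    # project onto leading loadings

rng = np.random.default_rng(0)
control = rng.normal(0.0, 0.05, size=(20, 50))   # 20 spectra, 50 bands
treated = rng.normal(0.3, 0.05, size=(20, 50))   # shifted baseline
scores = pca_scores(np.vstack([control, treated]))
```

With a consistent spectral shift between groups, the two classes separate along the first component; with pooled, heterogeneous varieties the shift is masked by between-variety variance, matching the "no pattern of separation" result reported above.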

  3. Evaluation of the accuracy of mono-energetic electron and beta-emitting isotope dose-point kernels using particle and heavy ion transport code system: PHITS.

    PubMed

    Shiiba, Takuro; Kuga, Naoya; Kuroiwa, Yasuyoshi; Sato, Tatsuhiko

    2017-10-01

We assessed the accuracy of mono-energetic electron and beta-emitting isotope dose-point kernels (DPKs) calculated using the particle and heavy ion transport code system (PHITS) for patient-specific dosimetry in targeted radionuclide treatment (TRT) and compared our data with published data. All mono-energetic and beta-emitting isotope DPKs calculated using PHITS, both in water and in compact bone, were in good agreement with those reported in the literature using other MC codes. PHITS provided reliable mono-energetic electron and beta-emitting isotope scaled DPKs for patient-specific dosimetry. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Dose calculation algorithm of fast fine-heterogeneity correction for heavy charged particle radiotherapy.

    PubMed

    Kanematsu, Nobuyuki

    2011-04-01

This work addresses computing techniques for dose calculations in treatment planning with proton and ion beams, based on an efficient kernel-convolution method referred to as grid-dose spreading (GDS) and an accurate heterogeneity-correction method referred to as Gaussian beam splitting. The original GDS algorithm suffered from distortion of the dose distribution for beams tilted with respect to the dose-grid axes. Use of intermediate grids normal to the beam field has solved this beam-tilting distortion. The interplay between the beam and grid arrangement was found to be another intrinsic source of artifact. Inclusion of rectangular-kernel convolution in the beam transport, to share the beam contribution among the nearest grids in a regulated manner, has solved the interplay problem. This algorithmic framework was applied to a tilted proton pencil beam and a broad carbon-ion beam. In these cases, although the elementary pencil beams individually split into several tens of beamlets, the calculation time increased only severalfold with the GDS algorithm. The GDS and beam-splitting methods will complementarily enable accurate and efficient dose calculations for radiotherapy with protons and ions. Copyright © 2010 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
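The "share the beam contribution among the nearest grids" idea can be illustrated with a 1D toy deposition routine: a contribution landing between two grid nodes is split linearly between them, so the total deposited value is conserved regardless of where the beamlet falls relative to the grid. The function and values are hypothetical, not the clinical GDS implementation.

```python
# 1D toy sketch of rectangular-kernel sharing between nearest grid nodes,
# loosely in the spirit of grid-dose spreading. Names/values are invented.

def deposit(grid, x, value, dx=1.0):
    """Split `value` linearly between the two grid nodes bracketing x."""
    i = int(x // dx)                  # index of the left node
    f = (x - i * dx) / dx             # fractional distance past the left node
    grid[i]     += value * (1.0 - f)  # nearer node receives the larger share
    grid[i + 1] += value * f
    return grid

grid = [0.0] * 10
deposit(grid, 3.25, 1.0)   # 75% to node 3, 25% to node 4
```

Because the split weights always sum to one, the deposited total is independent of the beam-grid alignment, which is exactly the interplay artifact this sharing suppresses.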

  5. Lung nodule detection by microdose CT versus chest radiography (standard and dual-energy subtracted).

    PubMed

    Ebner, Lukas; Bütikofer, Yanik; Ott, Daniel; Huber, Adrian; Landau, Julia; Roos, Justus E; Heverhagen, Johannes T; Christe, Andreas

    2015-04-01

The purpose of this study was to investigate the feasibility of microdose CT, using a dose comparable to that of conventional chest radiographs in two planes including dual-energy subtraction, for lung nodule assessment. We investigated 65 chest phantoms with 141 lung nodules, using an anthropomorphic chest phantom with artificial lung nodules. Microdose CT parameters were 80 kV and 6 mAs, with a pitch of 2.2. Iterative reconstruction algorithms and an integrated circuit detector system (Stellar, Siemens Healthcare) were applied for maximum dose reduction. Maximum intensity projections (MIPs) were reconstructed. Chest radiographs were acquired in two projections with bone suppression. Four blinded radiologists interpreted the images in random order. A soft-tissue CT kernel (I30f) delivered better sensitivities in a pilot study than a hard kernel (I70f), with respective mean (SD) sensitivities of 91.1%±2.2% versus 85.6%±5.6% (p=0.041). Nodule size was measured accurately for all kernels. Mean clustered nodule sensitivity with chest radiography was 45.7%±8.1% (with bone suppression, 46.1%±8%; p=0.94); for microdose CT, nodule sensitivity was 83.6%±9% without MIP (with additional MIP, 92.5%±6%; p < 10^-3). Individual sensitivities of microdose CT for readers 1, 2, 3, and 4 were 84.3%, 90.7%, 68.6%, and 45.0%, respectively. Sensitivities with chest radiography for readers 1, 2, 3, and 4 were 42.9%, 58.6%, 36.4%, and 90.7%, respectively. In the per-phantom analysis, the respective sensitivities of microdose CT versus chest radiography were 96.2% and 75% (p < 10^-6). The effective dose for chest radiography including dual-energy subtraction was 0.242 mSv; for microdose CT, the applied dose was 0.1323 mSv. Microdose CT is better than the combination of chest radiography and dual-energy subtraction for the detection of solid nodules between 5 and 12 mm at a lower dose level of 0.13 mSv. Soft-tissue kernels allow better sensitivities.
These preliminary results indicate that microdose CT has the potential to replace conventional chest radiography for lung nodule detection.

  6. Quantitative estimation of the energy flux during an explosive chromospheric evaporation in a white light flare kernel observed by Hinode, IRIS, SDO, and RHESSI

    NASA Astrophysics Data System (ADS)

Lee, Kyoung-Sun; Imada, Shinsuke; Watanabe, Kyoko; Bamba, Yumi; Brooks, David H.

    2016-10-01

An X1.6 flare that occurred in AR 12192 on 2014 October 22 at 14:02 UT was observed by Hinode, IRIS, SDO, and RHESSI. We analyze a bright kernel which produces a white light (WL) flare with continuum enhancement and a hard X-ray (HXR) peak. Taking advantage of the spectroscopic observations of IRIS and Hinode/EIS, we measure the temporal variation of the plasma properties in the bright kernel in the chromosphere and corona. We found that explosive evaporation was observed when the WL emission occurred, even though the intensity enhancement in hotter lines is quite weak. The temporal correlation of the WL emission, HXR peak, and evaporation flows indicates that the WL emission was produced by accelerated electrons. To understand the WL emission processes, we calculated the deposited energy flux from the non-thermal electrons observed by RHESSI and compared it to the dissipated energy estimated from the chromospheric line (Mg II triplet) observed by IRIS. The deposited energy flux from the non-thermal electrons is about 3.1 × 10^10 erg cm^-2 s^-1 for a cut-off energy of 20 keV. The energy flux estimated from the temperature changes in the chromosphere, measured from the Mg II subordinate line, is about 4.6-6.7 × 10^9 erg cm^-2 s^-1, ~15-22% of the deposited energy. By comparing these estimated energy fluxes, we conclude that the continuum enhancement was directly produced by the non-thermal electrons.
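The quoted 15-22% fraction follows directly from the two flux estimates, and can be checked with a line of arithmetic (variable names here are ours, not the paper's):

```python
# Back-of-the-envelope check of the quoted fraction: the chromospheric
# energy flux relative to the non-thermal electron energy flux.

F_nonthermal = 3.1e10                     # erg cm^-2 s^-1 (>20 keV cut-off)
F_chromo_lo, F_chromo_hi = 4.6e9, 6.7e9   # erg cm^-2 s^-1 (Mg II estimate)

ratio_lo = F_chromo_lo / F_nonthermal   # lower bound of the fraction
ratio_hi = F_chromo_hi / F_nonthermal   # upper bound of the fraction
```

The ratios round to 15% and 22%, consistent with the range stated in the abstract.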

  7. Dosimetry applications in GATE Monte Carlo toolkit.

    PubMed

    Papadimitroulas, Panagiotis

    2017-09-01

Monte Carlo (MC) simulations are a well-established method for studying physical processes in medical physics. The purpose of this review is to present GATE dosimetry applications for simulated diagnostic and therapeutic protocols. There is a significant need for accurate quantification of the absorbed dose in several specific applications, such as preclinical and pediatric studies. GATE is an open-source MC toolkit for simulating imaging, radiotherapy (RT) and dosimetry applications in a user-friendly environment, which is well validated and widely accepted by the scientific community. In RT applications, during treatment planning, it is essential to accurately assess the deposited energy and the absorbed dose per tissue/organ of interest, as well as the local statistical uncertainty. Several types of realistic dosimetric applications are described, including molecular imaging, radio-immunotherapy, radiotherapy and brachytherapy. GATE has been used efficiently in several applications, such as dose point kernels, S-values and brachytherapy parameters, and has been compared against various MC codes which have been considered standard tools for decades. Furthermore, the presented studies show reliable modeling of particle beams when comparing experimental with simulated data. Examples of different dosimetric protocols are reported for individualized dosimetry and for simulations combining imaging and therapy dose monitoring, with the use of modern computational phantoms. Personalization of medical protocols can be achieved by combining GATE MC simulations with anthropomorphic computational models and clinical anatomical data. This is a review study covering several dosimetric applications of GATE and the different tools used for modeling realistic clinical acquisitions with accurate dose assessment. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  8. Development, survival and fitness performance of Helicoverpa zea (Lepidoptera: Noctuidae) in MON810 Bt field corn.

    PubMed

    Horner, T A; Dively, G P; Herbert, D A

    2003-06-01

    Helicoverpa zea (Boddie) development, survival, and feeding injury in MON810 transgenic ears of field corn (Zea mays L.) expressing Bacillus thuringiensis variety kurstaki (Bt) Cry1Ab endotoxins were compared with non-Bt ears at four geographic locations over two growing seasons. Expression of Cry1Ab endotoxin resulted in overall reductions in the percentage of damaged ears by 33% and in the amount of kernels consumed by 60%. Bt-induced effects varied significantly among locations, partly because of the overall level and timing of H. zea infestations, condition of silk tissue at the time of egg hatch, and the possible effects of plant stress. Larvae feeding on Bt ears produced scattered, discontinuous patches of partially consumed kernels, which were arranged more linearly than the compact feeding patterns in non-Bt ears. The feeding patterns suggest that larvae in Bt ears are moving about sampling kernels more frequently than larvae in non-Bt ears. Because not all kernels express the same level of endotoxin, the spatial heterogeneity of toxin distribution within Bt ears may provide an opportunity for development of behavioral responses in H. zea to avoid toxin. MON810 corn suppressed the establishment and development of H. zea to late instars by at least 75%. This level of control is considered a moderate dose, which may increase the risk of resistance development in areas where MON810 corn is widely adopted and H. zea overwinters successfully. Sublethal effects of MON810 corn resulted in prolonged larval and prepupal development, smaller pupae, and reduced fecundity of H. zea. The moderate dose effects and the spatial heterogeneity of toxin distribution among kernels could increase the additive genetic variance for both physiological and behavioral resistance in H. zea populations. Implications of localized population suppression are discussed.

  9. Testing of the analytical anisotropic algorithm for photon dose calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esch, Ann van; Tillikainen, Laura; Pyykkonen, Jukka

    2006-11-15

The analytical anisotropic algorithm (AAA) was implemented in the Eclipse (Varian Medical Systems) treatment planning system to replace the single pencil beam (SPB) algorithm for the calculation of dose distributions for photon beams. AAA was developed to improve the dose calculation accuracy, especially in heterogeneous media. The total dose deposition is calculated as the superposition of the dose deposited by two photon sources (primary and secondary) and by an electron contamination source. The photon dose is calculated as a three-dimensional convolution of Monte-Carlo precalculated scatter kernels, scaled according to the electron density matrix. For the configuration of AAA, an optimization algorithm determines the parameters characterizing the multiple source model by optimizing the agreement between the calculated and measured depth dose curves and profiles for the basic beam data. We have combined the acceptance tests obtained in three different departments for 6, 15, and 18 MV photon beams. The accuracy of AAA was tested for different field sizes (symmetric and asymmetric) for open fields, wedged fields, and static and dynamic multileaf collimation fields. Depth dose behavior at different source-to-phantom distances was investigated. Measurements were performed on homogeneous, water equivalent phantoms, on simple phantoms containing cork inhomogeneities, and on the thorax of an anthropomorphic phantom. Comparisons were made among measurements, AAA, and SPB calculations. The optimization procedure for the configuration of the algorithm was successful in reproducing the basic beam data with an overall accuracy of 3%, 1 mm in the build-up region, and 1%, 1 mm elsewhere. Testing of the algorithm in more clinical setups showed comparable results for depth dose curves, profiles, and monitor units of symmetric open and wedged beams below d_max.
The electron contamination model was found to be suboptimal for modeling the dose around d_max, especially for physical wedges at smaller source-to-phantom distances. For the asymmetric field verification, absolute dose differences of up to 4% were observed for the most extreme asymmetries. Compared to the SPB, the penumbra modeling is considerably improved (1%, 1 mm). At the interface between solid water and cork, profiles show a better agreement with AAA. Depth dose curves in the cork are substantially better with AAA than with SPB. Improvements are more pronounced for 18 MV than for 6 MV. Point dose measurements in the thoracic phantom are mostly within 5%. In general, we can conclude that, compared to SPB, AAA improves the accuracy of dose calculations. Particular progress was made with respect to the penumbra and low dose regions. In heterogeneous materials, improvements are substantial and more pronounced for high (18 MV) than for low (6 MV) energies.
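The core convolution step of AAA-like algorithms (dose as the superposition of TERMA with a precalculated scatter kernel) can be sketched in 1D. This is a deliberate simplification: real implementations scale the kernel radiologically by electron density rather than multiplying pointwise, and the names and numbers below are invented.

```python
import numpy as np

# Greatly simplified 1D sketch of a convolution/superposition dose step:
# each interaction site's TERMA spreads downstream via a scatter kernel.
# Illustrative only; not the clinical AAA algorithm.

def convolve_dose(terma, kernel, rel_density):
    """Superpose density-weighted kernel contributions along one axis."""
    n = len(terma)
    dose = np.zeros(n)
    for i, t in enumerate(terma):          # each interaction site
        for j, k in enumerate(kernel):     # spread energy downstream
            if i + j < n:
                dose[i + j] += t * k * rel_density[i + j]
    return dose

terma = np.array([1.0, 0.8, 0.64])     # attenuating primary beam
kernel = np.array([0.7, 0.2, 0.1])     # forward-peaked deposition kernel
density = np.ones(3)                   # homogeneous water
dose = convolve_dose(terma, kernel, density)
```

Even this toy version shows the characteristic build-up: the dose a few voxels deep exceeds the surface dose because upstream TERMA contributes via the forward-peaked kernel.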

  10. SU-G-206-15: Effects of Dose Reduction On Emphysema Score

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lo, P; Wahi-Anwar, M; Kim, H

Purpose: The purpose of this study was to investigate the effects of reducing radiation dose levels on emphysema scores from lung cancer screening CT exams. Methods: 52 cases were selected from the National Lung Screening Trial (NLST) patients for which we had both the image series and the raw CT data. All scans were acquired with fixed effective mAs (25 for standard-sized patients, 40 for large patients) on a 64-slice scanner (Sensation 64, Siemens Healthcare) using 120 kV, 64×0.6 mm collimation and pitch 1.0. All images were reconstructed with 1 mm slice thickness and the B50 kernel. Based on a previously published technique, we added noise to the raw data to simulate reduced-dose versions at 50% and 25% of the original dose (approximately 1.0 and 0.5 mGy CTDIvol). Lung segmentations were obtained via region growing from a manual seed point at a threshold of 600 HU, followed by manual removal of the trachea and major airways. Lung segmentations were performed only on the original-dose scans and mapped to the simulated reduced-dose scans. Emphysema scores based on the relative area of lung with attenuation values lower than −950 HU (RA950) were computed for all cases. Results: The average RA950 of all 50 cases was 31.6 (±5.5), 32.5 (±4.9) and 32.8 (±4.6) for the 100%, 50% and 25% dose levels, respectively. The average absolute difference in RA950 between simulated and original-dose scans was 1.0 (±0.7) and 1.4 (±1.1) for the 50% and 25% dose levels, respectively. Conclusion: RA950 is relatively robust to dose level, with a difference of no more than 5 from the original-dose scans. The average RA950 of this population was high for two reasons: this was a high-risk population of patients with substantial smoking history, and the sharp B50 kernel may be biased towards high emphysema scores. Further exploration with smoother kernels will be conducted in the future.
Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics; NIH grant support from U01 CA181156.
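The RA950 score used in this study has a simple definition: the percentage of segmented lung voxels with attenuation below −950 HU. A minimal sketch (with fabricated HU values):

```python
# Sketch of the RA950 emphysema score: the relative area (percentage) of
# segmented lung voxels with attenuation below -950 HU. Toy HU values.

def ra950(lung_hu, threshold=-950):
    """Percent of lung voxels with HU strictly below `threshold`."""
    below = sum(1 for hu in lung_hu if hu < threshold)
    return 100.0 * below / len(lung_hu)

voxels = [-980, -960, -940, -900, -860, -990, -955, -800]
score = ra950(voxels)   # 4 of 8 voxels below -950 HU
```

Because the score is a simple threshold count, added image noise pushes borderline voxels across −950 HU, which is why the reduced-dose simulations above bias RA950 slightly upward.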

  11. Real-time dose computation: GPU-accelerated source modeling and superposition/convolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacques, Robert; Wong, John; Taylor, Russell

Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high resolution (400^2) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material centric approach, which increased performance by over 80%. Superposition performance was improved by ~24% to 0.058 and 0.94 s for 64^3 and 128^3 water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation.
Pinnacle³ times were 8.3 and 94 s, respectively, on an AMD (Sunnyvale, CA) Opteron 254 (two cores, 2.8 GHz). Conclusions: The authors have completed a comprehensive, GPU-accelerated dose engine in order to provide a substantial performance gain over CPU based implementations. Real-time dose computation is feasible with the accuracy levels of the superposition/convolution algorithm.
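The TERMA computation mentioned above (primary energy fluence attenuated along the radiological path, multiplied by the mass attenuation coefficient) can be sketched for a single ray. The function, material assumption (one material with varying density), and numbers are illustrative, not from the GPU engine.

```python
import math

# Sketch of TERMA along one ray: fluence attenuates with the
# density-weighted (radiological) depth, and TERMA = mu * fluence.
# Single hypothetical material; invented values throughout.

def terma_along_ray(mu_over_rho, densities, dx, psi0=1.0):
    """Return TERMA at each voxel along a ray of voxel size dx (cm)."""
    mu = mu_over_rho  # cm^2/g
    terma, rad_path = [], 0.0
    for rho in densities:
        rad_path += rho * dx                    # g/cm^2 radiological depth
        psi = psi0 * math.exp(-mu * rad_path)   # attenuated energy fluence
        terma.append(mu * psi)                  # energy released per mass
    return terma

# Water, water, low-density lung-like voxel, water:
t = terma_along_ray(0.05, [1.0, 1.0, 0.3, 1.0], dx=1.0)
```

Accumulating the radiological path (density × distance) rather than the geometric path is what makes the attenuation heterogeneity-correct; the low-density voxel simply advances the exponent more slowly.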

  12. Directional interstitial brachytherapy from simulation to application

    NASA Astrophysics Data System (ADS)

    Lin, Liyong

Organs at risk (OARs) are sometimes adjacent to, embedded in, or overlapping with the clinical target volume (CTV) to be treated. The purpose of this PhD study was to develop directional, low-energy gamma-emitting interstitial brachytherapy sources. These sources can be implanted between OARs to selectively reduce hot spots in the OARs and normal tissues. The reduction of dose over undesired regions can expand patient eligibility or reduce toxicities for treatment by conventional interstitial brachytherapy. This study covers the development of a directional source from design optimization to construction of the first prototype source. The Monte Carlo code MCNP was used to simulate the radiation transport for the designs of directional sources. We made a special construction kit to assemble radioactive and gold-shield components precisely into the D-shaped titanium containers of the first directional source. Directional sources have a dose distribution similar to conventional sources on the treated side but a greatly reduced dose on the shielded side, with a sharp dose gradient between them. A three-dimensional dose deposition kernel for the 125I directional source has been calculated. Treatment plans can use both directional and conventional 125I sources at the same source strength for low-dose-rate (LDR) implants to optimize the dose distributions. For prostate tumors, directional 125I LDR brachytherapy can potentially reduce genitourinary and gastrointestinal toxicities and improve potency preservation for low-risk patients. The combination of the better dose distribution of directional implants and the better therapeutic ratio between tumor response and late reactions enables a novel temporary LDR treatment, as opposed to permanent or high-dose-rate (HDR) brachytherapy, for intermediate-risk T2b and high-risk T2c tumors. Supplemental external-beam treatments can be shortened with a better brachytherapy boost for T3 tumors.
In conclusion, we have successfully finished the design optimization and construction of the first prototype directional source. Potential clinical applications and potential benefits of directional sources have been shown for prostate and breast tumors.

  13. Calculation of electron Dose Point Kernel in water with GEANT4 for medical application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guimaraes, C. C.; Sene, F. F.; Martinelli, J. R.

    2009-06-03

The rapid introduction of new technologies into medical physics in recent years, especially in nuclear medicine, has been accompanied by great development of faster Monte Carlo algorithms. GEANT4 is a Monte Carlo toolkit that provides the tools to simulate particle transport through matter. In this work, GEANT4 was used to calculate the dose-point-kernel (DPK) for monoenergetic electrons in water, which is an important reference medium for nuclear medicine. The three different physical models of electromagnetic interactions provided by GEANT4 - Low Energy, Penelope and Standard - were employed. To verify the adequacy of these models, the results were compared with references from the literature. For all energies and physical models, the agreement between calculated DPKs and reported values is satisfactory.
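Conceptually, a DPK tabulates the dose around an isotropic point source as a function of radius, which amounts to binning deposited energy into spherical shells and dividing by shell mass. The sketch below uses made-up deposition events, not GEANT4 output, and invented names.

```python
import math

# Sketch of building a dose-point kernel from energy-deposition events
# around a point source: bin deposited energy into spherical shells and
# divide by the shell mass. Event data are fabricated for illustration.

def dose_point_kernel(events, dr, n_bins, rho=1.0):
    """events: iterable of (radius_cm, energy); returns dose per shell."""
    dpk = [0.0] * n_bins
    for r, e in events:
        i = int(r / dr)
        if i < n_bins:
            dpk[i] += e              # accumulate energy in shell i
    for i in range(n_bins):
        r_in, r_out = i * dr, (i + 1) * dr
        shell_mass = rho * 4.0 / 3.0 * math.pi * (r_out**3 - r_in**3)
        dpk[i] /= shell_mass         # dose = energy / mass
    return dpk

events = [(0.05, 1.0), (0.15, 0.5), (0.16, 0.5)]
dpk = dose_point_kernel(events, dr=0.1, n_bins=3)
```

Dividing by the r³-growing shell mass is what gives DPKs their steep radial fall-off even when the deposited energy per shell is comparable.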

  14. An efficient method to determine double Gaussian fluence parameters in the eclipse™ proton pencil beam model.

    PubMed

    Shen, Jiajian; Liu, Wei; Stoker, Joshua; Ding, Xiaoning; Anand, Aman; Hu, Yanle; Herman, Michael G; Bues, Martin

    2016-12-01

To find an efficient method to configure the proton fluence for a commercial proton pencil beam scanning (PBS) treatment planning system (TPS). An in-water dose kernel was developed to mimic the dose kernel of the pencil beam convolution superposition algorithm, which is part of the commercial proton beam therapy planning software eclipse™ (Varian Medical Systems, Palo Alto, CA). The field size factor (FSF) was calculated based on the spot profile reconstructed by the in-house dose kernel. The workflow of using FSFs to find the desired proton fluence is presented. The in-house derived spot profile and FSF were validated by a direct comparison with those calculated by the eclipse TPS. The validation included 420 comparisons of the FSFs from 14 proton energies, various field sizes from 2 to 20 cm, and various depths from 20% to 80% of the proton range. The relative in-water lateral profiles from the in-house calculation and the eclipse TPS agree very well, even at the level of 10^-4. The FSFs from the in-house calculation and the eclipse TPS also agree well: the maximum deviation is within 0.5%, and the standard deviation is less than 0.1%. The authors' method significantly reduced the time needed to find the desired proton fluences for the clinical energies. The method is extensively validated and can be applied at any proton center using PBS and the eclipse TPS.
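A field size factor for a double-Gaussian lateral fluence has a closed form for square fields via the error function: the FSF is the in-field fraction of the fluence, separable in x and y. The sigmas and halo weight below are invented for the demo; they are not the eclipse beam-model parameters.

```python
import math

# Sketch of a field size factor (FSF) for a square field built from a
# double-Gaussian lateral fluence: a narrow primary Gaussian plus a
# broad, low-weight halo. All parameters are hypothetical.

def gauss_fraction(half_side, sigma):
    """Fraction of a 2D isotropic Gaussian inside a square of half-side."""
    p = math.erf(half_side / (math.sqrt(2.0) * sigma))
    return p * p   # separable in x and y

def fsf(field_cm, sigma1=0.5, sigma2=4.0, w=0.9):
    """FSF = in-field fraction of the double-Gaussian fluence."""
    h = field_cm / 2.0
    return w * gauss_fraction(h, sigma1) + (1.0 - w) * gauss_fraction(h, sigma2)

small, large = fsf(2.0), fsf(20.0)   # FSF approaches 1 with field size
```

The broad second Gaussian is what makes small fields lose a measurable fraction of their fluence (and dose) outside the field, which is exactly the effect FSF measurements are used to capture.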

  15. Neutron dose rate analysis on HTGR-10 reactor using Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Suwoto; Adrial, H.; Hamzah, A.; Zuhair; Bakhri, S.; Sunaryo, G. R.

    2018-02-01

The HTGR-10 reactor has a cylinder-shaped core fuelled with TRISO-coated kernel fuel particles in spherical pebbles, with a helium cooling system. The helium coolant outlet temperature from the reactor core is designed to be 700 °C. One advantage of an HTGR-type reactor is its co-generation capability: in addition to generating electricity, the reactor is designed to produce high-temperature heat that can be used for other processes. Each spherical fuel pebble contains 8335 TRISO-coated UO2 kernel particles, with enrichments of 10% and 17%, dispersed in a graphite matrix. The main purpose of this study was to analyse the distribution of neutron dose rates generated by the HTGR-10 reactor. The calculation and analysis of the neutron dose rate in the HTGR-10 reactor core were performed using the Monte Carlo MCNP5v1.6 code. The double heterogeneity of the TRISO-coated kernel fuel particles and the spherical fuel pebbles in the HTGR-10 core is modelled well with the MCNP5v1.6 code. Neutron flux-to-dose conversion factors taken from the International Commission on Radiological Protection (ICRP-74) were used to determine the dose rate passing through the active core, reflectors, core barrel, reactor pressure vessel (RPV) and biological shield. The neutron dose rates calculated with the MCNP5v1.6 code using the ICRP-74 (2009) conversion factors for radiation workers, in the radial direction outside the RPV (radial position = 220 cm from the core center), are 9.22E-4 μSv/h and 9.58E-4 μSv/h for 10% and 17% enrichment, respectively. These calculated neutron dose rates comply with BAPETEN Chairman's Regulation Number 4 Year 2013 on Radiation Protection and Safety in Nuclear Energy Utilization, which sets the limit for the average effective dose for radiation workers at 20 mSv/year or 10 μSv/h.
Thus the protection and safety of radiation workers from the radiation source are ensured. From this analysis, it can be concluded that the calculated neutron dose rates for the HTGR-10 core meet the required radiation safety standards.
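The flux-to-dose conversion described above is, in essence, a fold of the neutron spectrum with energy-dependent coefficients. The sketch below uses a made-up three-group spectrum and invented coefficients, not the actual ICRP-74 table.

```python
# Sketch of a flux-to-dose-rate conversion: fold a neutron spectrum with
# energy-dependent conversion coefficients (as done with the ICRP-74
# factors in the study). Group fluxes and coefficients are invented.

def dose_rate(flux_per_group, coeff_per_group):
    """Sum over energy groups of flux * conversion coefficient (uSv/h)."""
    return sum(phi * h for phi, h in zip(flux_per_group, coeff_per_group))

flux  = [1.0e2, 5.0e1, 1.0e1]      # group fluxes, n cm^-2 s^-1
coeff = [1.0e-6, 4.0e-6, 1.0e-5]   # hypothetical uSv/h per unit flux
rate = dose_rate(flux, coeff)
```

Because the coefficients rise steeply with neutron energy, a small fast-group flux can dominate the dose rate even when the thermal flux is much larger.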

  16. Documentation of the appearance of a caviar-type deposit in Oven 1 following a large scale experiment for heating oil with Upper Silesian coal (in German)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rank

    1942-03-26

When the oven was disassembled after the test, small kernels of porous material were found in both the upper and lower portions of the oven to a depth of about 2 m. The kernels were of various sizes up to 4 mm. From 1,300 metric tons of dry coal, there were 330 kg of this residue, or 0.025% of the coal input. These kernels brought to mind deposits of spheroidal material termed ''caviar'', since they had rounded tops; however, they were irregularly elongated. After multiaxis micrography, no growth rings were found as in Leuna's lignite caviar. So it was a question of small particles consisting almost totally of ash. The majority of the composition was Al, Fe, Na, silicic acid, S and Cl. The sulfur was found to be in sulfide form and the Cl in a volatile form. The residue did not take on the caviar form since the CaO content was slight. The Al, Fe, Na, silicic acid, S and Cl were concentrated relative to the coal ash and apparently originate from the catalysts (FeSO4, Bayermasse, and Na2S). It was notable that the Cl content was so high. 2 graphs, 1 table

  17. Acute and Subchronic Toxicity of Self Nanoemulsifying Drug Delivery Systems (SNEDDS) from Chloroform Bay Leaf Extract (Eugenia Polyantha W.) with Palm Kernel Oil as A Carrier

    NASA Astrophysics Data System (ADS)

    Prihapsara, F.; Mufidah; Artanti, A. N.; Harini, M.

    2018-03-01

The present study was aimed at assessing the acute and subchronic toxicity of Self-Nanoemulsifying Drug Delivery Systems (SNEDDS) from chloroform bay leaf extract with palm kernel oil as the carrier. In the acute toxicity test, five groups of rats (n=5/group) were orally treated with SNEDDS from chloroform bay leaf extract at doses of 48, 240, 1200 and 6000 mg/kg/day, and the median lethal dose (LD50), adverse effects and mortality were recorded for up to 14 days. In the subchronic toxicity study, 4 groups of rats (n=6/group) received oral treatment with SNEDDS from chloroform bay leaf extract at doses of 91.75, 183.5 and 367 mg/kg/day for 28 days, and biochemical, haematological and histopathological changes in tissues such as liver, kidney and pancreas were determined. The results show that the LD50 is 1045.44 mg/kg. Although histopathological examination of most of the organs exhibited no structural changes, some moderate damage was observed in the high-dose group animals (367 mg/kg/day). The high dose of the SNEDDS extract showed mild signs of toxicity in organ function tests.

  18. Particle-in-cell simulations on graphic processing units

    NASA Astrophysics Data System (ADS)

    Ren, C.; Zhou, X.; Li, J.; Huang, M. C.; Zhao, Y.

    2014-10-01

We will show our recent progress in using GPUs to accelerate the PIC code OSIRIS [Fonseca et al. LNCS 2331, 342 (2002)]. The OSIRIS parallel structure is retained and the computation-intensive kernels are shipped to GPUs. Algorithms for the kernels are adapted for the GPU, including high-order charge-conserving current deposition schemes with minimal branching and parallel particle sorting [Kong et al., JCP 230, 1676 (2011)]. These algorithms make efficient use of the GPU shared memory. This work was supported by U.S. Department of Energy under Grant No. DE-FC02-04ER54789 and by NSF under Grant No. PHY-1314734.
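The particle sorting mentioned above groups particles by grid cell so that deposition accesses memory contiguously. A serial 1D counting sort by cell index sketches the idea (toy data, invented names; the real code performs this in parallel on the GPU):

```python
# Sketch of cell-wise particle sorting used to make deposition memory
# accesses contiguous: a counting sort by cell index. Serial 1D toy.

def sort_particles_by_cell(positions, dx, n_cells):
    """Counting sort of particle positions into ascending cell order."""
    cells = [int(x / dx) for x in positions]
    counts = [0] * (n_cells + 1)
    for c in cells:                     # histogram of particles per cell
        counts[c + 1] += 1
    for i in range(n_cells):            # prefix sum -> cell start offsets
        counts[i + 1] += counts[i]
    out = [None] * len(positions)
    offsets = counts[:]
    for x, c in zip(positions, cells):  # scatter into sorted order
        out[offsets[c]] = x
        offsets[c] += 1
    return out

sorted_x = sort_particles_by_cell([2.5, 0.1, 1.9, 0.7], dx=1.0, n_cells=3)
```

Counting sort is O(N) and its histogram/prefix-sum/scatter phases each parallelize well, which is why variants of it are the standard choice for GPU particle binning.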

  19. Handling Density Conversion in TPS.

    PubMed

    Isobe, Tomonori; Mori, Yutaro; Takei, Hideyuki; Sato, Eisuke; Tadano, Kiichi; Kobayashi, Daisuke; Tomita, Tetsuya; Sakae, Takeji

    2016-01-01

Conversion from CT value to density is essential to a radiation treatment planning system. Generally, the CT value is converted to electron density in photon therapy. In the energy range of therapeutic photons, interactions between photons and materials are dominated by Compton scattering, whose cross-section depends on the electron density. The dose distribution is obtained by calculating TERMA and kernel using the electron density, where TERMA is the energy transferred from primary photons and the kernel describes the volume over which the liberated electrons spread. Recently, a new method was introduced that uses the physical density; this method is expected to be faster and more accurate than that using the electron density. As for particle therapy, dose can be calculated with a CT-to-stopping-power conversion, since the stopping power depends on the electron density. The CT-to-stopping-power conversion table, also called the CT-to-water-equivalent-range table, is an essential concept for particle therapy.
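In practice the CT-number-to-density conversion is a piecewise-linear calibration table with interpolation between measured points. The calibration points below are invented placeholders, not any scanner's measured curve.

```python
# Sketch of a CT-number-to-density conversion via a piecewise-linear
# calibration table, as TPSs do. Calibration points are invented.

def hu_to_density(hu, table):
    """Linear interpolation in a sorted [(HU, density), ...] table."""
    if hu <= table[0][0]:
        return table[0][1]               # clamp below the first point
    for (h0, d0), (h1, d1) in zip(table, table[1:]):
        if hu <= h1:
            return d0 + (d1 - d0) * (hu - h0) / (h1 - h0)
    return table[-1][1]                  # clamp beyond the last point

CALIB = [(-1000, 0.0), (0, 1.0), (1000, 1.6)]   # air, water, dense bone
rho = hu_to_density(500, CALIB)
```

The same lookup structure serves photon planning (CT to electron density) and particle planning (CT to relative stopping power); only the calibration points differ.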

  20. The effect of high concentrations of glufosinate ammonium on the yield components of transgenic spring wheat (Triticum aestivum L.) constitutively expressing the bar gene.

    PubMed

    Áy, Zoltán; Mihály, Róbert; Cserháti, Mátyás; Kótai, Éva; Pauk, János

    2012-01-01

We present an experiment performed on a bar(+) wheat line treated with 14 different concentrations of glufosinate ammonium, an active component of nonselective herbicides, during seed germination in a closed experimental system. Yield components such as number of spikes per plant, number of grains per spike, thousand-kernel weight, and yield per plant were thoroughly analysed and statistically evaluated after harvesting. We found that a glufosinate ammonium concentration 5000 times the lethal dose was not enough to inhibit the germination of transgenic plants expressing the bar gene. Extremely high concentrations of glufosinate ammonium caused a bushy phenotype and significantly lower numbers of grains per spike and thousand-kernel weights. Concerning productivity, we observed that glufosinate ammonium concentrations 64 times the lethal dose did not lead to yield depression. Our results draw attention to the possibilities inherent in transgenic approaches.

  1. SU-E-T-104: An Examination of Dose in the Buildup and Build-Down Regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tome, W; Kuo, H; Phillips, J

    2015-06-15

Purpose: To examine dose in the buildup and build-down regions and compare measurements made with various models and dosimeters. Methods: Dose was examined in a 30×30 cm² phantom of water-equivalent plastic with 10 cm of backscatter for various field sizes. Examination was performed with radiochromic film and optically-stimulated-luminescent-dosimeter (OSLD) chips, and compared against a plane-parallel chamber with a correction factor applied to approximate the response of an extrapolation chamber. For the build-down region, a correction factor to account for table absorption and chamber orientation in the posterior-anterior direction was applied. The measurement depths used for the film were halfway through their sensitive volumes, and a polynomial best-fit curve was used to determine the dose to their surfaces. This chamber was also compared with the dose expected from a clinical kernel-based computer model and a clinical Boltzmann-transport-equation-based (BTE) computer model. The two models were also compared against each other for cases with air gaps in the buildup region. Results: Within 3 mm, all dosimeters and models agreed with the chamber within 10% for all field sizes. At the entrance surface, film differed in comparison with the chamber from +90% to +15%, the BTE model by +140% to +3%, and the kernel-based model by +20% to −25%, decreasing with increasing field size. At the exit surface, film differed in comparison with the chamber from −10% to −15%, the BTE model by −53% to −50%, and the kernel-based model by −55% to −57%, mostly independent of field size. Conclusion: The largest differences compared with the chamber were found at the surface for all field sizes. Differences decreased with increasing field size and increasing depth in phantom. Air gaps in the buildup region cause dose buildup to occur again post-gap, but the effect decreases with increasing phantom thickness prior to the gap.

  2. Development of radiation indicators to distinguish between irradiated and non-irradiated herbal medicines using HPLC and GC-MS.

    PubMed

    Kim, Min Jung; Ki, Hyeon A; Kim, Won Young; Pal, Sukdeb; Kim, Byeong Keun; Kang, Woo Suk; Song, Joon Myong

    2010-09-01

    The effects of high dose γ-irradiation on six herbal medicines were investigated using gas chromatography-mass spectrometry (GC/MS) and high-performance liquid chromatography (HPLC). Herbal medicines were irradiated at 0-50 kGy with (60)Co irradiator. HPLC was used to quantify changes of major components including glycyrrhizin, cinnamic acid, poncirin, hesperidin, berberine, and amygdalin in licorice, cinnamon bark, poncirin immature fruit, citrus unshiu peel, coptis rhizome, and apricot kernel. No significant differences were found between gamma-irradiated and non-irradiated samples with regard to the amounts of glycyrrhizin, berberine, and amygdalin. However, the contents of cinnamic acid, poncirin, and hesperidin were increased after irradiation. Volatile compounds were analyzed by GC/MS. The relative proportion of ketone in licorice was diminished after irradiation. The relative amount of hydrocarbons in irradiated cinnamon bark and apricot kernel was higher than that in non-irradiated samples. Therefore, ketone in licorice and hydrocarbons in cinnamon bark and apricot kernel can be considered radiolytic markers. Three unsaturated hydrocarbons, i.e., 1,7,10-hexadecatriene, 6,9-heptadecadiene, and 8-heptadecene, were detected only in apricot kernels irradiated at 25 and 50 kGy. These three hydrocarbons could be used as radiolytic markers to distinguish between irradiated (>25 kGy) and non-irradiated apricot kernels.

  3. BIOCHEMICAL EFFECTS IN NORMAL AND STONE FORMING RATS TREATED WITH THE RIPE KERNEL JUICE OF PLANTAIN (MUSA PARADISIACA)

    PubMed Central

    Devi, V. Kalpana; Baskar, R.; Varalakshmi, P.

    1993-01-01

The effect of Musa paradisiaca stem kernel juice was investigated in experimental urolithiatic rats. Stone forming rats exhibited a significant elevation in the activities of two oxalate synthesizing enzymes, glycollic acid oxidase and lactate dehydrogenase. Deposition and excretion of stone forming constituents in kidney and urine were also increased in these rats. The enzyme activities and the level of crystalline components were lowered with the extract treatment. The extract also reduced the activities of urinary alkaline phosphatase, lactate dehydrogenase, γ-glutamyl transferase, inorganic pyrophosphatase and β-glucuronidase in calculogenic rats. No appreciable changes were noticed in leucine aminopeptidase activity in treated rats. PMID:22556626

  4. Regulation of maize kernel weight and carbohydrate metabolism by abscisic acid applied at the early and middle post-pollination stages in vitro.

    PubMed

    Zhang, Li; Li, Xu-Hui; Gao, Zhen; Shen, Si; Liang, Xiao-Gui; Zhao, Xue; Lin, Shan; Zhou, Shun-Li

    2017-09-01

Abscisic acid (ABA) accumulates in plants under drought stress, but views on the role of ABA in kernel formation and abortion are not unified. The response of the developing maize kernel to exogenous ABA was investigated by excising kernels from cob sections at four days after pollination and culturing them in vitro with different concentrations of ABA (0, 5, 10, 100 μM). When ABA was applied at the early post-pollination stage (EPPS), significant weight loss was observed at the high ABA concentration (100 μM), which could be attributed to jointly affected sink capacity and activity. Endosperm cells and starch granules were decreased significantly at the high concentration, and ABA inhibited the activities of soluble acid invertase and acid cell wall invertase, together with earlier attainment of peak values. When ABA was applied at the middle post-pollination stage (MPPS), kernel weight was markedly reduced at the high concentration and mildly increased at low concentrations, an effect regulated through sink activity. The inhibitory effect of the high concentration and the mild stimulatory effect of low concentrations on sucrose synthase and starch synthase activities were noted, but a peak level of ADP-glucose pyrophosphorylase (AGPase) was stimulated in all ABA treatments. Interestingly, AGPase peak values were advanced by low concentrations and postponed by the high concentration. In addition, compared with the control, the weights in the low-concentration treatments did not differ significantly at either stage, whereas the weight loss from the high concentration applied at the EPPS was considerably greater than that at the MPPS, though neither led to kernel abortion. The temporal- and dose-dependent impacts of ABA reveal a complex process of maize kernel growth and development. Copyright © 2017 Elsevier GmbH. All rights reserved.

  5. Full dose reduction potential of statistical iterative reconstruction for head CT protocols in a predominantly pediatric population

    PubMed Central

    Mirro, Amy E.; Brady, Samuel L.; Kaufman, Robert. A.

    2016-01-01

Purpose To implement the maximum level of statistical iterative reconstruction that can be used to establish dose-reduced head CT protocols in a primarily pediatric population. Methods Select head examinations (brain, orbits, sinus, maxilla and temporal bones) were investigated. Dose-reduced head protocols using an adaptive statistical iterative reconstruction (ASiR) were compared for image quality with the original filtered back projection (FBP) reconstructed protocols in phantom using the following metrics: image noise frequency (change in perceived appearance of noise texture), image noise magnitude, contrast-to-noise ratio (CNR), and spatial resolution. Dose reduction estimates were based on computed tomography dose index (CTDIvol) values. Patient CTDIvol and image noise magnitude were assessed in 737 examinations before and after dose reduction. Results Image noise texture was acceptable up to 60% ASiR for the Soft reconstruction kernel (at both 100 and 120 kVp), and up to 40% ASiR for the Standard reconstruction kernel. Implementation of 40% and 60% ASiR led to an average reduction in CTDIvol of 43% for brain, 41% for orbits, 30% for maxilla, 43% for sinus, and 42% for temporal bone protocols for patients between 1 month and 26 years, while maintaining an average noise magnitude difference of 0.1% (range: −3% to 5%), improving CNR of low contrast soft tissue targets, and improving spatial resolution of high contrast bony anatomy, as compared to FBP. Conclusion This study demonstrates a methodology for maximizing patient dose reduction while maintaining image quality using statistical iterative reconstruction for a primarily pediatric population undergoing head CT examination. PMID:27056425

  6. Calculation of electron and isotopes dose point kernels with fluka Monte Carlo code for dosimetry in nuclear medicine therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Botta, F; Di Dia, A; Pedroli, G

The calculation of patient-specific dose distribution can be achieved by Monte Carlo simulations or by analytical methods. In this study, the fluka Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, fluka has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by means of calculation of a representative parameter and comparison with reference data. The dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the parameter of choice. Methods: fluka DPKs have been calculated in both water and compact bone for monoenergetic electrons (10 keV–3 MeV) and for beta emitting isotopes commonly used for therapy (89Sr, 90Y, 131I, 153Sm, 177Lu, 186Re, and 188Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and deposited energy has been tallied in concentric shells. fluka outcomes have been compared to penelope v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, comparison with data from the literature (etran, geant4, mcnpx) has been done.
Maximum percentage differences within 0.8·RCSDA and 0.9·RCSDA for monoenergetic electrons (RCSDA being the continuous slowing down approximation range) and within 0.8·X90 and 0.9·X90 for isotopes (X90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·RCSDA and 0.9·X90 for electrons and isotopes, respectively. Results: Concerning monoenergetic electrons, within 0.8·RCSDA (where 90%–97% of the particle energy is deposited), fluka and penelope agree mostly within 7%, except for 10 and 20 keV electrons (12% in water, 8.3% in bone). The discrepancies between fluka and the other codes are of the same order of magnitude as those observed when comparing the other codes among themselves, and can be attributed to the different simulation algorithms. When considering the beta spectra, discrepancies notably reduce: within 0.9·X90, fluka and penelope differ by less than 1% in water and less than 2% in bone for any of the isotopes considered here. Complete data of fluka DPKs are given as Supplementary Material as a tool to perform dosimetry by analytical point kernel convolution. Conclusions: fluka provides reliable results when transporting electrons in the low energy range, proving to be an adequate tool for nuclear medicine dosimetry.
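The shell tally the record describes (energy deposited in concentric shells around a point isotropic source) can be sketched as a simple post-processing step over simulated deposition events. A minimal NumPy illustration, assuming arrays of event radii and deposited energies (not fluka output format):

```python
import numpy as np

def dose_point_kernel(r, edep, r_max, n_shells):
    """Tally deposition events at radii `r` (energies `edep`) into
    equal-thickness concentric spherical shells around a point
    isotropic source; return energy per unit shell volume."""
    edges = np.linspace(0.0, r_max, n_shells + 1)
    e_shell, _ = np.histogram(r, bins=edges, weights=edep)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    return e_shell / shell_vol
```

Dividing each shell's summed energy by its volume converts the tally into the radial dose profile that is then compared between codes, often after scaling radii by R_CSDA or X90.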

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kieselmann, J; Bartzsch, S; Oelfke, U

Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging due to the arising dose gradients and therapeutically important dose ranges. Monte Carlo (MC) simulations, often used as the gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While the peak doses of all dose calculation methods agreed within less than 4% deviation, the proposed approach surpassed the simple convolution algorithm in scatter-dose accuracy by a factor of up to 3. In a treatment geometry similar to possible future clinical situations, differences between Monte Carlo and the differential geometry algorithm were less than 3%. At the same time the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve dose calculation accuracy relative to the CA method, especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20×20 mm² field on a 3.4 GHz machine with 8 GB of RAM remained below 15 minutes. Parallelisation and optimisation of the algorithm could lead to further significant reductions in calculation time.
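The "simple convolution algorithm (CA)" used as the baseline in records like this one amounts to convolving a TERMA grid with an energy deposition kernel, which is fast but inaccurate at interfaces because the kernel is spatially invariant. A minimal FFT-based sketch of that baseline (illustrative only; it is not the warped-kernel method):

```python
import numpy as np

def convolve_dose(terma, kernel):
    """Dose in a homogeneous medium as the circular convolution of a
    TERMA grid with a (spatially invariant) energy deposition kernel,
    computed via the FFT convolution theorem."""
    return np.real(np.fft.ifftn(np.fft.fftn(terma) * np.fft.fftn(kernel)))
```

Because the kernel here is the same everywhere, heterogeneity corrections must be bolted on (e.g. density scaling); the differential-geometry approach instead warps the space around each interaction point.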

  8. Insecticidal effect and impact of fitness of three diatomaceous earths on different maize hybrids for the eco-friendly control of the invasive stored-product pest Prostephanus truncatus (Horn).

    PubMed

    Kavallieratos, Nickolas G; Athanassiou, Christos G; Peteinatos, Gerassimos G; Boukouvala, Maria C; Benelli, Giovanni

    2018-04-01

Diatomaceous earths (DEs) are able to successfully protect grain commodities from noxious stored-product insect and mite infestations; however, their effectiveness may be moderated by the grain hybrid or variety to which they are applied. There is a gap of information on the comparative efficacy of different DEs when applied to different maize hybrids against Prostephanus truncatus (Horn). Therefore, here we tested three commercially available DEs (DEA-P at 75 and 150 ppm, Protect-It at 500 ppm, and PyriSec at 500 ppm) on five different maize hybrids (Calaria, Doxa, Rio Grande, Sisco, and Studio) for the control of P. truncatus adults in terms of mortality (at 7 and 14 days), progeny production, properties of the infested maize hybrids (number and weight of kernels with or without holes, number of holes per kernel) and the adherence level of the tested DEs to the kernels. DEA-P was very effective at 75 ppm, whereas a considerable proportion of the exposed P. truncatus adults was still alive after 14 days of exposure on all maize hybrids treated with 500 ppm of Protect-It or PyriSec, even though that rate was 3.3 times higher than the maximal tested dose of DEA-P. Apart from parental mortality, DEA-P was able to reduce P. truncatus progeny production in all hybrids, contrary to Protect-It or PyriSec. The adherence ratios to all maize hybrids were always higher for DEA-P than for Protect-It or PyriSec. The highest numbers (and weights) of kernels without holes were noticed after treatment with DEA-P. Doxa and Sisco performed better than Calaria, Rio Grande, or Studio, based on the differences found in the numbers of kernels without holes in the DEA-P and Protect-It treatments. Overall, the findings of our study indicate the high potential of DEA-P at low doses as a protectant of different maize hybrids against P. truncatus infestations, a fact that could help the eco-friendly management of this noxious species in the stored-product environment.

  9. Carbon partitioning between oil and carbohydrates in developing oat (Avena sativa L.) seeds.

    PubMed

    Ekman, Asa; Hayden, Daniel M; Dehesh, Katayoon; Bülow, Leif; Stymne, Sten

    2008-01-01

    Cereals accumulate starch in the endosperm as their major energy reserve in the grain. In most cereals the embryo, scutellum, and aleurone layer are high in oil, but these tissues constitute a very small part of the total seed weight. However, in oat (Avena sativa L.) most of the oil in kernels is deposited in the same endosperm cells that accumulate starch. Thus oat endosperm is a desirable model system to study the metabolic switches responsible for carbon partitioning between oil and starch synthesis. A prerequisite for such investigations is the development of an experimental system for oat that allows for metabolic flux analysis using stable and radioactive isotope labelling. An in vitro liquid culture system, developed for detached oat panicles and optimized to mimic kernel composition during different developmental stages in planta, is presented here. This system was subsequently used in analyses of carbon partitioning between lipids and carbohydrates by the administration of 14C-labelled sucrose to two cultivars having different amounts of kernel oil. The data presented in this study clearly show that a higher amount of oil in the high-oil cultivar compared with the medium-oil cultivar was due to a higher proportion of carbon partitioning into oil during seed filling, predominantly at the earlier stages of kernel development.

  10. Evaluation of Inhaled Versus Deposited Dose Using the Exponential Dose-Response Model for Inhalational Anthrax in Nonhuman Primate, Rabbit, and Guinea Pig.

    PubMed

    Gutting, Bradford W; Rukhin, Andrey; Mackie, Ryan S; Marchette, David; Thran, Brandolyn

    2015-05-01

The application of the exponential model is extended by the inclusion of new nonhuman primate (NHP), rabbit, and guinea pig dose-lethality data for inhalation anthrax. Because deposition is a critical step in the initiation of inhalation anthrax, inhaled doses may not provide the most accurate cross-species comparison. For this reason, species-specific deposition factors were derived to translate inhaled dose to deposited dose. Four NHP, three rabbit, and two guinea pig data sets were utilized. Results from species-specific pooling analysis suggested all four NHP data sets could be pooled into a single NHP data set, which was also true for the rabbit and guinea pig data sets. The three species-specific pooled data sets could not be combined into a single generic mammalian data set. For inhaled dose, NHPs were the most sensitive species (lowest relative LD50) and rabbits the least. Improved inhaled LD50s proposed for use in risk assessment are 50,600, 102,600, and 70,800 inhaled spores for NHP, rabbit, and guinea pig, respectively. Lung deposition factors were estimated for each species using published deposition data from Bacillus spore exposures, particle deposition studies, and computer modeling. Deposition was estimated at 22%, 9%, and 30% of the inhaled dose for NHP, rabbit, and guinea pig, respectively. When the inhaled dose was adjusted to reflect deposited dose, the rabbit animal model appears the most sensitive and the guinea pig the least sensitive species. © 2014 Society for Risk Analysis.
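The inhaled-to-deposited translation in this record is a simple multiplication by a species-specific deposition factor, and the exponential dose-response model has the closed form P = 1 − exp(−k·dose). A minimal sketch using the LD50s and deposition factors quoted in the abstract (the function names and the value of k are illustrative):

```python
import math

def p_response(dose, k):
    """Exponential dose-response model: P(dose) = 1 - exp(-k * dose)."""
    return 1.0 - math.exp(-k * dose)

def deposited_dose(inhaled, deposition_factor):
    """Translate an inhaled spore dose into a deposited dose using a
    species-specific lung deposition factor."""
    return inhaled * deposition_factor

# Deposited LD50s implied by the reported inhaled LD50s and factors:
nhp    = deposited_dose(50600, 0.22)   # non-human primate
rabbit = deposited_dose(102600, 0.09)
gpig   = deposited_dose(70800, 0.30)
```

On a deposited-dose basis the rabbit's LD50 comes out lowest, reproducing the abstract's conclusion that the sensitivity ranking flips once deposition is accounted for.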

  11. Increased acetylcholine esterase activity produced by the administration of an aqueous extract of the seed kernel of Thevetia peruviana and its role on acute and subchronic intoxication in mice

    PubMed Central

    Marroquín-Segura, Rubén; Calvillo-Esparza, Ricardo; Mora-Guevara, José Luis Alfredo; Tovalín-Ahumada, José Horacio; Aguilar-Contreras, Abigail; Hernández-Abad, Vicente Jesús

    2014-01-01

Background: The real mechanism of Thevetia peruviana poisoning remains unclear. Cholinergic activity is important for the regulation of cardiac function; however, the effect of T. peruviana on cholinergic activity is not well-known. Objective: To study the effect of the acute administration of an aqueous extract of the seed kernel of T. peruviana on acetylcholine esterase (AChE) activity in CD1 mice, as well as its implications in the sub-chronic toxicity of the extract. Materials and Methods: A dose of 100 mg/kg of the extract was administered to CD1 mice and after 7 days, serum was obtained for ceruloplasmin (CP) quantitation and liver function tests. Another group of mice received a 50 mg/kg dose of the extract 3 times within a 1 h interval, and AChE activity was determined for those animals. Heart tissue histological preparations were obtained from a group of mice that received a daily 50 mg/kg dose of the extract over a 30-day period. Results: CP levels for the treated group were higher than those for the control group (Student's t-test, P ≤ 0.001). AChE activity in the treated group was significantly higher than in the control group (Tukey test, control vs. T. peruviana, P ≤ 0.001). Heart tissue histological preparations showed leukocyte infiltrates and necrotic areas, consistent with infarcts. Conclusion: The increased levels of AChE and the heart tissue infiltrative lesions induced by the aqueous seed kernel extract of T. peruviana explain in part the poisoning caused by this plant, which can be related to an inflammatory process. PMID:24914300

  12. Increased acetylcholine esterase activity produced by the administration of an aqueous extract of the seed kernel of Thevetia peruviana and its role on acute and subchronic intoxication in mice.

    PubMed

    Marroquín-Segura, Rubén; Calvillo-Esparza, Ricardo; Mora-Guevara, José Luis Alfredo; Tovalín-Ahumada, José Horacio; Aguilar-Contreras, Abigail; Hernández-Abad, Vicente Jesús

    2014-01-01

The real mechanism of Thevetia peruviana poisoning remains unclear. Cholinergic activity is important for the regulation of cardiac function; however, the effect of T. peruviana on cholinergic activity is not well-known. We studied the effect of the acute administration of an aqueous extract of the seed kernel of T. peruviana on acetylcholine esterase (AChE) activity in CD1 mice, as well as its implications in the sub-chronic toxicity of the extract. A dose of 100 mg/kg of the extract was administered to CD1 mice and after 7 days, serum was obtained for ceruloplasmin (CP) quantitation and liver function tests. Another group of mice received a 50 mg/kg dose of the extract 3 times within a 1 h interval, and AChE activity was determined for those animals. Heart tissue histological preparations were obtained from a group of mice that received a daily 50 mg/kg dose of the extract over a 30-day period. CP levels for the treated group were higher than those for the control group (Student's t-test, P ≤ 0.001). AChE activity in the treated group was significantly higher than in the control group (Tukey test, control vs. T. peruviana, P ≤ 0.001). Heart tissue histological preparations showed leukocyte infiltrates and necrotic areas, consistent with infarcts. The increased levels of AChE and the heart tissue infiltrative lesions induced by the aqueous seed kernel extract of T. peruviana explain in part the poisoning caused by this plant, which can be related to an inflammatory process.

  13. Maize kernel antioxidants and their potential involvement in Fusarium ear rot resistance.

    PubMed

    Picot, Adeline; Atanasova-Pénichon, Vessela; Pons, Sebastien; Marchegay, Gisèle; Barreau, Christian; Pinson-Gadais, Laëtitia; Roucolle, Joël; Daveau, Florie; Caron, Daniel; Richard-Forget, Florence

    2013-04-10

The potential involvement of antioxidants (α-tocopherol, lutein, zeaxanthin, β-carotene, and ferulic acid) in the resistance of maize varieties to Fusarium ear rot was the focus of this study. These antioxidants were present at all maize kernel stages, indicating that the fumonisin-producing fungi (mainly Fusarium verticillioides and Fusarium proliferatum) are likely to encounter them during ear colonization. The effect of these compounds on fumonisin biosynthesis was studied in F. verticillioides liquid cultures. In carotenoid-treated cultures, no inhibitory effect on fumonisin accumulation was observed, while potent inhibitory activity was obtained for sublethal doses of α-tocopherol (0.1 mM) and ferulic acid (1 mM). Using a set of genotypes with moderate to high susceptibility to Fusarium ear rot, ferulic acid was significantly lower in immature kernels of the very susceptible group. No such relation existed for tocopherols and carotenoids. Also, ferulic acid in immature kernels ranged from 3 to 8.5 mg/g, i.e., at levels consistent with the in vitro inhibitory concentration. Overall, our data support the idea that ferulic acid may contribute to resistance to Fusarium ear rot and/or fumonisin accumulation.

  14. Virtual reality based adaptive dose assessment method for arbitrary geometries in nuclear facility decommissioning.

    PubMed

    Liu, Yong-Kuo; Chao, Nan; Xia, Hong; Peng, Min-Jun; Ayodeji, Abiodun

    2018-05-17

This paper presents an improved and efficient virtual reality-based adaptive dose assessment method (VRBAM) applicable to the cutting and dismantling tasks in nuclear facility decommissioning. The method combines the modeling strength of virtual reality with the flexibility of adaptive technology. The initial geometry is designed with three-dimensional computer-aided design tools, and a hybrid model composed of cuboids and a point cloud is generated automatically from the virtual model of the object. To improve the efficiency of the dose calculation while retaining accuracy, the hybrid model is converted to a weighted point-cloud model, and the point kernels are generated by adaptively simplifying the weighted point-cloud model according to the detector position, an approach that is suitable for arbitrary geometries. The dose rates are calculated with the point-kernel method. To account for radiation scattering effects, buildup factors are calculated with the geometric-progression fitting formula. The geometric modeling capability of VRBAM was verified by simulating basic geometries, which included a convex surface, a concave surface, a flat surface, and their combination. The simulation results show that VRBAM is more flexible than other approaches and superior in modeling complex geometries. In this paper, the computation time and dose rate results obtained from the proposed method were also compared with those obtained using the MCNP code and an earlier virtual reality-based method (VRBM) developed by the same authors. © 2018 IOP Publishing Ltd.
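The point-kernel method referenced here and in the following record evaluates, for each source point, an uncollided term attenuated over distance and scaled by a buildup factor. A minimal sketch of that evaluation, with the buildup factor left as a caller-supplied function standing in for the geometric-progression fit (the function names are illustrative):

```python
import math

def point_kernel_dose_rate(source, mu, r, buildup=lambda mfp: 1.0):
    """Point-kernel dose rate at distance r from an isotropic point
    source: the uncollided term exp(-mu*r)/(4*pi*r^2) scaled by a
    buildup factor B(mu*r) that accounts for scattered radiation.
    `buildup` defaults to 1 (no scatter); in practice it would be a
    geometric-progression fit for the material and energy."""
    mfp = mu * r  # optical thickness in mean free paths
    return source * buildup(mfp) * math.exp(-mfp) / (4.0 * math.pi * r * r)
```

Summing this quantity over the adaptively simplified point cloud, with per-point weights, gives the detector dose rate; the buildup factor always increases the result relative to the uncollided term.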

  15. Ford Motor Company NDE facility shielding design.

    PubMed

    Metzger, Robert L; Van Riper, Kenneth A; Jones, Martin H

    2005-01-01

Ford Motor Company proposed the construction of a large non-destructive evaluation laboratory for radiography of automotive power train components. The authors were commissioned to design the shielding and to survey the completed facility for compliance with radiation dose limits for occupationally and non-occupationally exposed personnel. The two X-ray sources are Varian Linatron 3000 accelerators operating at 9-11 MV. One performs computed tomography of automotive transmissions, while the other does real-time radiography of operating engines and transmissions. The shield thicknesses for the primary barrier and all secondary barriers were determined by point-kernel techniques. Point-kernel techniques did not work well for skyshine calculations or for locations where multiple sources (e.g. tube head leakage and various scatter fields) contributed to doses. Shielding for these areas was determined using transport calculations. A number of MCNP [Briesmeister, J. F. MCNP: a general Monte Carlo N-particle transport code, version 4B. Los Alamos National Laboratory Manual (1997)] calculations focused on skyshine estimates and the office areas. Measurements on the operational facility confirmed the shielding calculations.

  16. Statistic and dosimetric criteria to assess the shift of the prescribed dose for lung radiotherapy plans when integrating point kernel models in medical physics: are we ready?

    PubMed

    Chaikh, Abdulhamid; Balosso, Jacques

    2016-12-01

To apply statistical bootstrap analysis and dosimetric criteria to assess the change of prescribed dose (PD) for lung cancer needed to maintain the same clinical results when using new generations of dose calculation algorithms. Nine lung cancer cases were studied. For each patient, three treatment plans were generated using exactly the same beam arrangements. In plan 1, the dose was calculated using the pencil beam convolution (PBC) algorithm with heterogeneity correction turned on, using modified Batho (PBC-MB). In plan 2, the dose was calculated using the anisotropic analytical algorithm (AAA) with the same PD as plan 1. In plan 3, the dose was calculated using AAA with the monitor units (MUs) obtained from PBC-MB as input. The dosimetric criteria include MUs, delivered dose at the isocentre (Diso), and calculated dose to 95% of the target volume (D95). The bootstrap method was used to assess the significance of the dose differences and to accurately estimate the 95% confidence interval (95% CI). Wilcoxon and Spearman's rank tests were used to calculate P values and the correlation coefficient (ρ). A statistically significant dose difference was found using the point kernel model. A good correlation was observed between both algorithm types, with ρ>0.9. When using AAA instead of PBC-MB, an adjustment of the PD at the isocentre is suggested. For a given set of patients, we assessed the need to readjust the PD for lung cancer using dosimetric indices and the bootstrap statistical method. Thus, if the goal is to maintain the same clinical results, the PD for lung tumors has to be adjusted with AAA. According to our simulation, we suggest readjusting the PD by 5% and optimizing beam arrangements to better protect the organs at risk (OARs).
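The bootstrap procedure this record relies on resamples the small set of per-patient dose differences with replacement to estimate a confidence interval for the mean. A minimal stdlib sketch, assuming hypothetical per-patient D95 differences (the numbers below are made up for illustration, not the study's data):

```python
import random
import statistics

def bootstrap_ci_mean(diffs, n_boot=10000, alpha=0.05, seed=1):
    """Percentile-bootstrap (1 - alpha) confidence interval for the
    mean of a small sample of paired dose differences."""
    rng = random.Random(seed)
    means = sorted(
        statistics.fmean(rng.choices(diffs, k=len(diffs)))  # resample with replacement
        for _ in range(n_boot)
    )
    return means[int(alpha / 2 * n_boot)], means[int((1 - alpha / 2) * n_boot) - 1]

# Hypothetical per-patient D95 differences (AAA minus PBC-MB), in percent:
diffs = [-4.1, -5.3, -3.8, -6.0, -4.9, -5.5, -4.4, -5.1, -4.7]
lo, hi = bootstrap_ci_mean(diffs)
```

If the resulting 95% CI excludes zero, the dose difference between the algorithms is taken as significant, supporting a prescription adjustment of roughly the CI's central value.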

  17. SU-C-BRC-04: Efficient Dose Calculation Algorithm for FFF IMRT with a Simplified Bivariate Gaussian Source Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, F; Park, J; Barraclough, B

    2016-06-15

    Purpose: To develop an efficient and accurate independent dose calculation algorithm with a simplified analytical source model for the quality assurance and safe delivery of Flattening Filter Free (FFF)-IMRT on an Elekta Versa HD. Methods: The source model consisted of a point source and a 2D bivariate Gaussian source, respectively modeling the primary photons and the combined effect of head scatter, monitor chamber backscatter and collimator exchange effect. The in-air fluence was firstly calculated by back-projecting the edges of beam defining devices onto the source plane and integrating the visible source distribution. The effect of the rounded MLC leaf end,more » tongue-and-groove and interleaf transmission was taken into account in the back-projection. The in-air fluence was then modified with a fourth degree polynomial modeling the cone-shaped dose distribution of FFF beams. Planar dose distribution was obtained by convolving the in-air fluence with a dose deposition kernel (DDK) consisting of the sum of three 2D Gaussian functions. The parameters of the source model and the DDK were commissioned using measured in-air output factors (Sc) and cross beam profiles, respectively. A novel method was used to eliminate the volume averaging effect of ion chambers in determining the DDK. Planar dose distributions of five head-and-neck FFF-IMRT plans were calculated and compared against measurements performed with a 2D diode array (MapCHECK™) to validate the accuracy of the algorithm. Results: The proposed source model predicted Sc for both 6MV and 10MV with an accuracy better than 0.1%. With a stringent gamma criterion (2%/2mm/local difference), the passing rate of the FFF-IMRT dose calculation was 97.2±2.6%. Conclusion: The removal of the flattening filter represents a simplification of the head structure which allows the use of a simpler source model for very accurate dose calculation. 
The proposed algorithm offers an effective way to ensure the safe delivery of FFF-IMRT.
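    The planar-dose step described in this record (in-air fluence convolved with a DDK modeled as the sum of three 2D Gaussians) can be sketched as follows; the grid size, Gaussian amplitudes and widths, and the square field below are illustrative placeholders, not the commissioned values from the abstract.

```python
import numpy as np

# Illustrative sketch: convolve an in-air fluence with a DDK modeled as
# the sum of three 2D Gaussians. All parameters are placeholders.
N, spacing = 64, 1.0  # 64 x 64 grid, 1 mm spacing
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2] * spacing

def gaussian_2d(amp, sigma):
    return amp * np.exp(-(x**2 + y**2) / (2.0 * sigma**2))

# DDK as a sum of three 2D Gaussians (hypothetical amplitudes/widths).
ddk = gaussian_2d(1.0, 1.5) + gaussian_2d(0.3, 5.0) + gaussian_2d(0.05, 15.0)
ddk /= ddk.sum()  # normalize the kernel to unit integral

# Simple open-field in-air fluence: a 20 x 20 mm square of unit fluence.
fluence = np.zeros((N, N))
fluence[N // 2 - 10:N // 2 + 10, N // 2 - 10:N // 2 + 10] = 1.0

# FFT-based (circular) convolution yields the planar dose distribution.
dose = np.real(np.fft.ifft2(np.fft.fft2(fluence)
                            * np.fft.fft2(np.fft.ifftshift(ddk))))
```

    Because the kernel is normalized, the circular convolution conserves the total signal, which gives a quick sanity check on the implementation.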

  18. Room model based Monte Carlo simulation study of the relationship between the airborne dose rate and the surface-deposited radon progeny.

    PubMed

    Sun, Kainan; Field, R William; Steck, Daniel J

    2010-01-01

    The quantitative relationships between radon gas concentration, the surface-deposited activities of various radon progeny, the airborne radon progeny dose rate, and various residential environmental factors were investigated through a Monte Carlo simulation study based on the extended Jacobi room model. Airborne dose rates were calculated from the unattached and attached potential alpha-energy concentrations (PAECs) using two dosimetric models. Surface-deposited (218)Po and (214)Po were significantly correlated with radon concentration, PAECs, and airborne dose rate (p-values <0.0001) in both non-smoking and smoking environments. However, in non-smoking environments, the deposited radon progeny were not highly correlated to the attached PAEC. In multiple linear regression analysis, natural logarithm transformation was performed for airborne dose rate as a dependent variable, as well as for radon and deposited (218)Po and (214)Po as predictors. In non-smoking environments, after adjusting for the effect of radon, deposited (214)Po was a significant positive predictor for one dose model (RR 1.46, 95% CI 1.27-1.67), while deposited (218)Po was a negative predictor for the other dose model (RR 0.90, 95% CI 0.83-0.98). In smoking environments, after adjusting for radon and room size, deposited (218)Po was a significant positive predictor for one dose model (RR 1.10, 95% CI 1.02-1.19), while a significant negative predictor for the other model (RR 0.90, 95% CI 0.85-0.95). After adjusting for radon and deposited (218)Po, significant increases of 1.14 (95% CI 1.03-1.27) and 1.13 (95% CI 1.05-1.22) in the mean dose rates were found for large room sizes relative to small room sizes in the different dose models.

  19. Changes in texture, colour and fatty acid composition of male and female pig shoulder fat due to different dietary fat sources.

    PubMed

    Hallenstvedt, E; Kjos, N P; Overland, M; Thomassen, M

    2012-03-01

    Two experiments, each with 72 slaughter pigs, were conducted. Entire males and females were fed individually on a restricted regime. Palm kernel oil, soybean oil and fish oil were used in varying combinations, giving different dietary fat levels (29-80 g/kg) and iodine values ranging from 50 to 131. Shoulder fat was analysed for fatty acid composition (inner and outer layer), firmness and colour. A clear dose-response relationship was seen between fatty acids in the diets and in the shoulder fat. Interestingly, the very long chain n-3 fatty acids seemed to be deposited more efficiently when additional fat was included in the diet. Both high and low dietary iodine values shifted towards less extreme iodine values in the fat. Low-fat diets enhanced de novo synthesis of fatty acids. Males showed a higher percentage of PUFA and a lower percentage of C18:1 and MUFA. Fat firmness, but not colour, was influenced by sex and dietary fat source. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Development of a Spect-Based Three-Dimensional Treatment Planner for Radionuclide Therapy with Iodine -131.

    NASA Astrophysics Data System (ADS)

    Giap, Huan Bosco

    Accurate calculation of absorbed dose to target tumors and normal tissues in the body is an important requirement for establishing fundamental dose-response relationships for radioimmunotherapy. Two major obstacles have been the difficulty in obtaining an accurate patient-specific 3-D activity map in vivo and in calculating the resulting absorbed dose. This study investigated a methodology for 3-D internal dosimetry, which integrates the 3-D biodistribution of the radionuclide acquired from SPECT with a dose-point kernel convolution technique to provide the 3-D distribution of absorbed dose. Accurate SPECT images were reconstructed with appropriate methods for noise filtering, attenuation correction, and Compton scatter correction. The SPECT images were converted into activity maps using a calibration phantom. The activity map was convolved with a 131I dose-point kernel using a 3-D fast Fourier transform to yield a 3-D distribution of absorbed dose. The 3-D absorbed dose map was then processed to provide the absorbed dose distribution in regions of interest. This methodology can provide heterogeneous distributions of absorbed dose in volumes of any size and shape with nonuniform distributions of activity. Comparison of the activities quantitated by our SPECT methodology to the true activities in an Alderson abdominal phantom (with spleen, liver, and spherical tumor) yielded errors of -16.3% to 4.4%. Volume quantitation errors ranged from -4.0 to 5.9% for volumes greater than 88 ml. The percentage differences between the average absorbed dose rates calculated by this methodology and the MIRD S-values were 9.1% for liver, 13.7% for spleen, and 0.9% for the tumor. Good agreement (percent differences less than 8%) was found between the absorbed dose due to penetrating radiation calculated from this methodology and TLD measurements. 
More accurate estimates of the 3-D distribution of absorbed dose can be used as a guide in specifying the minimum activity to be administered to patients to deliver a prescribed absorbed dose to the tumor without exceeding the toxicity limits of normal tissues.
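    The dose-point-kernel convolution at the heart of this treatment planner can be illustrated with a minimal 3D FFT sketch; the exponential radial kernel below stands in for a real 131I dose-point kernel, and the uniform cube stands in for a SPECT-derived activity map.

```python
import numpy as np

# Minimal sketch of dose-point-kernel convolution via 3D FFT.
# The kernel shape is a placeholder, not a measured 131I kernel.
N = 32
z, y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2, -N // 2:N // 2].astype(float)
r = np.sqrt(x**2 + y**2 + z**2)

kernel = np.exp(-r / 2.0) / (r**2 + 1.0)  # placeholder radial fall-off
kernel /= kernel.sum()                    # normalize to unit integral

activity = np.zeros((N, N, N))
activity[N // 2 - 3:N // 2 + 3,
         N // 2 - 3:N // 2 + 3,
         N // 2 - 3:N // 2 + 3] = 1.0     # uniform cube "organ" activity

# 3D FFT convolution: activity map (x) dose-point kernel -> dose map.
dose = np.real(np.fft.ifftn(np.fft.fftn(activity)
                            * np.fft.fftn(np.fft.ifftshift(kernel))))
```

    The same three lines of FFT code scale to clinical matrix sizes, which is why the FFT route is preferred over direct summation for voxelized dosimetry.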

  1. Theoretical study of the influence of a heterogeneous activity distribution on intratumoral absorbed dose distribution.

    PubMed

    Bao, Ande; Zhao, Xia; Phillips, William T; Woolley, F Ross; Otto, Randal A; Goins, Beth; Hevezi, James M

    2005-01-01

    Radioimmunotherapy of hematopoietic cancers and micrometastases has been shown to have significant therapeutic benefit. The treatment of solid tumors with radionuclide therapy has been less successful. Previous investigations of intratumoral activity distribution and studies on intratumoral drug delivery suggest that a probable reason for the disappointing results in solid tumor treatment is nonuniform intratumoral distribution coupled with restricted intratumoral drug penetrance, thus inhibiting antineoplastic agents from reaching the tumor's center. This paper describes a nonuniform intratumoral activity distribution characterized by limited radiolabeled tracer diffusion from the tumor surface to the tumor center. This activity was simulated using techniques that allowed the absorbed dose distributions to be estimated for different intratumoral diffusion capabilities and calculated for tumors of varying diameters. The influences of these absorbed dose distributions on solid tumor radionuclide therapy are also discussed. The absorbed dose distribution was calculated using the dose point kernel method, which provided for the application of a three-dimensional (3D) convolution between a dose rate kernel function and an activity distribution function. These functions were incorporated into 3D matrices with voxels measuring 0.10 x 0.10 x 0.10 mm3. A fast Fourier transform (FFT), multiplication in the frequency domain, and an inverse FFT (iFFT) were then used to carry out this phase of the dose calculation. The absorbed dose distributions for tumors of 1, 3, 5, 10, and 15 mm in diameter were studied. For the therapeutic radionuclides 131I, 186Re, 188Re, and 90Y, the total average dose, center dose, and surface dose for each of the different tumor diameters were reported. The absorbed dose in the nearby normal tissue was also evaluated. 
When the tumor diameters exceed 15 mm, a much lower tumor center dose is delivered compared with tumors between 3 and 5 mm in diameter. Based on these findings, the use of higher beta-energy radionuclides, such as 188Re and 90Y is more effective in delivering a higher absorbed dose to the tumor center at tumor diameters around 10 mm.

  2. Implementation and validation of collapsed cone superposition for radiopharmaceutical dosimetry of photon emitters

    NASA Astrophysics Data System (ADS)

    Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud

    2015-10-01

    Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for the electron, positron and photon components, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. EDKw are linearly scaled by radiological distance, taking into account tissue density heterogeneities. The implementation was tested on 100, 300 and 600 keV mono-energetic photons and on 18F, 99mTc, 131I and 177Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft tissue, lung and bone. The figures of merit were the γ (3%, 3 mm) and γ (5%, 5 mm) criteria, based on comparison of 80 absorbed dose (AD) points per phantom between Monte Carlo simulations and the CC algorithms. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always over 99%. Results with SCC were slightly worse but still good: at least 97% of AD values satisfied the γ (5%, 5 mm) criterion, except for a value of 57% for 99mTc at the lung/bone interface. The CC superposition method for radiopharmaceutical dosimetry is a good alternative to Monte Carlo simulations while reducing computational complexity.
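    A minimal implementation of the gamma figure of merit used above may help readers unfamiliar with it; this sketch evaluates a 1D global gamma with a (3%, 3 mm) criterion by exhaustive search, without the sub-grid interpolation a production tool would add. The test profiles are synthetic.

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing_mm, dose_tol=0.03, dta_mm=3.0):
    """Global 1D gamma pass rate (%); no interpolation between grid points."""
    positions = np.arange(len(ref)) * spacing_mm
    norm = ref.max()  # global normalization dose
    passed = 0
    for i in range(len(ref)):
        dd = (ev - ref[i]) / (dose_tol * norm)     # dose differences
        dta = (positions - positions[i]) / dta_mm  # distances to agreement
        if np.sqrt(dd**2 + dta**2).min() <= 1.0:   # min combined metric
            passed += 1
    return 100.0 * passed / len(ref)

u = np.linspace(-3.0, 3.0, 61)
ref = np.exp(-u**2)               # synthetic reference profile
ev_good = np.exp(-(u - 0.02)**2)  # slightly shifted evaluated profile
ev_bad = 1.5 * ref                # grossly scaled evaluated profile

rate_good = gamma_pass_rate(ref, ev_good, spacing_mm=1.0)
rate_bad = gamma_pass_rate(ref, ev_bad, spacing_mm=1.0)
```

    A near-identical profile passes everywhere, while a 50% dose error passes only in the low-dose tails, which is the behavior the pass-rate percentages in these abstracts summarize.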

  3. Implementation and validation of collapsed cone superposition for radiopharmaceutical dosimetry of photon emitters.

    PubMed

    Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud

    2015-10-21

    Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for the electron, positron and photon components, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. EDKw are linearly scaled by radiological distance, taking into account tissue density heterogeneities. The implementation was tested on 100, 300 and 600 keV mono-energetic photons and on (18)F, (99m)Tc, (131)I and (177)Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft tissue, lung and bone. The figures of merit were the γ (3%, 3 mm) and γ (5%, 5 mm) criteria, based on comparison of 80 absorbed dose (AD) points per phantom between Monte Carlo simulations and the CC algorithms. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always over 99%. Results with SCC were slightly worse but still good: at least 97% of AD values satisfied the γ (5%, 5 mm) criterion, except for a value of 57% for (99m)Tc at the lung/bone interface. The CC superposition method for radiopharmaceutical dosimetry is a good alternative to Monte Carlo simulations while reducing computational complexity.

  4. Optimized Orthovoltage Stereotactic Radiosurgery

    NASA Astrophysics Data System (ADS)

    Fagerstrom, Jessica M.

    Because of its ability to treat intracranial targets effectively and noninvasively, stereotactic radiosurgery (SRS) is a prevalent treatment modality in modern radiation therapy. This work focused on SRS delivering rectangular-function dose distributions, which are desirable for some targets, such as those with functional tissue included within the target volume. To achieve such distributions, this work used fluence modulation and energies lower than those utilized in conventional SRS. First, the relationship between prescription isodose and dose gradients was examined for standard, unmodulated orthovoltage SRS dose distributions. Monte Carlo-generated energy deposition kernels were used to calculate 4π isocentric dose distributions for a polyenergetic orthovoltage spectrum, as well as for monoenergetic orthovoltage beams. The relationship between dose gradients and prescription isodose was found to be field-size and energy dependent, and prescription isodose values that optimize dose gradients were identified. Next, a pencil-beam model was used with a genetic algorithm search heuristic to optimize the spatial distribution of added tungsten filtration within the apertures of cone collimators in a moderately filtered 250 kVp beam. Four cone sizes at three depths were examined with a Monte Carlo model to determine the effects of the optimized modulation compared to open cones, and the simulations found that the optimized cones achieved both improved penumbra and improved flatness statistics at depth compared to the open cones. Prototypes of the filter designs calculated using mathematical optimization techniques and Monte Carlo simulations were then manufactured and inserted into custom-built orthovoltage SRS cone collimators. A positioning system built in-house was used to place the collimator and filter assemblies temporarily in the 250 kVp beam line. 
Measurements were performed in water using radiochromic film scanned with both a standard white light flatbed scanner as well as a prototype laser densitometry system. Measured beam profiles showed that the modulated beams could more closely approach rectangular function dose profiles compared to the open cones. A methodology has been described and implemented to achieve optimized SRS delivery, including the development of working prototypes. Future work may include the construction of a full treatment platform.

  5. Total Ambient Dose Equivalent Buildup Factor Determination for Nbs04 Concrete.

    PubMed

    Duckic, Paulina; Hayes, Robert B

    2018-06-01

    Buildup factors are dimensionless multiplicative factors required by the point kernel method to account for scattered radiation through a shielding material. The accuracy of the point kernel method is strongly affected by how closely the analyzed parameters correspond to the experimental configuration, a correspondence this work attempts to simplify. The point kernel method has not found widespread practical use for neutron shielding calculations because of the complex neutron transport behavior through shielding materials (i.e. the variety of interaction mechanisms that neutrons may undergo while traversing the shield) as well as the non-linear energy dependence of the neutron total cross section. In this work, total ambient dose buildup factors for NBS04 concrete are calculated in terms of neutron and secondary gamma ray transmission factors. The neutron and secondary gamma ray transmission factors are calculated using the MCNP6™ code with updated cross sections. Both transmission factors and buildup factors are given in tabulated form. Practical use of neutron transmission and buildup factors warrants rigorously calculated results with all associated uncertainties. In this work, a sensitivity analysis of neutron transmission factors and total buildup factors with varying water content has been conducted. The analysis showed a significant impact of varying water content in concrete on both neutron transmission factors and total buildup factors. Finally, support vector regression, a machine learning technique, was used to build a model from the calculated data for predicting the buildup factors. The developed model can predict most of the data within 20% relative error.
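    The final regression step can be illustrated as follows. The abstract uses support vector regression; as a self-contained stand-in, this sketch uses RBF kernel ridge regression, a closely related kernel method, on synthetic buildup-factor data (the functional form of the data is invented for illustration, not taken from the paper).

```python
import numpy as np

# Stand-in for the paper's SVR step: RBF kernel ridge regression fitted
# to synthetic "buildup factor vs shield thickness" data.
def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * (a[:, None] - b[None, :])**2)

mfp = np.linspace(0.5, 10.0, 20)   # shield thickness, mean free paths
buildup = 1.0 + 0.8 * mfp**1.2     # synthetic buildup-factor data

lam = 1e-6                          # ridge regularization strength
K = rbf(mfp, mfp)
alpha = np.linalg.solve(K + lam * np.eye(len(mfp)), buildup)

def predict(x):
    """Predict buildup factor at thickness x (mean free paths)."""
    return rbf(np.atleast_1d(np.asarray(x, dtype=float)), mfp) @ alpha

pred = predict(mfp)
max_rel_err = np.max(np.abs(pred - buildup) / buildup)
```

    A kernel machine like this interpolates smooth tabulated data well, which is the property the paper exploits to replace table lookups with a learned model.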

  6. Validation and uncertainty analysis of a pre-treatment 2D dose prediction model

    NASA Astrophysics Data System (ADS)

    Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank

    2018-02-01

    Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on the fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. To estimate the maximum uncertainty, the most deviant measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed good agreement, with, on average, 90.8% and 90.5% of pixels passing a (2%, 2 mm) global gamma analysis, respectively, with a low-dose threshold of 10%. The maximum and overall uncertainty of the model depend on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically, and its uncertainties can be taken into account.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, J; Lindsay, P; University of Toronto, Toronto

    Purpose: Recent progress in small animal radiotherapy systems has provided the foundation for delivering the heterogeneous, millimeter-scale dose distributions demanded by preclinical radiobiology investigations. Despite advances in preclinical dose planning, delivery of highly heterogeneous dose distributions is constrained by the fixed collimation systems and large x-ray focal spot common in small animal radiotherapy systems. This work proposes a dual focal spot dose optimization and delivery method, with a large x-ray focal spot used to deliver homogeneous dose regions and a small focal spot to paint spatially heterogeneous dose regions. Methods: Two-dimensional dose kernels were measured for a 1 mm circular collimator with radiochromic film at 10 mm depth in a solid water phantom for the small and large x-ray focal spots on a recently developed small animal microirradiator. These kernels were used in an optimization framework which segmented a desired dose distribution into low- and high-spatial-frequency regions for delivery by the large and small focal spot, respectively. For each region, the method determined an optimal set of stage positions and beam-on times. The method was demonstrated by optimizing a bullseye pattern consisting of a 0.75 mm radius circular target and 0.5 and 1.0 mm wide rings alternating between 0 and 2 Gy. Results: Compared to a large focal spot technique, the dual focal spot technique improved the optimized dose distribution: 69.2% of the optimized dose was within 0.5 Gy of the intended dose for the large focal spot, compared to 80.6% for the dual focal spot method. The dual focal spot design required 14.0 minutes of optimization and will require 178.3 minutes for automated delivery. 
Conclusion: The dual focal spot optimization and delivery framework is a novel option for delivering conformal and heterogeneous dose distributions at the preclinical level and provides a new experimental option for unique radiobiological investigations. Funding Support: this work is supported by funding from the Natural Sciences and Engineering Research Council of Canada and a Mitacs Accelerate fellowship. Conflict of Interest: Dr. Lindsay and Dr. Jaffray are listed as inventors of the small animal microirradiator described herein. This system has been licensed for commercial development.
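    The optimization step above (finding beam-on times whose kernel superposition matches a target dose pattern) can be sketched in 1D; the Gaussian kernels, stage spacing, and plateau target below are illustrative, and a simple projected-gradient nonnegative least-squares solver stands in for the paper's optimizer.

```python
import numpy as np

# Sketch: solve min ||A t - target|| with t >= 0, where column j of A is
# the dose kernel translated to candidate stage position j. All kernels
# and the 1D target are illustrative placeholders.
n = 41
xs = np.arange(n, dtype=float)  # dose grid positions, mm

def kernel(center, sigma):
    return np.exp(-((xs - center)**2) / (2.0 * sigma**2))

centers = np.arange(2, n - 2, 2, dtype=float)            # stage positions
A = np.stack([kernel(c, 1.5) for c in centers], axis=1)  # kernel matrix

target = np.where(np.abs(xs - n // 2) < 5, 2.0, 0.0)     # 2 Gy plateau

# Projected gradient descent with a step of 1 / ||A^T A||_2.
t = np.zeros(A.shape[1])
step = 1.0 / np.linalg.norm(A.T @ A, 2)
for _ in range(2000):
    t = np.maximum(0.0, t - step * (A.T @ (A @ t - target)))

residual = np.linalg.norm(A @ t - target) / np.linalg.norm(target)
```

    The nonnegativity projection reflects the physical constraint that beam-on times cannot be negative; the residual is dominated by the smeared plateau edges, mirroring the "fraction within 0.5 Gy" metric reported above.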

  8. Multi-Wavelength Spectroscopic Observations of a White Light Flare Produced Directly by Non-thermal Electrons

    NASA Astrophysics Data System (ADS)

    Lee, Kyoung-Sun; Imada, Shinsuke; Watanabe, Kyoko; Bamba, Yumi; Brooks, David

    2017-08-01

    An X1.6 flare on 2014 October 22 was observed by multiple spectrometers in UV, EUV and X-ray (Hinode/EIS, IRIS, and RHESSI) and by multi-wavelength imaging observations (SDO/AIA and HMI). We analyze a bright kernel that produces a white light (WL) flare with continuum enhancement and a hard X-ray (HXR) peak. Taking advantage of the spectroscopic observations of IRIS and Hinode/EIS, we measure the temporal variation of the plasma properties in the bright kernel in the chromosphere and corona. We find that explosive evaporation was observed when the WL emission occurred. The temporal correlation of the WL emission, HXR peak, and evaporation flows indicates that the WL emission was produced by accelerated electrons. We calculated the energy flux deposited by non-thermal electrons (observed by RHESSI) and compared it to the dissipated energy estimated from a chromospheric line (Mg II triplet) observed by IRIS. The deposited energy flux from the non-thermal electrons is about (3-7.7)×10^10 erg cm^-2 s^-1 for a given low-energy cutoff of 30-40 keV, assuming the thick-target model. The energy flux estimated from the changes in temperature in the chromosphere measured using the Mg II subordinate line is about (4.6-6.7)×10^9 erg cm^-2 s^-1: ~6%-22% of the deposited energy. This comparison of estimated energy fluxes implies that the continuum enhancement was directly produced by the non-thermal electrons.

  9. Delivery of propellant soluble drug from a metered dose inhaler.

    PubMed Central

    Ashworth, H L; Wilson, C G; Sims, E E; Wotton, P K; Hardy, J G

    1991-01-01

    The deposition of particulate suspensions delivered from a metered dose inhaler has been investigated extensively. The distribution of propellant, delivered from a metered dose inhaler, was studied by radiolabelling it with technetium-99m hexamethylpropyleneamine oxime. Andersen sampler measurements indicated that half of the dose was associated with particles in the size range 0.5-5 microns diameter. The preparation was administered to healthy subjects by inhalation and deposition was monitored with a gamma camera. Each lung image was divided into an inner, mid, and peripheral zone. The effects on deposition of varying the size of the delivery orifice (0.46, 0.61, and 0.76 mm internal diameters) and the effect of attaching a spacer were assessed. Lung deposition was independent of the orifice size within the actuator. Without the spacer the average dose deposited in the lungs was 39%, with 15% penetrating into the peripheral part of the lungs. Attachment of the spacer to the mouth-piece increased the mean lung deposition to 57% and reduced oropharyngeal deposition. The study has shown that propellant soluble drugs can be delivered efficiently to the lungs from a metered dose inhaler. PMID: 2038731

  10. GRAYSKY-A new gamma-ray skyshine code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witts, D.J.; Twardowski, T.; Watmough, M.H.

    1993-01-01

    This paper describes a new prototype gamma-ray skyshine code GRAYSKY (Gamma-RAY SKYshine) that has been developed at BNFL, as part of an industrially based master of science course, to overcome the problems encountered with SKYSHINEII and RANKERN. GRAYSKY is a point kernel code based on the use of a skyshine response function. The scattering within source or shield materials is accounted for by the use of buildup factors. This is an approximate method of solution, but one that has been shown to produce results that are acceptable for dose rate predictions on operating plants. The novel features of GRAYSKY are as follows: 1. The code is fully integrated with a semianalytical point kernel shielding code, currently under development at BNFL, which offers powerful solid-body modeling capabilities. 2. The geometry modeling also allows the skyshine response function to be used in a manner that accounts for the shielding of air-scattered radiation. 3. Skyshine buildup factors calculated using the skyshine response function have been used, as well as dose buildup factors.
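    The point kernel method with buildup factors that GRAYSKY builds on reduces to a one-line formula: uncollided point-source flux, exponential attenuation through the shield, and a multiplicative buildup factor. The source strength, attenuation coefficient, and the crude linear buildup approximation below are illustrative numbers, not GRAYSKY data.

```python
import numpy as np

def point_kernel_flux(source_strength, mu, thickness, distance):
    """Point kernel with buildup: B(mfp) * S * exp(-mfp) / (4 pi d^2)."""
    mfp = mu * thickness            # shield thickness in mean free paths
    buildup = 1.0 + mfp             # crude linear buildup approximation
    uncollided = source_strength * np.exp(-mfp) / (4.0 * np.pi * distance**2)
    return buildup * uncollided

# Illustrative numbers: a 1e9 photon/s source behind 30 cm of material
# with mu = 0.06 /cm, scored at 200 cm (flux in photons/cm^2/s).
flux_shielded = point_kernel_flux(1e9, mu=0.06, thickness=30.0, distance=200.0)
flux_bare = point_kernel_flux(1e9, mu=0.06, thickness=0.0, distance=200.0)
```

    The buildup factor partially offsets the exponential attenuation, which is exactly the scatter contribution the method cannot compute from first principles and must tabulate.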

  11. Forced Ignition Study Based On Wavelet Method

    NASA Astrophysics Data System (ADS)

    Martelli, E.; Valorani, M.; Paolucci, S.; Zikoski, Z.

    2011-05-01

    The control of ignition in a rocket engine is a critical problem for combustion chamber design. Therefore it is essential to fully understand the mechanism of ignition during its earliest stages. In this paper the characteristics of flame kernel formation and initial propagation in a hydrogen-argon-oxygen mixing layer are studied using 2D direct numerical simulations with detailed chemistry and transport properties. The flame kernel is initiated by adding an energy deposition source term to the energy equation. The effect of unsteady strain rate is studied by imposing a 2D turbulent velocity field, which is initialized by means of a synthetic field. An adaptive wavelet method, based on interpolating wavelets, is used in this study to solve the compressible reactive Navier-Stokes equations. This method provides an alternative means to refine the computational grid points according to local demands of the physical solution. The present simulations show that in the very early instants the kernel perturbed by the turbulent field is characterized by an increased burning area and slightly increased radical formation. In addition, the calculations show that the wavelet technique yields a significant reduction in the number of degrees of freedom necessary to achieve a prescribed solution accuracy.

  12. Embryo and endosperm development in wheat (Triticum aestivum L.) kernels subjected to drought stress.

    PubMed

    Fábián, Attila; Jäger, Katalin; Rakszegi, Mariann; Barnabás, Beáta

    2011-04-01

    The aim of the present work was to reveal the histological alterations triggered in developing wheat kernels by soil drought stress during early seed development, resulting in yield losses at harvest. For this purpose, observations were made on the effect of drought stress, applied in a controlled environment from the 5th to the 9th day after pollination, on the kernel morphology, starch content and grain yield of the drought-sensitive Cappelle Desprez and drought-tolerant Plainsman V winter wheat (Triticum aestivum L.) varieties. As a consequence of water withdrawal, there was a decrease in the size of the embryos and the number of A-type starch granules deposited in the endosperm, while the development of aleurone cells and the degradation of the cell layers surrounding the ovule were significantly accelerated in both genotypes. In addition, the number of B-type starch granules per cell was significantly reduced. Drought stress affected the rate of grain filling, shortened the grain-filling and ripening period, and severely reduced the yield. With respect to the recovery of vegetative tissues, seed set and yield, the drought-tolerant Plainsman V responded significantly better to drought stress than Cappelle Desprez. The reduction in the size of the mature embryos was significantly greater in the sensitive genotype. Compared to Plainsman V, the endosperm cells of Cappelle Desprez accumulated significantly fewer B-type starch granules. In stressed kernels of the tolerant genotype, the accumulation of protein bodies occurred significantly earlier than in the sensitive variety.

  13. A novel approach to EPID-based 3D volumetric dosimetry for IMRT and VMAT QA

    NASA Astrophysics Data System (ADS)

    Alhazmi, Abdulaziz; Gianoli, Chiara; Neppl, Sebastian; Martins, Juliana; Veloza, Stella; Podesta, Mark; Verhaegen, Frank; Reiner, Michael; Belka, Claus; Parodi, Katia

    2018-06-01

    Intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) are relatively complex treatment delivery techniques and require quality assurance (QA) procedures. Pre-treatment dosimetric verification represents a fundamental QA procedure in the daily clinical routine in radiation therapy. The purpose of this study is to develop an EPID-based approach to reconstruct a 3D dose distribution as imparted to a virtual cylindrical water phantom, to be used for plan-specific pre-treatment dosimetric verification of IMRT and VMAT plans. For each depth, the planar 2D dose distributions acquired in air were back-projected and convolved with depth-specific scatter and attenuation kernels. The kernels were obtained by making use of scatter and attenuation models to iteratively estimate the parameters from a set of reference measurements. The derived parameters served as a look-up table for the reconstruction of arbitrary measurements. The summation of the reconstructed 3D dose distributions resulted in the integrated 3D dose distribution of the treatment delivery. The accuracy of the proposed approach was validated on clinical IMRT and VMAT plans by means of gamma evaluation, comparing the reconstructed 3D dose distributions with Octavius measurements. The comparison was carried out using (3%, 3 mm) criteria, scoring 99% and 96% passing rates for IMRT and VMAT, respectively. An accuracy comparable to that of the commercial device for 3D volumetric dosimetry was demonstrated. In addition, five IMRT and five VMAT plans were validated against the 3D dose calculation performed by the TPS in a water phantom using the same passing rate criteria. The median passing rate within the ten treatment plans was 97.3%, and the lowest was 95%. Moreover, the reconstructed 3D distribution is obtained without predictions relying on forward dose calculation and without an external phantom or dosimetric devices. 
Thus, the approach provides a fully automated, fast and easy QA procedure for plan-specific pre-treatment dosimetric verification.

  14. SU-E-T-439: An Improved Formula of Scatter-To-Primary Ratio for Photon Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, T

    2014-06-01

    Purpose: Scatter-to-primary ratio (SPR) is an important dosimetric quantity that describes the contribution from scattered photons in an external photon beam. The purpose of this study is to develop an improved analytical formula to describe SPR as a function of circular field size (r) and depth (d) using Monte Carlo (MC) simulation. Methods: MC simulation was performed for Mohan photon spectra (Co-60, 4, 6, 10, 15, 23 MV) using the EGSnrc code. Point-spread scatter dose kernels in water are generated. The SPR is also calculated using MC simulation as a function of field size for circular fields of radius r and depth d. The doses from forward-scattered and backscattered photons are calculated using a convolution of the point-spread scatter dose kernel, accounting for scatter photons contributing to dose before (z' < d) or after (z' > d) reaching the depth of interest, d, where z' is the location of the scatter photons. The depth dependence of the ratio of the forward scatter and backscatter doses is determined as a function of depth and field size. Results: We are able to improve the existing 3-parameter (a, w, d0) empirical formula for SPR by introducing a depth dependence for one of the parameters, d0, which becomes 0 at deeper depths. The depth dependence of d0 can be directly calculated as the ratio of backscatter to forward scatter doses for otherwise the same field and depth. With the improved empirical formula, we can fit SPR for all megavoltage photon beams to within 2%. The existing 3-parameter formula cannot fit the SPR data for Co-60 to better than 3.1%. Conclusion: An improved empirical formula is developed that fits SPR for all megavoltage photon energies to within 2%.
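    The abstract does not reproduce the 3-parameter (a, w, d0) formula itself, so the sketch below only illustrates the fitting idea with a hypothetical saturating field-size model, SPR(r) = a(1 - exp(-w(r + d0))), recovered from synthetic data by brute-force least squares; the functional form and all numbers are assumptions for illustration.

```python
import numpy as np

# Hypothetical 3-parameter SPR model fitted to synthetic data by
# brute-force least squares over a parameter grid.
radii = np.linspace(1.0, 15.0, 15)                     # field radius, cm
spr_data = 0.5 * (1.0 - np.exp(-0.2 * (radii + 1.0)))  # synthetic SPR values

best = (None, np.inf)
for a in np.linspace(0.1, 1.0, 19):
    for w in np.linspace(0.05, 0.5, 19):
        for d0 in np.linspace(0.0, 2.0, 21):
            model = a * (1.0 - np.exp(-w * (radii + d0)))
            sse = np.sum((model - spr_data)**2)  # sum of squared errors
            if sse < best[1]:
                best = ((a, w, d0), sse)

(a_fit, w_fit, d0_fit), sse_fit = best
```

    In practice one would use a gradient-based least-squares solver; the grid search is used here only to keep the sketch dependency-free.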

  15. Dosimetric effects of seed anisotropy and interseed attenuation for {sup 103}Pd and {sup 125}I prostate implants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chibani, Omar; Williamson, Jeffrey F.; Todor, Dorin

    2005-08-15

    A Monte Carlo study is carried out to quantify the effects of seed anisotropy and interseed attenuation for {sup 103}Pd and {sup 125}I prostate implants. Two idealized and two real prostate implants are considered. Full Monte Carlo simulation (FMCS) of implants (seeds are physically and simultaneously simulated) is compared with isotropic point-source dose-kernel superposition (PSKS) and line-source dose-kernel superposition (LSKS) methods. For clinical pre- and post-procedure implants, the dose to the different structures (prostate, rectum wall, and urethra) is calculated. The discretized volumes of these structures are reconstructed using transrectal ultrasound contours. Local dose differences (PSKS versus FMCS and LSKS versus FMCS) are investigated. The dose contributions from primary versus scattered photons are calculated separately. For {sup 103}Pd, the average absolute total dose difference between FMCS and PSKS can be as high as 7.4% for the idealized model and 6.1% for the clinical preprocedure implant. The total dose difference is lower for {sup 125}I: 4.4% for the idealized model and 4.6% for a clinical post-procedure implant. Average absolute dose differences between LSKS and FMCS are less significant for both seed models: 3 to 3.6% for the idealized models and 2.9 to 3.2% for the clinical plans. Dose differences between PSKS and FMCS are due to the absence of both seed anisotropy and interseed attenuation modeling in the PSKS approach. LSKS accounts for seed anisotropy but not for the interseed effect, leading to systematically overestimated dose values in comparison with the more accurate FMCS method. For both idealized and clinical implants, the dose from scattered photons represents less than one-third of the total dose. For all studied cases, LSKS prostate DVHs overestimate D{sub 90} by 2 to 5% because of the missing interseed attenuation effect. 
    PSKS and LSKS predictions of V{sub 150} and V{sub 200} are overestimated by up to 9% in comparison with the FMCS results. Finally, effects of seed anisotropy and interseed attenuation must be viewed in the context of other significant sources of dose uncertainty, namely seed orientation, source misplacement, prostate morphological changes and tissue heterogeneity.
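The PSKS approach compared in this record, summing an isotropic point-source dose kernel over all seed positions with no interseed attenuation or anisotropy, can be sketched as follows. The inverse-square kernel with water attenuation and the value of mu are illustrative placeholders, not the study's {sup 103}Pd or {sup 125}I kernels.

```python
import numpy as np

def psks_dose(point, seed_positions, dose_kernel):
    """Isotropic point-source dose-kernel superposition (PSKS): each
    seed contributes dose_kernel(r) at distance r, with no interseed
    attenuation and no seed anisotropy (the two effects the study
    quantifies against full Monte Carlo)."""
    diffs = np.asarray(seed_positions, dtype=float) - np.asarray(point, dtype=float)
    r = np.linalg.norm(diffs, axis=1)
    return float(np.sum(dose_kernel(r)))

# Illustrative kernel: inverse-square falloff with exponential
# attenuation in water; mu is a placeholder value.
mu = 0.1  # cm^-1
kernel = lambda r: np.exp(-mu * r) / np.maximum(r, 1e-6) ** 2

seeds = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]  # cm
d_near = psks_dose((0.5, 0.5, 0.0), seeds, kernel)
d_far = psks_dose((5.0, 5.0, 0.0), seeds, kernel)
```

Because each seed is treated as an unperturbed point source, any shadowing of one seed by another is ignored, which is why PSKS systematically overestimates dose relative to FMCS.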

  16. IRIS, Hinode, SDO, and RHESSI Observations of a White Light Flare Produced Directly by Nonthermal Electrons

    NASA Astrophysics Data System (ADS)

    Lee, Kyoung-Sun; Imada, Shinsuke; Watanabe, Kyoko; Bamba, Yumi; Brooks, David H.

    2017-02-01

    An X1.6 flare occurred in active region AR 12192 on 2014 October 22 at 14:02 UT and was observed by Hinode, IRIS, SDO, and RHESSI. We analyze a bright kernel that produces a white light (WL) flare with continuum enhancement and a hard X-ray (HXR) peak. Taking advantage of the spectroscopic observations of IRIS and Hinode/EIS, we measure the temporal variation of the plasma properties in the bright kernel in the chromosphere and corona. We find that explosive evaporation was observed when the WL emission occurred, even though the intensity enhancement in hotter lines is quite weak. The temporal correlation of the WL emission, HXR peak, and evaporation flows indicates that the WL emission was produced by accelerated electrons. To understand the WL emission process, we calculated the energy flux deposited by non-thermal electrons (observed by RHESSI) and compared it to the dissipated energy estimated from a chromospheric line (Mg II triplet) observed by IRIS. The deposited energy flux from the non-thermal electrons is about (3-7.7) × 10^10 erg cm^-2 s^-1 for a given low-energy cutoff of 30-40 keV, assuming the thick-target model. The energy flux estimated from the changes in temperature in the chromosphere measured using the Mg II subordinate line is about (4.6-6.7) × 10^9 erg cm^-2 s^-1: ~6%-22% of the deposited energy. This comparison of estimated energy fluxes implies that the continuum enhancement was directly produced by the non-thermal electrons.
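The quoted ~6%-22% range follows directly from the two flux intervals in the abstract; a quick check, simply dividing the extremes of each interval:

```python
# Energy fluxes quoted in the abstract (erg cm^-2 s^-1)
deposited_lo, deposited_hi = 3.0e10, 7.7e10    # non-thermal electrons (RHESSI)
dissipated_lo, dissipated_hi = 4.6e9, 6.7e9    # chromosphere (Mg II line)

ratio_min = dissipated_lo / deposited_hi   # ~0.06, i.e. ~6%
ratio_max = dissipated_hi / deposited_lo   # ~0.22, i.e. ~22%
```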

  17. Skin dose from radionuclide contamination on clothing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, D.C.; Hussein, E.M.A.; Yuen, P.S.

    1997-06-01

    Skin dose due to radionuclide contamination on clothing is calculated by Monte Carlo simulation of electron and photon radiation transport. Contamination due to a hot particle on selected clothing geometries of a cotton garment is simulated. The effect of backscattering in the surrounding air is taken into account. For each combination of source-clothing geometry, the dose distribution function in the skin, including the dose at tissue depths of 7 mg cm{sup -2} and 1,000 mg cm{sup -2}, is calculated by simulating monoenergetic photon and electron sources. Skin dose due to contamination by a radionuclide is then determined by proper weighting of the monoenergetic dose distribution functions. The results are compared with the VARSKIN point-kernel code for some radionuclides, indicating that the latter code tends to underestimate the dose for gamma and high energy beta sources while it overestimates skin dose for low energy beta sources. 13 refs., 4 figs., 2 tabs.
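The spectrum-weighting step, combining monoenergetic dose distribution functions into a radionuclide-specific skin dose, can be sketched as follows. The two-line spectrum and the toy monoenergetic dose function are hypothetical, for illustration only.

```python
def skin_dose(emissions, mono_dose):
    """Radionuclide skin dose as a yield-weighted sum of
    monoenergetic dose distribution functions:
        D = sum_i y_i * D_mono(E_i)."""
    return sum(y * mono_dose(energy) for energy, y in emissions)

# Hypothetical two-line emission spectrum (energy in MeV, yield per
# decay) and a toy monoenergetic dose function; illustration only,
# not data from the study.
spectrum = [(0.3, 0.8), (0.6, 0.2)]
mono = lambda energy: 2.0 * energy   # toy: dose per decay ~ energy
dose = skin_dose(spectrum, mono)     # 0.8*0.6 + 0.2*1.2 = 0.72
```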

  18. Measurement of soil contamination by radionuclides due to the Fukushima Dai-ichi Nuclear Power Plant accident and associated estimated cumulative external dose estimation.

    PubMed

    Endo, S; Kimura, S; Takatsuji, T; Nanasawa, K; Imanaka, T; Shizuma, K

    2012-09-01

    Soil sampling was carried out at an early stage of the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident. Samples were taken from areas around FDNPP, at four locations northwest of FDNPP, at four schools and in four cities, including Fukushima City. Radioactive contaminants in soil samples were identified and measured by using a Ge detector and included (129m)Te, (129)Te, (131)I, (132)Te, (132)I, (134)Cs, (136)Cs, (137)Cs, (140)Ba and (140)La. The highest soil depositions were measured to the northwest of FDNPP. From these soil deposition data, variations in dose rates over time and the cumulative external doses at the locations for 3 months and 1 y after deposition were estimated. At locations northwest of FDNPP, the external dose rate at 3 months after deposition was 4.8-98 μSv/h and the cumulative dose for 1 y was 51 to 1.0 × 10(3) mSv; the highest values were at Futaba Yamada. At the four schools, which were used as evacuation shelters, and in the four urban cities, the external dose rate at 3 months after deposition ranged from 0.03 to 3.8 μSv/h and the cumulative doses for 1 y ranged from 3 to 40 mSv. The cumulative dose at Fukushima Niihama Park was estimated as the highest in the four cities. The estimated external dose rates and cumulative doses show that careful countermeasures and remediation will be needed as a result of the accident, and detailed measurements of radionuclide deposition densities in soil will be important input data to conduct these activities. Copyright © 2011 Elsevier Ltd. All rights reserved.
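Cumulative external dose estimates of this kind integrate a dose rate that decays with the physical half-lives of the deposited nuclides. A minimal sketch for a radiocaesium-only case follows; the half-lives are standard physical values, but the initial dose rates, the neglect of short-lived nuclides such as (131)I, and the absence of weathering or shielding corrections are simplifying assumptions, not the study's method.

```python
import numpy as np

# Physical half-lives (standard values): Cs-134: 2.06 y, Cs-137: 30.1 y.
LAM_CS134 = np.log(2) / 2.06    # 1/y
LAM_CS137 = np.log(2) / 30.1    # 1/y

def cumulative_dose(d0_cs134, d0_cs137, t_years):
    """Integral of an exponentially decaying external dose rate:
        D(t) = sum_n d0_n/lambda_n * (1 - exp(-lambda_n * t)),
    for initial dose-rate contributions d0_n in mSv/y."""
    return (d0_cs134 / LAM_CS134 * (1.0 - np.exp(-LAM_CS134 * t_years))
            + d0_cs137 / LAM_CS137 * (1.0 - np.exp(-LAM_CS137 * t_years)))

# Illustrative initial dose rates (mSv/y), not values from the study
dose_1y = cumulative_dose(10.0, 5.0, 1.0)
dose_3m = cumulative_dose(10.0, 5.0, 0.25)
```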

  19. A mathematical deconvolution formulation for superficial dose distribution measurement by Cerenkov light dosimetry.

    PubMed

    Brost, Eric Edward; Watanabe, Yoichi

    2018-06-01

    Cerenkov photons are created by high-energy radiation beams used for radiation therapy. In this study, we developed a Cerenkov light dosimetry technique to obtain a two-dimensional dose distribution in a superficial region of medium from the images of Cerenkov photons by using a deconvolution method. An integral equation was derived to represent the Cerenkov photon image acquired by a camera for a given incident high-energy photon beam by using convolution kernels. Subsequently, an equation relating the planar dose at a depth to a Cerenkov photon image using the well-known relationship between the incident beam fluence and the dose distribution in a medium was obtained. The final equation contained a convolution kernel called the Cerenkov dose scatter function (CDSF). The CDSF function was obtained by deconvolving the Cerenkov scatter function (CSF) with the dose scatter function (DSF). The GAMOS (Geant4-based Architecture for Medicine-Oriented Simulations) Monte Carlo particle simulation software was used to obtain the CSF and DSF. The dose distribution was calculated from the Cerenkov photon intensity data using an iterative deconvolution method with the CDSF. The theoretical formulation was experimentally evaluated by using an optical phantom irradiated by high-energy photon beams. The intensity of the deconvolved Cerenkov photon image showed linear dependence on the dose rate and the photon beam energy. The relative intensity showed a field size dependence similar to the beam output factor. Deconvolved Cerenkov images showed improvement in dose profiles compared with the raw image data. In particular, the deconvolution significantly improved the agreement in the high dose gradient region, such as in the penumbra. Deconvolution with a single iteration was found to provide the most accurate solution of the dose. 
    Two-dimensional dose distributions of the deconvolved Cerenkov images agreed well with the reference distributions for both square fields and a multileaf collimator (MLC)-defined, irregularly shaped field. The proposed technique improved the accuracy of Cerenkov photon dosimetry in the penumbra region. The results of this study showed initial validation of the deconvolution method for beam profile measurements in a homogeneous medium. The new formulation accounted for the physical processes of Cerenkov photon transport in the medium more accurately than previously published methods. © 2018 American Association of Physicists in Medicine.
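The iterative deconvolution step can be sketched with a Van Cittert-style update. This is one common scheme, since the abstract does not name the exact algorithm, and the Gaussian stand-in kernel below is not the CDSF.

```python
import numpy as np
from scipy.signal import fftconvolve

def van_cittert(image, kernel, iterations=1, relax=1.0):
    """Iterative deconvolution: f_{k+1} = f_k + relax*(g - f_k * h),
    where g is the measured image and h the blurring kernel. Shown
    for illustration; the paper's exact scheme is not specified."""
    estimate = image.copy()
    for _ in range(iterations):
        estimate = estimate + relax * (image - fftconvolve(estimate, kernel, mode="same"))
    return estimate

# Toy 1D stand-in for a beam profile: a step "field" blurred by a
# normalized Gaussian (NOT the CDSF), then restored with a single
# iteration, matching the abstract's finding that one iteration
# gave the most accurate solution.
x = np.linspace(-5.0, 5.0, 201)
kernel = np.exp(-x**2 / 0.5)
kernel /= kernel.sum()
profile = (np.abs(x) < 2.0).astype(float)
blurred = fftconvolve(profile, kernel, mode="same")
restored = van_cittert(blurred, kernel, iterations=1)
```

For a normalized positive kernel the one-iteration error spectrum is (1-H)^2 times the true profile versus (1-H) for the blurred image, so the restored profile is strictly closer to the truth, with the largest gains in the high-gradient (penumbra) region.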

  20. Olfactory deposition of inhaled nanoparticles in humans

    PubMed Central

    Garcia, Guilherme J. M.; Schroeter, Jeffry D.; Kimbell, Julia S.

    2016-01-01

    Context Inhaled nanoparticles can migrate to the brain via the olfactory bulb, as demonstrated in experiments in several animal species. This route of exposure may be the mechanism behind the correlation between air pollution and human neurodegenerative diseases, including Alzheimer’s disease and Parkinson’s disease. Objectives This manuscript aims to (1) estimate the dose of inhaled nanoparticles that deposit in the human olfactory epithelium during nasal breathing at rest and (2) compare the olfactory dose in humans with our earlier dose estimates for rats. Materials and methods An anatomically-accurate model of the human nasal cavity was developed based on computed tomography scans. The deposition of 1–100 nm particles in the whole nasal cavity and its olfactory region were estimated via computational fluid dynamics (CFD) simulations. Our CFD methods were validated by comparing our numerical predictions for whole-nose deposition with experimental data and previous CFD studies in the literature. Results In humans, olfactory dose of inhaled nanoparticles is highest for 1–2 nm particles with approximately 1% of inhaled particles depositing in the olfactory region. As particle size grows to 100 nm, olfactory deposition decreases to 0.01% of inhaled particles. Discussion and conclusion Our results suggest that the percentage of inhaled particles that deposit in the olfactory region is lower in humans than in rats. However, olfactory dose per unit surface area is estimated to be higher in humans due to their larger minute volume. These dose estimates are important for risk assessment and dose-response studies investigating the neurotoxicity of inhaled nanoparticles. PMID:26194036

  1. Application of support vector machine for the separation of mineralised zones in the Takht-e-Gonbad porphyry deposit, SE Iran

    NASA Astrophysics Data System (ADS)

    Mahvash Mohammadi, Neda; Hezarkhani, Ardeshir

    2018-07-01

    Classification of mineralised zones is an important factor for the analysis of economic deposits. In this paper, the support vector machine (SVM), a supervised learning algorithm, based on subsurface data is proposed for classification of mineralised zones in the Takht-e-Gonbad porphyry Cu-deposit (SE Iran). The effects of the input features are evaluated by calculating accuracy rates on the SVM performance. Ultimately, the SVM model is developed based on the input features, namely lithology, alteration, mineralisation, and level, with a radial basis function (RBF) kernel. Moreover, the optimal values of the parameters λ and C, obtained using the n-fold cross-validation method, are 0.001 and 0.01, respectively. The accuracy of this model is 0.931 for classification of mineralised zones in the Takht-e-Gonbad porphyry deposit. The results of the study confirm the efficiency of the SVM method for classifying mineralised zones.
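The classification setup, an RBF-kernel SVM with parameters chosen by n-fold cross-validation, can be sketched with scikit-learn on synthetic stand-in data. The real borehole features and the reported 0.931 accuracy are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for the subsurface data: two classes (barren vs
# mineralised) drawn as shifted 4-feature Gaussians, standing in for
# lithology, alteration, mineralisation, and level.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 4)),
               rng.normal(2.0, 1.0, (100, 4))])
y = np.array([0] * 100 + [1] * 100)

# n-fold (here 5-fold) cross-validation over the RBF width gamma
# (the abstract's lambda) and the penalty C.
search = GridSearchCV(SVC(kernel="rbf"),
                      {"gamma": [0.001, 0.01, 0.1, 1.0],
                       "C": [0.01, 0.1, 1.0, 10.0]},
                      cv=5)
search.fit(X, y)
cv_accuracy = search.best_score_
```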

  2. ANALYSIS OF RESPIRATORY DEPOSITION OF INHALED PARTICLES FOR DIFFERENT DOSE METRICS: COMPARISON OF NUMBER, SURFACE AREA AND MASS DOSE OF TYPICAL AMBIENT BI-MODAL AEROSOLS

    EPA Science Inventory

    ANALYSIS OF RESPIRATORY DEPOSITION OF INHALED PARTICLES FOR DIFFERENT DOSE METRICS: COMPARISON OF NUMBER, SURFACE AREA AND MASS DOSE OF TYPICAL AMBIENT BI-MODAL AEROSOLS.
    Chong S. Kim, SC. Hu*, PA Jaques*, US EPA, National Health and Environmental Effects Research Laboratory, ...

  3. From prompt gamma distribution to dose: a novel approach combining an evolutionary algorithm and filtering based on Gaussian-powerlaw convolutions.

    PubMed

    Schumann, A; Priegnitz, M; Schoene, S; Enghardt, W; Rohling, H; Fiedler, F

    2016-10-07

    Range verification and dose monitoring in proton therapy are considered highly desirable. Different methods have been developed worldwide, such as particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow for a verification of the proton range. However, quantification of the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By convolving depth dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained. To reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg peak in a water target.
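The forward filtering step, obtaining a prompt γ-ray depth profile by convolving a depth dose profile with a Gaussian-powerlaw kernel, can be sketched as follows. The kernel parametrization, its parameter values, and the toy Bragg-peak profile are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np
from scipy.signal import fftconvolve

# Build a filter kernel as the convolution of a Gaussian with a
# one-sided power law, then apply it to a toy depth dose profile.
z = np.arange(-50.0, 50.0, 0.5)                  # mm
gauss = np.exp(-z**2 / (2 * 3.0**2))
powerlaw = np.zeros_like(z)
powerlaw[z >= 0] = (z[z >= 0] + 1.0) ** -1.5
kernel = fftconvolve(gauss, powerlaw, mode="same")
kernel /= kernel.sum()

depth = np.arange(0.0, 100.0, 0.5)               # mm
# toy "Bragg peak" at 70 mm on a flat entrance plateau
dose = 0.2 + np.exp(-(depth - 70.0) ** 2 / (2 * 2.0**2))
prompt_gamma = fftconvolve(dose, kernel, mode="same")
```

The one-sided tail both smooths the profile and shifts it downstream, which is why the inverse step (recovering dose from the measured prompt-gamma profile) needs the evolutionary algorithm rather than a simple division.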

  4. Gamma-ray dose rate surveys help investigating century-scale beach ridge progradation in the wave-dominated Catumbela delta (Angola)

    NASA Astrophysics Data System (ADS)

    Dinis, Pedro A.; Pereira, Alcides C.; Quinzeca, Domingos; Jombi, Domingos

    2017-10-01

    A strandplain at the downdrift side of the wave-dominated Catumbela delta (Angola) includes distinguishable deposits with very high natural radioactivity (up to 0.44 μSv/h). In order to establish the geometry of these sedimentary units and understand their genetic processes, dose rate surveys were performed with the portable equipment Rados RDS-40WE. In addition, grain-size distribution, heavy-mineral composition and gamma-ray mass spectra of the high dose rate deposits were analysed. High dose rate values are found in ribbon units aligned parallel to the shoreline, which are a few tens of meters wide and up to approximately 3 km long. These units reflect the concentration of Th-bearing grains in coastal deposits enriched in heavy minerals. An integrated analysis of the high dose rate ribbons in a GIS environment with aerial photography and topographic maps suggests that parts of the high dose rate units formed during the last two centuries may be related to the erosion of older shoreline deposits, due to updrift displacements of the Catumbela river outlet and recycling of shoreline accumulations with downdrift deposition. Simple gamma-ray surveys carried out with a portable detector can unravel depositional units characterised by significant enrichment in heavy-mineral grains that are likely to correspond to key events in the evolution of wave-dominated accumulations. The location of such deposits should be taken into account when planning future work using more expensive or time-consuming techniques.

  5. Design of a modulated orthovoltage stereotactic radiosurgery system.

    PubMed

    Fagerstrom, Jessica M; Bender, Edward T; Lawless, Michael J; Culberson, Wesley S

    2017-07-01

    To achieve stereotactic radiosurgery (SRS) dose distributions with sharp gradients using orthovoltage energy fluence modulation with inverse planning optimization techniques. A pencil beam model was used to calculate dose distributions from an orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods. A Genetic Algorithm search heuristic was used to optimize the spatial distribution of added tungsten filtration to achieve dose distributions with sharp dose gradients. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 5, 6, 8, and 10 mm. In addition to the beam profiles, 4π isocentric irradiation geometries were modeled to examine dose at 0.07 mm depth, a representative skin depth, for the low energy beams. Profiles from 4π irradiations of a constant target volume, assuming maximally conformal coverage, were compared. Finally, dose deposition in bone compared to tissue in this energy range was examined. Based on the results of the optimization, circularly symmetric tungsten filters were designed to modulate the orthovoltage beam across the apertures of SRS cone collimators. For each depth and cone size combination examined, the beam flatness and 80-20% and 90-10% penumbrae were calculated for both standard, open cone-collimated beams as well as for optimized, filtered beams. For all configurations tested, the modulated beam profiles had decreased penumbra widths and flatness statistics at depth. Profiles for the optimized, filtered orthovoltage beams also offered decreases in these metrics compared to measured linear accelerator cone-based SRS profiles. The dose at 0.07 mm depth in the 4π isocentric irradiation geometries was higher for the modulated beams compared to unmodulated beams; however, the modulated dose at 0.07 mm depth remained <0.025% of the central, maximum dose. 
The 4π profiles irradiating a constant target volume showed improved statistics for the modulated, filtered distribution compared to the standard, open cone-collimated distribution. Simulations of tissue and bone confirmed previously published results that a higher energy beam (≥ 200 keV) would be preferable, but the 250 kVp beam was chosen for this work because it is available for future measurements. A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions with decreased flatness and penumbra statistics compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system. © 2017 American Association of Physicists in Medicine.
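The optimization step, a genetic-algorithm search for a circularly symmetric filter that minimizes the 80-20% penumbra, can be sketched with a toy 1D pencil-beam model. This is not the authors' implementation: it uses selection and Gaussian mutation only (no crossover), and the kernel width, aperture size, and GA settings are illustrative, not the paper's values.

```python
import numpy as np

# Toy 1D pencil-beam model: dose profile = fluence convolved with a
# Gaussian pencil-beam kernel.
rng = np.random.default_rng(1)
x = np.linspace(-15.0, 15.0, 301)                 # off-axis position, mm
kern = np.exp(-x**2 / (2 * 1.5**2))
kern /= kern.sum()

def penumbra(weights):
    """80-20% penumbra width of the normalized dose profile produced
    by radially interpolated ring weights across a 10 mm aperture."""
    fluence = np.zeros_like(x)
    inside = np.abs(x) <= 5.0
    fluence[inside] = np.interp(np.abs(x[inside]),
                                np.linspace(0.0, 5.0, weights.size), weights)
    dose = np.convolve(fluence, kern, mode="same")
    dose /= dose.max()
    r80 = np.max(x[dose >= 0.8])
    r20 = np.max(x[dose >= 0.2])
    return r20 - r80

def genetic_search(pop=40, gens=60, genes=8):
    """Keep the fitter half (elitism), mutate copies of it, repeat."""
    population = rng.uniform(0.2, 1.0, (pop, genes))
    for _ in range(gens):
        fitness = np.array([penumbra(ind) for ind in population])
        parents = population[np.argsort(fitness)[: pop // 2]]
        children = parents[rng.integers(0, pop // 2, pop // 2)]
        children = children + rng.normal(0.0, 0.05, children.shape)
        population = np.clip(np.vstack([parents, children]), 0.05, 1.0)
    fitness = np.array([penumbra(ind) for ind in population])
    return population[np.argmin(fitness)], float(fitness.min())

best_weights, best_width = genetic_search()
flat_width = penumbra(np.ones(8))   # unmodulated (open) aperture
```

The optimized weights typically rise toward the aperture edge, mirroring the finding that edge-weighted filtration sharpens the penumbra relative to an open cone.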

  6. Variability in CT lung-nodule quantification: Effects of dose reduction and reconstruction methods on density and texture based features.

    PubMed

    Lo, P; Young, S; Kim, H J; Brown, M S; McNitt-Gray, M F

    2016-08-01

    To investigate the effects of dose level and reconstruction method on density and texture based features computed from CT lung nodules. This study had two major components. In the first component, a uniform water phantom was scanned at three dose levels and images were reconstructed using four conventional filtered backprojection (FBP) and four iterative reconstruction (IR) methods for a total of 24 different combinations of acquisition and reconstruction conditions. In the second component, raw projection (sinogram) data were obtained for 33 lung nodules from patients scanned as a part of their clinical practice, where low dose acquisitions were simulated by adding noise to sinograms acquired at clinical dose levels (a total of four dose levels) and reconstructed using one FBP kernel and two IR kernels for a total of 12 conditions. For the water phantom, spherical regions of interest (ROIs) were created at multiple locations within the water phantom on one reference image obtained at a reference condition. For the lung nodule cases, the ROI of each nodule was contoured semiautomatically (with manual editing) from images obtained at a reference condition. All ROIs were applied to their corresponding images reconstructed at different conditions. For 17 of the nodule cases, repeat contours were performed to assess repeatability. Histogram (eight features) and gray level co-occurrence matrix (GLCM) based texture features (34 features) were computed for all ROIs. For the lung nodule cases, the reference condition was selected to be 100% of clinical dose with FBP reconstruction using the B45f kernel; feature values calculated from other conditions were compared to this reference condition. A measure was introduced, which the authors refer to as Q, to assess the stability of features across different conditions, which is defined as the ratio of reproducibility (across conditions) to repeatability (across repeat contours) of each feature. 
    The water phantom results demonstrated substantial variability among feature values calculated across conditions, with the exception of the histogram mean. Features calculated from lung nodules demonstrated similar results, with the histogram mean as the most robust feature (Q ≤ 1), having a mean and standard deviation of Q of 0.37 and 0.22, respectively. Surprisingly, the histogram standard deviation and variance features were also quite robust. Some GLCM features were also quite robust across conditions, namely, diff. variance, sum variance, sum average, variance, and mean. Except for the histogram mean, all features have a Q larger than one in at least one of the 3% dose level conditions. As expected, the histogram mean is the most robust feature in their study. The effects of acquisition and reconstruction conditions on GLCM features vary widely, though features involving sums of products of intensities and probabilities tend to be more robust, barring a few exceptions. Overall, care should be taken to account for variation in density and texture features if a variety of dose and reconstruction conditions are used for the quantification of lung nodules in CT; otherwise, changes in quantification results may be more reflective of changes in acquisition and reconstruction conditions than of changes in the nodule itself.
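The stability measure Q introduced in this record, the ratio of reproducibility across conditions to repeatability across repeat contours, can be sketched as follows. The abstract does not define the spread statistic, so standard deviation is assumed here for illustration.

```python
import numpy as np

def q_measure(values_across_conditions, values_across_repeats):
    """Q = reproducibility (spread of a feature across acquisition/
    reconstruction conditions) / repeatability (spread across repeat
    contours). Spread is assumed to be the standard deviation."""
    return float(np.std(values_across_conditions) / np.std(values_across_repeats))

# A robust feature varies less across conditions than across repeat
# contours (Q <= 1); a fragile one varies far more (Q > 1).
q_robust = q_measure([10.0, 10.1, 9.9, 10.05], [10.0, 10.3, 9.7])
q_fragile = q_measure([10.0, 14.0, 6.0, 12.0], [10.0, 10.3, 9.7])
```

Normalizing by contour repeatability keeps a feature from looking "unstable" merely because it is sensitive to segmentation, which is the point of reporting Q rather than raw variability.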

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Havarinasab, S.; Hultman, P.

    Inorganic mercury may aggravate murine systemic autoimmune diseases which are either spontaneous (genetically determined) or induced by non-genetic mechanisms. Organic mercury species, the dominating form of mercury exposure in the human population, have not been examined in this respect. Therefore, ethyl mercury in the form of thimerosal, a preservative recently debated as a possible health hazard when present in vaccines, was administered in a dose of 0.156-5 mg/L drinking water to female (NZB x NZW)F1 (ZBWF1) mice. These mice develop an age-dependent spontaneous systemic autoimmune disease with high mortality, primarily due to immune-complex (IC) glomerulonephritis. Five mg thimerosal/L drinking water (295 {mu}g Hg/kg body weight (bw)/day) for 7 weeks induced glomerular, mesangial and systemic vessel wall IC deposits and antinuclear antibodies (ANA) which were not present in the untreated controls. After 22-25 weeks, the higher doses of thimerosal had shifted the localization of the spontaneously developing renal glomerular IC deposits from the capillary wall position seen in controls to the mesangium. The altered localization was associated with less severe histological kidney damage, less proteinuria, and reduced mortality. The effect was dose-dependent, lower doses having no effect compared with the untreated controls. A different effect of thimerosal treatment was induction of renal and splenic vessel wall IC deposits. Renal vessel wall deposits occurred at a dose of 0.313-5 mg thimerosal/L (18-295 {mu}g Hg/kg bw/day), while splenic vessel wall deposits also developed in mice given the lowest dose of thimerosal, 0.156 mg/L (9 {mu}g Hg/kg bw/day). The latter dose is 3- and 15-fold lower than the dose of Hg required to induce vessel wall IC deposits in genetically susceptible H-2{sup s} mice by HgCl{sub 2} and thimerosal, respectively. 
    Further studies on the exact conditions needed for induction of systemic IC deposits by low-dose organic mercurials in autoimmune-prone individuals, as well as the potential effect of these deposits on the vessel walls, are warranted.

  8. Deposition of aerosol particles in human lungs: in vivo measurements and modeling

    EPA Science Inventory

    The deposition dose and site of inhaled particles within the lung are the key determinants in health risk assessment of particulate pollutants. Accurate dose estimation, however, is a formidable task because aerosol transport and deposition in the lung are governed by many factor...

  9. Task-driven imaging in cone-beam computed tomography.

    PubMed

    Gang, G J; Stayman, J W; Ouadah, S; Ehtiati, T; Siewerdsen, J H

    Conventional workflow in interventional imaging often ignores a wealth of prior information about the patient anatomy and the imaging task. This work introduces a task-driven imaging framework that utilizes such information to prospectively design acquisition and reconstruction techniques for cone-beam CT (CBCT) in a manner that maximizes task-based performance in subsequent imaging procedures. The framework is employed in jointly optimizing tube current modulation, orbital tilt, and reconstruction parameters in filtered backprojection reconstruction for interventional imaging. Theoretical predictors of noise and resolution relate acquisition and reconstruction parameters to task-based detectability. Given a patient-specific prior image and specification of the imaging task, an optimization algorithm prospectively identifies the combination of imaging parameters that maximizes task-based detectability. Initial investigations were performed for a variety of imaging tasks in an elliptical phantom and an anthropomorphic head phantom. Optimization of tube current modulation and view-dependent reconstruction kernel was shown to have the greatest benefits for a directional task (e.g., identification of device or tissue orientation). The task-driven approach yielded techniques in which the dose and sharp kernels were concentrated in views contributing the most to the signal power associated with the imaging task. For example, detectability of a line pair detection task was improved by at least threefold compared to conventional approaches. For radially symmetric tasks, the task-driven strategy yielded results similar to a minimum variance strategy in the absence of kernel modulation. Optimization of the orbital tilt successfully avoided highly attenuating structures that can confound the imaging task by introducing noise correlations masquerading at spatial frequencies of interest.
This work demonstrated the potential of a task-driven imaging framework to improve image quality and reduce dose beyond that achievable with conventional imaging approaches.

  10. Antiasthmatic activity of Moringa oleifera Lam: A clinical study

    PubMed Central

    Agrawal, Babita; Mehta, Anita

    2008-01-01

    The present study was carried out to investigate the efficacy and safety of seed kernels of Moringa oleifera in the treatment of bronchial asthma. Twenty patients of either sex with mild-to-moderate asthma were given finely powdered dried seed kernels in a dose of 3 g for 3 weeks. The clinical efficacy with respect to symptoms and respiratory functions was assessed using a spirometer prior to and at the end of the treatment. Hematological parameters were not changed markedly by treatment with M. oleifera. However, the majority of patients showed a significant increase in hemoglobin (Hb) values, and the erythrocyte sedimentation rate (ESR) was significantly reduced. Significant improvement was also observed in symptom score and severity of asthmatic attacks. Treatment with the drug for 3 weeks produced significant improvement in forced vital capacity, forced expiratory volume in one second, and peak expiratory flow rate values by 32.97 ± 6.03%, 30.05 ± 8.12%, and 32.09 ± 11.75%, respectively, in asthmatic subjects. Improvement was also observed in % predicted values. None of the patients showed any adverse effects with M. oleifera. The results of the present study suggest the usefulness of M. oleifera seed kernel in patients with bronchial asthma. PMID:21264158

  11. Gamma irradiation of peanut kernel to control mold growth and to diminish aflatoxin contamination

    NASA Astrophysics Data System (ADS)

    Y.-Y. Chiou, R.

    1996-09-01

    Peanut kernels inoculated with Aspergillus parasiticus conidia were gamma irradiated with 0, 2.5, 5.0 and 10 kGy using Co-60. Levels higher than 2.5 kGy were effective in retarding the outgrowth of A. parasiticus and reducing the population of natural mold contaminants. However, complete elimination of these molds was not achieved even at the dose of 10 kGy. After 4 wk incubation of the inoculated kernels in a humidified condition, aflatoxins produced by the surviving A. parasiticus were 69.12, 2.42, 57.36 and 22.28 μg/g, corresponding to the original irradiation levels. The peroxide content of peanut oils prepared from the irradiated peanuts increased with increased irradiation dosage. After storage, at each irradiation level, the peroxide content in peanuts stored at -14°C was lower than that in peanuts stored at ambient temperature. TBA values and CDHP contents of the oil increased with increased irradiation dosage and changed slightly after storage. However, fatty acid contents of the peanut oil varied within a limited range as affected by the irradiation dosage and storage temperature. The SDS-PAGE protein pattern of peanuts revealed no noticeable variation of protein subunits resulting from irradiation and storage.

  12. Deposition and dose from the 18 May 1980 eruption of Mount St. Helens

    NASA Technical Reports Server (NTRS)

    Peterson, K. R.

    1982-01-01

    The downwind deposition and radiation doses were calculated for the tropospheric part of the ash cloud from the May 18, 1980 eruption of Mount St. Helens, using a large cloud diffusion model. The naturally occurring radionuclides of radium and thorium, whose radon daughters normally seep very slowly from the rocks and soil, were violently released to the atmosphere. The largest dose to an individual from these nuclides is small, but the population dose to those affected by the radioactivity in the ash is about 100 person-rem. This population dose from Mount St. Helens is much greater than the annual person-rem routinely released by a typical large nuclear power plant. It is estimated that subsequent eruptions of Mount St. Helens have doubled or tripled the person-rem calculated from the initial large eruption. The long-range global ash deposition of the May 18 eruption is estimated through 1984 by use of a global deposition model. The maximum deposition is nearly 1000 kg per square km and occurs in the spring of 1981 over middle latitudes of the Northern Hemisphere.

  13. Topical Application of Apricot Kernel Extract Improves Dry Eye Symptoms in a Unilateral Exorbital Lacrimal Gland Excision Mouse

    PubMed Central

    Kim, Chan-Sik; Jo, Kyuhyung; Lee, Ik-Soo; Kim, Junghyun

    2016-01-01

    The purpose of this study was to investigate the therapeutic effects of topical application of apricot kernel extract (AKE) in a unilateral exorbital lacrimal gland excision mouse model of experimental dry eye. Dry eye was induced by surgical removal of the lacrimal gland. Eye drops containing 0.5 or 1 mg/mL AKE were administered twice a day from day 3 to day 7 after surgery. Tear fluid volume and corneal irregularity scores were determined. In addition, we examined the immunohistochemical expression level of Muc4. The topical administration of AKE dose-dependently improved all clinical dry eye symptoms by promoting the secretion of tear fluid and mucin. Thus, the results of this study indicate that AKE may be an efficacious topical agent for treating dry eye disease. PMID:27886047

  14. Topical Application of Apricot Kernel Extract Improves Dry Eye Symptoms in a Unilateral Exorbital Lacrimal Gland Excision Mouse.

    PubMed

    Kim, Chan-Sik; Jo, Kyuhyung; Lee, Ik-Soo; Kim, Junghyun

    2016-11-23

    The purpose of this study was to investigate the therapeutic effects of topical application of apricot kernel extract (AKE) in a unilateral exorbital lacrimal gland excision mouse model of experimental dry eye. Dry eye was induced by surgical removal of the lacrimal gland. Eye drops containing 0.5 or 1 mg/mL AKE were administered twice a day from day 3 to day 7 after surgery. Tear fluid volume and corneal irregularity scores were determined. In addition, we examined the immunohistochemical expression level of Muc4. The topical administration of AKE dose-dependently improved all clinical dry eye symptoms by promoting the secretion of tear fluid and mucin. Thus, the results of this study indicate that AKE may be an efficacious topical agent for treating dry eye disease.

  15. IRIS, Hinode, SDO, and RHESSI Observations of a White Light Flare Produced Directly by Non-thermal Electrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Kyoung-Sun; Imada, Shinsuke; Watanabe, Kyoko

    An X1.6 flare occurred in active region AR 12192 on 2014 October 22 at 14:02 UT and was observed by Hinode, IRIS, SDO, and RHESSI. We analyze a bright kernel that produces a white light (WL) flare with continuum enhancement and a hard X-ray (HXR) peak. Taking advantage of the spectroscopic observations of IRIS and Hinode/EIS, we measure the temporal variation of the plasma properties in the bright kernel in the chromosphere and corona. We find that explosive evaporation was observed when the WL emission occurred, even though the intensity enhancement in hotter lines is quite weak. The temporal correlation of the WL emission, HXR peak, and evaporation flows indicates that the WL emission was produced by accelerated electrons. To understand the WL emission process, we calculated the energy flux deposited by non-thermal electrons (observed by RHESSI) and compared it to the dissipated energy estimated from a chromospheric line (Mg II triplet) observed by IRIS. The deposited energy flux from the non-thermal electrons is about (3–7.7) × 10^10 erg cm^−2 s^−1 for a given low-energy cutoff of 30–40 keV, assuming the thick-target model. The energy flux estimated from the changes in temperature in the chromosphere measured using the Mg II subordinate line is about (4.6–6.7) × 10^9 erg cm^−2 s^−1: ∼6%–22% of the deposited energy. This comparison of estimated energy fluxes implies that the continuum enhancement was directly produced by the non-thermal electrons.

  16. Acceptance Test Data for BWXT Coated Particle Batch 93164A Defective IPyC Fraction and Pyrocarbon Anisotropy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helmreich, Grant W.; Hunn, John D.; Skitt, Darren J.

    2017-02-01

    Coated particle fuel batch J52O-16-93164 was produced by Babcock and Wilcox Technologies (BWXT) for possible selection as fuel for the Advanced Gas Reactor Fuel Development and Qualification (AGR) Program's AGR-5/6/7 irradiation test in the Idaho National Laboratory (INL) Advanced Test Reactor (ATR), or may be used as demonstration production-scale coated particle fuel for other experiments. The tristructural-isotropic (TRISO) coatings were deposited in a 150-mm-diameter production-scale fluidized-bed chemical vapor deposition (CVD) furnace onto 425-μm-nominal-diameter spherical kernels from BWXT lot J52L-16-69316. Each kernel contained a mixture of 15.5%-enriched uranium carbide and uranium oxide (UCO) and was coated with four consecutive CVD layers: a ~50% dense carbon buffer layer with 100-μm-nominal thickness, a dense inner pyrolytic carbon (IPyC) layer with 40-μm-nominal thickness, a silicon carbide (SiC) layer with 35-μm-nominal thickness, and a dense outer pyrolytic carbon (OPyC) layer with 40-μm-nominal thickness. The TRISO-coated particle batch was sieved to upgrade the particles by removing over-sized and under-sized material, and the upgraded batch was designated by appending the letter A to the end of the batch number (i.e., 93164A).
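
    As a quick arithmetic check on the layer specification above, the nominal coated-particle diameter follows from the kernel diameter plus twice the sum of the four layer thicknesses (a sketch using only the nominal values quoted in this record):

```python
# Nominal dimensions (in micrometers) quoted in the batch description above
kernel_diameter = 425.0
layer_thicknesses = {"buffer": 100.0, "IPyC": 40.0, "SiC": 35.0, "OPyC": 40.0}

# Each concentric CVD layer adds twice its thickness to the particle diameter
particle_diameter = kernel_diameter + 2.0 * sum(layer_thicknesses.values())
# -> 855.0 micrometers nominal coated-particle diameter
```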

  17. Effective dose rate coefficients for exposure to contaminated soil

    DOE PAGES

    Veinot, Kenneth G.; Eckerman, Keith F.; Bellamy, Michael B.; ...

    2017-05-10

    The Oak Ridge National Laboratory Center for Radiation Protection Knowledge has undertaken calculations related to various environmental exposure scenarios. A previous paper reported the results for submersion in radioactive air and immersion in water using age-specific mathematical phantoms. This paper presents age-specific effective dose rate coefficients derived using stylized mathematical phantoms for exposure to contaminated soils. Dose rate coefficients for photons, electrons, and positrons of discrete energies were calculated and folded with the emissions of the 1252 radionuclides addressed in ICRP Publication 107 to determine equivalent and effective dose rate coefficients. The MCNP6 radiation transport code was used for organ dose rate calculations for photons, and the contribution of electrons to the skin dose rate was derived using point-kernels. Bremsstrahlung and annihilation photons of positron emission were evaluated as discrete photons. The coefficients calculated in this work compare favorably to those reported in US Federal Guidance Report 12, as well as by other authors who employed voxel phantoms for similar exposure scenarios.
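
    The folding step described above, which combines monoenergetic dose rate coefficients with a nuclide's discrete emissions, can be sketched as follows. The coefficient table and the two-photon spectrum are hypothetical illustrative numbers, not ICRP Publication 107 or FGR 12 data:

```python
import math

# Hypothetical monoenergetic effective dose rate coefficients on a
# photon-energy grid (MeV); illustrative values only
ENERGY = [0.1, 0.5, 1.0, 2.0]
COEFF = [1.0e-17, 4.0e-17, 7.5e-17, 1.3e-16]

def coeff_at(e_mev):
    """Log-log interpolation of the coefficient table."""
    for i in range(len(ENERGY) - 1):
        if ENERGY[i] <= e_mev <= ENERGY[i + 1]:
            t = ((math.log(e_mev) - math.log(ENERGY[i]))
                 / (math.log(ENERGY[i + 1]) - math.log(ENERGY[i])))
            return math.exp(math.log(COEFF[i])
                            + t * (math.log(COEFF[i + 1]) - math.log(COEFF[i])))
    raise ValueError("energy outside table")

def fold(emissions):
    """Fold a discrete emission spectrum [(energy_MeV, yield), ...] with
    the coefficient table to get a per-nuclide dose rate coefficient."""
    return sum(y * coeff_at(e) for e, y in emissions)

# Illustrative two-photon spectrum (Co-60-like energies and unit yields)
nuclide_coeff = fold([(1.17, 1.0), (1.33, 1.0)])
```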

  18. Effective dose rate coefficients for exposure to contaminated soil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veinot, Kenneth G.; Eckerman, Keith F.; Bellamy, Michael B.

    The Oak Ridge National Laboratory Center for Radiation Protection Knowledge has undertaken calculations related to various environmental exposure scenarios. A previous paper reported the results for submersion in radioactive air and immersion in water using age-specific mathematical phantoms. This paper presents age-specific effective dose rate coefficients derived using stylized mathematical phantoms for exposure to contaminated soils. Dose rate coefficients for photons, electrons, and positrons of discrete energies were calculated and folded with the emissions of the 1252 radionuclides addressed in ICRP Publication 107 to determine equivalent and effective dose rate coefficients. The MCNP6 radiation transport code was used for organ dose rate calculations for photons, and the contribution of electrons to the skin dose rate was derived using point-kernels. Bremsstrahlung and annihilation photons of positron emission were evaluated as discrete photons. The coefficients calculated in this work compare favorably to those reported in US Federal Guidance Report 12, as well as by other authors who employed voxel phantoms for similar exposure scenarios.

  19. Potential benefit of the CT adaptive statistical iterative reconstruction method for pediatric cardiac diagnosis

    NASA Astrophysics Data System (ADS)

    Miéville, Frédéric A.; Ayestaran, Paul; Argaud, Christophe; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Gudinchet, François; Bochud, François; Verdun, Francis R.

    2010-04-01

    Adaptive Statistical Iterative Reconstruction (ASIR) is a new image reconstruction technique recently introduced by General Electric (GE). This technique, when combined with a conventional filtered back-projection (FBP) approach, improves image noise reduction. To quantify the benefits of the ASIR method for image quality and dose reduction with respect to pure FBP, the standard deviation (SD), the modulation transfer function (MTF), the noise power spectrum (NPS), the image uniformity, and the noise homogeneity were examined. Measurements were performed on a quality control phantom while varying the CT dose index (CTDIvol) and the reconstruction kernels. A 64-MDCT was employed and raw data were reconstructed with different percentages of ASIR on a CT console dedicated to ASIR reconstruction. Three radiologists also assessed a pediatric cardiac exam reconstructed with different ASIR percentages using the visual grading analysis (VGA) method. For the standard, soft, and bone reconstruction kernels, the SD is reduced as the ASIR percentage increases up to 100%, with a greater benefit at low CTDIvol. MTF medium frequencies were slightly enhanced and modifications of the NPS shape curve were observed. However, for the pediatric cardiac CT exam, VGA scores indicate an upper limit to the ASIR benefit: 40% ASIR was the best trade-off between noise reduction and the clinical realism of organ images. Using the phantom results, 40% ASIR corresponded to an estimated dose reduction of 30% under pediatric cardiac protocol conditions. In spite of this discrepancy between phantom and clinical results, the ASIR method is an important option when considering the reduction of radiation dose, especially for pediatric patients.
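
    The link between a measured SD reduction and an estimated dose reduction can be illustrated with the standard quantum-noise approximation SD ∝ CTDIvol^(−1/2); this is a textbook scaling argument sketched with illustrative numbers, not the paper's phantom-based estimate:

```python
def equal_noise_dose_factor(sd_with_asir, sd_fbp):
    """Under the quantum-noise approximation SD ~ dose**-0.5, the dose
    needed to match the FBP noise level scales as the squared SD ratio."""
    return (sd_with_asir / sd_fbp) ** 2

# Illustrative: a 16% SD reduction permits roughly a 30% dose reduction
dose_factor = equal_noise_dose_factor(0.84, 1.0)
dose_reduction = 1.0 - dose_factor
```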

  20. Non-linear relationship of cell hit and transformation probabilities in a low dose of inhaled radon progenies.

    PubMed

    Balásházy, Imre; Farkas, Arpád; Madas, Balázs Gergely; Hofmann, Werner

    2009-06-01

    Cellular hit probabilities of alpha particles emitted by inhaled radon progenies in sensitive bronchial epithelial cell nuclei were simulated at low exposure levels to obtain useful data for the rejection or support of the linear-non-threshold (LNT) hypothesis. In this study, local distributions of deposited inhaled radon progenies in airway bifurcation models were computed at exposure conditions characteristic of homes and uranium mines. Then, maximum local deposition enhancement factors at bronchial airway bifurcations, expressed as the ratio of local to average deposition densities, were determined to characterise the inhomogeneity of deposition and to elucidate their effect on resulting hit probabilities. The results obtained suggest that in the vicinity of the carinal regions of the central airways the probability of multiple hits can be quite high, even at low average doses. Assuming a uniform distribution of activity there are practically no multiple hits and the hit probability as a function of dose exhibits a linear shape in the low dose range. The results are quite the opposite in the case of hot spots revealed by realistic deposition calculations, where practically all cells receive multiple hits and the hit probability as a function of dose is non-linear in the average dose range of 10-100 mGy.
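
    The contrast drawn above between uniform activity and hot spots follows directly from Poisson hit statistics; a minimal sketch with illustrative mean hit numbers (not the paper's computed values):

```python
import math

def hit_probabilities(mean_hits):
    """For a Poisson-distributed number of alpha-particle traversals with
    the given mean, return P(at least 1 hit) and P(at least 2 hits)."""
    p0 = math.exp(-mean_hits)          # probability of zero hits
    p1 = mean_hits * p0                # probability of exactly one hit
    return 1.0 - p0, 1.0 - p0 - p1

# Uniform activity: low mean per cell, so P(>=1 hit) is nearly linear
# in dose and multiple hits are practically absent
p_any_uniform, p_multi_uniform = hit_probabilities(0.05)

# Hot spot: local deposition enhancement raises the mean, so nearly
# all cells receive multiple hits
p_any_hot, p_multi_hot = hit_probabilities(5.0)
```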

  1. MICRO DOSE ASSESSMENT OF INHALED PARTICLES IN HUMAN LUNGS: A STEP CLOSER TOWARDS THE TARGET TISSUE DOSE

    EPA Science Inventory

    Rationale: Inhaled particles deposit inhomogeneously in the lung and this may result in excessive deposition dose at local regions of the lung, particularly at the anatomic sites of bifurcations and junctions of the airways, which in turn leads to injuries to the tissues and adve...

  2. Evaluation of the new electron-transport algorithm in MCNP6.1 for the simulation of dose point kernel in water

    NASA Astrophysics Data System (ADS)

    Antoni, Rodolphe; Bourgois, Laurent

    2017-12-01

    In this work, the calculation of specific dose distributions in water with MCNP6.1 is evaluated for the regular condensed-history algorithm (the "detailed electron energy-loss straggling logic") and the newly proposed electron-transport algorithm (the "single event algorithm"). Dose point kernels (DPKs) are calculated with monoenergetic electrons of 50, 100, 500, 1000, and 3000 keV for different scoring-cell dimensions. A comparison between MCNP6 results and well-validated codes for electron dosimetry, i.e., EGSnrc and PENELOPE, is performed. When the detailed electron energy-loss straggling logic is used with default settings (down to the 1 keV cut-off energy), the depth of the dose peak increases with decreasing scoring-cell thickness, largely due to combined step-size and boundary-crossing artifacts. This finding is less prominent for the 500 keV, 1 MeV, and 3 MeV dose profiles. With an appropriate number of sub-steps (the ESTEP value in MCNP6), the dose-peak shift is almost completely absent for 50 and 100 keV electrons. However, the dose peak is more prominent than in EGSnrc and the absorbed dose tends to be underestimated at greater depths, meaning that boundary-crossing artifacts still occur while step-size artifacts are greatly reduced. When the single-event mode is used for the whole transport, good agreement between reference and calculated profiles is observed for 50 and 100 keV electrons. The remaining artifacts vanish completely, showing a possible transport treatment for energies below about a hundred keV, in agreement with the reference for any scoring-cell dimension, even though the single-event method was initially intended to support electron transport at energies below 1 keV. Conversely, results for 500 keV, 1 MeV, and 3 MeV show a dramatic discrepancy with the reference curves.
These poor results, and hence the current unreliability of the method at those energies, are partly due to inappropriate elastic cross-section treatment from the ENDF/B-VI.8 library in those energy ranges. Accordingly, special care has to be taken in choosing settings when calculating electron dose distributions with MCNP6, in particular with regard to dosimetry or nuclear medicine applications.
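
    For readers unfamiliar with DPK scoring, the quantity being compared above — dose per concentric spherical shell around an isotropic point source — can be sketched with a toy Monte Carlo. The radial sampling below is a purely illustrative stand-in; it implements neither condensed-history nor single-event transport:

```python
import math
import random

random.seed(1)

def score_dpk(n_histories, r_max_cm, n_shells=10):
    """Toy dose-point-kernel scoring: each history deposits one unit of
    energy at a randomly sampled radius (an illustrative stand-in for
    electron transport), binned into concentric spherical water shells
    and divided by shell mass to give dose in arbitrary units."""
    dr = r_max_cm / n_shells
    energy = [0.0] * n_shells
    for _ in range(n_histories):
        r = r_max_cm * random.random() ** 0.5  # toy radial sampling
        i = min(int(r / dr), n_shells - 1)
        energy[i] += 1.0
    rho = 1.0  # water density, g/cm^3
    dose = []
    for i in range(n_shells):
        r_in, r_out = i * dr, (i + 1) * dr
        shell_mass = rho * 4.0 / 3.0 * math.pi * (r_out ** 3 - r_in ** 3)
        dose.append(energy[i] / shell_mass)
    return dose

# r_max chosen near the CSDA range of 100 keV electrons in water (~0.014 cm)
dpk = score_dpk(100000, 0.014)
```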

  3. Comparison of particulate matter dose and acute heart rate variability response in cyclists, pedestrians, bus and train passengers.

    PubMed

    Nyhan, Marguerite; McNabola, Aonghus; Misstear, Bruce

    2014-01-15

    Exposure to airborne particulate matter (PM) has been linked to cardiovascular morbidity and mortality. Heart rate variability (HRV) is a measure of the change in cardiac autonomic function, and consistent links between PM exposure and decreased HRV have been documented in studies. This study quantitatively assesses the acute relative variation of HRV with predicted PM dose in the lungs of commuters. Personal PM exposure, HR and HRV were monitored in 32 young healthy cyclists, pedestrians, bus and train passengers. Inhaled and lung deposited PM doses were determined using a numerical model of the human respiratory tract which accounted for varying ventilation rates between subjects and during commutes. Linear mixed models were used to examine air pollution dose and HRV response relationships in 122 commutes sampled. Elevated PM2.5 and PM10 inhaled and lung deposited doses were significantly (p<0.05) associated with decreased HRV indices. Percent declines in SDNN (standard deviation of normal RR intervals) relative to resting, due to an inter-quartile range increase in PM10 lung deposited dose were stronger in cyclists (-6.4%, 95% CI: -11.7, -1.3) and pedestrians (-5.8%, 95% CI: -11.3, -0.5), in comparison to bus (-3.2%, 95% CI: -6.4, -0.1) and train (-1.8%, -7.5, 3.8) passengers. A similar trend was observed in the case of PM2.5 lung deposited dose and results for rMSSD (the square root of the squared differences of successive normal RR intervals) followed similar trends to SDNN. Inhaled and lung deposited doses accounting for varying ventilation rates between modes, individuals and during commutes have been neglected in other studies relating PM to HRV. The findings here indicate that exercise whilst commuting has an influence on inhaled PM and PM lung deposited dose, and these were significantly associated with acute declines in HRV, especially in pedestrians and cyclists.
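
    The two HRV indices reported above have simple definitions — SDNN is the standard deviation of the normal RR intervals, and rMSSD is the square root of the mean squared successive differences; a minimal sketch on synthetic RR intervals:

```python
import math

def sdnn(rr_ms):
    """SDNN: standard deviation of the normal RR intervals (ms)."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - mean) ** 2 for x in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """rMSSD: square root of the mean of the squared differences of
    successive normal RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812, 790, 805, 830, 795, 810]  # synthetic RR intervals, ms
sdnn_value = sdnn(rr)
rmssd_value = rmssd(rr)
```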

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klüter, Sebastian, E-mail: sebastian.klueter@med.uni-heidelberg.de; Schubert, Kai; Lissner, Steffen

    Purpose: The dosimetric verification of treatment plans in helical tomotherapy usually is carried out via verification measurements. In this study, a method for independent dose calculation of tomotherapy treatment plans is presented that uses a conventional treatment planning system with a pencil kernel dose calculation algorithm for generation of verification dose distributions based on patient CT data. Methods: A pencil beam algorithm that directly uses measured beam data was configured for dose calculation for a tomotherapy machine. Tomotherapy treatment plans were converted into a format readable by an in-house treatment planning system by assigning each projection to one static treatment field and shifting the calculation isocenter for each field in order to account for the couch movement. The modulation of the fluence for each projection is read out of the delivery sinogram, and with the kernel-based dose calculation, this information can directly be used for dose calculation without the need for decomposition of the sinogram. The sinogram values are only corrected for leaf output and leaf latency. Using the converted treatment plans, dose was recalculated with the independent treatment planning system. Multiple treatment plans ranging from simple static fields to real patient treatment plans were calculated using the new approach and either compared to actual measurements or the 3D dose distribution calculated by the tomotherapy treatment planning system. In addition, dose–volume histograms were calculated for the patient plans. Results: Except for minor deviations at the maximum field size, the pencil beam dose calculation for static beams agreed with measurements in a water tank within 2%/2 mm. A mean deviation to point dose measurements in the cheese phantom of 0.89% ± 0.81% was found for unmodulated helical plans.
A mean voxel-based deviation of −0.67% ± 1.11% for all voxels in the respective high dose region (dose values >80%), and a mean local voxel-based deviation of −2.41% ± 0.75% for all voxels with dose values >20% were found for 11 modulated plans in the cheese phantom. Averaged over nine patient plans, the deviations amounted to −0.14% ± 1.97% (voxels >80%) and −0.95% ± 2.27% (>20%, local deviations). For a lung case, mean voxel-based deviations of more than 4% were found, while for all other patient plans, all mean voxel-based deviations were within ±2.4%. Conclusions: The presented method is suitable for independent dose calculation for helical tomotherapy within the known limitations of the pencil beam algorithm. It can serve as verification of the primary dose calculation and thereby reduce the need for time-consuming measurements. By using the patient anatomy and generating full 3D dose data, and combined with measurements of additional machine parameters, it can substantially contribute to overall patient safety.
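
    The plan-conversion step described above — one static field per projection, with the calculation isocenter shifted to follow the couch — can be sketched as follows. The field dictionary and toy sinogram are hypothetical illustrations, not the authors' file format, and the leaf-output and latency corrections are omitted:

```python
def sinogram_to_fields(projections_per_rotation, couch_cm_per_projection,
                       sinogram):
    """Assign each tomotherapy projection to one static field: gantry angle
    from the projection index, isocenter shifted by the accumulated couch
    travel, and leaf-open fractions taken directly from the sinogram row.
    (Hypothetical structures; leaf output/latency corrections omitted.)"""
    gantry_step = 360.0 / projections_per_rotation
    fields = []
    for p, leaf_row in enumerate(sinogram):
        fields.append({
            "gantry_deg": (p % projections_per_rotation) * gantry_step,
            "isocenter_z_cm": p * couch_cm_per_projection,
            "leaf_open_fractions": leaf_row,
        })
    return fields

# Two rotations of 51 projections each, with a toy two-leaf sinogram
sinogram = [[0.5, 1.0]] * 102
fields = sinogram_to_fields(51, 0.02, sinogram)
```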

  5. Computer modeling of airway deposition distribution of Foster(®) NEXThaler(®) and Seretide(®) Diskus(®) dry powder combination drugs.

    PubMed

    Jókay, Ágnes; Farkas, Árpád; Füri, Péter; Horváth, Alpár; Tomisa, Gábor; Balásházy, Imre

    2016-06-10

    Asthma is a serious global health problem with rising prevalence and treatment costs. Due to the growing number of different types of inhalation devices and aerosol drugs, physicians often face difficulties in choosing the right medication for their patients. The main objectives of this study are (i) to elucidate the possibility and the advantages of applying numerical modeling techniques in aerosol drug and device selection, and (ii) to demonstrate the possibility of optimizing inhalation modes in asthma therapy with a numerical lung model by simulating patient-specific drug deposition distributions. In this study we measured inhalation parameter values of 25 healthy adult volunteers when using Foster(®) NEXThaler(®) and Seretide(®) Diskus(®). Relationships between emitted doses and patient-specific inhalation flow rates were established. Furthermore, individualized emitted particle size distributions were determined by applying the size distributions measured at the corresponding flow rates. Based on the measured breathing parameter values, we calculated patient-specific drug deposition distributions for the active components (steroid and bronchodilator) of both drugs with the help of a validated aerosol lung deposition model adapted to therapeutic aerosols. Deposited dose fractions and deposition densities were computed in the entire respiratory tract, in distinct anatomical regions of the airways, and at the level of airway generations. We found that Foster(®) NEXThaler(®) deposits more efficiently in the lungs (average deposited steroid dose: 42.32±5.76% of the nominal emitted dose) than Seretide(®) Diskus(®) (average deposited steroid dose: 24.33±2.83% of the nominal emitted dose), but the variance of the lung deposition values across individuals is significant. In addition, there are differences in the required minimal flow rates; therefore, for certain patients, Seretide(®) Diskus(®) or pMDIs could be a better choice.
Our results show that validated computer deposition models could be useful tools in providing valuable deposition data and assisting health professionals in the personalized drug selection and delivery optimization. Patient-specific modeling could open a new horizon in the treatment of asthma towards a more effective personalized medicine in the future.

  6. Monte-Carlo Simulation of Radiation Track Structure and Calculation of Dose Deposition in Nanovolumes

    NASA Technical Reports Server (NTRS)

    Plante, I.; Cucinotta, F. A.

    2010-01-01

    INTRODUCTION: The radiation track structure is of crucial importance for understanding radiation damage to molecules and subsequent biological effects. Of particular importance in radiobiology is the induction of double-strand breaks (DSBs) by ionizing radiation, which are caused by clusters of lesions in DNA, and oxidative damage to cellular constituents leading to aberrant signaling cascades. DSBs can be visualized within cell nuclei with gamma-H2AX experiments. MATERIAL AND METHODS: In DSB induction models, the DSB probability is usually calculated from the local dose obtained from a radial dose profile of HZE tracks. In this work, the local dose imparted by HZE ions is calculated directly with the 3D Monte-Carlo simulation code RITRACKS. A cubic volume with 5-μm edges (Figure 1) is irradiated by a 56Fe26+ ion of 1 GeV/amu (LET approx. 150 keV/micron) and by a fluence of 450 H+ ions of 300 MeV/amu (LET approx. 0.3 keV/micron). In both cases, the dose deposited in the volume is approx. 1 Gy. The dose is then calculated in each 3D pixel (voxel) with 20-nm edges and visualized in 3D. RESULTS AND DISCUSSION: The dose is deposited uniformly in the volume by the H+ ions. The voxels which receive a high dose (orange) correspond to electron track ends. The dose is deposited differently by the 56Fe26+ ion: very high dose (red) is deposited in voxels with direct ion traversal, and voxels with electron track ends (orange) are also found distributed around the path of the track. In both cases, the appearance of the dose distribution looks very similar to the DSBs seen in gamma-H2AX experiments, particularly when the visualization threshold is applied. CONCLUSION: The refinement of the dose calculation to the nanometer scale has revealed important differences in the energy deposition between high- and low-LET ions. Voxels of very high dose are found only in the path of high-LET ions.
Interestingly, experiments have shown that DSBs induced by high-LET radiation are more difficult to repair. Therefore, this new approach may be useful for understanding the nature of DSBs and oxidative damage induced by ionizing radiation.
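
    The voxelization step described above — accumulating individual energy-deposition events into 20-nm voxels of a 5-μm cube — can be sketched as follows, using a synthetic event list rather than RITRACKS output:

```python
def bin_events_to_voxels(events, cube_nm=5000.0, voxel_nm=20.0):
    """Accumulate energy-deposition events (x, y, z in nm, energy in eV)
    into a dict keyed by integer voxel index (ix, iy, iz); only occupied
    voxels are stored."""
    n = int(cube_nm / voxel_nm)  # voxels per axis (250 here)
    voxels = {}
    for x, y, z, e in events:
        ix, iy, iz = (min(int(c / voxel_nm), n - 1) for c in (x, y, z))
        key = (ix, iy, iz)
        voxels[key] = voxels.get(key, 0.0) + e
    return voxels

# Synthetic events: two land in the same voxel, one in a far corner
events = [(10.0, 10.0, 10.0, 50.0),
          (15.0, 5.0, 12.0, 30.0),
          (4999.0, 4999.0, 4999.0, 100.0)]
vox = bin_events_to_voxels(events)
```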

  7. Surface-deposition and Distribution of the Radon (222Rn and 220Rn) Decay Products Indoors

    NASA Astrophysics Data System (ADS)

    Espinosa, G.; Tommasino, Luigi

    The exposure to radon (222Rn and 220Rn) decay products is of great concern both in dwellings and in workplaces. Models for estimating the lung dose depend on the deposition mechanisms and particle sizes. Unfortunately, most of the dose data available are based on measurements of the radon concentration and the concentration of radon decay products. These combined measurements are widely used despite the fact that accurate dose assessments require information on the particle deposition mechanisms and the spatial distribution of radon decay products indoors. Most airborne particles and/or radon decay products are deposited onto indoor surfaces, a deposition that makes the radon decay products unavailable for inhalation. These deposition processes, if properly understood, could be successfully exploited to reduce the exposure to radon decay products. In spite of the importance of the surface deposition of radon decay products, both for the correct evaluation of the dose and for reducing the exposure, little or no effort has been made to investigate these deposition processes. Recently, two parallel investigations addressing surface-deposited radon decay products have been carried out in Rome and at the Universidad Nacional Autónoma de México (UNAM) in Mexico City. Even though these investigations were carried out independently, they complement one another. It is with these considerations in mind that it was decided to report both investigations in the same paper.

  8. A stochastic convolution/superposition method with isocenter sampling to evaluate intrafraction motion effects in IMRT.

    PubMed

    Naqvi, Shahid A; D'Souza, Warren D

    2005-04-01

    Current methods to calculate dose distributions with organ motion can be broadly classified as "dose convolution" and "fluence convolution" methods. In the former, a static dose distribution is convolved with the probability distribution function (PDF) that characterizes the motion. However, artifacts are produced near the surface and around inhomogeneities because the method assumes shift invariance. Fluence convolution avoids these artifacts by convolving the PDF with the incident fluence instead of the patient dose. In this paper we present an alternative method that improves the accuracy, generality as well as the speed of dose calculation with organ motion. The algorithm starts by sampling an isocenter point from a parametrically defined space curve corresponding to the patient-specific motion trajectory. Then a photon is sampled in the linac head and propagated through the three-dimensional (3-D) collimator structure corresponding to a particular MLC segment chosen randomly from the planned IMRT leaf sequence. The photon is then made to interact at a point in the CT-based simulation phantom. Randomly sampled monoenergetic kernel rays issued from this point are then made to deposit energy in the voxels. Our method explicitly accounts for MLC-specific effects (spectral hardening, tongue-and-groove, head scatter) as well as changes in SSD with isocentric displacement, assuming that the body moves rigidly with the isocenter. Since the positions are randomly sampled from a continuum, there is no motion discretization, and the computation takes no more time than a static calculation. To validate our method, we obtained ten separate film measurements of an IMRT plan delivered on a phantom moving sinusoidally, with each fraction starting with a random phase. For 2 cm motion amplitude, we found that a ten-fraction average of the film measurements gave an agreement with the calculated infinite fraction average to within 2 mm in the isodose curves. 
The results also corroborate the existing notion that the interfraction dose variability due to the interplay between MLC motion and breathing motion averages out over typical multifraction treatments. Simulations with motion waveforms more representative of real breathing indicate that the motion can produce penumbral spreading that is asymmetric about the static dose distribution. Such calculations can help a clinician decide to use, for example, a larger margin in the superior direction than in the inferior direction. In the paper we demonstrate that a 15 min run on a single CPU can readily illustrate the effect of a patient-specific breathing waveform, and can guide the physician in making informed decisions about margin expansion and dose escalation.
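
    The shift-invariant "dose convolution" approach that the stochastic method avoids can be sketched in one dimension — this is the simpler baseline the paper contrasts itself with, shown on a synthetic profile:

```python
def convolve_dose_with_pdf(static_dose, pdf):
    """1-D dose convolution: blur a static dose profile with a motion
    probability distribution (pdf centered at its middle index). This is
    the shift-invariant approximation, which produces artifacts near
    surfaces and inhomogeneities that fluence-based methods avoid."""
    half = len(pdf) // 2
    n = len(static_dose)
    blurred = [0.0] * n
    for i in range(n):
        for k, w in enumerate(pdf):
            j = i + (k - half)
            if 0 <= j < n:
                blurred[i] += w * static_dose[j]
    return blurred

# Idealized flat field with sharp penumbra, and a 3-point motion PDF
dose = [0, 0, 0, 100, 100, 100, 100, 0, 0, 0]
pdf = [0.25, 0.5, 0.25]  # must sum to 1
blurred = convolve_dose_with_pdf(dose, pdf)
```

Note how the sharp field edge is smeared into the penumbra: the points just inside and outside the field boundary receive 75% and 25% of the open-field dose.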

  9. Variation of biometric parameters in corn cobs under the influence of nitrogen fertilization

    NASA Astrophysics Data System (ADS)

    Gigel, Prisecaru; Florin, Sala

    2017-07-01

    Biometric parameters, as elements of productivity of corn cobs, together with plant density per unit area (ha), are essential in achieving production. The influence of differentiated nitrogen fertilization was evaluated at the level of the productivity elements of corn cobs of the Andreea hybrid. Biometric parameters of the corn cobs (total length - L; usable length - l; length not covered with kernels - lu; diameter at the base - Db, middle - Dm, and top of the corn cob - Dt; corn cob weight - Cw; grain weight - Gw) were directly influenced by the nitrogen doses. Regression analysis facilitated the prediction of grain weight, as the main element of productivity, with different degrees of statistical certainty based on nitrogen doses (R2 = 0.962, p<0.01), on the total length of corn cobs (R2 = 0.985, p<0.01), on the usable length of corn cobs (R2 = 0.996, p<<0.001), on the diameter at the base of corn cobs (R2 = 0.824, p<0.01), on the diameter at the middle of corn cobs (R2 = 0.807, p<0.01), on the length not covered with kernels (R2 = 0.624, p<0.01), and on the diameter at the top of the corn cobs (R2 = 0.384, p=0.015).
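
    The regression analysis described above fits grain weight against each predictor and reports R²; a minimal ordinary-least-squares sketch on synthetic dose/weight pairs (illustrative numbers, not the study's measurements):

```python
def linreg_r2(x, y):
    """Ordinary least-squares fit y = a + b*x; returns (a, b, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Synthetic nitrogen doses (kg/ha) vs. grain weight (g per cob)
n_dose = [0, 50, 100, 150, 200]
grain_w = [95, 118, 141, 150, 163]
a, b, r2 = linreg_r2(n_dose, grain_w)
```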

  10. WE-EF-BRA-07: High Performance Preclinical Irradiation Through Optimized Dual Focal Spot Dose Painting and Online Virtual Isocenter Radiation Field Targeting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, J; Princess Margaret Cancer Centre, University Health Network, Toronto, CA; Lindsay, P

    Purpose: Advances in radiotherapy practice facilitated by collimation systems to shape radiation fields and image guidance to target these conformal beams have motivated proposals for more complex dose patterns to improve the therapeutic ratio. Recent progress in small animal radiotherapy platforms has provided the foundation to validate the efficacy of such interventions, but robustly delivering heterogeneous dose distributions at the scale and accuracy demanded by preclinical studies remains challenging. This work proposes a dual focal spot optimization method to paint spatially heterogeneous dose regions and an online virtual isocenter targeting method to accurately target the dose distributions. Methods: Two-dimensional dose kernels were empirically measured for the 1 mm diameter circular collimator with radiochromic film in a solid water phantom for the small and large x-ray focal spots on the X-RAD 225Cx microirradiator. These kernels were used in an optimization framework which determined a set of animal stage positions, beam-on times, and focal spot settings to optimally deliver a given desired dose distribution. An online method was developed which defined a virtual treatment isocenter based on a single image projection of the collimated radiation field. The method was demonstrated by optimization of a 6 mm circular 2 Gy target adjoining a 4 mm semicircular avoidance region. Results: The dual focal spot technique improved the optimized dose distribution, with the proportion of the avoidance region receiving more than 0.5 Gy reduced by 40% compared to the large focal spot technique. Targeting tests performed by irradiating ball bearing targets on radiochromic film pieces revealed that the online targeting method improved the three-dimensional accuracy from 0.48 mm to 0.15 mm.
Conclusion: The dual focal spot optimization and online virtual isocenter targeting framework is a robust option for delivering dose at the preclinical level and provides a new experimental option for unique radiobiological investigations. This work is supported, in part, by the Natural Sciences and Engineering Research Council of Canada and a Mitacs-Accelerate fellowship. P.E. Lindsay and D.A. Jaffray are listed as inventors of the system described herein. This system has been licensed to Precision X-Ray Inc. for commercial development.
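
    The beam-on time optimization described in this record can be sketched in miniature. The following is an illustrative reconstruction, not the authors' code: the film-measured 2D dose kernels are stood in for by Gaussians, and the non-negative beam-on times are fit by projected gradient descent (all shapes, kernel widths, and grid sizes are invented):

```python
import numpy as np

# Illustrative sketch of kernel-based dose painting (not the authors' code).
# Each candidate stage position / focal-spot setting contributes a 2D dose
# kernel; beam-on times are fit by non-negative least squares via projected
# gradient descent. Kernels, grid, and target are all synthetic.

def dose_kernel(x, y, center, sigma):
    """Toy stand-in for an empirically measured 2D dose-rate kernel."""
    return np.exp(-((x - center[0])**2 + (y - center[1])**2) / (2 * sigma**2))

n = 32
x, y = np.meshgrid(np.linspace(-3, 3, n), np.linspace(-3, 3, n))

# Candidate beams: a grid of stage positions, each with a "small" (narrow)
# and "large" (broad) focal spot, mimicking the dual focal spot idea.
columns = [dose_kernel(x, y, (cx, cy), sigma).ravel()
           for cx in (-1.5, 0.0, 1.5)
           for cy in (-1.5, 0.0, 1.5)
           for sigma in (0.4, 0.9)]
A = np.stack(columns, axis=1)                      # (pixels, candidate beams)

# Desired distribution: a circular 2 Gy target on a zero-dose background.
target = np.where(x**2 + y**2 < 2.0**2, 2.0, 0.0).ravel()

# Projected gradient descent for min ||A t - target||^2 subject to t >= 0
# (beam-on times cannot be negative).
t = np.zeros(A.shape[1])
step = 1.0 / np.linalg.norm(A.T @ A, 2)            # 1 / Lipschitz constant
for _ in range(2000):
    t = np.maximum(t - step * (A.T @ (A @ t - target)), 0.0)

delivered = (A @ t).reshape(n, n)
```

    In the actual system the columns of the matrix would come from film-measured kernels, and the objective would also penalize dose to the adjoining avoidance region, e.g. by weighting its pixels.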

  11. A combination of low-dose bevacizumab and imatinib enhances vascular normalisation without inducing extracellular matrix deposition.

    PubMed

    Schiffmann, L M; Brunold, M; Liwschitz, M; Goede, V; Loges, S; Wroblewski, M; Quaas, A; Alakus, H; Stippel, D; Bruns, C J; Hallek, M; Kashkar, H; Hacker, U T; Coutelle, O

    2017-02-28

    Vascular endothelial growth factor (VEGF)-targeting drugs normalise the tumour vasculature and improve access for chemotherapy. However, excessive VEGF inhibition fails to improve clinical outcome, and successive treatment cycles lead to incremental extracellular matrix (ECM) deposition, which limits perfusion and drug delivery. We show here, that low-dose VEGF inhibition augmented with PDGF-R inhibition leads to superior vascular normalisation without incremental ECM deposition thus maintaining access for therapy. Collagen IV expression was analysed in response to VEGF inhibition in liver metastasis of colorectal cancer (CRC) patients, in syngeneic (Panc02) and xenograft tumours of human colorectal cancer cells (LS174T). The xenograft tumours were treated with low (0.5 mg kg -1 body weight) or high (5 mg kg -1 body weight) doses of the anti-VEGF antibody bevacizumab with or without the tyrosine kinase inhibitor imatinib. Changes in tumour growth, and vascular parameters, including microvessel density, pericyte coverage, leakiness, hypoxia, perfusion, fraction of vessels with an open lumen, and type IV collagen deposition were compared. ECM deposition was increased after standard VEGF inhibition in patients and tumour models. In contrast, treatment with low-dose bevacizumab and imatinib produced similar growth inhibition without inducing detrimental collagen IV deposition, leading to superior vascular normalisation, reduced leakiness, improved oxygenation, more open vessels that permit perfusion and access for therapy. Low-dose bevacizumab augmented by imatinib selects a mature, highly normalised and well perfused tumour vasculature without inducing incremental ECM deposition that normally limits the effectiveness of VEGF targeting drugs.

  12. Dispersal, deposition and collective doses after the Chernobyl disaster.

    PubMed

    Fairlie, Ian

    2007-01-01

    This article discusses the dispersal, deposition and collective doses of the radioactive fallout from the Chernobyl accident. It explains that, although Belarus, Ukraine and Russia were heavily contaminated by the Chernobyl fallout, more than half of the fallout was deposited outside these countries, particularly in Western Europe. Indeed, about 40 per cent of the surface area of Europe was contaminated. Collective doses are predicted to result in 30,000 to 60,000 excess cancer deaths throughout the northern hemisphere, mostly in western Europe. The article also estimates that the caesium-137 source term was about a third higher than official figures.

  13. Altitudinal characteristics of atmospheric deposition of aerosols in mountainous regions: Lessons from the Fukushima Daiichi Nuclear Power Station accident.

    PubMed

    Sanada, Yukihisa; Katata, Genki; Kaneyasu, Naoki; Nakanishi, Chika; Urabe, Yoshimi; Nishizawa, Yukiyasu

    2018-03-15

To understand the formation process of radiologically contaminated areas in eastern Japan caused by the Fukushima Daiichi Nuclear Power Station (FDNPS) accident, the deposition mechanisms over complex topography are the key factors to be investigated. To characterize the atmospheric deposition processes of radionuclides over complex mountainous topography, we investigated the altitudinal distributions of the radiocesium deposited during the accident. In five selected areas, altitudinal characteristics of the air dose rates observed using airborne surveys were analyzed. To examine the deposition mechanisms, we supplementarily used vertical profiles of radiocesium deposition in each area calculated with the latest atmospheric dispersion model. In southern Iwate, the vertical profile of the observed air dose rate was uniform regardless of altitude. In western Tochigi, the areas with the highest levels of contamination were characteristically distributed in the middle of the mountains, while in southern Fukushima, the areas with the highest contamination levels were enhanced near the summits of mountains. In central Fukushima, high air dose rates were limited to the bottoms of basin-like valleys. In the region northwest of FDNPS, the air dose rate was highest at the bottom of the valley topography and decreased gradually with altitude. The simulation results showed that the calculated wet deposition and the observed vertical profiles of total deposition were qualitatively similar in southern Iwate and the region northwest of FDNPS, suggesting that the dominant deposition mechanism was wet deposition. In contrast, the atmospheric dispersion model failed to reproduce either the timing of precipitation events or the vertical profiles of radiocesium deposition in the three other areas. 
Although it was difficult to elucidate the deposition mechanisms in these areas due to uncertainties in the present model results, potential mechanisms such as cloud water deposition were still proposed based on the circumstantial evidence of the limited meteorological data available during the early stage of the accident. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Particle-in-cell simulations with charge-conserving current deposition on graphic processing units

    NASA Astrophysics Data System (ADS)

    Ren, Chuang; Kong, Xianglong; Huang, Michael; Decyk, Viktor; Mori, Warren

    2011-10-01

Recently, using CUDA, we have developed an electromagnetic Particle-in-Cell (PIC) code with charge-conserving current deposition for Nvidia graphic processing units (GPUs) (Kong et al., Journal of Computational Physics 230, 1676 (2011)). On a Tesla M2050 (Fermi) card, the GPU PIC code can achieve a one-particle-step process time of 1.2 - 3.2 ns in 2D and 2.3 - 7.2 ns in 3D, depending on plasma temperatures. In this talk we will discuss novel algorithms for GPU-PIC, including a charge-conserving current deposition scheme with little branching and parallel particle sorting. These algorithms have made efficient use of the GPU shared memory. We will also discuss how to replace the computation kernels of existing parallel CPU codes while keeping their parallel structures. This work was supported by the U.S. Department of Energy under Grant Nos. DE-FG02-06ER54879 and DE-FC02-04ER54789 and by NSF under Grant Nos. PHY-0903797 and CCF-0747324.
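
    The charge-conserving property that such deposition schemes enforce can be demonstrated with a deliberately simple 1D toy. This is not the branch-free GPU scheme of the talk (production schemes compute the current locally per particle, in the style of Villasenor-Buneman or Esirkepov); it only verifies the discrete continuity equation that those schemes satisfy:

```python
import numpy as np

# 1D toy of charge-conserving current deposition (NOT the paper's GPU scheme):
# the current is chosen so the discrete continuity equation
#   (rho_new - rho_old)/dt + (J[i+1/2] - J[i-1/2])/dx = 0
# holds exactly on a periodic grid.

def cic_density(x, q, nx, dx):
    """Cloud-in-cell (linear weighting) charge deposition onto a 1D grid."""
    xi = x / dx
    i = np.floor(xi).astype(int)
    frac = xi - i
    rho = np.zeros(nx)
    np.add.at(rho, i % nx, q * (1.0 - frac) / dx)
    np.add.at(rho, (i + 1) % nx, q * frac / dx)
    return rho

def deposit_current(x_old, x_new, q, nx, dx, dt):
    """Face-centered current consistent with the density change. The cumulative
    sum inverts the discrete divergence; it is single-valued on the periodic
    grid because total charge is conserved (the drho entries sum to zero)."""
    drho = cic_density(x_new, q, nx, dx) - cic_density(x_old, q, nx, dx)
    return -np.cumsum(drho) * dx / dt

nx, dx, dt = 16, 1.0, 0.1
rng = np.random.default_rng(0)
x_old = rng.uniform(0.0, nx * dx, size=50)
x_new = (x_old + rng.uniform(-0.4, 0.4, size=50)) % (nx * dx)
q = np.ones(50)

J = deposit_current(x_old, x_new, q, nx, dx, dt)
drho = cic_density(x_new, q, nx, dx) - cic_density(x_old, q, nx, dx)
div_J = (J - np.roll(J, 1)) / dx   # discrete divergence; drho/dt + div_J == 0
```

    The global cumulative sum here is for exposition only; the point of the GPU schemes is to obtain the same conservation property from purely local, per-particle updates.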

  15. Kernel abortion in maize : I. Carbohydrate concentration patterns and Acid invertase activity of maize kernels induced to abort in vitro.

    PubMed

    Hanft, J M; Jones, R J

    1986-06-01

    Kernels cultured in vitro were induced to abort by high temperature (35 degrees C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35 degrees C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth.

  16. An investigation of nonuniform dose deposition from an electron beam

    NASA Astrophysics Data System (ADS)

    Lilley, William; Luu, Kieu X.

    1994-08-01

    In a search for an explanation of nonuniform electron-beam dose deposition, the integrated tiger series (ITS) of coupled electron/photon Monte Carlo transport codes was used to calculate energy deposition in the package materials of an application-specific integrated circuit (ASIC) while the thicknesses of some of the materials were varied. The thicknesses of three materials that were in the path of an electron-beam pulse were varied independently so that analysis could determine how the radiation dose measurements using thermoluminescent dosimeters (TLD's) would be affected. The three materials were chosen because they could vary during insertion of the die into the package or during the process of taking dose measurements. The materials were aluminum, HIPEC (a plastic), and silver epoxy. The calculations showed that with very small variations in thickness, the silver epoxy had a large effect on the dose uniformity over the area of the die.

  17. 7 CFR 810.602 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of flaxseed kernels that are badly ground-damaged, badly weather... instructions. Also, underdeveloped, shriveled, and small pieces of flaxseed kernels removed in properly... recleaning. (c) Heat-damaged kernels. Kernels and pieces of flaxseed kernels that are materially discolored...

  18. Kernel Abortion in Maize 1

    PubMed Central

    Hanft, Jonathan M.; Jones, Robert J.

    1986-01-01

    Kernels cultured in vitro were induced to abort by high temperature (35°C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35°C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth. PMID:16664846

  19. ANALYSIS OF RESPIRATORY DEPOSITION OF INHALED AMBIENT AEROSOLS FOR DIFFERENT DOSE METRICS

    EPA Science Inventory

    ANALYSIS OF RESPIRATORY DEPOSITION OF INHALED AMBIENT AEROSOLS FOR DIFFERENT DOSE METRICS.
    Chong S. Kim, SC. Hu**, PA Jaques*, US EPA, National Health and Environmental Effects Research Laboratory, Research Triangle Park, NC 27711; **IIT Research Institute, Chicago, IL; *South...

  20. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.
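
    One simple way to realize such an out-of-sample extension, as a stand-in for the paper's hyper-RKHS regression (data, the "learned" kernel matrix, and the ridge value below are all invented), is to regress the learned kernel matrix on base-kernel features:

```python
import numpy as np

# Sketch: extend a learned nonparametric kernel matrix to unseen points by
# regressing its columns on base-kernel features. This is a simplified
# stand-in for the paper's hyper-RKHS regression; everything is synthetic.

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X_train = rng.normal(size=(20, 2))

# Pretend this symmetric matrix came from nonparametric kernel learning.
K_learn = rbf(X_train, X_train) + 0.05 * np.eye(20)

# Fit each column K_learn[:, j] as a function of base-kernel features
# k(x, X_train): K_learn ~= B @ alpha, solved with a small ridge term.
B = rbf(X_train, X_train)
alpha = np.linalg.solve(B + 1e-6 * np.eye(20), K_learn)

def kernel_oos(X_new):
    """Kernel values between out-of-sample points and the training set."""
    return rbf(X_new, X_train) @ alpha            # shape (n_new, n_train)

K_cross = kernel_oos(rng.normal(size=(5, 2)))
```

    With the extension in hand, any inductive kernel method (e.g. an SVM) can score new points through `kernel_oos` instead of requiring them at training time.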

  1. 7 CFR 810.1202 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... kernels. Kernels, pieces of rye kernels, and other grains that are badly ground-damaged, badly weather.... Also, underdeveloped, shriveled, and small pieces of rye kernels removed in properly separating the...-damaged kernels. Kernels, pieces of rye kernels, and other grains that are materially discolored and...

  2. The Genetic Basis of Natural Variation in Kernel Size and Related Traits Using a Four-Way Cross Population in Maize.

    PubMed

    Chen, Jiafa; Zhang, Luyan; Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang

    2016-01-01

Kernel size is an important component of grain yield in maize breeding programs. To extend the understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off of kernel size and yield components was discussed.

  3. The Genetic Basis of Natural Variation in Kernel Size and Related Traits Using a Four-Way Cross Population in Maize

    PubMed Central

    Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang

    2016-01-01

Kernel size is an important component of grain yield in maize breeding programs. To extend the understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off of kernel size and yield components was discussed. PMID:27070143

  4. Calculation of Dose Deposition in 3D Voxels by Heavy Ions

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2010-01-01

The biological response to high-LET radiation is very different from that to low-LET radiation, and can be partly attributed to the energy deposition by the radiation. Several experiments, notably the detection of gamma-H2AX foci by immunofluorescence, have revealed important differences in the nature and in the spatial distribution of double-strand breaks (DSB) induced by low- and high-LET radiations. Many calculations, most of which are based on amorphous track models with radial dose, have been combined with chromosome models to calculate the number and distribution of DSB within nuclei and chromosome aberrations. In this work, the Monte Carlo track structure simulation code RITRACKS has been used to calculate directly the energy deposition in voxels (3D pixels). A cubic volume of 5 micrometers per side was irradiated by 1) 450 (1)H+ ions of 300 MeV (LET is approximately 0.3 keV/micrometer) and 2) one (56)Fe26+ ion of 1 GeV/amu (LET is approximately 150 keV/micrometer). In both cases, the dose deposited in the volume is approximately 1 Gy. All energy deposition events are recorded and dose is calculated in voxels of 20 nm per side. The voxels are then visualized in 3D by using a color scale to represent the intensity of the dose in a voxel. This simple approach has revealed several important points which may help understand experimental observations. In both simulations, voxels which receive low dose are the most numerous, and those corresponding to electron track ends received a dose which is in the higher range. The dose voxels are distributed randomly and scattered uniformly within the volume irradiated by low-LET radiation. The distribution of the voxels shows major differences for the (56)Fe26+ ion. The track structure can still be seen, and voxels with much higher dose are found in the region corresponding to the track "core". 
These high-dose voxels are not found in the low-LET irradiation simulation and may be responsible for DSB that are more difficult to repair. By applying a threshold to the dose visualization, voxels corresponding to electron track ends are revealed, and the spatial distribution of voxels is very similar to the distribution of DSB observed in gamma-H2AX experiments, even though no chromosomes have been included in the simulation. Furthermore, this work has shown that a significant dose is deposited in voxels corresponding to electron track ends. Since some delta-rays from the iron ion can travel several millimeters, they may also be of radiobiological importance.
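
    The voxel-scoring step described in this record is conceptually simple. A minimal numpy sketch (synthetic event positions and energies, illustrative voxel count and water density, not RITRACKS output) looks like:

```python
import numpy as np

# Minimal sketch of scoring energy-deposition events into 3D voxels and
# converting to dose. Events are synthetic; the voxel size and density are
# illustrative, not the values used in the paper.

side_um = 5.0                 # cubic scoring volume, 5 micrometers per side
nvox = 25                     # voxels per axis (0.2 um voxels in this sketch)
voxel_um = side_um / nvox

rng = np.random.default_rng(42)
n_events = 100_000
pos = rng.uniform(0.0, side_um, size=(n_events, 3))   # event positions (um)
edep_keV = rng.exponential(0.05, size=n_events)       # energy per event (keV)

# Scatter-add each event's energy into its voxel.
idx = np.minimum((pos / voxel_um).astype(int), nvox - 1)
energy_keV = np.zeros((nvox, nvox, nvox))
np.add.at(energy_keV, (idx[:, 0], idx[:, 1], idx[:, 2]), edep_keV)

# Dose (Gy = J/kg) = deposited energy / voxel mass, assuming water (1 g/cm^3).
voxel_cm = voxel_um * 1e-4
mass_kg = 1.0 * voxel_cm**3 * 1e-3
dose_Gy = energy_keV * 1.602176634e-16 / mass_kg      # 1 keV = 1.602e-16 J
```

    In a track-structure code the positions and energies would come from the simulated interaction events of each primary ion and its delta-rays, and the resulting `dose_Gy` array is what gets color-mapped for 3D visualization.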

  5. 7 CFR 810.802 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of grain kernels for which standards have been established under the.... (d) Heat-damaged kernels. Kernels and pieces of grain kernels for which standards have been...

  6. Exposure versus internal dose: Respiratory tract deposition modeling of inhaled asbestos fibers in rats and humans (Presentation Poster)

    EPA Science Inventory

    Exposure to asbestos is associated with respiratory diseases, including asbestosis, lung cancer and mesothelioma. Internal fiber dose depends on fiber inhalability and orientation, fiber density, length and width, and various deposition mechanisms (DM). Species-specific param...

  7. Responses to simulated nitrogen deposition by the neotropical epiphytic orchid Laelia speciosa

    PubMed Central

    Díaz-Álvarez, Edison A.; Lindig-Cisneros, Roberto

    2015-01-01

Potential ecophysiological responses to nitrogen deposition, which is considered to be one of the leading causes of global biodiversity loss, were studied for the endangered endemic Mexican epiphytic orchid, Laelia speciosa, via a shadehouse dose-response experiment (doses were 2.5, 5, 10, 20, 40, and 80 kg N ha−1 yr−1) in order to assess the potential risk facing this orchid given impending scenarios of nitrogen deposition. Lower doses of nitrogen of up to 20 kg N ha−1 yr−1, the dose that led to optimal plant performance, acted as fertilizer. For instance, the production of leaves and pseudobulbs was respectively 35% and 36% greater for plants receiving 20 kg N ha−1 yr−1 than under any other dose. Also, the chlorophyll content and quantum yield peaked at 0.66 ± 0.03 g m−2 and 0.85 ± 0.01, respectively, for plants growing under the optimum dose. In contrast, toxic effects were observed at the higher doses of 40 and 80 kg N ha−1 yr−1. The δ13C for leaves averaged −14.7 ± 0.2‰ regardless of the nitrogen dose. In turn, δ15N decreased as the nitrogen dose increased, from 0.9 ± 0.1‰ under 2.5 kg N ha−1 yr−1 to −3.1 ± 0.2‰ under 80 kg N ha−1 yr−1, indicating that orchids preferentially assimilate NH4+ rather than NO3− from the solution under higher doses of nitrogen. Laelia speciosa showed a clear response to inputs of nitrogen; thus, increasing rates of atmospheric nitrogen deposition can pose an important threat to this species. PMID:26131375

  8. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  9. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  10. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  11. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  12. A personal computer-based, multitasking data acquisition system

    NASA Technical Reports Server (NTRS)

    Bailey, Steven A.

    1990-01-01

A multitasking data acquisition system was written to simultaneously collect meteorological radar and telemetry data from two sources. This system is based on the personal computer architecture. Data are collected via two asynchronous serial ports and deposited to disk. The system is written in both the C programming language and assembler. It consists of three parts: a multitasking kernel for data collection, a shell with pull-down windows as the user interface, and a graphics processor for editing data and creating coded messages. An explanation of both system principles and program structure is presented.

  13. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    PubMed

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of the single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision but also in efficiency, for gas classification.
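
    Stripped of the QPSO search, the composite-kernel KELM at the core of this model reduces to a regularized linear solve. A hedged sketch with hand-picked weights and only two of the four base kernels (all data, weights, and parameters below are invented for illustration):

```python
import numpy as np

# Sketch of a kernel extreme learning machine with a fixed weighted composite
# kernel. In the paper the weights, kernel parameters, and C are optimized by
# QPSO; here they are hand-picked purely for illustration.

def gaussian(X, Y, s=1.0):
    d2 = ((X[:, None] - Y[None, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s * s))

def poly(X, Y, d=2):
    return (X @ Y.T + 1.0) ** d

def composite(X, Y, w=(0.7, 0.3)):
    """Weighted sum of base kernels (two of the paper's four, for brevity)."""
    return w[0] * gaussian(X, Y) + w[1] * poly(X, Y)

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 4))                 # stand-in for e-nose features
labels = (X[:, 0] + X[:, 1] > 0).astype(int)
T = np.eye(2)[labels]                        # one-hot targets

C = 10.0                                     # regularization parameter
K = composite(X, X)
beta = np.linalg.solve(K + np.eye(len(X)) / C, T)   # KELM output weights

def predict(X_new):
    return composite(X_new, X) @ beta

train_acc = (predict(X).argmax(1) == labels).mean()
```

    The QPSO step then amounts to searching over `w`, each kernel's parameters, and `C` to maximize validation accuracy, re-running this solve for each candidate.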

  14. Comparison of optimized single and multifield irradiation plans of antiproton, proton and carbon ion beams.

    PubMed

    Bassler, Niels; Kantemiris, Ioannis; Karaiskos, Pantelis; Engelke, Julia; Holzscheiter, Michael H; Petersen, Jørgen B

    2010-04-01

Antiprotons have been suggested as a possibly superior modality for radiotherapy, due to the energy released when antiprotons annihilate, which enhances the Bragg peak and introduces a high-LET component to the dose. However, concerns have been expressed about the inferior lateral dose distribution caused by the annihilation products. We use the Monte Carlo code FLUKA to generate depth-dose kernels for protons, antiprotons, and carbon ions. Using these, we then build virtual treatment plans optimized according to ICRU recommendations for the different beam modalities, which are then recalculated with FLUKA. Dose-volume histograms generated from these plans can be used to compare the different irradiations. The enhancement in physical and possibly biological dose from annihilating antiprotons can significantly lower the dose in the entrance channel, but only at the expense of a diffuse low-dose background from long-range secondary particles. Lateral dose distributions are improved using active beam delivery methods instead of flat fields. Dose-volume histograms for different treatment scenarios show that antiprotons have the potential to reduce the volume of normal tissue receiving medium to high dose; however, in the low-dose region antiprotons are inferior to both protons and carbon ions. This limits the potential usage to situations where dose to normal tissue must be reduced as much as possible. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  15. Classification With Truncated Distance Kernel.

    PubMed

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data, and local linear classifiers can be established simultaneously. The TL1 kernel adapts well to nonlinearity and is suitable for problems that require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be used directly in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with its parameter tuned by cross validation, making the TL1 kernel a promising nonlinear kernel for classification tasks.
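
    The TL1 kernel itself is a one-liner, K(x, y) = max(rho − ||x − y||_1, 0). The sketch below plugs it into kernel ridge classification; since, as the brief notes, the kernel need not be positive semidefinite, a small ridge keeps the linear solve well posed (the data, the value of rho, and the ridge size are all invented here):

```python
import numpy as np

# Sketch of the truncated l1-distance (TL1) kernel used for classification.
# K(x, y) = max(rho - ||x - y||_1, 0); rho and the ridge term are guesses,
# and the data are synthetic.

def tl1(X, Y, rho=7.0):
    d1 = np.abs(X[:, None, :] - Y[None, :, :]).sum(-1)   # pairwise l1 distances
    return np.maximum(rho - d1, 0.0)

rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(80, 5))
y = np.sign(X.sum(axis=1))                               # +/-1 labels

K = tl1(X, X)                                            # possibly indefinite
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(X)), y)    # ridge-regularized fit

def predict(X_new):
    return np.sign(tl1(X_new, X) @ alpha)

train_acc = (predict(X) == y).mean()
```

    Because the kernel is zero beyond the truncation radius, each prediction only "sees" nearby training points, which is where the locally linear behavior described in the brief comes from.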

  16. Organ-specific SPECT activity calibration using 3D printed phantoms for molecular radiotherapy dosimetry.

    PubMed

    Robinson, Andrew P; Tipping, Jill; Cullen, David M; Hamilton, David; Brown, Richard; Flynn, Alex; Oldfield, Christopher; Page, Emma; Price, Emlyn; Smith, Andrew; Snee, Richard

    2016-12-01

Patient-specific absorbed dose calculations for molecular radiotherapy require accurate activity quantification. This is commonly derived from Single-Photon Emission Computed Tomography (SPECT) imaging using a calibration factor relating detected counts to known activity in a phantom insert. A series of phantom inserts, based on the mathematical models underlying many clinical dosimetry calculations, have been produced using 3D printing techniques. SPECT/CT data for the phantom inserts have been used to calculate new organ-specific calibration factors for (99m)Tc and (177)Lu. The measured calibration factors are compared to predicted values from calculations using a Gaussian kernel. Measured SPECT calibration factors for 3D printed organs display a clear dependence on organ shape for (99m)Tc and (177)Lu. The observed variation in calibration factor is reproduced using a Gaussian kernel-based calculation over two orders of magnitude of change in insert volume for (99m)Tc and (177)Lu. These new organ-specific calibration factors show a 24, 11 and 8 % reduction in absorbed dose for the liver, spleen and kidneys, respectively. Non-spherical calibration factors from 3D printed phantom inserts can significantly improve the accuracy of whole-organ activity quantification for molecular radiotherapy, providing a crucial step towards individualised activity quantification and patient-specific dosimetry. 3D printed inserts are found to provide a cost-effective and efficient way for clinical centres to access more realistic phantom data.

  17. SU-E-T-22: A Deterministic Solver of the Boltzmann-Fokker-Planck Equation for Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Paganetti, H

    2015-06-15

Purpose: The Boltzmann-Fokker-Planck equation (BFPE) accurately models the migration of photons/charged particles in tissues. While the Monte Carlo (MC) method is popular for solving the BFPE in a statistical manner, we aim to develop a deterministic BFPE solver based on various state-of-the-art numerical acceleration techniques for rapid and accurate dose calculation. Methods: Our BFPE solver is based on a structured grid that is maximally parallelizable, with discretization in energy, angle and space, and its cross section coefficients are derived from or directly imported from the Geant4 database. The physical processes taken into account are Compton scattering, the photoelectric effect and pair production for photons, and elastic scattering, ionization and bremsstrahlung for charged particles. While the spatial discretization is based on the diamond scheme, the angular discretization synergizes the finite element method (FEM) and spherical harmonics (SH). Thus, SH is used to globally expand the scattering kernel and FEM is used to locally discretize the angular sphere. As a result, this hybrid method (FEM-SH) is both accurate in dealing with forward-peaked scattering via FEM, and efficient for multi-energy-group computation via SH. In addition, FEM-SH enables the analytical integration in the energy variable of the delta scattering kernel for elastic scattering, with reduced truncation error compared to the numerical integration of the classic SH-based multi-energy-group method. Results: The accuracy of the proposed BFPE solver was benchmarked against Geant4 for photon dose calculation. In particular, FEM-SH had improved accuracy compared to FEM, while both were within 2% of the results obtained with Geant4. Conclusion: A deterministic solver of the Boltzmann-Fokker-Planck equation was developed for dose calculation and benchmarked against Geant4. 
Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
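
    The trade-off described above, namely that a global spherical-harmonics expansion struggles with forward-peaked scattering, can be illustrated numerically. The sketch below (an illustration only, not the authors' solver) expands a Henyey-Greenstein-type scattering kernel in Legendre polynomials, the 1D analogue of a spherical-harmonics expansion, and shows the truncation error growing sharply with the anisotropy parameter g:

```python
import numpy as np
from numpy.polynomial import legendre

def hg_kernel(mu, g):
    # Henyey-Greenstein-type phase function: an illustrative scattering kernel
    # whose forward peaking is controlled by the anisotropy parameter g
    return 0.5 * (1 - g ** 2) / (1 + g ** 2 - 2 * g * mu) ** 1.5

def truncation_error(g, order, nquad=200):
    # L2 error of a truncated Legendre expansion (the 1D analogue of an
    # SH expansion of the scattering kernel), via Gauss-Legendre quadrature
    x, w = legendre.leggauss(nquad)
    f = hg_kernel(x, g)
    approx = np.zeros_like(x)
    for l in range(order + 1):
        P = legendre.Legendre.basis(l)(x)
        c = (2 * l + 1) / 2 * np.sum(w * f * P)   # c_l = (2l+1)/2 * int f P_l dmu
        approx += c * P
    return np.sqrt(np.sum(w * (f - approx) ** 2))

err_mild = truncation_error(g=0.2, order=8)   # nearly isotropic: tiny error
err_peak = truncation_error(g=0.9, order=8)   # forward-peaked: large error
```

    For the same expansion order, the forward-peaked kernel retains a far larger residual, which is the motivation the abstract gives for handling forward peaking locally with FEM instead of globally with SH.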

  18. SU-F-T-450: The Investigation of Radiotherapy Quality Assurance and Automatic Treatment Planning Based On the Kernel Density Estimation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, J; Fan, J; Hu, W

    Purpose: To develop a fast automatic algorithm based on two-dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH), for use in radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of the DVH in a probabilistic setting. Training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimate of the conditional probability of the dose given the values of the predictive features. For a new patient, prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from training over this distribution. Integrating the resulting probability distribution for the dose yields an estimate of the DVH. The 2D KDE is used to estimate the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, the signed minimal distance from each OAR (organ at risk) voxel to the target boundary and its opening angle with respect to the origin of the voxel coordinate system, are used as the predictive features to represent the OAR-target spatial relationship. The feasibility of our method has been demonstrated with rectum, breast, and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: The predicted and planned DVHs were consistent for each cancer type, with average relative point-wise differences of about 5%, within a clinically acceptable range. Conclusion: Our method can be used to predict clinically acceptable DVHs and to evaluate the quality and consistency of treatment planning.
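
    The estimate-then-marginalize pipeline described above can be sketched with SciPy's Gaussian KDE. Everything below is a toy illustration on synthetic data (the distance-dose model, sample sizes, and grids are invented), not the authors' implementation:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Synthetic training pairs pooled over prior plans: (distance to target, dose).
# Dose falls off with distance from the target boundary (invented model).
dist = rng.uniform(0, 5, 1500)
dose = 60 * np.exp(-0.5 * dist) + rng.normal(0, 2, dist.size)

joint = gaussian_kde(np.vstack([dist, dose]))   # joint p(distance, dose)
p_feat = gaussian_kde(dist)                     # training marginal p(distance)

# New patient: distribution of the predictive feature over its OAR voxels
new_dist = rng.uniform(0, 3, 200)

# Marginalize the conditional over the new patient's feature distribution:
# p(dose) ~= mean_i p(dose | dist_i), with p(dose | d) = joint(d, dose) / p(d)
grid = np.linspace(0, 70, 71)
dx = grid[1] - grid[0]
pdf = np.zeros_like(grid)
for d in new_dist:
    pts = np.vstack([np.full_like(grid, d), grid])
    pdf += joint(pts) / max(p_feat(d)[0], 1e-12)
pdf /= new_dist.size
pdf /= pdf.sum() * dx                           # renormalize numerically

# DVH: fraction of voxels receiving at least each dose level
dvh = 1.0 - np.cumsum(pdf) * dx
```

    Integrating the dose distribution from the top down yields the cumulative DVH, which is the quantity compared against the planned DVHs in the abstract.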

  19. A study of structural and mechanical properties of nano-crystalline tungsten nitride film synthesis by plasma focus

    NASA Astrophysics Data System (ADS)

    Hussnain, Ali; Singh Rawat, Rajdeep; Ahmad, Riaz; Hussain, Tousif; Umar, Z. A.; Ikhlaq, Uzma; Chen, Zhong; Shen, Lu

    2015-02-01

    Nano-crystalline tungsten nitride thin films are synthesized on AISI-304 steel at room temperature using a Mather-type plasma focus system. The surface properties of substrates exposed to different numbers of deposition shots are examined for crystal structure, surface morphology, and mechanical properties using X-ray diffraction (XRD), atomic force microscopy, field emission scanning electron microscopy, and nano-indentation. The XRD results show the growth of WN and WN2 phases and the development of strain/stress in the deposited films as the number of deposition shots is varied. The morphology of the deposited films shows significant changes in surface structure with different ion energy doses (numbers of deposition shots). The strain/stress developed in the films at different ion energy doses leads to improved hardness of the deposited films.

  20. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach

    PubMed Central

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-01-01

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of single-hidden-layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202
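
    The composite-kernel KELM step can be sketched in a few lines. The combination weights below are fixed by hand where the paper optimizes them with QPSO, and the toy data and parameter values are invented for illustration:

```python
import numpy as np

def gauss_k(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_k(A, B, degree=2, c=1.0):
    return (A @ B.T + c) ** degree

def composite_k(A, B, w=(0.6, 0.4)):
    # Fixed combination weights stand in for the QPSO-optimized ones
    return w[0] * gauss_k(A, B) + w[1] * poly_k(A, B)

def kelm_fit(X, Y, C=10.0):
    # KELM output weights: beta = (K + I/C)^{-1} Y
    K = composite_k(X, X)
    return np.linalg.solve(K + np.eye(len(X)) / C, Y)

def kelm_predict(X_train, beta, X_new):
    return composite_k(X_new, X_train) @ beta

# Toy two-class "e-nose" responses: one-hot targets, argmax decoding
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (40, 4)), rng.normal(1.5, 0.3, (40, 4))])
Y = np.zeros((80, 2)); Y[:40, 0] = 1; Y[40:, 1] = 1

beta = kelm_fit(X, Y)
queries = np.array([[0.0, 0.0, 0.0, 0.0], [1.5, 1.5, 1.5, 1.5]])
labels = kelm_predict(X, beta, queries).argmax(1)
```

    The design point of KELM is that training reduces to one regularized linear solve against the (here composite) kernel matrix, which is why the expensive part of QWMK-ELM is the outer QPSO search, not each inner fit.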

  1. An accurate derivation of the air dose-rate and the deposition concentration distribution by aerial monitoring in a low level contaminated area

    NASA Astrophysics Data System (ADS)

    Nishizawa, Yukiyasu; Sugita, Takeshi; Sanada, Yukihisa; Torii, Tatsuo

    2015-04-01

    Since 2011, MEXT (the Ministry of Education, Culture, Sports, Science and Technology, Japan) has been conducting aerial monitoring to investigate the distribution of radioactive cesium dispersed into the atmosphere after the accident at the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) of the Tokyo Electric Power Company. Distribution maps of the air dose-rate at 1 m above the ground and of the radioactive cesium deposition concentration on the ground are prepared using spectra obtained by aerial monitoring. The radioactive cesium deposition is derived from its dose rate, which is calculated by subtracting the dose rate of the background radiation due to natural radionuclides from the air dose-rate at 1 m above the ground. The first step of the current method of calculating the dose rate due to natural radionuclides is to calculate, in areas where no radioactive cesium is detected, the ratio of the total count rate to the count rate at energies of 1,400 keV or higher (the BG-Index). Next, the natural-background component of the air dose rate in areas where radioactive cesium is distributed is calculated by multiplying the BG-Index by the integrated count rate at 1,400 keV or higher. In high dose-rate areas, however, the count rate of the 1,365-keV peak of Cs-134, though small, is included in the integrated count rate at 1,400 keV or higher, which can cause an overestimation of the air dose rate due to natural radionuclides. We developed a method for accurately evaluating distribution maps of the natural air dose-rate by excluding the effect of radioactive cesium even in contaminated areas, and obtained an accurate map of the air dose-rate attributed to the radioactive cesium deposition on the ground. Furthermore, the natural dose-rate distribution throughout Japan has been obtained by this method.
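
    The two-step BG-Index logic lends itself to a short numerical sketch. The spectra, continuum shape, and peak widths below are invented for illustration, and the 1,365-keV Cs-134 leakage problem the abstract addresses is deliberately absent from this toy:

```python
import numpy as np

energies = np.arange(0.0, 3000.0, 3.0)   # keV channel energies (toy binning)

def bg_index(spectrum, energies):
    # Calibrated in a cesium-free area: total count rate divided by the
    # count rate at 1,400 keV and above
    return spectrum.sum() / spectrum[energies >= 1400].sum()

def natural_count_rate(spectrum, energies, idx):
    # In a contaminated area, the natural-background component of the total
    # count rate is estimated as BG-Index times the >= 1,400 keV rate
    return idx * spectrum[energies >= 1400].sum()

# Cesium-free calibration spectrum: a smooth toy natural continuum
natural = np.exp(-energies / 800.0)
idx = bg_index(natural, energies)

# Contaminated spectrum: same continuum plus Cs-134 (605, 796 keV) and
# Cs-137 (662 keV) photopeaks, all below the 1,400 keV window
cesium = np.zeros_like(energies)
for peak in (605, 662, 796):
    cesium += 50 * np.exp(-0.5 * ((energies - peak) / 6.0) ** 2)
contaminated = natural + cesium

nat_est = natural_count_rate(contaminated, energies, idx)
cs_est = contaminated.sum() - nat_est    # cesium component of the total rate
```

    In this idealized case the method recovers the natural component exactly, because no cesium counts fall above 1,400 keV; the paper's contribution is correcting the bias that appears when the 1,365-keV Cs-134 peak does leak into that window.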

  2. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. 
The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
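
    The key numerical point above, keeping only eigenvectors with positive eigenvalues so the projections stay real when the fractional power polynomial "kernel" is not positive semidefinite, can be sketched as follows (synthetic data; all parameter values are illustrative):

```python
import numpy as np

def frac_poly_kernel(X, Y, d=0.8):
    # Fractional power polynomial "kernel" k(x, y) = sign(x.y) * |x.y|^d;
    # for non-integer d this need not give a positive semidefinite Gram matrix
    S = X @ Y.T
    return np.sign(S) * np.abs(S) ** d

def kernel_pca(X, kernel, n_components=5):
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n        # feature-space centering
    Kc = J @ kernel(X, X) @ J
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    keep = vals > 1e-10                        # positive eigenvalues only
    vals = vals[keep][:n_components]
    vecs = vecs[:, keep][:, :n_components]
    return vecs * np.sqrt(vals)                # projections of training points

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 10))                  # stand-in for Gabor feature vectors
feats = kernel_pca(X, frac_poly_kernel)
```

    Discarding the negative part of the spectrum is what licenses treating the fractional power polynomial as if it were a kernel, the same argument the paper makes for sigmoid kernels.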

  3. A TPS kernel for calculating survival vs. depth: distributions in a carbon radiotherapy beam, based on Katz's cellular Track Structure Theory.

    PubMed

    Waligórski, M P R; Grzanka, L; Korcyl, M; Olko, P

    2015-09-01

    An algorithm for a treatment planning system (TPS) kernel was developed for carbon radiotherapy, in which Katz's Track Structure Theory of cellular survival (TST) is applied as the radiobiology component. The physical beam model is based on available tabularised data, prepared by Monte Carlo simulations of a set of pristine carbon beams of different input energies. An optimisation tool developed for this purpose is used to find the composition of pristine carbon beams, by input energy and fluence, that delivers a pre-selected depth-dose distribution profile over the spread-out Bragg peak (SOBP) region. Using an extrapolation algorithm, energy-fluence spectra of the primary carbon ions and of all their secondary fragments are obtained at regular steps of beam depth. To obtain survival vs. depth distributions, the TST calculation is applied to the energy-fluence spectra of the mixed field of primary ions and their secondary products at the given beam depths. Katz's TST offers a unique analytical and quantitative prediction of cell survival in such mixed ion fields. By optimising the pristine beam composition to a published depth-dose profile over the SOBP region of a carbon beam, and using TST model parameters representing the survival of CHO (Chinese Hamster Ovary) cells in vitro, it was possible to satisfactorily reproduce a published data set of CHO cell survival vs. depth measurements after carbon ion irradiation. The authors also show by a TST calculation that 'biological dose' is neither linear nor additive. © The Author 2015. Published by Oxford University Press. All rights reserved.

  4. In vitro evaluation of a new iterative reconstruction algorithm for dose reduction in coronary artery calcium scoring

    PubMed Central

    Allmendinger, Thomas; Kunz, Andreas S; Veyhl-Wichmann, Maike; Ergün, Süleyman; Bley, Thorsten A; Petritsch, Bernhard

    2017-01-01

    Background Coronary artery calcium (CAC) scoring is a widespread tool for cardiac risk assessment in asymptomatic patients, and its possible adverse effects, i.e. radiation exposure, should be as low as reasonably achievable. Purpose To evaluate a new iterative reconstruction (IR) algorithm for dose reduction in in vitro coronary artery calcium scoring at different tube currents. Material and Methods An anthropomorphic calcium scoring phantom was scanned in different configurations simulating slim, average-sized, and large patients. A standard calcium scoring protocol was performed on a third-generation dual-source CT at 120 kVp tube voltage. The reference tube current was 80 mAs and was reduced stepwise to 60, 40, 20, and 10 mAs. Images were reconstructed with weighted filtered back projection (wFBP) and a new version of an established IR kernel at different strength levels. Calcifications were quantified by calculating Agatston and volume scores. Subjective image quality was visualized with scans of an ex vivo human heart. Results In general, Agatston and volume scores remained relatively stable between 80 and 40 mAs and increased at lower tube currents, particularly in the medium and large phantoms. IR reduced this effect, as both Agatston and volume scores decreased with increasing levels of IR compared to wFBP (P < 0.001). Depending on the selected parameters, radiation dose could be lowered by up to 86% in the large phantom when selecting a reference tube current of 10 mAs, with resulting Agatston levels close to the reference settings. Conclusion New iterative reconstruction kernels may allow a reduction in tube current for established Agatston scoring protocols and consequently a substantial reduction in radiation exposure. PMID:28607763
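
    For readers unfamiliar with the quantity being stabilized here, Agatston scoring can be sketched as follows. This is a simplified single-slice version (the clinical protocol also involves slice thickness and calibrated lesion-area cutoffs; all values below are illustrative):

```python
import numpy as np

AGATSTON_THRESHOLD = 130  # HU

def density_weight(peak_hu):
    # Standard Agatston density weighting of a lesion's peak attenuation
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    if peak_hu >= AGATSTON_THRESHOLD: return 1
    return 0

def agatston_score(slice_hu, pixel_area_mm2, min_area_mm2=1.0):
    # Per-lesion score = area (mm^2) times the density weight of the lesion's
    # peak HU; lesions found by 4-connected flood fill above the HU threshold
    mask = slice_hu >= AGATSTON_THRESHOLD
    visited = np.zeros_like(mask)
    score = 0.0
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not visited[i, j]:
                stack, pix = [(i, j)], []
                visited[i, j] = True
                while stack:
                    a, b = stack.pop()
                    pix.append((a, b))
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        x, y = a + da, b + db
                        if 0 <= x < h and 0 <= y < w and mask[x, y] and not visited[x, y]:
                            visited[x, y] = True
                            stack.append((x, y))
                area = len(pix) * pixel_area_mm2
                if area >= min_area_mm2:
                    score += area * density_weight(max(slice_hu[p] for p in pix))
    return score

# Toy slice: one 3x3 lesion peaking at 450 HU on a 40 HU background
img = np.full((20, 20), 40.0)
img[5:8, 5:8] = 320.0
img[6, 6] = 450.0
score = agatston_score(img, pixel_area_mm2=0.25)
```

    Because the score depends on a hard 130 HU threshold and on the lesion's peak value, it is sensitive to noise at low tube currents, which is exactly the instability the IR kernels in this study are meant to suppress.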

  5. Effects of neem seed derivatives on behavioral and physiological responses of the Cosmopolites sordidus (Coleoptera: Curculionidae).

    PubMed

    Musabyimana, T; Saxena, R C; Kairu, E W; Ogol, C P; Khan, Z R

    2001-04-01

    In both choice and multi-choice laboratory tests, fewer adults of the banana root borer, Cosmopolites sordidus (Germar), settled under corms of the susceptible banana "Nakyetengu" treated with 5% aqueous extract of neem seed powder or cake, or with 2.5% or 5% emulsified neem oil, than under water-treated corms. Feeding damage by larvae on banana pseudostem discs treated with 5% extract of powdered neem seed, kernel, or cake, or with 5% emulsified neem oil, was significantly less than on untreated discs. The larvae took much longer to locate feeding sites, initiate feeding, and bore into pseudostem discs treated with extract of powdered neem seed or kernel. Few larvae survived when confined for 14 d on neem-treated banana pseudostems; the survivors weighed two to four times less than larvae developing on untreated pseudostems. Females deposited up to 75% fewer eggs on neem-treated corms. In addition, egg hatching was reduced on neem-treated corms. The higher the concentration of neem materials, the more severe the effect.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerczak, Tyler J.; Smith, Kurt R.; Petrie, Christian M.

    Tristructural-isotropic (TRISO)-coated particle fuel is a promising advanced fuel concept consisting of a spherical fuel kernel made of uranium oxide and uranium carbide, surrounded by a porous carbonaceous buffer layer and successive layers of dense inner pyrolytic carbon (IPyC), silicon carbide (SiC) deposited by chemical vapor deposition, and dense outer pyrolytic carbon (OPyC). This fuel concept is being considered for advanced reactor applications such as high temperature gas-cooled reactors (HTGRs) and molten salt reactors (MSRs), as well as for accident-tolerant fuel for light water reactors (LWRs). Development and implementation of TRISO fuel for these reactor concepts support the US Department of Energy (DOE) Office of Nuclear Energy mission to promote safe, reliable nuclear energy that is sustainable and environmentally friendly. During operation, the SiC layer serves as the primary barrier to metallic fission products and actinides not retained in the kernel. It has been observed that certain fission products are released from TRISO fuel during operation, notably Ag, Eu, and Sr [1]. Release of these radioisotopes causes safety and maintenance concerns.

  7. Harnessing AIA Diffraction Patterns to Determine Flare Footpoint Temperatures

    NASA Astrophysics Data System (ADS)

    Bain, H. M.; Schwartz, R. A.; Torre, G.; Krucker, S.; Raftery, C. L.

    2014-12-01

    In the "Standard Flare Model" energy from accelerated electrons is deposited at the footpoints of newly reconnected flare loops, heating the surrounding plasma. Understanding the relation between the multi-thermal nature of the footpoints and the energy flux from accelerated electrons is therefore fundamental to flare physics. Extreme ultraviolet (EUV) images of bright flare kernels, obtained from the Atmospheric Imaging Assembly (AIA) onboard the Solar Dynamics Observatory, are often saturated despite the implementation of automatic exposure control. These kernels produce diffraction patterns often seen in AIA images during the most energetic flares. We implement an automated image reconstruction procedure, which utilizes diffraction pattern artifacts, to de-saturate AIA images and reconstruct the flare brightness in saturated pixels. Applying this technique to recover the footpoint brightness in each of the AIA EUV passbands, we investigate the footpoint temperature distribution. Using observations from the Ramaty High Energy Solar Spectroscopic Imager (RHESSI), we will characterize the footpoint accelerated electron distribution of the flare. By combining these techniques, we investigate the relation between the nonthermal electron energy flux and the temperature response of the flare footpoints.

  8. Design of spray dried insulin microparticles to bypass deposition in the extrathoracic region and maximize total lung dose.

    PubMed

    Ung, Keith T; Rao, Nagaraja; Weers, Jeffry G; Huang, Daniel; Chan, Hak-Kim

    2016-09-25

    Inhaled drugs all too often deliver only a fraction of the emitted dose to the target lung site due to deposition in the extrathoracic region (i.e., mouth and throat), which can lead to increased variation in lung exposure, and in some instances increases in local and systemic side effects. For aerosol medications, improved targeting to the lungs may be achieved by tailoring the micromeritic properties of the particles (e.g., size, density, rugosity) to minimize deposition in the mouth-throat and maximize the total lung dose. This study evaluated a co-solvent spray drying approach to modulate particle morphology and dose delivery characteristics of engineered powder formulations of insulin microparticles. The binary co-solvent system studied included water as the primary solvent mixed with an organic co-solvent, e.g., ethanol. Factors such as the relative rate of evaporation of each component of a binary co-solvent mixture, and insulin solubility in each component were considered in selecting feedstock compositions. A water-ethanol co-solvent mixture with a composition range considered suitable for modulating particle shell formation during drying was selected for experimental investigation. An Alberta Idealized Throat model was used to evaluate the in vitro total lung dose of a series of spray dried insulin formulations engineered with different bulk powder properties and delivered with two prototype inhalers that fluidize and disperse powder using different principles. The in vitro total lung dose of insulin microparticles was improved and favored for powders with low bulk density and small primary particle size, with reduction of deposition in the extrathoracic region. The results demonstrated that a total lung dose >95% of the delivered dose can be achieved with engineered particles, indicating a high degree of lung targeting, almost completely bypassing deposition in the mouth-throat. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. A multi-label learning based kernel automatic recommendation method for support vector machine.

    PubMed

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single best kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. To automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. A kernel recommendation model is then constructed on the generated meta-knowledge data base with a multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave, and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.
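
    A binary-relevance flavor of the recommendation step can be sketched with a k-NN multi-label learner. The meta-features, labels, and k below are invented; the paper's actual pipeline uses a richer meta-knowledge base and dedicated multi-label classifiers:

```python
import numpy as np

KERNELS = ["linear", "rbf", "polynomial", "sigmoid"]

# Hypothetical meta-knowledge base: rows = data sets, columns = meta-features
# (e.g. size, dimensionality, class balance); meta_Y marks which kernels were
# found applicable (1) on each data set. All values here are invented.
rng = np.random.default_rng(4)
meta_X = rng.normal(size=(30, 3))
meta_Y = (rng.random((30, len(KERNELS))) > 0.5).astype(int)

def recommend(meta_X, meta_Y, x_new, k=5):
    # Binary-relevance k-NN: each kernel is one label; recommend kernels that
    # a majority of the k nearest known data sets found applicable
    d = np.linalg.norm(meta_X - x_new, axis=1)
    votes = meta_Y[np.argsort(d)[:k]].mean(axis=0)
    return [name for name, v in zip(KERNELS, votes) if v > 0.5]

recs = recommend(meta_X, meta_Y, rng.normal(size=3))
```

    Framing kernel selection as multi-label prediction, rather than as a search for one best kernel, is what lets the method return several equally acceptable kernels without running cross-validation on the new data set.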

  10. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single best kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. To automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. A kernel recommendation model is then constructed on the generated meta-knowledge data base with a multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave, and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896

  11. 7 CFR 981.7 - Edible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...

  12. Kernel K-Means Sampling for Nyström Approximation.

    PubMed

    He, Li; Zhang, Hong

    2018-05-01

    A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel K-means sampling, which we show minimizes the upper bound of the matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the K-means error of the data points in kernel space plus a constant. Thus, the K-means centers of the data in kernel space, the kernel K-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both the Gaussian kernel and the polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over state-of-the-art methods.
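
    The idea can be checked empirically with a small sketch: build the Nyström approximation K ~ C W^+ C^T once from k-means landmarks and once from uniformly sampled landmarks, and compare Frobenius errors. The data, kernel width, and landmark count are invented; this is not the authors' code:

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None] - B[None, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, landmarks, gamma=1.0):
    # Nystrom approximation K ~ C W^+ C^T built from a landmark set
    C = rbf(X, landmarks, gamma)
    W = rbf(landmarks, landmarks, gamma)
    return C @ np.linalg.pinv(W) @ C.T

def kmeans(X, k, iters=50, seed=0):
    # Minimal Lloyd's algorithm (input-space k-means as a stand-in for the
    # paper's kernel k-means)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = ((X[:, None] - centers[None, :]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[lab == j].mean(0) if (lab == j).any()
                            else centers[j] for j in range(k)])
    return centers

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(c, 0.2, (50, 2)) for c in ((0, 0), (3, 0), (0, 3))])
K = rbf(X, X)

rand_marks = X[rng.choice(len(X), 6, replace=False)]
err_rand = np.linalg.norm(K - nystrom(X, rand_marks), 'fro')
err_km = np.linalg.norm(K - nystrom(X, kmeans(X, 6)), 'fro')
# On clustered data, the k-means landmarks typically give the lower error
```

    Uniform sampling can leave a cluster without a nearby landmark, inflating the error; k-means centers cover every cluster by construction, which is the intuition behind the paper's bound.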

  13. Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment

    NASA Astrophysics Data System (ADS)

    Nigg, D. W.; Wheeler, F. J.

    1981-01-01

    A calculational model is presented to estimate the radiation dose, due to the skyshine effect, in the control room and at the site boundary of the Poloidal Diverter Experiment (PDX) facility at Princeton University, which requires substantial radiation shielding. The required composition and thickness of a water-filled roof shield that would reduce this effect to an acceptable level is computed using an efficient one-dimensional model with an Sn calculation in slab geometry. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab Sn calculation, and the capture gamma dose is computed using a simple point-kernel single-scatter method. It is maintained that the slab model provides the exact probability of leakage out of the top surface of the roof and that it is nearly as accurate as, and much less costly than, multi-dimensional techniques.
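
    A point-kernel estimate of the kind mentioned above boils down to attenuated inverse-square flux with a buildup correction. The sketch below uses an illustrative linear buildup factor and invented source and material values, not the parameters of the PDX study:

```python
import numpy as np

def point_kernel_flux(S, r, mu, buildup=None):
    # Uncollided flux from an isotropic point source: S * exp(-mu*r) / (4*pi*r^2),
    # optionally multiplied by a buildup factor to account for scattered photons
    phi = S * np.exp(-mu * r) / (4 * np.pi * r ** 2)
    if buildup is not None:
        phi *= buildup(mu * r)
    return phi

# Linear buildup B(mu*r) = 1 + a*mu*r; the coefficient a is an invented
# stand-in for a fitted, material- and energy-dependent constant
linear_buildup = lambda mur: 1 + 1.2 * mur

# Capture-gamma point source on the roof, receptor 10 m away through air
# (S and mu values are illustrative only)
phi = point_kernel_flux(S=1e9, r=1000.0, mu=5e-5, buildup=linear_buildup)
```

    Summing such kernels over source points (and folding in one scatter term, as the paper's single-scatter method does) replaces a full transport calculation at a tiny fraction of the cost.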

  14. Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nigg, D.W.; Wheeler, F.J.

    1981-01-01

    A calculational model is presented to estimate the radiation dose, due to the skyshine effect, in the control room and at the site boundary of the Poloidal Diverter Experiment (PDX) facility at Princeton University, which requires substantial radiation shielding. The required composition and thickness of a water-filled roof shield that would reduce this effect to an acceptable level is computed using an efficient one-dimensional model with an Sn calculation in slab geometry. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab Sn calculation, and the capture gamma dose is computed using a simple point-kernel single-scatter method. It is maintained that the slab model provides the exact probability of leakage out of the top surface of the roof and that it is nearly as accurate as, and much less costly than, multi-dimensional techniques.

  15. Genotype-based dosage of acenocoumarol in highly-sensitive geriatric patients.

    PubMed

    Lozano, Roberto; Franco, María-Esther; López, Luis; Moneva, Juan-José; Carrasco, Vicente; Pérez-Layo, Maria-Angeles

    2015-03-01

    Our aim was to determine the acenocoumarol dose requirement in highly sensitive geriatric patients, based on a minimum of genotype (VKORC1 and CYP2C9) data. We used a Gaussian kernel density estimation test to identify patients highly sensitive to the drug and the PHARMACHIP®-Cuma test (Progenika Biopharma, SA, Grifols, Spain) to determine the CYP2C9 and VKORC1 genotypes. All highly sensitive geriatric patients were taking ≤5.6 mg/week of acenocoumarol (AC), and 86% of these patients presented the following genotypes: CYP2C9*1/*3 or CYP2C9*1/*2 plus VKORC1 A/G, CYP2C9*3/*3, or VKORC1 A/A. The VKORC1 A and CYP2C9*2 and/or *3 allelic variants strongly influence the AC dose requirement of highly sensitive geriatric patients. These patients display an acenocoumarol dose requirement of ≤5.6 mg/week.

  16. Accuracy and variability of texture-based radiomics features of lung lesions across CT imaging conditions

    NASA Astrophysics Data System (ADS)

    Zheng, Yuese; Solomon, Justin; Choudhury, Kingshuk; Marin, Daniele; Samei, Ehsan

    2017-03-01

    Texture analysis for lung lesions is sensitive to changing imaging conditions, but these effects are not well understood, in part due to a lack of ground-truth phantoms with realistic textures. The purpose of this study was to explore the accuracy and variability of texture features across imaging conditions by comparing imaged texture features to voxel-based 3D printed textured lesions for which the true values are known. The seven features of interest were based on the Grey Level Co-Occurrence Matrix (GLCM). The lesion phantoms were designed with three shapes (spherical, lobulated, and spiculated), two textures (homogeneous and heterogeneous), and two sizes (diameter < 1.5 cm and 1.5 cm < diameter < 3 cm), resulting in 24 lesions (with a second replica of each). The lesions were inserted into an anthropomorphic thorax phantom (Multipurpose Chest Phantom N1, Kyoto Kagaku) and imaged using a commercial CT system (GE Revolution) at three CTDI levels (0.67, 1.42, and 5.80 mGy), three reconstruction algorithms (FBP, IR-2, IR-4), reconstruction kernel types including standard, soft, and edge, and two slice thicknesses (0.6 mm and 5 mm). A repeat scan was also performed. Texture features from these images were extracted and compared to the ground-truth feature values by percent relative error. The variability across imaging conditions was calculated as the standard deviation across a given imaging condition for all heterogeneous lesions. The results indicated that the acquisition method has a significant influence on the accuracy and variability of extracted features, and as such, feature quantities are highly susceptible to imaging parameter choices. The most influential parameters were slice thickness and reconstruction kernel. A thin slice thickness and an edge reconstruction kernel overall produced more accurate and more repeatable results. 
Some features (e.g., Contrast) were more accurately quantified under conditions that render higher spatial frequencies (e.g., thinner slice thickness and sharp kernels), while others (e.g., Homogeneity) were more accurately quantified under conditions that render smoother images (e.g., higher dose and smoother kernels). Care should be exercised in relating texture features between cases with varied acquisition protocols, with cross-calibration needed depending on the feature of interest.
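
    The GLCM features under study can be computed directly. The sketch below builds a symmetric, normalized GLCM for one pixel offset and evaluates contrast, homogeneity, and energy on invented smooth and rough toy textures (not the study's phantom data):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    # Symmetric, normalized Grey Level Co-occurrence Matrix for one offset;
    # `img` must already be quantized to integer levels in [0, levels)
    P = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(max(0, -dy), h - max(0, dy)):
        for j in range(max(0, -dx), w - max(0, dx)):
            P[img[i, j], img[i + dy, j + dx]] += 1
    P = P + P.T
    return P / P.sum()

def glcm_features(P):
    i, j = np.indices(P.shape)
    return {
        "contrast":    ((i - j) ** 2 * P).sum(),
        "homogeneity": (P / (1 + np.abs(i - j))).sum(),
        "energy":      (P ** 2).sum(),
    }

rng = np.random.default_rng(6)
# Smooth texture: neighboring grey levels differ by at most 1
smooth = np.clip(np.cumsum(rng.integers(-1, 2, (32, 32)), axis=1), 0, 7)
# Rough texture: independent random grey levels
rough = rng.integers(0, 8, (32, 32))
f_smooth = glcm_features(glcm(smooth))
f_rough = glcm_features(glcm(rough))
```

    Because contrast weights co-occurrences by the squared grey-level difference while homogeneity weights them inversely, noise and sharp kernels push the two features in opposite directions, consistent with the accuracy pattern reported above.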

  17. Potential application of metal nanoparticles for dosimetric systems: Concepts and perspectives

    NASA Astrophysics Data System (ADS)

    Guidelli, Eder José; Baffa, Oswaldo

    2014-11-01

    Metallic nanoparticles increase the delivered dose and consequently enhance tissue radiosensitization during radiation therapy of cancer. The Dose Enhancement Factor (DEF) is the ratio between the dose deposited in a tissue containing nanoparticles and the dose deposited in a tissue without nanoparticles. In this context, we have used electron spin resonance (ESR) spectroscopy to investigate how silver and gold nanoparticles affect the dose deposition in alanine dosimeters, which act as a surrogate for soft tissue. Besides optimizing radiation absorption by the dosimeter, the optical properties of these metal nanoparticles could also improve light emission from materials employed as radiation detectors. Therefore, we have also examined how the plasmonic properties of noble metal nanoparticles could enhance radiation detection using optically stimulated luminescence (OSL) dosimetry. This work shows how the use of gold and silver nanoparticles is beneficial for the ESR and OSL dosimetric techniques, and describes the difficulties we have been facing, the challenges to overcome, and the perspectives.

  18. Exploiting graph kernels for high performance biomedical relation extraction.

    PubMed

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, another important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant. 
In our evaluation of ASM for the PPI task, ASM performed better than the APG kernel on the BioInfer dataset in the Area Under Curve (AUC) measure (74% vs 69%). However, for all the other PPI datasets, namely AIMed, HPRD50, IEPA and LLL, ASM is substantially outperformed by the APG kernel in both F-score and AUC. We demonstrate high-performance Chemical-Induced Disease relation extraction without employing external knowledge sources or task-specific heuristics. Our work shows that graph kernels are effective in extracting relations that are expressed in multiple sentences, and that the graph kernels, namely the ASM and APG kernels, substantially outperform the tree kernels. Among the graph kernels, the ASM kernel is effective for biomedical relation extraction, with performance comparable to the APG kernel on datasets such as CID sentence-level relation extraction and BioInfer in PPI. Overall, however, the APG kernel proved more accurate than the ASM kernel, achieving better performance on most datasets.
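The APG and ASM kernels themselves are too involved to reproduce here, but the underlying idea of a graph kernel, scoring a pair of graphs by the substructure they share, can be illustrated with a much simpler (hypothetical) walk-counting kernel on the direct product graph:

```python
import numpy as np

def walk_kernel(A1, A2, max_len=3):
    """Toy graph kernel: counts common walks up to max_len via the direct
    (tensor) product graph. Far simpler than the APG/ASM kernels discussed
    above, but it illustrates comparing graphs by shared substructure."""
    Ax = np.kron(A1, A2)          # adjacency matrix of the product graph
    n = Ax.shape[0]
    k, P = 0.0, np.eye(n)
    for _ in range(max_len):
        P = P @ Ax                # P[i, j] = walks of the current length
        k += P.sum()              # total common walks of this length
    return k

# path graph a-b-c vs. a triangle
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
tri  = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
print(walk_kernel(path, path), walk_kernel(path, tri))
```

In a relation-extraction setting the graphs would be dependency parses with labeled nodes; production kernels such as APG restrict the counted structure to (weighted) paths between the candidate entities.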

  19. Regional deposition of mometasone furoate nasal spray suspension in humans.

    PubMed

    Shah, Samir A; Berger, Robert L; McDermott, John; Gupta, Pranav; Monteith, David; Connor, Alyson; Lin, Wu

    2015-01-01

    Nasal deposition studies can demonstrate whether nasal sprays treating allergic rhinitis and polyposis reach the ciliated posterior nasal cavity, where turbinate inflammation and other pathology occur. However, quantifying nasal deposition is challenging, because in vitro tests do not correlate to human nasal deposition; gamma scintigraphy studies are thus used. For valid data, the radiolabel must distribute, as the drug, into different-sized droplets, remain associated with the drug in the formulation after administration, and not alter its deposition. Some nasal deposition studies have demonstrated this using homogeneous solutions. However, most commercial nasal sprays are heterogeneous suspensions. Using mometasone furoate nasal suspension (MFS), we developed a technique to validate radiolabel deposition as a surrogate for nasal cavity drug deposition and characterized regional deposition and nasal clearance in humans. Mometasone furoate (MF) formulation was spiked with diethylene triamine pentaacetic acid. Both unlabeled and radiolabeled formulations (n = 3) were sprayed into a regionally divided nasal cast. Drug deposition was quantified by high pressure liquid chromatography within each region; radiolabel deposition was determined by gamma camera. Healthy subjects (n = 12) were dosed and imaged for six hours. Scintigraphic images were coregistered with magnetic resonance imaging scans to quantify anterior and posterior nasal cavity deposition and mucociliary clearance. The ratio of radiolabel to unlabeled drug was 1.05 in the nasal cast and regionally appeared to match, indicating that in vivo radiolabel deposition could represent drug deposition. In humans, MFS delivered 86% (9.2) of the metered dose to the nasal cavity, and approximately 60% (9.1) of the metered dose to the posterior nasal cavity. After 15 minutes, mucociliary clearance removed 59% of the initial radiolabel in the nasal cavity, consistent with clearance rates from the ciliated posterior surface. 
MFS deposited significant drug into the posterior nasal cavity. Both the nasal cast validation and the mucociliary clearance results confirm that the radiolabel distribution method accurately represented corticosteroid nasal deposition.

  1. 7 CFR 810.2202 - Definition of other terms.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kernels, foreign material, and shrunken and broken kernels. The sum of these three factors may not exceed... the removal of dockage and shrunken and broken kernels. (g) Heat-damaged kernels. Kernels, pieces of... sample after the removal of dockage and shrunken and broken kernels. (h) Other grains. Barley, corn...

  2. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  3. 7 CFR 51.1415 - Inedible kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or otherwise...

  4. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of approximating kernel matrices by multilevel circulant matrices on the hypothesis, and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
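The paper's multilevel-circulant machinery is not reproduced here, but the kernel selection problem it accelerates can be sketched generically, for example choosing an RBF width by kernel-target alignment (an assumed selection criterion for illustration, not the authors' method):

```python
import numpy as np

def rbf_gram(X, gamma):
    """RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def alignment(K, y):
    """Kernel-target alignment <K, y y^T> / (||K||_F * ||y y^T||_F)."""
    Ky = np.outer(y, y)
    return (K * Ky).sum() / (np.linalg.norm(K) * np.linalg.norm(Ky))

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sign(X[:, 0])                 # toy binary target
gammas = [0.01, 0.1, 1.0, 10.0]      # candidate kernels to select among
best = max(gammas, key=lambda g: alignment(rbf_gram(X, g), y))
print("selected gamma:", best)
```

The cost here is dominated by building each n x n Gram matrix; the point of the paper's circulant approximation is to avoid exactly this quadratic cost during selection.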

  5. Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.

    PubMed

    Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit

    2018-02-13

    Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels.
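A toy illustration of the source- and sink-limited allocation logic described above, with remobilization from a reserve pool; the function name, parameters, and remobilization fraction are hypothetical stand-ins, not GREENLAB-Maize internals:

```python
def allocate(supply, demands, reserve, remob_frac=0.5):
    """One time-step of a toy source-sink allocation.
    demands: potential growth increments of individual kernels (sink demand).
    If supply covers total demand, kernels grow at their potential rate and
    the surplus is stored in the reserve; otherwise part of the reserve is
    remobilized and any remaining deficit is shared in proportion to demand."""
    total = sum(demands)
    if supply >= total:                        # sink-limited: store surplus
        return demands, reserve + (supply - total)
    available = supply + remob_frac * reserve  # source-limited: remobilize
    reserve *= (1 - remob_frac)
    scale = min(1.0, available / total)
    growth = [d * scale for d in demands]
    return growth, reserve + max(0.0, available - total)

growth, reserve = allocate(supply=5.0, demands=[2.0, 3.0, 4.0], reserve=2.0)
print(growth, reserve)
```

Iterating such a step over the grain-filling period, with demands driven by per-kernel potential growth rates, is the kind of bookkeeping the revised model performs for every kernel on the ear.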

  6. Estimation of Effective Dose from External Exposure in The Six Prefectures adjacent to Fukushima Prefecture

    NASA Astrophysics Data System (ADS)

    Miyatake, Hirokazu; Yoshizawa, Nobuaki; Hirakawa, Sachiko; Murakami, Kana; Takizawa, Mari; Kawai, Masaki; Sato, Osamu; Takagi, Shunji; Suzuki, Gen

    2017-09-01

    The Fukushima Daiichi Nuclear Power Plant accident caused a release of radionuclides, which were deposited on the ground not only in Fukushima prefecture but also in nearby prefectures. Since the accident, measurements of environmental radiation, such as the air dose rate and the deposition density of radionuclides, have been performed by many organizations and universities. In particular, the Japan Atomic Energy Agency (JAEA) has been performing observations of the air dose rate using a car-borne survey system, continuously and over wide areas. In our study, using the data measured by JAEA, we estimated the effective dose from external exposure in the six prefectures adjacent to Fukushima prefecture. Since the car-borne survey started a few months after the accident, the air dose rate measured by this method is dominated by 137Cs and 134Cs, whose half-lives are relatively long. Therefore, based on the air dose rate from 137Cs and 134Cs and the ratio of the deposition density of short-half-life nuclides to that of 137Cs and 134Cs, we also estimated the effective dose contributed not only by 137Cs and 134Cs but also by other short-half-life nuclides. We compared the effective dose estimated by this method with that of UNSCEAR and with data measured using personal dosimeters in some areas.
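The estimation described above rests on decaying the measured 134Cs/137Cs dose-rate components and integrating over time. A rough Python sketch with placeholder initial dose rates and occupancy/shielding factors (these coefficients are illustrative assumptions, not the study's or UNSCEAR's values, and weathering/migration is ignored):

```python
import math

T12_CS134 = 2.065   # half-life of 134Cs in years
T12_CS137 = 30.17   # half-life of 137Cs in years

def dose_rate(t, d134_0, d137_0):
    """Air dose rate (uSv/h) at time t (years) from the two caesium
    isotopes, assuming simple exponential radioactive decay only."""
    l134 = math.log(2) / T12_CS134
    l137 = math.log(2) / T12_CS137
    return d134_0 * math.exp(-l134 * t) + d137_0 * math.exp(-l137 * t)

def effective_dose(years, d134_0, d137_0, occupancy=0.6, shielding=0.4, steps=10000):
    """Crude trapezoidal integration of the external effective dose (mSv).
    The occupancy and shielding reduction factors are placeholders."""
    h = years / steps
    total = 0.0
    for i in range(steps):
        t0, t1 = i * h, (i + 1) * h
        total += 0.5 * (dose_rate(t0, d134_0, d137_0) + dose_rate(t1, d134_0, d137_0)) * h
    return total * 8766 * occupancy * shielding / 1000  # h/year; uSv -> mSv

print(round(effective_dose(1.0, 0.05, 0.05), 3))
```

The study additionally scales in the short-half-life nuclides via their measured deposition-density ratios to 137Cs/134Cs, which would enter here as extra decay terms in the dose-rate function.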

  7. Unconventional protein sources: apricot seed kernels.

    PubMed

    Gabrial, G N; El-Nahry, F I; Awadalla, M Z; Girgis, S M

    1981-09-01

    Hamawy apricot seed kernels (sweet), Amar apricot seed kernels (bitter) and treated Amar apricot kernels (bitterness removed) were evaluated biochemically. All kernels were found to be high in fat (42.2-50.91%), protein (23.74-25.70%) and fiber (15.08-18.02%). Phosphorus, calcium, and iron were determined in all experimental samples. The three different apricot seed kernels were studied extensively, including qualitative determination of the amino acid constituents by acid hydrolysis, quantitative determination of some amino acids, and biological evaluation of the kernel proteins, in order to assess them as new protein sources. Weanling albino rats failed to grow on diets containing the Amar apricot seed kernels because of low food consumption due to their bitterness; there was, however, no loss in weight. The Protein Efficiency Ratio data and blood analysis results showed the Hamawy apricot seed kernels to be higher in biological value than the treated apricot seed kernels. The Net Protein Ratio data, which account for both weight maintenance and growth, showed the treated apricot seed kernels to be higher in biological value than both the Hamawy and Amar kernels; the values for the last two kernels were nearly equal.

  8. SU-F-I-73: Surface Dose from KV Diagnostic Beams From An On-Board Imager On a Linac Machine Using Different Imaging Techniques and Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, I; Hossain, S; Syzek, E

    Purpose: To quantitatively investigate the surface dose deposited in patients imaged with a kV on-board-imager mounted on a radiotherapy machine using different clinical imaging techniques and filters. Methods: A high-sensitivity photon diode, mounted on the top of a phantom setup, is used to measure the surface dose on central-axis and at an off-axis point. The dose is measured for different imaging techniques that include: AP-Pelvis, AP-Head, AP-Abdomen, AP-Thorax, and Extremity. The dose measurements from these imaging techniques are combined with various filtering techniques that include: no-filter (open-field), half-fan bowtie (HF), full-fan bowtie (FF) and Cu-plate filters. The relative surface dose for the different imaging and filtering techniques is evaluated quantitatively as the ratio of the dose relative to the Cu-plate filter. Results: The lowest surface dose is deposited with the Cu-plate filter. The highest surface dose results from open fields without filter and is nearly a factor of 8-30 larger than the corresponding imaging technique with the Cu-plate filter. The AP-Abdomen technique delivers the largest surface dose, nearly 2.7 times larger than the AP-Head technique. The smallest surface dose is obtained from the Extremity imaging technique. Imaging with bowtie filters decreases the surface dose by nearly 33% in comparison with the open field. The surface doses deposited with the HF- and FF-bowtie filters agree within a few percent. Image quality of the radiographic images obtained from the different filtering techniques is similar because the Cu-plate eliminates low-energy photons. The HF- and FF-bowtie filters generate intensity gradients in the radiographs, which affects image quality in the different imaging techniques. Conclusion: Surface dose from kV-imaging decreases significantly with the Cu-plate and bowtie filters compared to imaging without filters using open-field beams. The use of the Cu-plate filter does not affect image quality and may be used as the default in the different imaging techniques.

  9. The impact of new Geant4-DNA cross section models on electron track structure simulations in liquid water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kyriakou, I., E-mail: ikyriak@cc.uoi.gr; Šefl, M.; Department of Dosimetry and Application of Ionizing Radiation, Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, 115 19 Prague

    The most recent release of the open source and general purpose Geant4 Monte Carlo simulation toolkit (Geant4 10.2 release) contains a new set of physics models in the Geant4-DNA extension for improving the modelling of low-energy electron transport in liquid water (<10 keV). This includes updated electron cross sections for excitation, ionization, and elastic scattering. In the present work, the impact of these developments on track-structure calculations is examined, providing the first comprehensive comparison against the default physics models of Geant4-DNA. Significant differences with the default models are found for the average path length and penetration distance, as well as for dose-point-kernels for electron energies below a few hundred eV. On the other hand, self-irradiation absorbed fractions for tissue-like volumes and low-energy electron sources (including some Auger emitters) reveal rather small differences (up to 15%) between these new and default Geant4-DNA models. The above findings indicate that the impact of the new developments will mainly affect those applications where the spatial pattern of interactions and energy deposition of very-low-energy electrons plays an important role such as, for example, the modelling of the chemical and biophysical stage of radiation damage to cells.

  10. Implementation of new physics models for low energy electrons in liquid water in Geant4-DNA.

    PubMed

    Bordage, M C; Bordes, J; Edel, S; Terrissol, M; Franceries, X; Bardiès, M; Lampe, N; Incerti, S

    2016-12-01

    A new alternative set of elastic and inelastic cross sections has been added to the very low energy extension of the Geant4 Monte Carlo simulation toolkit, Geant4-DNA, for the simulation of electron interactions in liquid water. These cross sections have been obtained from the CPA100 Monte Carlo track structure code, which has been a reference in the microdosimetry community for many years. They are compared to the default Geant4-DNA cross sections and show better agreement with published data. In order to verify the correct implementation of the CPA100 cross section models in Geant4-DNA, simulations of the number of interactions and ranges were performed using Geant4-DNA with this new set of models, and the results were compared with corresponding results from the original CPA100 code. Good agreement is observed between the implementations, with relative differences lower than 1% regardless of the incident electron energy. Useful quantities related to the deposited energy at the scale of the cell or the organ of interest for internal dosimetry, like dose point kernels, are also calculated using these new physics models. They are compared with results obtained using the well-known Penelope Monte Carlo code.
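A dose point kernel of the kind mentioned above describes how the energy from an isotropic point source is absorbed in concentric spherical shells of water. A hedged sketch of scoring one from a list of (radius, energy) deposition events, with synthetic random events standing in for actual Monte Carlo output:

```python
import numpy as np

def dose_point_kernel(radii, energies, r_max, n_shells=20):
    """Score a dose point kernel from individual energy deposits.
    radii/energies: distance from the source and size of each deposit
    (synthetic here; Monte Carlo track-structure output in practice).
    Returns shell edges, the fraction of emitted energy absorbed per
    shell, and that fraction divided by shell volume (~ dose profile)."""
    edges = np.linspace(0.0, r_max, n_shells + 1)
    e_shell, _ = np.histogram(radii, bins=edges, weights=energies)
    vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    frac = e_shell / energies.sum()      # energy fraction per shell
    return edges, frac, frac / vol

rng = np.random.default_rng(1)
radii = rng.exponential(scale=0.3, size=100000)   # fake deposition distances
energies = rng.uniform(0.5, 1.5, size=radii.size) # fake deposit energies
edges, frac, dose = dose_point_kernel(radii, energies, r_max=2.0)
print(frac[:3].round(3))
```

Comparing two physics models, as the paper does for CPA100 versus the default Geant4-DNA set, then amounts to comparing the resulting shell-by-shell energy fractions.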

  11. Modeling of an industrial environment: external dose calculations based on Monte Carlo simulations of photon transport.

    PubMed

    Kis, Zoltán; Eged, Katalin; Voigt, Gabriele; Meckbach, Reinhard; Müller, Heinz

    2004-02-01

    External gamma exposures from radionuclides deposited on surfaces usually result in the major contribution to the total dose to the public living in urban-industrial environments. The aim of the paper is to give an example of a calculation of the collective and averted collective dose due to the contamination and decontamination of deposition surfaces in a complex environment, based on the results of Monte Carlo simulations. The shielding effects of the structures in complex and realistic industrial environments (where productive and/or commercial activity is carried out) were computed by the use of the Monte Carlo method. Several types of deposition areas (walls, roofs, windows, streets, lawn) were considered. Moreover, this paper gives a summary of the time dependence of the source strengths relative to a reference surface and a short overview of the mechanical and chemical intervention techniques which can be applied in this area. An exposure scenario was designed based on a survey of average German and Hungarian supermarkets. In the first part of the paper, the air kermas per photon per unit area due to each specific deposition area contaminated by 137Cs were determined at several arbitrary locations in the whole environment, relative to a reference value of 8.39 x 10^-4 pGy per gamma m^-2. The calculations provide the possibility to assess the contribution of each specific deposition area to the collective dose separately. According to the current results, the roof and the paved area contribute the largest share (approximately 92%) of the total dose in the first year, taking into account the relative contamination of the deposition areas. When integrating over 10 or 50 y, these two surfaces remain the most important contributors as well, but the ratio will increasingly be shifted in favor of the roof. The decontamination of the roof and the paved area results in about 80-90% of the total averted collective dose in each calculated time period (1, 10, 50 y).

  12. Dose Calculations for [131I] Meta-Iodobenzylguanidine-Induced Bystander Effects

    PubMed Central

    Gow, M. D.; Seymour, C. B.; Boyd, M.; Mairs, R. J.; Prestwich, W. V.; Mothersill, C. E.

    2014-01-01

    Targeted radiotherapy is a potentially useful treatment for some cancers and may be potentiated by bystander effects. However, without estimation of absorbed dose, it is difficult to compare the effects with conventional external radiation treatment. Methods: Using the Vynckier-Wambersie dose point kernel, a model for dose rate evaluation was created allowing for calculation of absorbed dose values to two cell lines transfected with the noradrenaline transporter (NAT) gene and treated with [131I]MIBG. Results: The mean doses required to decrease surviving fractions of UVW/NAT and EJ138/NAT cells, which received medium from [131I]MIBG-treated cells, to 25-30% were 1.6 and 1.7 Gy respectively. The maximum mean dose rates achieved during [131I]MIBG treatment were 0.09-0.75 Gy/h for UVW/NAT and 0.07-0.78 Gy/h for EJ138/NAT. These were significantly lower than the external beam gamma radiation dose rate of 15 Gy/h. In the case of control lines that were incapable of [131I]MIBG uptake, the mean absorbed doses following radiopharmaceutical administration were 0.03-0.23 Gy for UVW and 0.03-0.32 Gy for EJ138. Conclusion: [131I]MIBG treatment for ICCM production elicited a bystander dose-response profile similar to that generated by external beam gamma irradiation but with significantly greater cell death. PMID:24659931

  13. An introduction to kernel-based learning algorithms.

    PubMed

    Müller, K R; Mika, S; Rätsch, G; Tsuda, K; Schölkopf, B

    2001-01-01

    This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.
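Of the methods surveyed, kernel PCA is the most compact to sketch: it diagonalizes the double-centered Gram matrix and projects onto the leading eigenvectors. A minimal NumPy version, illustrative rather than the paper's code:

```python
import numpy as np

def kernel_pca(K, n_components=2):
    """Kernel PCA from a precomputed Gram matrix K.
    Centers K in feature space, then returns the projections of the
    training points onto the leading principal components."""
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # double centering
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    vals, vecs = vals[::-1], vecs[:, ::-1]       # sort descending
    # projection of training point i onto component j: v_j[i] * sqrt(lambda_j)
    return vecs[:, :n_components] * np.sqrt(np.maximum(vals[:n_components], 0))

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 3))
sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)                            # RBF Gram matrix
Z = kernel_pca(K, 2)
print(Z.shape)
```

Support vector machines and kernel Fisher discriminant analysis operate on the same Gram matrix; only the optimization problem built on top of it differs.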

  14. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  15. Field Investigation of the Surface-deposited Radon Progeny as a Possible Predictor of the Airborne Radon Progeny Dose Rate

    PubMed Central

    Sun, Kainan; Steck, Daniel J.; Field, R. William

    2009-01-01

    The quantitative relationships between radon gas concentration, the surface-deposited activities of various radon progeny, the airborne radon progeny dose rate, and various residential environmental factors were investigated through actual field measurements in 38 selected Iowa houses occupied by either smokers or nonsmokers. Airborne dose rate was calculated from unattached and attached potential alpha energy concentrations (PAECs) using two dosimetric models with different activity-size weighting factors. These models are labeled Pdose and Jdose, respectively. Surface-deposited 218Po and 214Po were found significantly correlated to radon, unattached PAEC, and both airborne dose rates (p < 0.0001) in nonsmoking environments. However, deposited 218Po was not significantly correlated to the above parameters in smoking environments. In multiple linear regression analysis, natural logarithm transformation was performed for airborne dose rate as the dependent variable, as well as for radon and deposited 218Po and 214Po as predictors. An interaction effect was found between deposited 214Po and an obstacle in front of the Retrospective Reconstruction Detector (RRD) in predicting dose rate (p = 0.049 and 0.058 for Pdose and Jdose, respectively) for nonsmoking environments. After adjusting for radon and deposited radon progeny effects, the presence of cooking, usage of a fireplace, or usage of a ceiling fan significantly, or marginally significantly, reduced the Pdose to 0.65 (90% CI 0.42-0.996), 0.54 (90% CI 0.28-1.02) and 0.66 (90% CI 0.45-0.96), respectively. For Jdose, only the usage of a ceiling fan significantly reduced the dose rate, to 0.57 (90% CI 0.39-0.85). In smoking environments, deposited 218Po was a significant negative predictor for Pdose (RR 0.68, 90% CI 0.55-0.84) after adjusting for long-term 222Rn and environmental factors. 
The mean Pdose decreased significantly, by a factor of 0.72 (90% CI 0.64-0.83), for every 10 additional cigarettes smoked in the room, after adjusting for the radon and radon progeny effects and other environmental factors. A significant increase, by a factor of 1.71, in the mean Pdose was found for large room size relative to small room size (90% CI 1.08-2.79) after adjusting for the radon and radon progeny effects as well as other environmental factors. Fireplace usage was found to significantly increase the mean Pdose, by a factor of 1.71 (90% CI 1.20-2.45), after adjusting for other factors. PMID:19590273

  16. Design of CT reconstruction kernel specifically for clinical lung imaging

    NASA Astrophysics Data System (ADS)

    Cody, Dianna D.; Hsieh, Jiang; Gladish, Gregory W.

    2005-04-01

    In this study we developed a new reconstruction kernel specifically for chest CT imaging. An experimental flat-panel CT scanner was used on large dogs to produce "ground-truth" reference chest CT images. These dogs were also examined using a clinical 16-slice CT scanner. We concluded from the dog images acquired on the clinical scanner that the loss of subtle lung structures was due mostly to the presence of the background noise texture when using currently available reconstruction kernels. This qualitative evaluation of the dog CT images prompted the design of a new recon kernel. This new kernel consisted of the combination of a low-pass and a high-pass kernel to produce a new reconstruction kernel, called the "Hybrid" kernel. The performance of this Hybrid kernel fell between the two kernels on which it was based, as expected. This Hybrid kernel was also applied to a set of 50 patient data sets; the analysis of these clinical images is underway. We are hopeful that this Hybrid kernel will produce clinical images with an acceptable tradeoff of lung detail, reliable HU, and image noise.
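The "Hybrid" kernel idea, blending a low-pass and a high-pass reconstruction filter, can be sketched in one dimension in the frequency domain; the filter shapes, roll-off, and blend weight below are illustrative stand-ins, since the clinical kernels are proprietary:

```python
import numpy as np

def ramp(n):
    """|f| ramp filter on the FFT frequency grid -- the FBP baseline."""
    return np.abs(np.fft.fftfreq(n))

def hybrid_filter(n, blend=0.5, rolloff=6.0):
    """Blend a smooth (apodized) and a sharp (edge-enhancing) variant of
    the ramp filter. 'blend', 'rolloff' and the boost term are illustrative."""
    f = np.abs(np.fft.fftfreq(n))
    smooth = ramp(n) * np.exp(-rolloff * f)    # low-pass apodization
    sharp = ramp(n) * (1.0 + 2.0 * f)          # mild high-frequency boost
    return blend * smooth + (1 - blend) * sharp

def filter_projection(proj, filt):
    """Apply a reconstruction filter to one projection (sinogram row)."""
    return np.real(np.fft.ifft(np.fft.fft(proj) * filt))

proj = np.zeros(64); proj[28:36] = 1.0         # toy projection profile
out = filter_projection(proj, hybrid_filter(64))
print(out[:4].round(4))
```

Because the blend is linear in the frequency domain, the Hybrid response is guaranteed to lie between its smooth and sharp parents at every frequency, matching the observed in-between performance.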

  17. Quality changes in macadamia kernel between harvest and farm-gate.

    PubMed

    Walton, David A; Wallace, Helen M

    2011-02-01

    Macadamia integrifolia, Macadamia tetraphylla and their hybrids are cultivated for their edible kernels. After harvest, nuts-in-shell are partially dried on-farm and sorted to eliminate poor-quality kernels before consignment to a processor. During these operations, kernel quality may be lost. In this study, macadamia nuts-in-shell were sampled at five points of an on-farm postharvest handling chain from dehusking to the final storage silo to assess quality loss prior to consignment. Shoulder damage, weight of pieces and unsound kernel were assessed for raw kernels, and colour, mottled colour and surface damage for roasted kernels. Shoulder damage, weight of pieces and unsound kernel for raw kernels increased significantly between the dehusker and the final silo. Roasted kernels displayed a significant increase in dark colour, mottled colour and surface damage during on-farm handling. Significant loss of macadamia kernel quality occurred on a commercial farm during sorting and storage of nuts-in-shell before nuts were consigned to a processor. Nuts-in-shell should be dried as quickly as possible and on-farm handling minimised to maintain optimum kernel quality.

  18. Investigation of mine and industry dumps in the FRG in relation to a possible release of natural radioactive elements.

    PubMed

    Schmitz, J

    1985-10-01

    More than 350 dumps of mines and industries in two federal states of the FRG were recorded, measured radiometrically, evaluated, and some of them sampled. Most of the mine dumps belonged to old and smaller residues from lead/zinc and iron ore mining, while the largest depositions contain tailings of modern ore beneficiation or flyash disposal. All mine dumps from uranium exploration in Baden-Württemberg and Bavaria were investigated. The highest doses, up to 100 mSv/a, were found on the piles of the uranium exploration. These depositions, which are supervised and licensed, are followed, in terms of surface dose, by the old uncontrolled mine dumps of silver/cobalt mining with doses up to 20 mSv/a. The numerous porphyry and granite quarries show doses between 1 and 2 mSv/a, as do flyash and slag dumps. The lowest doses were found on the dumps of the hydrothermal Pb/Zn and iron ore deposits, while the slag piles of iron ore processing showed higher thorium values. Assays for Ra-226 and Pb-210 of the materials deposited confirmed the radiometric results. Analyses of seepage waters and gallery waters showed only very few values exceeding the derived drinking water concentrations.

  19. A new discriminative kernel from probabilistic models.

    PubMed

    Tsuda, Koji; Kawanabe, Motoaki; Rätsch, Gunnar; Sonnenburg, Sören; Müller, Klaus-Robert

    2002-10-01

    Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel, derived from tangent vectors of the posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.

  20. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
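
    The core idea of the (non-incremental) projection trick can be sketched in a few lines: given a positive semi-definite kernel matrix, explicit sample coordinates are recovered from its eigendecomposition. The RBF kernel and toy data below are illustrative, not from the paper:

```python
import numpy as np

def npt_coordinates(K, tol=1e-10):
    """Nonlinear projection trick (sketch): recover explicit coordinates X
    from a PSD kernel matrix K so that X @ X.T reproduces K."""
    w, U = np.linalg.eigh(K)          # eigendecomposition K = U diag(w) U^T
    keep = w > tol                    # discard numerically zero directions
    return U[:, keep] * np.sqrt(w[keep])

# toy check: an RBF kernel on three 1-D points
pts = np.array([[0.0], [1.0], [3.0]])
K = np.exp(-(pts - pts.T) ** 2)
X = npt_coordinates(K)
assert np.allclose(X @ X.T, K)        # coordinates reproduce the kernel
```

    With explicit coordinates in hand, any Euclidean algorithm (PCA, LDA, SVD) can be run directly, which is what makes kernel versions of incremental methods possible.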

  1. Lung dosimetry for inhaled long-lived radionuclides and radon progeny.

    PubMed

    Hussain, M; Winkler-Heil, R; Hofmann, W

    2011-05-01

    The current version of the stochastic lung dosimetry model IDEAL-DOSE considers deposition in the whole tracheobronchial (TB) and alveolar airway system, while clearance is restricted to TB airways. For the investigation of doses produced by inhaled long-lived radionuclides (LLR) together with short-lived radon progeny, alveolar clearance has to be considered. Thus, present dose calculations are based on the average transport rates proposed for the revision of the ICRP human respiratory tract model. The results obtained indicate that LLR cleared from the alveolar region can deliver up to two to six times higher doses to the TB region when compared with the doses from directly deposited particles. Comparison of LLR doses with those of short-lived radon progeny indicates that LLR in uranium mines can deliver up to 5 % of the doses predicted for the short-lived radon daughters.

  2. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10⁻¹¹ and invasion time error to <5%.
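
    The two discretization schemes compared above can be sketched in 1-D: the cell-center method samples the Gaussian density at cell midpoints, while the cell-integration method integrates it over each cell via the error function. Unit cell size, the σ values, and the truncation half-width are illustrative:

```python
import numpy as np
from math import erf, sqrt

def cell_center_kernel(sigma, half_width):
    """Sample the Gaussian density at cell centers (unit-size cells)."""
    x = np.arange(-half_width, half_width + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()                      # renormalise to conserve mass

def cell_integration_kernel(sigma, half_width):
    """Integrate the Gaussian density over each unit cell (via erf)."""
    edges = np.arange(-half_width - 0.5, half_width + 1.5)
    cdf = np.array([0.5 * (1 + erf(e / (sigma * sqrt(2)))) for e in edges])
    k = np.diff(cdf)
    return k / k.sum()

# for small sigma (kernel comparable to the cell size) the two
# discretisations diverge noticeably; for large sigma they agree
for s in (0.2, 2.0):
    cc = cell_center_kernel(s, 5)
    ci = cell_integration_kernel(s, 5)
    print(s, np.abs(cc - ci).max())
```

    Repeated convolution, as in the study, compounds whichever discretization error remains, which is why the sub-cell-size kernels required the goal-seeking correction.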

  3. Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima blume)

    NASA Astrophysics Data System (ADS)

    Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.

    2016-08-01

    Anthraquinones (AQS) represent a group of secondary metabolic products in plants. AQS occur naturally in plants and microorganisms. In a previous study, we found that AQS were produced by the enzymatic browning reaction in Chinese chestnut kernels. To determine whether the non-enzymatic browning reaction in the kernels could also produce AQS, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatically browned kernels, and browned kernels, and the contents of AQS were determined. High performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained high amounts of AQS. Thus, we confirmed that AQS could be produced during both enzymatic and non-enzymatic browning processes. Rhein and emodin were the main components of AQS in the browned kernels.

  4. Effect of Embolization Material in the Calculation of Dose Deposition in Arteriovenous Malformations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De la Cruz, O. O. Galvan; Moreno-Jimenez, S.; Larraga-Gutierrez, J. M.

    2010-12-07

    This work studies the impact of incorporating high-Z materials (embolization material) into the dose calculation for stereotactic radiosurgery treatment of arteriovenous malformations. A statistical analysis is performed to establish the variables that may affect the dose calculation. To perform the comparison, pencil beam (PB) and Monte Carlo (MC) calculation algorithms were used. The comparison between both dose calculations shows that PB overestimates the deposited dose. The statistical analysis, for the number of patients in the study (20), shows that the variable that may affect the dose calculation is the volume of the high-Z material in the arteriovenous malformation. Further studies have to be done to establish the clinical impact on the radiosurgery outcome.

  5. Stratigraphic context of the application of different variants of the thermoluminescence method in dating loesses from south-eastern Poland and north-western Ukraine

    NASA Astrophysics Data System (ADS)

    Kusiak, Jarosław

    2008-01-01

    Loess profiles contain a complex but usually incomplete sequence of deposits. In order to organize chronologically the deposit layers accessible in different exposures, it is necessary to use absolute dating methods. The 14C, TL and OSL methods are widely used for dating the Upper Pleistocene deposits, whereas only luminescence methods are applied to older Pleistocene deposits. Some attempts are made to use the OSL method for dating deposits older than the Upper Pleistocene. However, the OSL ages seem to be consistently lower than the TL ages, and also considerably underestimated with reference to stratigraphic interpretation. This fact indicates that the TL method should be used above all. The possibility of TL dating of loesses is connected with their aeolian origin. The obtained TL age should correspond to the geological time when the mineral grains constituting the deposit were exposed to sunlight before deposition. Exactly this condition is met in the case of loess deposits. There are many variants of the thermoluminescence method because different measuring procedures can be used. Depending on the procedure used, the TL ages obtained for the same sample can differ considerably. The manner of equivalent dose determination is decisive for the obtained TL ages. The factors influencing the value of the equivalent dose are presented in this paper. The equivalent dose is determined by comparing the thermoluminescence measured for a given sample with the thermoluminescence of the same sample after irradiation in the laboratory with known doses of ionizing radiation. The following criteria should be taken into account: size of mineral grains, relation between thermoluminescence and heating temperature, way of reduction of unstable thermoluminescence, and the results of the plateau test. The variant of the thermoluminescence method used in the TL Laboratory of the Department of Physical Geography and Palaeogeography, Maria Curie-Skłodowska University in Lublin is as follows.
    The dose rate is determined by gamma spectrometry. The equivalent dose is determined by the total-bleach technique for the 45-63 μm fraction. Blue light obtained using the BG-28 filter is applied. Samples are preheated at 160°C for 3 hours before measurement. The light sum is read as the maximum height of the glow curve. The application of such a measurement procedure allows reliable dating of climatic episodes recorded in loess deposits, not only those related to the last glacial but also older ones.
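
    The two measured quantities above combine into the TL age through the standard relation age = equivalent dose / dose rate. The numbers below are invented for illustration, not taken from the laboratory's results:

```python
# TL age sketch: age = equivalent dose / dose rate (illustrative values)
equivalent_dose_Gy = 180.0        # from the total-bleach technique
dose_rate_Gy_per_ka = 3.2         # from gamma spectrometry
age_ka = equivalent_dose_Gy / dose_rate_Gy_per_ka
print(f"TL age ~ {age_ka:.2f} ka")   # thousand years before present
```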

  6. Particokinetics: computational analysis of the superparamagnetic iron oxide nanoparticles deposition process

    PubMed Central

    Cárdenas, Walter HZ; Mamani, Javier B; Sibov, Tatiana T; Caous, Cristofer A; Amaro, Edson; Gamarra, Lionel F

    2012-01-01

    Background Nanoparticles in suspension are often utilized for intracellular labeling and evaluation of toxicity in experiments conducted in vitro. The purpose of this study was to undertake a computational modeling analysis of the deposition kinetics of a magnetite nanoparticle agglomerate in cell culture medium. Methods Finite difference methods and the Crank–Nicolson algorithm were used to solve the equation of mass transport in order to analyze concentration profiles and dose deposition. Theoretical data were confirmed by experimental magnetic resonance imaging. Results Different behavior in the deposited dose fraction was found for magnetic nanoparticles up to 50 nm in diameter when compared with magnetic nanoparticles of a larger diameter. Small changes in the dispersion factor cause variations of up to 22% in the dose deposited. The experimental data confirmed the theoretical results. Conclusion These findings are important in planning for nanomaterial absorption, because they provide valuable information for efficient intracellular labeling and toxicity control. This model enables determination of the in vitro transport behavior of specific magnetic nanoparticles, which is also relevant to other models that use cellular components and particle absorption processes. PMID:22745539
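
    A Crank–Nicolson treatment of particle transport in a culture well can be sketched as below: 1-D diffusion plus gravitational settling, marched implicitly with tridiagonal matrices. All parameter values (diffusivity, settling speed, grid) are assumed for illustration, and absorbing boundaries at both faces stand in for the paper's actual boundary conditions:

```python
import numpy as np

# Crank-Nicolson sketch for 1-D diffusion-sedimentation of nanoparticles:
#   dc/dt = D d2c/dz2 - v dc/dz        (D, v are assumed values)
n, dz, dt = 50, 2e-5, 1.0             # grid cells, cell size (m), step (s)
D, v = 4e-12, 1e-8                    # diffusivity (m^2/s), settling (m/s)
a = D * dt / (2 * dz**2)              # CN diffusion number
b = v * dt / (4 * dz)                 # CN advection number (central diff.)

# tridiagonal CN matrices: A c_new = B c_old; zero ghost cells beyond the
# domain act as crude absorbing (deposition) boundaries
main, lower, upper = np.eye(n), np.eye(n, k=-1), np.eye(n, k=1)
A = (1 + 2 * a) * main - (a + b) * lower - (a - b) * upper
B = (1 - 2 * a) * main + (a + b) * lower + (a - b) * upper

c = np.ones(n)                        # uniform initial concentration
m0 = c.sum()
for _ in range(600):                  # march 600 s
    c = np.linalg.solve(A, B @ c)
deposited = 1 - c.sum() / m0          # mass lost through the boundaries
print(f"deposited fraction after 600 s: {deposited:.4f}")
```

    The diameter dependence reported in the abstract enters through D (Stokes–Einstein, decreasing with size) and v (Stokes settling, increasing with size), so small and large agglomerates deposit on very different timescales.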

  7. Broken rice kernels and the kinetics of rice hydration and texture during cooking.

    PubMed

    Saleh, Mohammed; Meullenet, Jean-Francois

    2013-05-01

    During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear as to how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work intended to study the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels forming treatments of 0, 40, 150, 350 or 1000 g kg(-1) broken kernels ratio. Rice samples were then cooked and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, rice sample texture became increasingly softer (P < 0.05) but the unbroken kernels became significantly harder. Moisture content and moisture uptake rate were positively correlated, and cooked rice hardness was negatively correlated to the percentage of broken kernels in rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.

  8. Proposed linear energy transfer areal detector for protons using radiochromic film.

    PubMed

    Mayer, Rulon; Lin, Liyong; Fager, Marcus; Douglas, Dan; McDonough, James; Carabe, Alejandro

    2015-04-01

    Radiation therapy depends on predictably and reliably delivering dose to tumors and sparing normal tissues. Protons with kinetic energy of a few hundred MeV can selectively deposit dose to deep-seated tumors without an exit dose, unlike x-rays. The better dose distribution is attributed to a phenomenon known as the Bragg peak. The Bragg peak is due to relatively high energy deposition within a short distance, i.e., high Linear Energy Transfer (LET). In addition, biological response to radiation depends on the dose, dose rate, and localized energy deposition patterns or LET. At present, the LET can only be measured at a given fixed point, and the LET spatial distribution can only be inferred from calculations. The goal of this study is to develop and test a method to measure LET over extended areas. Traditionally, radiochromic films are used to measure dose distributions but not LET distributions. We report the first use of these films for measuring the spatial distribution of the LET deposited by protons. The radiochromic film sensitivity diminishes for large LET. A mathematical model correlating film sensitivity with LET is presented to justify the approach. Protons were directed parallel to radiochromic film sandwiched between solid water slabs. This study proposes the scaled-normalized difference (SND) between the treatment planning system (TPS) dose and the measured dose as the metric describing the LET. The SND is correlated with a Monte Carlo (MC) calculation of the LET spatial distribution for a large range of SNDs. A polynomial fit between the SND and MC LET is generated for protons having a single range of 20 cm with a narrow Bragg peak. Coefficients from these polynomial fits were applied to measured proton dose distributions with a variety of ranges. An identical procedure was applied to protons deposited from a Spread-Out Bragg Peak modulated by 5 cm.
Gamma analysis is a method for comparing the calculated LET with the LET measured using radiochromic film at the pixel level over extended areas. Failure rates using gamma analysis are calculated for areas in the dose distribution using parameters of 25% of MC LET and 3 mm. The processed dose distributions show 5%-10% failure rates for the narrow 12.5 and 15 cm proton ranges and 10%-15% for proton ranges of 15, 17.5, and 20 cm modulated by 5 cm. It is found through gamma analysis that the measured proton energy deposition in radiochromic film and the TPS dose can be used to determine LET. This modified film dosimetry provides an experimental areal LET measurement that can verify MC calculations, support LET point measurements, possibly enhance biologically based proton treatment planning, and determine the polymerization process within the radiochromic film.
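
    The gamma analysis used above combines a value criterion with a distance-to-agreement criterion. A minimal 1-D sketch (the profiles and the 25%/3 mm parameters below are illustrative, mirroring the abstract's settings) looks like this:

```python
import numpy as np

def gamma_1d(ref, meas, x, dta=3.0, crit=0.25):
    """1-D gamma analysis (sketch): for each reference point, take the
    minimum combined distance-to-agreement / value-difference score over
    the measured profile; crit is a fractional value criterion (25% here)."""
    gam = np.empty_like(ref)
    for i, (xi, ri) in enumerate(zip(x, ref)):
        dist2 = ((x - xi) / dta) ** 2
        diff2 = ((meas - ri) / (crit * ref.max())) ** 2
        gam[i] = np.sqrt((dist2 + diff2).min())
    return gam

x = np.linspace(0, 100, 201)                 # depth (mm)
ref = np.exp(-((x - 60) / 8) ** 2)           # toy Bragg-peak-like profile
meas = np.exp(-((x - 61) / 8) ** 2)          # same profile shifted by 1 mm
fail_rate = (gamma_1d(ref, meas, x) > 1).mean()
print(f"gamma failure rate: {fail_rate:.1%}")
```

    A point fails when no measured point within the distance tolerance matches the reference value within the value tolerance, i.e., when gamma exceeds 1; a pure 1 mm shift passes easily under a 3 mm criterion.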

  9. TH-CD-201-06: Experimental Characterization of Acoustic Signals Generated in Water Following Clinical Photon and Electron Beam Irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hickling, S; El Naqa, I

    Purpose: Previous work has demonstrated the detectability of acoustic waves induced following the irradiation of high-density metals with radiotherapy linac photon beams. This work demonstrates the ability to experimentally detect such acoustic signals following both photon and electron irradiation in a more radiotherapy-relevant material. The relationship between induced acoustic signal properties in water and the deposited dose distribution is explored, and the feasibility of exploiting such signals for radiotherapy dosimetry is demonstrated. Methods: Acoustic waves were experimentally induced in a water tank via the thermoacoustic effect following a single pulse of photon or electron irradiation produced by a clinical linac. An immersion ultrasound transducer was used to detect these acoustic waves in water, and signals were read out on an oscilloscope. Results: Peaks and troughs in the detected acoustic signals were found to correspond to the location of gradients in the deposited dose distribution following both photon and electron irradiation. Signal amplitude was linearly related to the dose per pulse deposited by photon or electron beams at the depth of detection. Flattening filter free beams induced large acoustic signals, and signal amplitude decreased with depth after the depth of maximum dose. Varying the field size resulted in a temporal shift of the acoustic signal peaks and a change in the detected signal frequency. Conclusion: Acoustic waves can be detected in a water tank following irradiation by linac photon and electron beams with basic electronics, and have characteristics related to the deposited dose distribution. The physical location of dose gradients and the amount of dose deposited can be inferred from the location and magnitude of acoustic signal peaks. Thus, the detection of induced acoustic waves could be applied to photon and electron water tank and in vivo dosimetry.
This work was supported in part by CIHR grants MOP-114910 and MOP-136774. S.H. acknowledges support by the NSERC CREATE Medical Physics Research Training Network grant 432290.

  10. Sub-second pencil beam dose calculation on GPU for adaptive proton therapy.

    PubMed

    da Silva, Joakim; Ansorge, Richard; Jena, Rajesh

    2015-06-21

    Although proton therapy delivered using scanned pencil beams has the potential to produce better dose conformity than conventional radiotherapy, the created dose distributions are more sensitive to anatomical changes and patient motion. Therefore, the introduction of adaptive treatment techniques where the dose can be monitored as it is being delivered is highly desirable. We present a GPU-based dose calculation engine relying on the widely used pencil beam algorithm, developed for on-line dose calculation. The calculation engine was implemented from scratch, with each step of the algorithm parallelized and adapted to run efficiently on the GPU architecture. To ensure fast calculation, it employs several application-specific modifications and simplifications, and a fast scatter-based implementation of the computationally expensive kernel superposition step. The calculation time for a skull base treatment plan using two beam directions was 0.22 s on an Nvidia Tesla K40 GPU, whereas a test case of a cubic target in water from the literature took 0.14 s to calculate. The accuracy of the patient dose distributions was assessed by calculating the γ-index with respect to a gold standard Monte Carlo simulation. The passing rates were 99.2% and 96.7%, respectively, for the 3%/3 mm and 2%/2 mm criteria, matching those produced by a clinical treatment planning system.
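
    The computationally expensive kernel superposition step mentioned above is, mathematically, a convolution of per-beamlet fluence with a lateral spread kernel; the paper's scatter-based GPU variant distributes each beamlet's contribution to its neighbours but computes the same result. A minimal FFT-based sketch at a single depth (field, grid, and the Gaussian spread σ are all assumed for illustration):

```python
import numpy as np

n = 64
fluence = np.zeros((n, n))
fluence[24:40, 24:40] = 1.0                  # square field of beamlets

# lateral spread kernel at one depth (assumed Gaussian, sigma in cells)
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
sigma = 3.0
kernel = np.exp(-(x**2 + y**2) / (2 * sigma**2))
kernel /= kernel.sum()                       # normalise: conserve energy

# superposition as a circular convolution via FFT
dose = np.real(np.fft.ifft2(np.fft.fft2(fluence) *
                            np.fft.fft2(np.fft.ifftshift(kernel))))
assert np.isclose(dose.sum(), fluence.sum())  # convolution conserves energy
```

    In a real pencil beam engine the kernel varies with depth and radiological path length, which is why the step is repeated per depth layer and dominates the runtime.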

  11. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels, and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show a clear gain, compared to several shallow kernels, for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.
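
    A two-layer instance of this idea can be sketched as follows: layer one forms a weighted combination of elementary kernels, layer two applies an element-wise nonlinear activation. This is not the paper's exact architecture; the element-wise exp activation is chosen here because it provably preserves positive semi-definiteness (its Taylor series has nonnegative coefficients, and Hadamard powers of PSD matrices are PSD by the Schur product theorem):

```python
import numpy as np

def rbf(X, gamma=0.5):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly(X, degree=2):
    return (X @ X.T + 1.0) ** degree

def deep_kernel(X, w=(0.7, 0.3)):
    """Two-layer deep multiple kernel (sketch): a nonnegative combination
    of elementary kernels passed through an element-wise exp activation,
    which keeps the result a valid (PSD) kernel."""
    K1 = w[0] * rbf(X) + w[1] * poly(X)      # layer 1: kernel combination
    return np.exp(K1 / K1.max())             # layer 2: nonlinear activation

X = np.random.default_rng(0).normal(size=(20, 3))
K = deep_kernel(X)
assert np.linalg.eigvalsh(K).min() > -1e-8   # numerically PSD
```

    In the paper the combination weights are learned (supervised or semi-supervised); here they are fixed purely for illustration.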

  12. Multineuron spike train analysis with R-convolution linear combination kernel.

    PubMed

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
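
    The simplest member of the proposed family is a weighted sum of a single-neuron kernel over corresponding neurons, which recovers the sum kernel as a special case. The single-neuron kernel below (a sum of Laplacian similarities over spike pairs, a common valid choice) and the toy spike times are illustrative, not the paper's data:

```python
import numpy as np

def spike_kernel(s, t, tau=0.01):
    """Single-neuron spike train kernel (assumed for illustration):
    sum of Laplacian similarities over all pairs of spike times."""
    s, t = np.asarray(s, float), np.asarray(t, float)
    if s.size == 0 or t.size == 0:
        return 0.0
    return np.exp(-np.abs(s[:, None] - t[None, :]) / tau).sum()

def linear_combination_kernel(S, T, w):
    """R-convolution style multineuron extension: weighted sum of the
    single-neuron kernel over the neurons of recordings S and T."""
    return sum(wi * spike_kernel(si, ti) for wi, si, ti in zip(w, S, T))

# two 3-neuron recordings (spike times in seconds); neuron 3 of T is silent
S = [[0.01, 0.05], [0.02], [0.03, 0.04, 0.09]]
T = [[0.012, 0.051], [0.08], []]
print(linear_combination_kernel(S, T, w=[1.0, 0.5, 0.2]))
```

    The richer subclasses in the paper replace the fixed weights with learned parameters while keeping the same R-convolution structure.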

  13. Study on Energy Productivity Ratio (EPR) at palm kernel oil processing factory: case study on PT-X at Sumatera Utara Plantation

    NASA Astrophysics Data System (ADS)

    Haryanto, B.; Bukit, R. Br; Situmeang, E. M.; Christina, E. P.; Pandiangan, F.

    2018-02-01

    The purpose of this study was to determine the performance, productivity and feasibility of operating a palm kernel processing plant, based on the Energy Productivity Ratio (EPR). EPR is expressed as the ratio of output energy and by-products to input energy. A palm kernel plant processes palm kernels into palm kernel oil. The procedure started with collecting the data needed as energy input, such as palm kernel prices, energy demand, and factory depreciation. The energy output and by-products comprise the whole production value: the palm kernel oil price and the prices of the remaining products, shells and pulp. The energy equivalent of palm kernel oil was calculated to analyse the EPR based on processing capacity per year. The investigation was carried out at the kernel oil processing plant PT-X at a Sumatera Utara plantation. The value of EPR was 1.54 (EPR > 1), indicating that processing palm kernels into palm kernel oil is feasible to operate in terms of energy productivity.
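
    The EPR computation itself is a simple ratio of output to input value per processing year. The line items below are invented for illustration (chosen so the ratio reproduces the reported 1.54, not taken from the paper):

```python
# EPR sketch with illustrative (not the paper's) annual values:
# EPR = value of outputs (kernel oil + by-products) / value of inputs
inputs = {"palm kernels": 900_000, "energy": 120_000, "depreciation": 80_000}
outputs = {"kernel oil": 1_450_000, "shell": 150_000, "pulp": 94_000}
epr = sum(outputs.values()) / sum(inputs.values())
print(f"EPR = {epr:.2f} -> {'feasible' if epr > 1 else 'not feasible'}")
```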

  14. Postimplant dosimetry using a Monte Carlo dose calculation engine: a new clinical standard.

    PubMed

    Carrier, Jean-François; D'Amours, Michel; Verhaegen, Frank; Reniers, Brigitte; Martin, André-Guy; Vigneault, Eric; Beaulieu, Luc

    2007-07-15

    To use the Monte Carlo (MC) method as a dose calculation engine for postimplant dosimetry. To compare the results with clinically approved data for a sample of 28 patients. Two effects not taken into account by the clinical calculation, interseed attenuation and tissue composition, are being specifically investigated. An automated MC program was developed. The dose distributions were calculated for the target volume and organs at risk (OAR) for 28 patients. Additional MC techniques were developed to focus specifically on the interseed attenuation and tissue effects. For the clinical target volume (CTV) D(90) parameter, the mean difference between the clinical technique and the complete MC method is 10.7 Gy, with cases reaching up to 17 Gy. For all cases, the clinical technique overestimates the deposited dose in the CTV. This overestimation is mainly from a combination of two effects: the interseed attenuation (average, 6.8 Gy) and tissue composition (average, 4.1 Gy). The deposited dose in the OARs is also overestimated in the clinical calculation. The clinical technique systematically overestimates the deposited dose in the prostate and in the OARs. To reduce this systematic inaccuracy, the MC method should be considered in establishing a new standard for clinical postimplant dosimetry and dose-outcome studies in the near future.

  15. 131 iodine gamma dose determination in the thyroid gland using two geometrical shapes: a comparative study

    NASA Astrophysics Data System (ADS)

    Betka, A.; Bentabet, A.; Azbouche, A.; Fenineche, N.; Adjiri, A.; Dib, A.

    2015-05-01

    In order to study the internal gamma dose, we used the Monte Carlo code PENELOPE to simulate two geometrical models (cylindrical and spherical). The deposited energy was determined via the loss of energy calculated from the quantum theory of inelastic collisions, based on the first-order (plane-wave) Born approximation for charged particles interacting with individual atoms and molecules. Our results show that the cylindrical geometry is more suitable for carrying out such a study. Moreover, we developed an analytical expression for the 131-iodine gamma dose (the energy deposited per photon absorbed dose). The latter can be considered an important tool for evaluating the gamma dose without going through stochastic models.

  16. Predicting complex traits using a diffusion kernel on genetic markers with an application to dairy cattle and wheat data

    PubMed Central

    2013-01-01

    Background Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
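
    For binary-coded markers, the diffusion kernel on the hypercube has the Kondor–Lafferty closed form: after normalization, similarity decays as tanh(β) raised to the Hamming distance between genotypes. The 0/1 coding and toy genotypes below are a simplification for illustration (real SNP data are often coded 0/1/2):

```python
import numpy as np

def diffusion_kernel(X, beta=0.5):
    """Diffusion kernel on the hypercube for binary marker matrices
    (Kondor-Lafferty closed form): k(x, y) = tanh(beta)^hamming(x, y)
    after normalisation, so similarity decays with marker mismatches."""
    X = np.asarray(X, dtype=float)
    h = np.abs(X[:, None, :] - X[None, :, :]).sum(-1)  # Hamming distances
    return np.tanh(beta) ** h

# toy genotypes coded 0/1 for four individuals at four markers
X = [[0, 1, 1, 0], [0, 1, 0, 0], [1, 1, 1, 1], [0, 1, 1, 0]]
K = diffusion_kernel(X)
assert np.allclose(np.diag(K), 1.0)          # self-similarity is 1
```

    The discretized nature of the input is what distinguishes this kernel from the Gaussian kernel it approximates; as the abstract notes, in practice the two often perform almost identically.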

  17. 7 CFR 981.9 - Kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...

  18. Dose Deposition Profiles in Untreated Brick Material

    DOE PAGES

    O'Mara, Ryan; Hayes, Robert

    2018-04-01

    In nuclear forensics or accident dosimetry, building materials such as bricks can be used to retrospectively determine radiation fields using thermoluminescence and/or optically stimulated luminescence. A major problem with brick material is that significant chemical processing is generally necessary to isolate the quartz from the brick. In this study, a simplified treatment process has been tested in an effort to lessen the processing burden for retrospective dosimetry studies. It was found that by using thermoluminescence responses, the dose deposition profile of a brick sample could be reconstructed without any chemical treatment. This method was tested by estimating the gamma-ray energies of an 241Am source from the dose deposition in a brick. The results demonstrated the ability to retrospectively measure the source energy with an overall energy resolution of approximately 6 keV. This technique has the potential to greatly expedite dose reconstructions in the wake of nuclear accidents or for any related application where doses of interest are large compared to overall process system noise.

  19. Dose Deposition Profiles in Untreated Brick Material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Mara, Ryan; Hayes, Robert

    In nuclear forensics or accident dosimetry, building materials such as bricks can be used to retrospectively determine radiation fields using thermoluminescence and/or optically stimulated luminescence. A major problem with brick material is that significant chemical processing is generally necessary to isolate the quartz from the brick. In this study, a simplified treatment process has been tested in an effort to lessen the processing burden for retrospective dosimetry studies. It was found that by using thermoluminescence responses, the dose deposition profile of a brick sample could be reconstructed without any chemical treatment. This method was tested by estimating the gamma-ray energies of an 241Am source from the dose deposition in a brick. The results demonstrated the ability to retrospectively measure the source energy with an overall energy resolution of approximately 6 keV. This technique has the potential to greatly expedite dose reconstructions in the wake of nuclear accidents or for any related application where doses of interest are large compared to overall process system noise.

  20. An SVM model with hybrid kernels for hydrological time series

    NASA Astrophysics Data System (ADS)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use one single type of kernel function, e.g., radial basis kernel function. Provided that there are several featured kernel functions available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of radial basis kernel and polynomial kernel for the forecast of monthly flowrate in two gaging stations using SVM approach. The results indicate significant improvement in the accuracy of predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such hybrid kernel approach for SVM applications.
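
    The hybrid kernel described above is a weighted sum of an RBF and a polynomial kernel. The sketch below builds such a kernel and fits a kernel ridge regressor in its place (kernel ridge stands in for the full SVM solver to keep the example dependency-free; the hybrid kernel plugs into an SVM the same way). Weights, kernel parameters, and the synthetic "flowrate" data are all illustrative:

```python
import numpy as np

def hybrid_kernel(A, B, w=0.8, gamma=5.0, degree=2):
    """Linear combination of an RBF and a polynomial kernel
    (weights and parameters here are illustrative)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-gamma * d2)
    poly = (A @ B.T / A.shape[1] + 1.0) ** degree
    return w * rbf + (1 - w) * poly

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(80, 2))          # e.g. lagged monthly predictors
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 # synthetic smooth target
K = hybrid_kernel(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)   # ridge fit
pred = K @ alpha
print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

    Because a nonnegative combination of valid kernels is itself a valid kernel, the weight w can be tuned (e.g., by cross-validation) to trade the RBF's locality against the polynomial kernel's global trend-fitting.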

  1. Analytical calculation of proton linear energy transfer in voxelized geometries including secondary protons

    NASA Astrophysics Data System (ADS)

    Sanchez-Parcerisa, D.; Cortés-Giraldo, M. A.; Dolney, D.; Kondrla, M.; Fager, M.; Carabe, A.

    2016-02-01

    In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm-1) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.

  2. Analytical calculation of proton linear energy transfer in voxelized geometries including secondary protons.

    PubMed

    Sanchez-Parcerisa, D; Cortés-Giraldo, M A; Dolney, D; Kondrla, M; Fager, M; Carabe, A

    2016-02-21

    In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm-1) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.
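
    The voxel LET reported by algorithms of this kind is typically the dose-averaged LET, LET_d = Σ d_i·L_i / Σ d_i, over the contributions i depositing dose d_i with track-averaged LET L_i in a voxel. A minimal sketch with made-up contributions (not the paper's kernel data):

```python
import numpy as np

def dose_averaged_let(doses, lets):
    """Dose-averaged LET of several contributions to one voxel:
    sum(d_i * L_i) / sum(d_i)."""
    doses = np.asarray(doses, dtype=float)
    lets = np.asarray(lets, dtype=float)
    return float((doses * lets).sum() / doses.sum())

# Illustrative voxel: primary protons carry most of the dose at low LET,
# a secondary-proton tail contributes a small high-LET component
let_d = dose_averaged_let([0.9, 0.1], [1.0, 8.0])  # doses in Gy, LET in keV/um
```

    Weighting by dose rather than by fluence is what makes the high-LET secondary contribution matter even when its dose share is small.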

  3. Structural analysis of ion-implanted chemical-vapor-deposited diamond by transmission electron microscope

    NASA Astrophysics Data System (ADS)

    Jiang, N.; Deguchi, M.; Wang, C. L.; Won, J. H.; Jeon, H. M.; Mori, Y.; Hatta, A.; Kitabatake, M.; Ito, T.; Hirao, T.; Sasaki, T.; Hiraki, A.

    1997-04-01

    A transmission electron microscope (TEM) study of ion-implanted chemical-vapor-deposited (CVD) diamond is presented. CVD diamond used for transmission electron microscope observation was directly deposited onto Mo TEM grids. As-deposited specimens were irradiated by C (100 keV) ions at room temperature with a wide range of implantation doses (10^12-10^17/cm^2). Transmission electron diffraction (TED) patterns indicate that there exists a critical dose (Dc) for the onset of amorphization of CVD diamond as a result of ion-induced damage, and the value of the critical dose is confirmed to be about 3 × 10^15/cm^2. The ion-induced transformation process is clearly revealed by high resolution electron microscope (HREM) images. For a higher dose implantation (7 × 10^15/cm^2) a large amount of the diamond phase is transformed into amorphous carbon and many tiny misoriented diamond blocks are found to be left in the amorphous solid. The average size of these misoriented diamond blocks is only about 1-2 nm. Further bombardment (10^17/cm^2) destroys almost all of the diamond phase within the irradiated volume and moreover leads to local formation of micropolycrystalline graphite.

  4. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which performs kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.
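
    The sampling idea behind AKCL is related to Nyström-style low-rank kernel approximation: instead of the full n×n Gram matrix, keep only the n×m block C of kernel values against m sampled landmarks and the m×m landmark block W, and use K ≈ C W⁺ Cᵀ. The sketch below shows that generic construction, not the authors' exact scheme:

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, m, gamma=1.0, seed=0):
    """Low-rank kernel approximation K ~= C @ pinv(W) @ C.T
    built from m uniformly sampled landmark points."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf(X, X[idx], gamma)        # n x m block
    W = rbf(X[idx], X[idx], gamma)   # m x m landmark block
    return C @ np.linalg.pinv(W) @ C.T

X = np.random.default_rng(1).normal(size=(50, 3))
K_full = rbf(X, X)
K_approx = nystrom(X, m=50)          # with m = n the approximation is exact
err = np.abs(K_full - K_approx).max()
```

    With m < n the storage and per-step cost drop from O(n^2) to O(nm), which is the kind of saving that makes kernel clustering feasible at scale.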

  5. Calculation of Dose Deposition in 3D Voxels by Heavy Ions and Simulation of gamma-H2AX Experiments

    NASA Technical Reports Server (NTRS)

    Plante, I.; Ponomarev, A. L.; Wang, M.; Cucinotta, F. A.

    2011-01-01

    The biological response to high-LET radiation is different from low-LET radiation due to several factors, notably differences in energy deposition and formation of radiolytic species. Of particular importance in radiobiology is the formation of double-strand breaks (DSB), which can be detected by γ-H2AX foci experiments. These experiments have revealed important differences in the spatial distribution of DSB induced by low- and high-LET radiations [1,2]. To simulate γ-H2AX experiments, models based on amorphous tracks with radial dose are often combined with random walk chromosome models [3,4]. In this work, a new approach using the Monte-Carlo track structure code RITRACKS [5] and chromosome models has been used to simulate DSB formation. At first, RITRACKS was used to simulate the irradiation of a cubic volume of 5 μm by 1) 450 1H+ ions of 300 MeV (LET 0.3 keV/μm) and 2) 1 56Fe26+ ion of 1 GeV/amu (LET 150 keV/μm). All energy deposition events are recorded to calculate dose in voxels of 20 nm. The dose voxels are distributed randomly and scattered uniformly within the volume irradiated by low-LET radiation. Many differences are found in the spatial distribution of dose voxels for the 56Fe26+ ion. The track structure can be distinguished, and voxels with very high dose are found in the region corresponding to the track "core". These high-dose voxels are not found in the low-LET irradiation simulation and indicate clustered energy deposition, which may be responsible for complex DSB. In the second step, assuming that DSB will be found only in voxels where energy is deposited by the radiation, the intersection points between voxels with dose > 0 and simulated chromosomes were obtained. The spatial distribution of the intersection points is similar to γ-H2AX foci experiments. These preliminary results suggest that combining stochastic track structure and chromosome models could be a good approach to understanding radiation-induced DSB and chromosome aberrations.
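
    Tallying Monte Carlo energy-deposition events into voxels, as described above, amounts to a 3D histogram weighted by event energy, with dose = deposited energy / voxel mass. A minimal sketch, with illustrative box and voxel sizes (not the RITRACKS geometry):

```python
import numpy as np

def tally_dose(points, energies_keV, box_um=5.0, voxel_um=1.0, density_g_cm3=1.0):
    """Bin point energy-deposition events into cubic voxels and
    convert the deposited energy in each voxel to dose in Gy."""
    n = int(round(box_um / voxel_um))
    edges = [np.linspace(0.0, box_um, n + 1)] * 3
    edep, _ = np.histogramdd(points, bins=edges, weights=energies_keV)
    voxel_mass_kg = density_g_cm3 * 1e3 * (voxel_um * 1e-6) ** 3  # rho * volume
    joules = edep * 1.602e-16                                     # 1 keV = 1.602e-16 J
    return joules / voxel_mass_kg                                 # Gy = J/kg

# 1000 illustrative 1-keV deposition events scattered uniformly in the box
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 5.0, size=(1000, 3))
dose = tally_dose(pts, np.full(1000, 1.0))
```

    For a track-structure code the points would come from the simulated event list, and clustered high-dose voxels along the ion path would emerge from the same tally.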

  6. Multiple kernels learning-based biological entity relationship extraction method.

    PubMed

    Dongliang, Xu; Jingchang, Pan; Bailing, Wang

    2017-09-20

    Automatic extraction of protein entity interaction information from the biomedical literature can help to build protein relation networks and design new drugs. There are more than 20 million literature abstracts included in MEDLINE, the most authoritative textual database in the field of biomedicine, and the collection follows exponential growth over time. This frantic expansion of the biomedical literature can often be difficult to absorb or manually analyze. Thus efficient and automated search engines are necessary to explore the biomedical literature using text mining techniques. The P, R, and F values of the tag graph method on the AIMed corpus are 50.82, 69.76, and 58.61%, respectively. The P, R, and F values of the tag graph kernel method on the other four evaluation corpora are 2-5% higher than those of the all-paths graph kernel. The P, R, and F values of the two methods fusing the feature kernel and the tag graph kernel are 53.43, 71.62, and 61.30% and 55.47, 70.29, and 60.37%, respectively, indicating that the performance of the two kernel fusion methods is better than that of a single kernel. In comparison with the all-paths graph kernel method, the tag graph kernel method is superior in terms of overall performance. Experiments show that the performance of the multi-kernels method is better than that of the three separate single-kernel methods and the dual-mutually fused kernel methods used hereof on five corpus sets.

  7. 7 CFR 51.2295 - Half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...

  8. 7 CFR 810.206 - Grades and grade requirements for barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... weight per bushel (pounds) Sound barley (percent) Maximum Limits of— Damaged kernels 1 (percent) Heat damaged kernels (percent) Foreign material (percent) Broken kernels (percent) Thin barley (percent) U.S... or otherwise of distinctly low quality. 1 Includes heat-damaged kernels. Injured-by-frost kernels and...

  9. Imaging and characterization of primary and secondary radiation in ion beam therapy

    NASA Astrophysics Data System (ADS)

    Granja, Carlos; Martisikova, Maria; Jakubek, Jan; Opalka, Lukas; Gwosch, Klaus

    2016-07-01

    Imaging in ion beam therapy is an essential and increasingly significant tool for treatment planning and radiation and dose deposition verification. Efforts aim at providing precise radiation field characterization and online monitoring of radiation dose distribution. A review is given of the research and methodology of quantum-imaging, composition, spectral and directional characterization of the mixed-radiation fields in proton and light ion beam therapy developed by the IEAP CTU Prague and HIT Heidelberg group. Results include non-invasive imaging of dose deposition and primary beam online monitoring.

  10. 7 CFR 51.1449 - Damage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...

  11. 7 CFR 51.1449 - Damage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...

  12. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will not...

  13. 7 CFR 51.2296 - Three-fourths half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...

  14. The Classification of Diabetes Mellitus Using Kernel k-means

    NASA Astrophysics Data System (ADS)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus by using the kernel k-means algorithm. Kernel k-means is an algorithm developed from the k-means algorithm. Kernel k-means uses kernel learning and is able to handle data that are not linearly separable, which differentiates it from common k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means has good performance, much better than SOM.
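
    Kernel k-means works by replacing Euclidean distances with feature-space distances computed purely from the Gram matrix: ||φ(x_i) - μ_c||² = K_ii - (2/|c|) Σ_{j∈c} K_ij + (1/|c|²) Σ_{j,l∈c} K_jl. A compact sketch of this iteration, with an illustrative RBF kernel and toy data (not the study's diabetes dataset):

```python
import numpy as np

def rbf(X, gamma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_kmeans(K, k, iters=50, seed=0):
    """Cluster points given only their Gram matrix K (Lloyd-style updates
    using feature-space distances to implicit cluster centroids)."""
    n = len(K)
    labels = np.random.default_rng(seed).integers(0, k, n)
    for _ in range(iters):
        dist = np.empty((n, k))
        for c in range(k):
            mask = labels == c
            if not mask.any():            # empty cluster: never the argmin
                dist[:, c] = np.inf
                continue
            Kc = K[:, mask]
            dist[:, c] = (np.diag(K) - 2 * Kc.mean(1)
                          + K[np.ix_(mask, mask)].mean())
        new = dist.argmin(1)
        if (new == labels).all():
            break
        labels = new
    return labels

# Two well-separated point groups; kernel k-means assigns each group consistently
X = np.vstack([np.zeros((10, 2)), np.full((10, 2), 5.0)])
labels = kernel_kmeans(rbf(X), k=2)
```

    Only K is needed inside the loop, which is exactly why the full Gram matrix becomes the bottleneck that approximate variants try to avoid.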

  15. UNICOS Kernel Internals Application Development

    NASA Technical Reports Server (NTRS)

    Caredo, Nicholas; Craw, James M. (Technical Monitor)

    1995-01-01

    An understanding of UNICOS kernel internals is valuable information. However, having the knowledge is only half the value; the second half comes with knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be utilized. This paper discusses the intricacies of developing utilities that utilize kernel information. In addition, algorithms, logic, and code for accessing kernel information will be discussed. Code segments will be provided that demonstrate how to locate and read kernel structures. Types of applications that can utilize kernel information will also be discussed.

  16. Detection of maize kernels breakage rate based on K-means clustering

    NASA Astrophysics Data System (ADS)

    Yang, Liang; Wang, Zhuo; Gao, Lei; Bai, Xiaoping

    2017-04-01

    In order to optimize the recognition accuracy of maize kernel breakage detection and improve detection efficiency, this paper applies computer vision technology and detects maize kernel breakage with a K-means clustering algorithm. First, the collected RGB images are converted into Lab images; then the clarity of the original images is evaluated with an energy function of the 8-direction Sobel gradient. Finally, maize kernel breakage is detected using different pixel acquisition equipment and different shooting angles. In this paper, broken maize kernels are identified by the color difference between intact kernels and broken kernels. The image clarity evaluation and the different shooting angles verify that the clarity and shooting angle of the images have a direct influence on feature extraction. The results show that the K-means clustering algorithm can distinguish broken maize kernels effectively.
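
    The clarity check mentioned above can be implemented as the energy (mean squared magnitude) of the Sobel gradient: a sharp image has strong local gradients, a blurry one does not. A minimal sketch with the standard 3×3 Sobel kernels and two synthetic test images (the exact energy function used in the paper is not specified, so this is an assumed form):

```python
import numpy as np

def sobel_energy(img):
    """Mean squared Sobel gradient magnitude as a simple sharpness score."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):          # valid interior pixels only
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return float((gx ** 2 + gy ** 2).mean())

sharp = np.zeros((16, 16)); sharp[:, 8:] = 1.0              # hard vertical edge
blurry = np.linspace(0, 1, 16)[None, :].repeat(16, axis=0)  # smooth ramp
score_sharp = sobel_energy(sharp)
score_blurry = sobel_energy(blurry)
```

    In the pipeline described by the paper this score would gate which captured frames are clear enough for the subsequent color-based K-means segmentation.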

  17. Modeling adaptive kernels from probabilistic phylogenetic trees.

    PubMed

    Nicotra, Luca; Micheli, Alessio

    2009-01-01

    Modeling phylogenetic interactions is an open issue in many computational biology problems. In the context of gene function prediction we introduce a class of kernels for structured data leveraging a hierarchical probabilistic modeling of phylogeny among species. We derive three kernels belonging to this setting: a sufficient statistics kernel, a Fisher kernel, and a probability product kernel. The new kernels are used in the context of support vector machine learning. The kernels' adaptivity is obtained through the estimation of the parameters of a tree-structured model of evolution, using as observed data phylogenetic profiles encoding the presence or absence of specific genes in a set of fully sequenced genomes. We report results obtained in the prediction of the functional class of the proteins of the budding yeast Saccharomyces cerevisiae, which compare favorably to a standard vector-based kernel and to a non-adaptive tree kernel function. A further comparative analysis is performed in order to assess the impact of the different components of the proposed approach. We show that the key features of the proposed kernels are the adaptivity to the input domain and the ability to deal with structured data interpreted through a graphical model representation.
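
    For binary phylogenetic profiles modeled as independent Bernoulli variables, the probability product kernel at ρ = 1 has the closed form K(p, q) = Π_i (p_i q_i + (1-p_i)(1-q_i)): matching presence/absence patterns yield high kernel values. The sketch below uses this simple independent-Bernoulli form with ad-hoc smoothing, not the paper's tree-structured model:

```python
import numpy as np

def bernoulli_ppk(x, y, eps=0.1):
    """Probability product kernel (rho = 1) between independent Bernoulli
    models fitted to two binary profiles, with simple probability smoothing."""
    p = np.clip(np.asarray(x, dtype=float), eps, 1 - eps)
    q = np.clip(np.asarray(y, dtype=float), eps, 1 - eps)
    # sum over each binary variable: p*q (both present) + (1-p)(1-q) (both absent)
    return float(np.prod(p * q + (1 - p) * (1 - q)))

a = [1, 0, 1, 1]                      # toy presence/absence profile
k_self = bernoulli_ppk(a, a)          # high: identical profiles
k_opp = bernoulli_ppk(a, [0, 1, 0, 0])  # low: complementary profiles
```

    The paper's kernels additionally propagate these probabilities through a phylogenetic tree, so shared evolutionary history, not just profile overlap, shapes the similarity.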

  18. Aflatoxin and nutrient contents of peanut collected from local market and their processed foods

    NASA Astrophysics Data System (ADS)

    Ginting, E.; Rahmianna, A. A.; Yusnawan, E.

    2018-01-01

    Peanut is susceptible to aflatoxin contamination, and the source of the peanuts as well as the processing method considerably affect the aflatoxin content of the products. Therefore, a study of the aflatoxin and nutrient contents of peanuts collected from a local market and their processed foods was performed. Good kernels were prepared into fried peanut, pressed-fried peanut, peanut sauce, peanut press cake, fermented peanut press cake (tempe) and fried tempe, while blended kernels (good and poor kernels) were processed into peanut sauce and tempe, and poor kernels were processed only into tempe. The results showed that good and blended kernels, which had a high proportion of sound/intact kernels (82.46% and 62.09%), contained 9.8-9.9 ppb of aflatoxin B1, while a slightly higher level was seen in poor kernels (12.1 ppb). However, the moisture, ash, protein, and fat contents of the kernels were similar, as were those of the products. Peanut tempe and fried tempe showed the highest increase in protein content, while decreased fat contents were seen in all products. The increase in aflatoxin B1 of peanut tempe prepared from poor kernels > blended kernels > good kernels; however, it decreased by 61.2% on average after deep-frying. Excluding peanut tempe and fried tempe, aflatoxin B1 levels in all products derived from good kernels were below the permitted level (15 ppb). This suggests that sorting peanut kernels as ingredients, followed by heat processing, would decrease the aflatoxin content in the products.

  19. Partial Deconvolution with Inaccurate Blur Kernel.

    PubMed

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
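
    The partial-map idea, trusting only the Fourier entries where the estimated kernel is reliable, can be illustrated with a masked Wiener filter. This is a generic sketch of the concept, not the authors' wavelet/learning-based model or their E-M scheme; the reliability threshold and noise-to-signal ratio below are arbitrary:

```python
import numpy as np

def partial_wiener(blurred, kernel_est, rel_thresh=0.1, nsr=1e-2):
    """Wiener deconvolution restricted to the Fourier entries where the
    estimated kernel has non-negligible magnitude (the 'partial map');
    unreliable frequency bins are left untouched instead of amplified."""
    H = np.fft.fft2(kernel_est, s=blurred.shape)      # kernel spectrum (zero-padded)
    B = np.fft.fft2(blurred)
    mask = np.abs(H) > rel_thresh * np.abs(H).max()   # reliable entries only
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)           # standard Wiener filter
    X = np.where(mask, W * B, B)                      # partial deconvolution
    return np.fft.ifft2(X).real

# Sanity check with an identity (delta) blur kernel
rng = np.random.default_rng(0)
img = rng.uniform(size=(32, 32))
k = np.zeros((32, 32)); k[0, 0] = 1.0
out = partial_wiener(img, k)
```

    With a real inaccurate kernel the mask is what prevents division by near-zero, poorly estimated spectral entries, which is the source of the ringing artifacts the paper targets.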

  20. Immune complexes with cationic antibodies deposit in glomeruli more effectively than cationic antibodies alone.

    PubMed

    Mannik, M; Gauthier, V J; Stapleton, S A; Agodoa, L Y

    1987-06-15

    In previously published studies, highly cationized antibodies alone and in immune complexes bound to glomeruli by charge-charge interaction, but only immune complexes persisted in glomeruli. Because normal IgG does not deposit in glomeruli, studies were conducted to determine whether cationized antibodies can be prepared which deposit in glomeruli when bound to antigen but not when free in circulation. A series of cationized rabbit anti-HSA antibodies was prepared with the number of added amino groups ranging from 13.3 to 60.2 per antibody molecule. Antibodies alone or in preformed soluble immune complexes, prepared at fivefold or 50-fold antigen excess, were administered to mice. With the injection of a fixed dose of 100 micrograms per mouse, antibodies alone with fewer than 29.6 added amino groups did not deposit in glomeruli by immunofluorescence microscopy. In contrast, 100 micrograms of antibodies with 23.5 added amino groups in immune complexes, made at fivefold antigen excess, formed immune deposits in glomeruli. With selected preparations of cationized, radiolabeled antibodies, deposition in glomeruli was quantified by isolation of mouse glomeruli. These quantitative data were in good agreement with the results of immunofluorescence microscopy. Immune complexes made at 50-fold antigen excess, containing only small-latticed immune complexes with no more than two antibody molecules per complex, deposited in glomeruli similarly to antibodies alone. Selected cationized antibodies alone or in immune complexes were administered to mice in varying doses. In these experiments, glomerular deposition of immune complexes, made at fivefold antigen excess, was detected with five- to 10-fold smaller doses than the deposition of the same antibodies alone. These studies demonstrate that antibody molecules in immune complexes are more likely to deposit in glomeruli by charge-charge interactions than antibodies alone.

  1. Use of computer code for dose distribution studies in A 60CO industrial irradiator

    NASA Astrophysics Data System (ADS)

    Piña-Villalpando, G.; Sloan, D. P.

    1995-09-01

    This paper presents a benchmark comparison between calculated and experimental absorbed dose values for a typical product in a 60Co industrial irradiator located at ININ, México. The irradiator is a two-level, two-layer system with an overlapping product configuration and an activity of around 300 kCi. Experimental values were obtained from routine dosimetry using red acrylic pellets. The typical product was packages of Petri dishes with an apparent density of 0.13 g/cm3; that product was chosen because of its uniform size, large quantity, and low density. The minimum dose was fixed at 15 kGy. Calculated values were obtained from the QAD-CGGP code. This code uses a point-kernel technique; build-up factors are fitted by a geometric progression, and combinatorial geometry is used for the system description. The main modifications to the code were related to source simulation, using point sources instead of pencils, and an energy spectrum and anisotropic emission were included. Results were: for the maximum dose, the calculated value (18.2 kGy) was 8% higher than the experimental average value (16.8 kGy); for the minimum dose, the calculated value (13.8 kGy) was 3% lower than the experimental average value (14.3 kGy).
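
    The point-kernel technique evaluates, for each source point, the uncollided flux e^(-μr)/(4πr²) multiplied by a buildup factor B(μr) accounting for scattered photons; QAD-CGGP fits B with a geometric-progression form. The sketch below uses a much simpler linear buildup B = 1 + a·μr just to illustrate the structure; the attenuation coefficient, source strength, and conversion factor are illustrative, not the code's data:

```python
import numpy as np

def point_kernel_dose_rate(S, r_cm, mu=0.0632, a=1.0, conv=1.0):
    """Dose-rate proxy at distance r from a point source of strength S:
    uncollided flux S * exp(-mu*r) / (4*pi*r^2) times a simple linear
    buildup factor B(mu*r) = 1 + a*mu*r. (QAD-CGGP instead uses a
    geometric-progression buildup fit.)"""
    mur = mu * r_cm
    flux = S * np.exp(-mur) / (4.0 * np.pi * r_cm ** 2)
    buildup = 1.0 + a * mur
    return conv * flux * buildup

# Superposing many such point kernels approximates an extended source,
# which is how the paper replaces source pencils with discrete points
rs = np.array([10.0, 20.0, 40.0])       # cm
d = point_kernel_dose_rate(1e6, rs)
```

    Summing this kernel over a grid of source points and attenuating through each material along the ray is essentially what the combinatorial-geometry tracing in QAD-CGGP automates.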

  2. Cancer radiotherapy based on femtosecond IR laser-beam filamentation yielding ultra-high dose rates and zero entrance dose.

    PubMed

    Meesat, Ridthee; Belmouaddine, Hakim; Allard, Jean-François; Tanguay-Renaud, Catherine; Lemay, Rosalie; Brastaviceanu, Tiberius; Tremblay, Luc; Paquette, Benoit; Wagner, J Richard; Jay-Gerin, Jean-Paul; Lepage, Martin; Huels, Michael A; Houde, Daniel

    2012-09-18

    Since the invention of cancer radiotherapy, its primary goal has been to maximize lethal radiation doses to the tumor volume while keeping the dose to surrounding healthy tissues at zero. Sadly, conventional radiation sources (γ or X rays, electrons) used for decades, including multiple or modulated beams, inevitably deposit the majority of their dose in front of or behind the tumor, thus damaging healthy tissue and causing secondary cancers years after treatment. Even the most recent pioneering advances in costly proton or carbon ion therapies cannot completely avoid dose buildup in front of the tumor volume. Here we show that this ultimate goal of radiotherapy is yet within our reach: using intense ultra-short infrared laser pulses we can now deposit a very large energy dose at unprecedented microscopic dose rates (up to 10^11 Gy/s) deep inside an adjustable, well-controlled macroscopic volume, without any dose deposited in front of or behind the target volume. Our infrared laser pulses produce high-density avalanches of low-energy electrons via laser filamentation, a phenomenon that results in a spatial energy density and temporal dose rate that both exceed by orders of magnitude any values previously reported, even for the most intense clinical radiotherapy systems. Moreover, we show that (i) the type of final damage and its mechanisms in aqueous media, at the molecular and biomolecular level, is comparable to that of conventional ionizing radiation, and (ii) at the tumor tissue level in an animal cancer model, the laser irradiation method shows clear therapeutic benefits.

  3. Classification of mineral deposits into types using mineralogy with a probabilistic neural network

    USGS Publications Warehouse

    Singer, Donald A.; Kouda, Ryoichi

    1997-01-01

    In order to determine whether it is desirable to quantify mineral-deposit models further, a test of the ability of a probabilistic neural network to classify deposits into types based on mineralogy was conducted. Presence or absence of ore and alteration mineralogy in well-typed deposits were used to train the network. To reduce the number of minerals considered, the analyzed data were restricted to minerals present in at least 20% of at least one deposit type. An advantage of this restriction is that single or rare occurrences of minerals did not dominate the results. Probabilistic neural networks can provide mathematically sound confidence measures based on Bayes theorem and are relatively insensitive to outliers. Founded on Parzen density estimation, they require no assumptions about distributions of random variables used for classification, even handling multimodal distributions. They train quickly and work as well as, or better than, multiple-layer feedforward networks. Tests were performed with a probabilistic neural network employing a Gaussian kernel and separate sigma weights for each class and each variable. The training set was reduced to the presence or absence of 58 reported minerals in eight deposit types. The training set included: 49 Cyprus massive sulfide deposits; 200 kuroko massive sulfide deposits; 59 Comstock epithermal vein gold districts; 17 quartzalunite epithermal gold deposits; 25 Creede epithermal gold deposits; 28 sedimentary-exhalative zinc-lead deposits; 28 Sado epithermal vein gold deposits; and 100 porphyry copper deposits. The most common training problem was the error of classifying about 27% of Cyprus-type deposits in the training set as kuroko. 
    In independent tests with deposits not used in the training set, 88% of 224 kuroko massive sulfide deposits were classed correctly, as were 92% of 25 porphyry copper deposits, 78% of 9 Comstock epithermal gold-silver districts, and 83% of six quartzalunite epithermal gold deposits. Across all deposit types, 88% of deposits in the validation dataset were correctly classed. Misclassifications were most common if a deposit was characterized by only a few minerals, e.g., pyrite, chalcopyrite, and sphalerite. The success rate jumped to 98% correctly classed deposits when just two rock types were added. Such a high success rate of the probabilistic neural network suggests that not only should this preliminary test be expanded to include other deposit types, but that other deposit features should be added.
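
    A probabilistic neural network is essentially per-class Parzen density estimation with a Gaussian kernel, assigning the class whose estimated density at the query point is highest; the per-class, per-variable sigma weights used in the study are a straightforward extension of the single sigma below. A minimal sketch with toy presence/absence vectors standing in for mineralogy data:

```python
import numpy as np

def pnn_classify(X_train, y_train, x, sigma=0.5):
    """Parzen/PNN rule: return the class whose mean Gaussian-kernel
    density at query point x is largest (uniform priors assumed)."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        d2 = ((Xc - x) ** 2).sum(axis=1)            # squared distances to class exemplars
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean())
    return int(classes[int(np.argmax(scores))])

# Toy binary feature vectors standing in for presence/absence of minerals
X = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1]], dtype=float)
y = np.array([0, 0, 1, 1])
pred = pnn_classify(X, y, np.array([1.0, 1.0, 0.0]))
```

    Because each class score is a simple average of kernel evaluations, the normalized scores can double as Bayes-style confidence measures, which is the property the study exploits.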

  4. Identification of penetration path and deposition distribution of radionuclides in houses by experiments and numerical model

    NASA Astrophysics Data System (ADS)

    Hirouchi, Jun; Takahara, Shogo; Iijima, Masashi; Watanabe, Masatoshi; Munakata, Masahiro

    2017-11-01

    In order to lift an evacuation order in evacuation areas and return residents to their homes, human dose assessments are required. However, it is difficult to assess the indoor external dose rate exactly because the indoor distribution and infiltration pathways of radionuclides are unclear. This paper describes indoor and outdoor dose rates measured in eight houses in the difficult-to-return area in Fukushima Prefecture and identifies the distribution and main infiltration pathway of radionuclides in houses. In addition, it describes dose rates calculated with a Monte Carlo photon transport code to aid a thorough understanding of the measurements. The measurements and calculations indicate that radionuclides mainly infiltrate through visible openings such as vents, windows, and doors, and then deposit near these openings; however, they hardly infiltrate through sockets and air-conditioning outlets. The measurements on rough surfaces such as bookshelves imply that radionuclides discharged from the Fukushima Daiichi nuclear power plant did not deposit locally on rough surfaces.

  5. Low LET proton microbeam to understand high-LET RBE by shaping spatial dose distribution

    NASA Astrophysics Data System (ADS)

    Greubel, Christoph; Ilicic, Katarina; Rösch, Thomas; Reindl, Judith; Siebenwirth, Christian; Moser, Marcus; Girst, Stefanie; Walsh, Dietrich W. M.; Schmid, Thomas E.; Dollinger, Günther

    2017-08-01

High LET radiation, such as heavy ions, is known to have a higher relative biological effectiveness (RBE) than low LET radiation, such as X- or γ-rays. Theories and models attribute this higher effectiveness mostly to the extremely inhomogeneous dose deposition, which is concentrated in only a few micron-sized spots. At the ion microprobe SNAKE, low LET 20 MeV protons (LET in water of 2.6 keV/μm) can be applied to cells either randomly distributed or focused to submicron spots, approximating heavy ion dose deposition. Thus, the transition between low and high LET energy deposition is experimentally accessible and the effect of different spatial dose distributions can be analysed. Here, we report on the technical setup to cultivate and irradiate 10^4 cells with submicron spots of low LET protons to measure cell survival in unstained cells. In addition, we have taken special care to characterise the beam spot of the 20 MeV proton microbeam with fluorescent nuclear track detectors.

  6. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  7. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  8. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  9. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  10. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  11. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  12. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  13. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  14. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  15. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  16. Wavelet SVM in Reproducing Kernel Hilbert Space for hyperspectral remote sensing image classification

    NASA Astrophysics Data System (ADS)

    Du, Peijun; Tan, Kun; Xing, Xiaoshi

    2010-12-01

Combining Support Vector Machines (SVM) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in a Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, SVM faces the bottleneck of kernel parameter selection, which leads to time-consuming training and low classification accuracy. The wavelet kernel in RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. Implications for semiparametric estimation are also discussed. Airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing imagery with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to evaluate the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. Compared with traditional classifiers, including Spectral Angle Mapping (SAM), Minimum Distance Classification (MDC), and an SVM classifier using the Radial Basis Function kernel, the proposed WSVM classifier using the wavelet kernel function in a Reproducing Kernel Hilbert Space noticeably improves classification accuracy.
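A common family of admissible wavelet kernels (not necessarily the exact one used in this study) is the translation-invariant product kernel built from the Morlet mother wavelet h(u) = cos(1.75u)·exp(-u²/2). A minimal sketch, with the dilation parameter `a` as a free hyperparameter:

```python
import math

def morlet_wavelet_kernel(x, y, a=1.0):
    """Translation-invariant wavelet kernel (Morlet mother wavelet):
    K(x, y) = prod_i cos(1.75*(x_i - y_i)/a) * exp(-((x_i - y_i)/a)**2 / 2)."""
    k = 1.0
    for xi, yi in zip(x, y):
        u = (xi - yi) / a
        k *= math.cos(1.75 * u) * math.exp(-u * u / 2.0)
    return k

def gram_matrix(X, a=1.0):
    """Precomputed kernel (Gram) matrix over a list of feature vectors."""
    return [[morlet_wavelet_kernel(xi, xj, a) for xj in X] for xi in X]

# Self-similarity is 1 and the kernel is symmetric by construction.
X = [[0.0, 1.0], [0.5, -0.2], [1.3, 0.7]]
G = gram_matrix(X, a=1.5)
```

The resulting Gram matrix can be passed to any SVM implementation that accepts precomputed kernels.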

  17. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    PubMed

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

Most dimensionality reduction techniques are based on one metric or one kernel, so it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has recently been proposed to learn a kernel from a set of base kernels, which are seen as different descriptions of the data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions, which hinders its applications. This paper proposes a multiple kernel learning framework for dimensionality reduction based on a regularized trace ratio, termed MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels, among which some may not be suitable for the given data. The solutions for the proposed framework can be found by trace ratio maximization. The experimental results demonstrate its effectiveness on benchmark datasets, including text, image, and sound datasets, in supervised, unsupervised, and semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
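The trace ratio subproblem at the core of such methods, maximizing tr(WᵀAW)/tr(WᵀBW) over orthonormal W, is commonly solved with an iterative eigendecomposition scheme. This is a generic sketch of that subproblem only, not the paper's full MKL-TR framework, and the matrices A and B below are illustrative stand-ins for the relevance and regularization matrices:

```python
import numpy as np

def trace_ratio(A, B, dim, iters=100):
    """Iterative trace-ratio maximization: find orthonormal W maximizing
    tr(W.T A W) / tr(W.T B W), given symmetric A (PSD) and B (PD).
    At each step, W spans the top eigenvectors of A - lam*B."""
    lam = 0.0
    for _ in range(iters):
        vals, vecs = np.linalg.eigh(A - lam * B)
        W = vecs[:, np.argsort(vals)[::-1][:dim]]   # top-eigenvalue directions
        new_lam = np.trace(W.T @ A @ W) / np.trace(W.T @ B @ W)
        if abs(new_lam - lam) < 1e-12:
            break
        lam = new_lam
    return W, lam

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T                            # symmetric PSD "relevance" matrix
B = np.diag([1.0, 2.0, 3.0, 4.0, 5.0]) # positive definite regularizer
W, lam = trace_ratio(A, B, dim=2)
```

The ratio lam is nondecreasing across iterations, which is why this fixed-point scheme converges.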

  18. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature

    PubMed Central

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

Automatic extraction of protein-protein interaction (PPI) pairs from the biomedical literature is a widely examined task in biological information extraction. Many kernel-based approaches, such as linear kernels, tree kernels, graph kernels, and combinations of multiple kernels, have achieved promising results on the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction that exploits both syntactic (structural) and semantic vector information, known as the Distributed Smoothed Tree Kernel (DSTK). DSTK comprises distributed trees carrying syntactic information along with distributional semantic vectors representing the semantic information of the sentences or phrases. To generate a robust machine learning model, a feature-based kernel and DSTK were combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used to evaluate the performance of our system. Experimental results show that our system achieves a better F-score on all five corpora compared to other state-of-the-art systems. PMID:29099838

  19. Hadamard Kernel SVM with applications for breast cancer outcome predictions.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong

    2017-12-21

Breast cancer is one of the leading causes of death for women, so it is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome predictions. Kernel SVM, with its discriminative power in small-sample pattern recognition problems, has attracted a lot of attention, but how to select or construct an appropriate kernel for a specific problem still needs further investigation. Here we propose a novel kernel (the Hadamard Kernel) in conjunction with Support Vector Machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard Kernel outperforms the classical kernels and the correlation kernel in terms of area under the ROC curve (AUC) on a number of real-world data sets adopted to test the performance of the different methods. Hadamard Kernel SVM is effective for breast cancer prediction, in terms of both prognosis and diagnosis, and may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families, and we hope it will contribute to the wider biology and related communities.
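The evaluation metric used here, AUC, has a simple rank-based definition: the probability that a randomly chosen positive case scores above a randomly chosen negative one, counting ties as one half. A minimal reference implementation:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    AUC = P(score of a random positive > score of a random negative),
    with ties counted as 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For a perfectly separating classifier this yields 1.0; for random scores it concentrates around 0.5.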

  20. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature.

    PubMed

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

Automatic extraction of protein-protein interaction (PPI) pairs from the biomedical literature is a widely examined task in biological information extraction. Many kernel-based approaches, such as linear kernels, tree kernels, graph kernels, and combinations of multiple kernels, have achieved promising results on the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction that exploits both syntactic (structural) and semantic vector information, known as the Distributed Smoothed Tree Kernel (DSTK). DSTK comprises distributed trees carrying syntactic information along with distributional semantic vectors representing the semantic information of the sentences or phrases. To generate a robust machine learning model, a feature-based kernel and DSTK were combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used to evaluate the performance of our system. Experimental results show that our system achieves a better F-score on all five corpora compared to other state-of-the-art systems.

  1. Calculation of electron and isotopes dose point kernels with fluka Monte Carlo code for dosimetry in nuclear medicine therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Botta, F.; Mairani, A.; Battistoni, G.

Purpose: The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the fluka Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, fluka has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by calculating a representative parameter and comparing it with reference data; the dose point kernel (DPK), quantifying the energy deposition all around an isotropic point source, is often the one chosen. Methods: fluka DPKs have been calculated in both water and compact bone for monoenergetic electrons (10 keV-3 MeV) and for beta-emitting isotopes commonly used for therapy (89Sr, 90Y, 131I, 153Sm, 177Lu, 186Re, and 188Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and the deposited energy has been tallied in concentric shells. The fluka outcomes have been compared to penelope v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, a comparison with data from the literature (etran, geant4, mcnpx) has been done. Maximum percentage differences within 0.8·R_CSDA and 0.9·R_CSDA for monoenergetic electrons (R_CSDA being the continuous slowing down approximation range) and within 0.8·X_90 and 0.9·X_90 for isotopes (X_90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·R_CSDA and 0.9·X_90 for electrons and isotopes, respectively.
Results: Concerning monoenergetic electrons, within 0.8·R_CSDA (where 90%-97% of the particle energy is deposited), fluka and penelope agree mostly within 7%, except for 10 and 20 keV electrons (12% in water, 8.3% in bone). The discrepancies between fluka and the other codes are of the same order of magnitude as those observed when comparing the other codes among themselves, and can be attributed to the different simulation algorithms. When considering the beta spectra, the discrepancies are notably reduced: within 0.9·X_90, fluka and penelope differ by less than 1% in water and less than 2% in bone for any of the isotopes considered here. Complete data for the fluka DPKs are given as Supplementary Material as a tool for performing dosimetry by analytical point-kernel convolution. Conclusions: fluka provides reliable results when transporting electrons in the low energy range, proving to be an adequate tool for nuclear medicine dosimetry.
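Analytical point-kernel dosimetry of the kind mentioned in the conclusions convolves a cumulated-activity map with a radially symmetric DPK. The sketch below uses a hypothetical exponential kernel shape purely for illustration, not tabulated fluka DPK data:

```python
import numpy as np

def dose_by_dpk_convolution(activity, dpk_radial, voxel):
    """Analytical dosimetry sketch: superpose a radially symmetric dose point
    kernel over every source voxel of a (cumulated) activity map.

    activity   : 3D array of decays per voxel (arbitrary units)
    dpk_radial : callable r -> absorbed dose per decay at distance r
    voxel      : voxel edge length (same length unit as the kernel)
    """
    nx, ny, nz = activity.shape
    # Sample the kernel on an offset grid large enough to cover the volume.
    off = [np.arange(-(n - 1), n) * voxel for n in (nx, ny, nz)]
    dx, dy, dz = np.meshgrid(*off, indexing="ij")
    K = dpk_radial(np.sqrt(dx**2 + dy**2 + dz**2))
    dose = np.zeros_like(activity, dtype=float)
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                if activity[i, j, k]:
                    # Shift the kernel so its r=0 sample lands on the source voxel.
                    dose += activity[i, j, k] * K[nx-1-i:2*nx-1-i,
                                                  ny-1-j:2*ny-1-j,
                                                  nz-1-k:2*nz-1-k]
    return dose

def toy_dpk(r):
    # Illustrative kernel shape only (exponential falloff over 4*pi*r^2),
    # capped near r=0 to avoid the point-source singularity.
    return np.exp(-r) / (4 * np.pi * np.maximum(r, 0.5)**2)

A = np.zeros((5, 5, 5))
A[2, 2, 2] = 1.0                       # single point source at the center
D = dose_by_dpk_convolution(A, toy_dpk, voxel=1.0)
```

In practice the direct loop would be replaced by an FFT-based convolution and the kernel by interpolated tabulated DPK values.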

  2. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    PubMed

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified by the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is always symmetric and positive, always yields 1.0 for self-similarity, and can be used directly with Support Vector Machines (SVMs) in classification problems, unlike the normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring, at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than the local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, the hidden Markov model based SAM and Fisher kernels, and the protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed: three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. The LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. Contact: akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics online.
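The code-word extraction underlying the LZW-Kernel can be sketched in a few lines. The overlap score below (Jaccard over dictionary entries) is a simplified illustration of the idea, not the published kernel's normalization, though it shares the symmetry and unit self-similarity properties mentioned above:

```python
def lzw_code_words(seq):
    """Phrases entered into the LZW dictionary while compressing `seq`
    (the variable-length code blocks the kernel is built from)."""
    dictionary = set(seq)           # initial dictionary: single symbols
    w = ""
    for c in seq:
        wc = w + c
        if wc in dictionary:
            w = wc                  # extend the current phrase
        else:
            dictionary.add(wc)      # emit a new variable-length code word
            w = c
    return dictionary

def lzw_similarity(s, t):
    """Toy code-word overlap score (Jaccard): symmetric, and 1.0 for
    self-similarity. Illustrative only; not the published normalization."""
    a, b = lzw_code_words(s), lzw_code_words(t)
    return len(a & b) / len(a | b)

s, t = "ATGATGATG", "TGATGA"
sim = lzw_similarity(s, t)
```

Because the dictionary is built in a single pass over the sequence, the cost is linear in sequence length, which is the source of the speed advantage claimed above.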

  3. A framework for optimal kernel-based manifold embedding of medical image data.

    PubMed

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel function in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review of existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating the best manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data, including brain manifolds and multispectral images, to demonstrate the importance of kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.
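The paper's own selection measures are not reproduced here, but kernel-target alignment is one simple, widely used stand-in for scoring a pool of kernel candidates against labeled data:

```python
import numpy as np

def alignment(K, y):
    """Kernel-target alignment <K, yy^T>_F / (||K||_F * ||yy^T||_F):
    a scalar measure of how well a Gram matrix matches the labels."""
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

def best_kernel(X, y, candidates):
    """Pick the candidate (name -> callable returning a Gram matrix)
    with the highest alignment score."""
    scores = {name: alignment(k(X), y) for name, k in candidates.items()}
    return max(scores, key=scores.get), scores

def rbf(gamma):
    def k(X):
        sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
        return np.exp(-gamma * sq)
    return k

# Two tight, well-separated classes: a narrow RBF should win.
X = np.array([[0.0], [0.1], [3.0], [3.1]])
y = np.array([1, 1, -1, -1])
name, scores = best_kernel(X, y, {"rbf_1": rbf(1.0),
                                  "rbf_0.01": rbf(0.01),
                                  "linear": lambda X: X @ X.T})
```

The same loop extends naturally to scoring kernel parameters (e.g., a grid of gamma values) rather than kernel families.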

  4. Evaluating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Wilton, Donald R.; Champagne, Nathan J.

    2008-01-01

    Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.

  5. Proposed linear energy transfer areal detector for protons using radiochromic film

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayer, Rulon; Lin, Liyong; Fager, Marcus

    2015-04-15

Radiation therapy depends on predictably and reliably delivering dose to tumors while sparing normal tissues. Protons with kinetic energies of a few hundred MeV can selectively deposit dose in deep-seated tumors without an exit dose, unlike x-rays. This better dose distribution is attributed to a phenomenon known as the Bragg peak, which is due to relatively high energy deposition within a given distance, i.e., high linear energy transfer (LET). In addition, the biological response to radiation depends on the dose, dose rate, and localized energy deposition patterns, or LET. At present, the LET can only be measured at a given fixed point, and the LET spatial distribution can only be inferred from calculations. The goal of this study is to develop and test a method to measure LET over extended areas. Traditionally, radiochromic films are used to measure dose distributions but not LET distributions. We report the first use of these films for measuring the spatial distribution of the LET deposited by protons. The radiochromic film sensitivity diminishes for large LET; a mathematical model correlating the film sensitivity and LET is presented to justify relating LET and radiochromic film relative sensitivity. Protons were directed parallel to radiochromic film sandwiched between solid water slabs. This study proposes the scaled-normalized difference (SND) between the treatment planning system (TPS) dose and the measured dose as the metric describing the LET. The SND is correlated with a Monte Carlo (MC) calculation of the LET spatial distribution over a large range of SNDs. A polynomial fit between the SND and the MC LET is generated for protons having a single range of 20 cm with a narrow Bragg peak. Coefficients from these fitted polynomials were applied to measured proton dose distributions with a variety of ranges. An identical procedure was applied to protons deposited from a Spread Out Bragg Peak modulated by 5 cm.
Gamma analysis is a method for comparing the calculated LET with the LET measured using radiochromic film at the pixel level over extended areas. Failure rates using gamma analysis are calculated for areas in the dose distribution using criteria of 25% of the MC LET and 3 mm. The processed dose distributions show 5%-10% failure rates for the narrow 12.5 and 15 cm proton ranges and 10%-15% for proton ranges of 15, 17.5, and 20 cm modulated by 5 cm. Gamma analysis shows that the measured proton energy deposition in radiochromic film, together with the TPS, can be used to determine LET. This modified film dosimetry provides an experimental areal LET measurement that can verify MC calculations, support LET point measurements, possibly enhance biologically based proton treatment planning, and determine the polymerization process within the radiochromic film.
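The gamma analysis used here combines a value-difference criterion with a distance-to-agreement criterion; a point passes when its gamma index is at most 1. A minimal 1D sketch (the study itself applies 25% / 3 mm criteria to 2D LET maps):

```python
import numpy as np

def gamma_index(ref, meas, coords, dose_tol, dist_tol):
    """1D gamma analysis: for each reference point, take the minimum over all
    measured points of sqrt((value diff / dose_tol)^2 + (distance / dist_tol)^2).
    A reference point passes when its gamma is <= 1."""
    gammas = []
    for xr, dr in zip(coords, ref):
        g = min(np.hypot((dm - dr) / dose_tol, (xm - xr) / dist_tol)
                for xm, dm in zip(coords, meas))
        gammas.append(g)
    return np.array(gammas)

x = np.linspace(0.0, 10.0, 101)            # positions in mm
ref = np.exp(-((x - 5.0) / 2.0)**2)        # reference profile
meas = np.exp(-((x - 5.2) / 2.0)**2)       # measurement shifted by 0.2 mm
g = gamma_index(ref, meas, x, dose_tol=0.03, dist_tol=3.0)
pass_rate = (g <= 1).mean()                # small shift: everything passes
```

The brute-force inner minimum is quadratic in the number of points; production implementations restrict the search to a neighborhood of each reference point.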

  6. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    PubMed Central

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large-scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know a priori which kernel to use, since this depends on the unknown underlying trait architecture, and selecting the kernel that gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present, based on constructing composite kernels and on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels, with only modest differences in power versus using the best candidate kernel. PMID:23471868
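A composite kernel of the kind proposed is a non-negative weighted combination of candidate Gram matrices, which is guaranteed to remain a valid (positive semidefinite) kernel. A small sketch with two illustrative candidate similarity matrices standing in for real genotype kernels:

```python
import numpy as np

def composite_kernel(kernels, weights=None):
    """Non-negative weighted sum of candidate Gram matrices; any such
    combination is itself symmetric and PSD, hence a valid kernel."""
    if weights is None:
        weights = np.ones(len(kernels)) / len(kernels)   # equal weights
    return sum(w * K for w, K in zip(weights, kernels))

# Toy genotype matrix for 3 subjects x 3 variants, and two candidate
# kernels (a linear kernel and an RBF stand-in); both symmetric PSD.
G = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
K_lin = G @ G.T
K_rbf = np.exp(-0.5 * ((G[:, None, :] - G[None, :, :])**2).sum(-1))
K = composite_kernel([K_lin, K_rbf])
```

The composite matrix can then be plugged into the KM test statistic in place of any single candidate kernel.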

  7. Combined multi-kernel head computed tomography images optimized for depicting both brain parenchyma and bone.

    PubMed

    Takagi, Satoshi; Nagase, Hiroyuki; Hayashi, Tatsuya; Kita, Tamotsu; Hayashi, Katsumi; Sanada, Shigeru; Koike, Masayuki

    2014-01-01

The hybrid convolution kernel technique for computed tomography (CT) is known to enable the depiction of an image set using different window settings. Our purpose was to decrease the number of artifacts in the hybrid convolution kernel technique for head CT and to determine whether our improved combined multi-kernel head CT images enable diagnosis as a substitute for both brain (low-pass kernel-reconstructed) and bone (high-pass kernel-reconstructed) images. Forty-four patients with nondisplaced skull fractures were included. Our improved multi-kernel images were generated so that pixels of >100 Hounsfield units in both the brain and bone images were composed of the CT values of the bone images, while all other pixels were composed of the CT values of the brain images. Three radiologists compared the improved multi-kernel images with the bone images. The improved multi-kernel images and the brain images were identically displayed on the brain window settings. All three radiologists agreed that the improved multi-kernel images on the bone window settings were sufficient for diagnosing skull fractures in all patients. This improved multi-kernel technique has a simple algorithm and is practical for clinical use. Thus, simplified head CT examinations and fewer images to store can be expected.
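The combination rule described above reduces to a per-pixel selection between the two reconstructions; a minimal sketch with hypothetical HU values:

```python
import numpy as np

def combine_multikernel(brain, bone, threshold=100.0):
    """Combined multi-kernel image: pixels whose CT value exceeds the
    threshold in BOTH reconstructions take the bone-kernel value;
    all other pixels keep the brain-kernel value."""
    use_bone = (brain > threshold) & (bone > threshold)
    return np.where(use_bone, bone, brain)

# Hypothetical 2x2 HU values from the two reconstructions of the same slice.
brain = np.array([[40.0, 30.0], [900.0, 120.0]])   # low-pass (brain) kernel
bone  = np.array([[35.0, 28.0], [1400.0, 90.0]])   # high-pass (bone) kernel
combined = combine_multikernel(brain, bone)
```

Only the pixel bright in both reconstructions (dense bone) takes the sharp bone-kernel value; soft-tissue pixels keep the smoother brain-kernel value.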

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gearhart, A; Carver, D; Stabin, M

Purpose: To validate a radiographic simulation in order to estimate patient dose from clinically used radiography protocols. Methods: A Monte Carlo simulation of a radiographic x-ray beam was created using GEANT4. Initial validation was performed according to a portion of TG 195. Computational NURBS-based phantoms were used to simulate patients of varying ages and sizes. The energy deposited in the phantom is output by the simulation. The exposure in air from a clinically used radiography unit was measured at 100 cm for various tube potentials. Ten million photons were simulated with 1 cubic centimeter of air located 100 cm from the source, and the total absorbed dose was noted. The normalization factor was determined by taking the ratio of the measured dose in air to the simulated dose in air. Dose to individual voxels is calculated using the energy deposition map along with the voxelized and segmented phantom and the normalization factor. Finally, the effective dose is calculated using the ICRP methodology and tissue weighting factors. Results: This radiography simulation allows for the calculation and visualization of the energy deposition map within a voxelized phantom. The ratio of the exposure, measured using an ionization chamber, to the simulated dose in air was determined. Since the simulation output is calibrated to match the exposure of a given clinical radiographic x-ray tube, the dose map may be visualized. This also allows for absorbed dose estimation in specific organs or tissues as well as a whole-body effective dose estimation. Conclusion: This work indicates that our Monte Carlo simulation may be used to estimate the radiation dose from clinical radiographic protocols, allowing for an estimate of radiographic dose from various examinations without the use of traditional methods such as thermoluminescent dosimeters and body phantoms.

  9. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  10. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  11. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  12. Uncertainty of inhalation dose coefficients for representative physical and chemical forms of iodine-131

    NASA Astrophysics Data System (ADS)

    Harvey, Richard Paul, III

Releases of radioactive material have occurred at various Department of Energy (DOE) weapons facilities and facilities associated with the nuclear fuel cycle in the generation of electricity. Many different radionuclides have been released to the environment, with resulting exposure of the population to these various sources of radioactivity. Radioiodine has been released from a number of these facilities and is a potential public health concern due to its physical and biological characteristics. Iodine exists as various isotopes, but our focus is on 131I due to its relatively long half-life, its prevalence in atmospheric releases, and its contribution to offsite dose. The assumption of physical and chemical form is speculated to have a profound impact on the deposition of radioactive material within the respiratory tract. In the case of iodine, it has been shown that more than one type of physical and chemical form may be released to, or exist in, the environment; iodine can exist as a particle or as a gas. The gaseous species can be further segregated based on chemical form: elemental, inorganic, and organic iodides. Chemical compounds in each class are assumed to behave similarly with respect to biochemistry. Studies at Oak Ridge National Laboratory have demonstrated that 131I is released in particulate, as well as elemental, inorganic, and organic chemical form. The internal dose estimate from 131I may be very different depending on the effect that chemical form has on fractional deposition, gas uptake, and clearance in the respiratory tract. There are many sources of uncertainty in the estimation of environmental dose, including the source term, airborne transport of radionuclides, and internal dosimetry. Knowledge of uncertainty in internal dosimetry is essential for estimating dose to members of the public and for determining total uncertainty in dose estimation.
Important calculational steps in any lung model are the regional estimation of deposition fractions and of gas uptake of radionuclides in various regions of the lung. Variability in regional radionuclide deposition within lung compartments may significantly contribute to the overall uncertainty of the lung model. The uncertainty of lung deposition and biological clearance is dependent upon physiological and anatomical parameters of individuals as well as characteristic parameters of the particulate material. These parameters introduce uncertainty into internal dose estimates due to their inherent variability. Anatomical and physiological input parameters are age and gender dependent. This work has determined the uncertainty in internal dose estimates and the sensitive parameters involved in modeling particulate deposition and gas uptake of different physical and chemical forms of 131I, with age and gender dependencies.
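    The propagation of parameter variability into a dose estimate can be illustrated with a small Monte Carlo sketch. This is a hypothetical toy model, not the study's lung model: the functional form, distributions, and parameter ranges below are invented purely for illustration.

```python
import random

random.seed(42)

def dose_coefficient(breathing_rate, deposition_frac, fast_cleared_frac):
    # Toy dose model: dose per unit intake scales with ventilation,
    # regional deposition, and the fraction surviving fast clearance.
    # The functional form and all parameters are illustrative only.
    return breathing_rate * deposition_frac * (1.0 - fast_cleared_frac)

# Propagate parameter variability by sampling each input distribution.
samples = [
    dose_coefficient(
        random.lognormvariate(0.0, 0.2),  # relative breathing rate
        random.uniform(0.1, 0.4),         # regional deposition fraction
        random.uniform(0.2, 0.6),         # fraction cleared quickly
    )
    for _ in range(10000)
]
samples.sort()
mean = sum(samples) / len(samples)
p5, p95 = samples[500], samples[9500]
print(f"mean {mean:.3f}, 90% interval [{p5:.3f}, {p95:.3f}]")
```

    The spread of the resulting interval, relative to the mean, is the kind of uncertainty measure such sensitivity studies report.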

  13. graphkernels: R and Python packages for graph comparison

    PubMed Central

    Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-01-01

    Abstract Summary Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. Availability and implementation The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch Supplementary information Supplementary data are available online at Bioinformatics. PMID:29028902
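    As a rough illustration of the simplest kernel family mentioned above (label histogram kernels), the following sketch computes the linear kernel between the node-label histograms of two toy graphs. It is a from-scratch illustration of the idea, not the graphkernels package API.

```python
from collections import Counter

def label_histogram_kernel(labels_g1, labels_g2):
    # Linear kernel between node-label histograms of two graphs:
    # each argument is the list of discrete node labels of one graph,
    # and the kernel is the dot product of their label-count vectors.
    h1, h2 = Counter(labels_g1), Counter(labels_g2)
    return sum(h1[label] * h2[label] for label in h1)

# Two toy graphs represented only by their node labels.
g1 = ["C", "C", "O", "H"]
g2 = ["C", "O", "O"]
print(label_histogram_kernel(g1, g2))  # 2*1 (C) + 1*2 (O) = 4
```

    Richer kernels (random walk, Weisfeiler-Lehman) refine this by comparing structure as well as label counts, but all ultimately fill a kernel matrix that downstream classifiers and clustering methods consume.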

  14. Aflatoxin variability in pistachios.

    PubMed Central

    Mahoney, N E; Rodriguez, S B

    1996-01-01

    Pistachio fruit components, including hulls (mesocarps and epicarps), seed coats (testas), and kernels (seeds), all contribute to variable aflatoxin content in pistachios. Fresh pistachio kernels were individually inoculated with Aspergillus flavus and incubated 7 or 10 days. Hulled, shelled kernels were either left intact or wounded prior to inoculation. Wounded kernels, with or without the seed coat, were readily colonized by A. flavus and after 10 days of incubation contained 37 times more aflatoxin than similarly treated unwounded kernels. The aflatoxin levels in the individual wounded pistachios were highly variable. Neither fungal colonization nor aflatoxin was detected in intact kernels without seed coats. Intact kernels with seed coats had limited fungal colonization and low aflatoxin concentrations compared with their wounded counterparts. Despite substantial fungal colonization of wounded hulls, aflatoxin was not detected in hulls. Aflatoxin levels were significantly lower in wounded kernels with hulls than in kernels of hulled pistachios. Both the seed coat and a water-soluble extract of hulls suppressed aflatoxin production by A. flavus. PMID:8919781

  15. graphkernels: R and Python packages for graph comparison.

    PubMed

    Sugiyama, Mahito; Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-02-01

    Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary data are available online at Bioinformatics. © The Author(s) 2017. Published by Oxford University Press.

  16. Cerbera odollam toxicity: A review.

    PubMed

    Menezes, Ritesh G; Usman, Muhammad Shariq; Hussain, Syed Ather; Madadin, Mohammed; Siddiqi, Tariq Jamal; Fatima, Huda; Ram, Pradhum; Pasha, Syed Bilal; Senthilkumaran, S; Fatima, Tooba Qadir; Luis, Sushil Allen

    2018-05-09

    Cerbera odollam is a plant species of the Apocynaceae family. It is often dubbed the 'suicide tree' due to its strong cardiotoxic effects, which make it a suitable means to attempt suicide. The plant grows in wet areas in South India, Madagascar, and Southeast Asia; and its common names include Pong-Pong and Othalanga. The poison-rich part of the plant is the kernel, which is present at the core of its fruit. The bioactive toxin in the plant is cerberin, which is a cardiac glycoside of the cardenolide class. Cerberin has a mechanism of action similar to digoxin; hence, Cerbera odollam toxicity manifests similarly to acute digoxin poisoning. Ingestion of its kernel causes nausea, vomiting, hyperkalemia, thrombocytopenia, and ECG abnormalities. Exposure to high doses of Cerbera odollam carries the highest risk of mortality. Initial management includes supportive therapy and administration of atropine followed by temporary pacemaker insertion. Administration of digoxin immune Fab may be considered in severe cases, although efficacy is variable and data are limited to isolated case reports. Copyright © 2018 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  17. Effective biological dose from occupational exposure during nanoparticle synthesis

    NASA Astrophysics Data System (ADS)

    Demou, Evangelia; Tran, Lang; Housiadas, Christos

    2009-02-01

    Nanomaterial and nanotechnology safety require the characterization of occupational exposure levels for completing a risk assessment. However, equally important is the estimation of the effective internal dose via lung deposition, transport and clearance mechanisms. An integrated source-to-biological-dose assessment study is presented using real monitoring data collected during nanoparticle synthesis. Experimental monitoring data of airborne exposure levels during the synthesis of CaSO4 and BiPO4 nanoparticles in a research laboratory are coupled with a human lung transport and deposition model, which solves in an Eulerian framework the general dynamic equation for polydisperse aerosols using particle-specific physical-chemical properties. Subsequently, the lung deposition model is coupled with a mathematical particle clearance model providing the effective biological dose as well as the time course of the biological dose build-up after exposure. The results for the example of BiPO4 demonstrate that even short exposures throughout the day can lead to particle doses of 1.10 × 10^8 #/(kg-bw · 8-h shift), with the majority accumulating in the pulmonary region. Clearance of particles is slow and is not completed within a working shift following a 1-hour exposure. It mostly occurs via macrophage activity in the alveolar region, with small amounts transported to the interstitium and less to the lymph nodes.
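    The observation that clearance is far from complete within a working shift follows directly from first-order clearance kinetics. A minimal sketch, using a hypothetical slow alveolar clearance half-time (the rate constant below is invented for illustration, not taken from the study):

```python
import math

def retained_burden(deposited, k_per_hour, t_hours):
    # First-order clearance: N(t) = N0 * exp(-k * t).
    return deposited * math.exp(-k_per_hour * t_hours)

# Hypothetical alveolar clearance half-time of ~700 h, and a deposited
# burden on the order reported above (1.1e8 particles).
k = math.log(2) / 700.0
after_shift = retained_burden(1.1e8, k, 8.0)
# Over 99% of the burden remains at the end of an 8 h shift.
print(f"{after_shift:.3e}")
```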

  18. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    PubMed

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
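    The core construction, heat-kernel smoothing as a weighted eigenfunction expansion, can be sketched on a graph Laplacian standing in for the Laplace-Beltrami operator (on a surface, the matrix eigenvectors below are replaced by Laplace-Beltrami eigenfunctions). The Laplacian and signal are toy data:

```python
import numpy as np

def heat_kernel_smooth(L, f, t):
    # Smooth f by the heat kernel of Laplacian L:
    #   f_t = sum_i exp(-lambda_i * t) * <f, psi_i> * psi_i,
    # i.e. a weighted eigenfunction expansion with heat-kernel weights.
    lam, psi = np.linalg.eigh(L)   # eigenpairs of the Laplacian
    coeffs = psi.T @ f             # project f onto the eigenfunctions
    return psi @ (np.exp(-lam * t) * coeffs)

# Path-graph Laplacian on 5 nodes and a noisy step signal.
n = 5
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0          # boundary rows of a path graph
f = np.array([0.0, 0.1, 0.9, 1.0, 1.1])
print(heat_kernel_smooth(L, f, t=1.0))
```

    Because the constant eigenvector has eigenvalue zero, the mean of the signal is preserved while higher-frequency components decay as exp(-lambda * t), which is the diffusion behaviour the paper's framework generalizes.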

  19. Development of an Aerosol Surface Inoculation Method for Bacillus Spores ▿

    PubMed Central

    Lee, Sang Don; Ryan, Shawn P.; Snyder, Emily Gibb

    2011-01-01

    A method was developed to deposit Bacillus subtilis spores via aerosolization onto various surface materials for biological agent decontamination and detection studies. This new method uses an apparatus coupled with a metered dose inhaler to reproducibly deposit spores onto various surfaces. A metered dose inhaler was loaded with Bacillus subtilis spores, a surrogate for Bacillus anthracis. Five different material surfaces (aluminum, galvanized steel, wood, carpet, and painted wallboard paper) were tested using this spore deposition method. This aerosolization method deposited spores at a concentration of more than 10^7 CFU per coupon (18-mm diameter) with less than a 50% coefficient of variation, showing that the aerosolization method developed in this study can deposit reproducible numbers of spores onto various surface coupons. Scanning electron microscopy was used to probe the spore deposition patterns on test coupons. The deposition patterns observed following aerosol impaction were compared to those of liquid inoculation. A physical difference in the spore deposition patterns was observed to result from the two different methods. The spore deposition method developed in this study will help prepare spore coupons via aerosolization quickly and reproducibly for benchtop decontamination and detection studies. PMID:21193670

  20. Development of an aerosol surface inoculation method for bacillus spores.

    PubMed

    Lee, Sang Don; Ryan, Shawn P; Snyder, Emily Gibb

    2011-03-01

    A method was developed to deposit Bacillus subtilis spores via aerosolization onto various surface materials for biological agent decontamination and detection studies. This new method uses an apparatus coupled with a metered dose inhaler to reproducibly deposit spores onto various surfaces. A metered dose inhaler was loaded with Bacillus subtilis spores, a surrogate for Bacillus anthracis. Five different material surfaces (aluminum, galvanized steel, wood, carpet, and painted wallboard paper) were tested using this spore deposition method. This aerosolization method deposited spores at a concentration of more than 10(7) CFU per coupon (18-mm diameter) with less than a 50% coefficient of variation, showing that the aerosolization method developed in this study can deposit reproducible numbers of spores onto various surface coupons. Scanning electron microscopy was used to probe the spore deposition patterns on test coupons. The deposition patterns observed following aerosol impaction were compared to those of liquid inoculation. A physical difference in the spore deposition patterns was observed to result from the two different methods. The spore deposition method developed in this study will help prepare spore coupons via aerosolization quickly and reproducibly for benchtop decontamination and detection studies.

  1. Deposition of conductive TiN shells on SiO2 nanoparticles with a fluidized bed ALD reactor

    NASA Astrophysics Data System (ADS)

    Didden, Arjen; Hillebrand, Philipp; Wollgarten, Markus; Dam, Bernard; van de Krol, Roel

    2016-02-01

    Conductive TiN shells have been deposited on SiO2 nanoparticles (10-20 nm primary particle size) with fluidized bed atomic layer deposition using TDMAT and NH3 as precursors. Analysis of the powders confirms that shell growth saturates at approximately 0.4 nm/cycle at TDMAT doses of >1.2 mmol/g of powder. TEM and XPS analysis showed that all particles were coated with homogeneous shells containing titanium. Due to the large specific surface area of the nanoparticles, the TiN shells rapidly oxidize upon exposure to air. Electrical measurements show that the partially oxidized shells are conducting, with apparent resistivity of approximately 11 kΩ cm. The resistivity of the powders is strongly influenced by the NH3 dose, with a smaller dose giving an order-of-magnitude higher resistivity.

  2. Infrared-Assisted Extraction and HPLC-Analysis of Prunus armeniaca L. Pomace and Detoxified-Kernel and their Antidiabetic Effects.

    PubMed

    Raafat, Karim; El-Darra, Nada; Saleh, Fatima A; Rajha, Hiba N; Maroun, Richard G; Louka, Nicolas

    2018-03-01

    Prunus armeniaca L. (P. armeniaca) is one of the medicinal plants with a high safety profile. The aim of this work was to perform an infrared-assisted extraction (IR-AE) of P. armeniaca fruit (pomace) and kernel, and to analyse them using a reverse-phase high-performance liquid chromatography (RP-HPLC) aided method. IR-AE is a novel technique aimed at increasing extraction efficiency. The antidiabetic potentials of the P. armeniaca pomace (AP) and the detoxified P. armeniaca kernel extract (DKAP) were monitored, exploring their possible hypoglycemic mechanisms. Acute (6 h), subchronic (8 days) and long-term (8 weeks) assessments of Diabetes mellitus (DM) using glucometer and glycated hemoglobin (HbA1c) methods were applied. Serum insulin levels, the inhibitory effects on alpha-glucosidase, serum catalase (CAT) and lipid peroxidation (LPO) levels were also monitored. AP was shown to be rich in polyphenolics like trans-lutein (14.1%), trans-zeaxanthin (10.5%), trans-β-cryptoxanthin (11.6%), 13-cis-β-carotene (6.5%), trans-9-cis-β-carotene (18.4%), and β-carotene (21.5%). Prunus armeniaca kernel extract before detoxification (KAP) was found to be rich in amygdalin (16.1%), which caused a high mortality rate (50.1%), while after detoxification (amygdalin, 1.4%) a lower mortality rate (9.1%) was found. AP showed significant (p ≤ 0.05, n = 7/group) antidiabetic activity, more prominent than that of DKAP, acutely, subchronically and in the long term. IR-AEs displayed more efficient acute and subchronic blood glucose level (BGL) reduction than a conventional extraction method, which might be attributed to the superiority of IR-AE in extracting active ingredients. AP showed a more significant and dose-dependent increase in serum insulin, CAT levels and body weight than DKAP. Alpha-glucosidase and LPO levels were also inhibited more significantly in the AP groups.
In comparison to conventional methods, IR-AE appeared to be an efficient and time-conserving novel extraction method. The antidiabetic potentials of pomace and detoxified kernels of P. armeniaca were probably mediated via the attenuation of glucose-provoked oxidative stress, the inhibition of alpha-glucosidase and the marked insulin-secretagogue effect. Copyright © 2017 John Wiley & Sons, Ltd.

  3. SU-E-T-769: T-Test Based Prior Error Estimate and Stopping Criterion for Monte Carlo Dose Calculation in Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Schuemann, J

    2015-06-15

    Purpose: The Monte Carlo (MC) method is a gold standard for dose calculation in radiotherapy. However, it is not a priori clear how many particles need to be simulated to achieve a given dose accuracy. Prior error estimate and stopping criterion are not well established for MC. This work aims to fill this gap. Methods: Due to the statistical nature of MC, our approach is based on the one-sample t-test. We design the prior error estimate method based on the t-test, and then use this t-test based error estimate to develop a simulation stopping criterion. The three major components are as follows. First, the source particles are randomized in energy, space and angle, so that the dose deposition from a particle to the voxel is independent and identically distributed (i.i.d.). Second, a sample under consideration in the t-test is the mean value of dose deposition to the voxel by a sufficiently large number of source particles; according to the central limit theorem, the sample, as the mean value of i.i.d. variables, is normally distributed with expectation equal to the true deposited dose. Third, the t-test is performed with the null hypothesis that the difference between the sample expectation (the same as the true deposited dose) and the on-the-fly calculated mean sample dose from MC is larger than a given error threshold; in addition, users have the freedom to specify the confidence probability and the region of interest in the t-test based stopping criterion. Results: The method is validated for proton dose calculation. The difference between the MC result based on the t-test prior error estimate and the statistical result obtained by repeating numerous MC simulations is within 1%. Conclusion: The t-test based prior error estimate and stopping criterion are developed for MC and validated for proton dose calculation. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
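    A sketch of such a stopping rule, using the large-sample normal approximation of the one-sample t-test (stop when the confidence-interval half-width of the mean batch dose falls below a user error threshold). The batch data are invented toy numbers, not simulation output:

```python
import math
import random

def mc_converged(batch_means, threshold, z=1.96):
    # Stop when the ~95% confidence half-width of the mean dose,
    # z * s / sqrt(n), falls below the user's error threshold.
    # (Large-sample normal approximation of the one-sample t-test.)
    n = len(batch_means)
    mean = sum(batch_means) / n
    var = sum((x - mean) ** 2 for x in batch_means) / (n - 1)
    return z * math.sqrt(var / n) < threshold

random.seed(0)
toy_batches = [random.gauss(2.0, 0.05) for _ in range(50)]  # Gy, invented
print(mc_converged(toy_batches, threshold=0.05))
print(mc_converged(toy_batches, threshold=0.001))
```

    In a simulation loop, `mc_converged` would be checked per voxel (or over a region of interest) after each batch of source particles, and the run would stop once all monitored voxels pass.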

  4. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…
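    The continuization step can be sketched as a Gaussian-kernel mixture CDF over the discrete score points. The bandwidth below is illustrative, and the variance-preserving rescaling used in the full kernel equating method is omitted for brevity:

```python
import math

def continuized_cdf(scores, probs, x, h=0.6):
    # Continuize a discrete score distribution with a Gaussian kernel:
    #   F(x) = sum_j p_j * Phi((x - x_j) / h).
    # (Full kernel equating adds a rescaling that preserves the
    # discrete distribution's variance; omitted here for brevity.)
    def phi(z):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sum(p * phi((x - s) / h) for s, p in zip(scores, probs))

scores = [0, 1, 2, 3]               # discrete test scores
probs = [0.1, 0.4, 0.4, 0.1]        # their probabilities
print(continuized_cdf(scores, probs, 1.5))  # ~0.5 by symmetry
```

    Equipercentile equating then matches percentiles of two such continuized CDFs; swapping the Gaussian for a logistic or uniform kernel changes only the `phi` function above.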

  5. 7 CFR 810.204 - Grades and grade requirements for Six-rowed Malting barley and Six-rowed Blue Malting barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...— Damaged kernels 1 (percent) Foreign material (percent) Other grains (percent) Skinned and broken kernels....0 10.0 15.0 1 Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered against sound barley. Notes: Malting barley shall not be infested in accordance with...

  6. Silica nanoparticles are less toxic to human lung cells when deposited at the air–liquid interface compared to conventional submerged exposure

    PubMed Central

    Saathoff, Harald; Leisner, Thomas; Al-Rawi, Marco; Simon, Michael; Seemann, Gunnar; Dössel, Olaf; Mülhopt, Sonja; Paur, Hanns-Rudolf; Fritsch-Decker, Susanne

    2014-01-01

    Summary Background: Investigations of adverse biological effects of nanoparticles (NPs) in the lung by in vitro studies are usually performed under submerged conditions where NPs are suspended in cell culture media. However, the behaviour of nanoparticles, such as agglomeration and sedimentation, in such complex suspensions is difficult to control and hence the deposited cellular dose often remains unknown. Moreover, the cellular responses to NPs under submerged culture conditions might differ from those observed at physiological settings at the air–liquid interface. Results: In order to avoid problems because of an altered behaviour of the nanoparticles in cell culture medium and to mimic a more realistic situation relevant for inhalation, human A549 lung epithelial cells were exposed to aerosols at the air–liquid interface (ALI) by using the ALI deposition apparatus (ALIDA). The application of an electrostatic field allowed for particle deposition efficiencies that were higher by a factor of more than 20 compared to the unmodified VITROCELL deposition system. We studied two different amorphous silica nanoparticles (particles produced by flame synthesis and particles produced in suspension by the Stöber method). Aerosols with well-defined particle sizes and concentrations were generated by using a commercial electrospray generator or an atomizer. Only the electrospray method allowed for the generation of an aerosol containing monodisperse NPs. However, the deposited mass and surface dose of the particles were too low to induce cellular responses. Therefore, we generated the aerosol with an atomizer, which supplied agglomerates and thus allowed particle deposition with three orders of magnitude higher mass and surface doses on lung cells, which induced significant biological effects. The deposited dose was estimated and independently validated by measurements using either transmission electron microscopy or, in case of labelled NPs, by fluorescence analyses. 
Surprisingly, cells exposed at the ALI were less sensitive to silica NPs as evidenced by reduced cytotoxicity and inflammatory responses. Conclusion: Amorphous silica NPs induced qualitatively similar cellular responses under submerged conditions and at the ALI. However, submerged exposure to NPs triggers stronger effects at much lower cellular doses. Hence, more studies are warranted to decipher whether cells at the ALI are in general less vulnerable to NPs or specific NPs show different activities dependent on the exposure method. PMID:25247141

  7. Toward a web-based real-time radiation treatment planning system in a cloud computing environment.

    PubMed

    Na, Yong Hum; Suh, Tae-Suk; Kapp, Daniel S; Xing, Lei

    2013-09-21

    To exploit the potential dosimetric advantages of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), an in-depth approach is required to provide efficient computing methods. This needs to incorporate clinically related organ specific constraints, Monte Carlo (MC) dose calculations, and large-scale plan optimization. This paper describes our first steps toward a web-based real-time radiation treatment planning system in a cloud computing environment (CCE). The Amazon Elastic Compute Cloud (EC2) with a master node (named m2.xlarge containing 17.1 GB of memory, two virtual cores with 3.25 EC2 Compute Units each, 420 GB of instance storage, 64-bit platform) is used as the backbone of cloud computing for dose calculation and plan optimization. The master node is able to scale the workers on an 'on-demand' basis. MC dose calculation is employed to generate accurate beamlet dose kernels by parallel tasks. The intensity modulation optimization uses total-variation regularization (TVR) and generates piecewise constant fluence maps for each initial beam direction in a distributed manner over the CCE. The optimized fluence maps are segmented into deliverable apertures. The shape of each aperture is iteratively rectified to be a sequence of arcs using the manufacturer's constraints. The output plan file from the EC2 is sent to the simple storage service. Three de-identified clinical cancer treatment plans have been studied for evaluating the performance of the new planning platform with 6 MV flattening filter free beams (40 × 40 cm(2)) from the Varian TrueBeam(TM) STx linear accelerator. A CCE leads to speed-ups of up to 14-fold for both dose kernel calculations and plan optimizations in the head and neck, lung, and prostate cancer cases considered in this study. The proposed system relies on a CCE that is able to provide an infrastructure for parallel and distributed computing. 
The resultant plans from the cloud computing are identical to PC-based IMRT and VMAT plans, confirming the reliability of the cloud computing platform. This cloud computing infrastructure has been established for radiation treatment planning. It substantially improves the speed of inverse planning and makes future on-treatment adaptive re-planning possible.
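    The role of total-variation regularization (TVR) in producing piecewise constant fluence maps can be illustrated with a tiny 1D sketch. This minimizes a smoothed TV objective by plain gradient descent on invented data; it is an illustration of the TVR idea, not the paper's optimizer:

```python
import numpy as np

def tv_denoise(y, lam=0.2, eps=1e-3, steps=5000, lr=0.05):
    # Gradient descent on 0.5*||x - y||^2 + lam * sum sqrt(diff^2 + eps),
    # a smoothed total-variation objective that drives x toward a
    # piecewise-constant profile (the step-like behaviour TVR exploits
    # to obtain deliverable fluence maps).
    x = y.astype(float).copy()
    for _ in range(steps):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)  # derivative of sqrt(d^2 + eps)
        grad = x - y
        # The TV term pulls each pair of neighbouring nodes together.
        grad[:-1] -= lam * w
        grad[1:] += lam * w
        x -= lr * grad
    return x

noisy = np.array([0.0, 0.1, -0.1, 1.1, 0.9, 1.0])  # noisy two-level profile
print(np.round(tv_denoise(noisy), 2))
```

    The output flattens each plateau while keeping the large jump between the two fluence levels, which is why TVR-optimized maps segment cleanly into apertures.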

  8. Toward a web-based real-time radiation treatment planning system in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Hum Na, Yong; Suh, Tae-Suk; Kapp, Daniel S.; Xing, Lei

    2013-09-01

    To exploit the potential dosimetric advantages of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), an in-depth approach is required to provide efficient computing methods. This needs to incorporate clinically related organ specific constraints, Monte Carlo (MC) dose calculations, and large-scale plan optimization. This paper describes our first steps toward a web-based real-time radiation treatment planning system in a cloud computing environment (CCE). The Amazon Elastic Compute Cloud (EC2) with a master node (named m2.xlarge containing 17.1 GB of memory, two virtual cores with 3.25 EC2 Compute Units each, 420 GB of instance storage, 64-bit platform) is used as the backbone of cloud computing for dose calculation and plan optimization. The master node is able to scale the workers on an ‘on-demand’ basis. MC dose calculation is employed to generate accurate beamlet dose kernels by parallel tasks. The intensity modulation optimization uses total-variation regularization (TVR) and generates piecewise constant fluence maps for each initial beam direction in a distributed manner over the CCE. The optimized fluence maps are segmented into deliverable apertures. The shape of each aperture is iteratively rectified to be a sequence of arcs using the manufacturer’s constraints. The output plan file from the EC2 is sent to the simple storage service. Three de-identified clinical cancer treatment plans have been studied for evaluating the performance of the new planning platform with 6 MV flattening filter free beams (40 × 40 cm2) from the Varian TrueBeamTM STx linear accelerator. A CCE leads to speed-ups of up to 14-fold for both dose kernel calculations and plan optimizations in the head and neck, lung, and prostate cancer cases considered in this study. The proposed system relies on a CCE that is able to provide an infrastructure for parallel and distributed computing. 
The resultant plans from the cloud computing are identical to PC-based IMRT and VMAT plans, confirming the reliability of the cloud computing platform. This cloud computing infrastructure has been established for radiation treatment planning. It substantially improves the speed of inverse planning and makes future on-treatment adaptive re-planning possible.

  9. 7 CFR 51.1413 - Damage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...

  10. 7 CFR 51.1413 - Damage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...

  11. 7 CFR 810.205 - Grades and grade requirements for Two-rowed Malting barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... (percent) Maximum limits of— Wild oats (percent) Foreign material (percent) Skinned and broken kernels... Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered...

  12. Environmental consequences of postulated plutonium releases from Atomics International's Nuclear Materials Development Facility (NMDF), Santa Susana, California, as a result of severe natural phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jamison, J.D.; Watson, E.C.

    1982-02-01

    Potential environmental consequences in terms of radiation dose to people are presented for postulated plutonium releases caused by severe natural phenomena at Atomics International's Nuclear Materials Development Facility (NMDF) at the Santa Susana site, California. The severe natural phenomena considered are earthquakes, tornadoes, and high straight-line winds. Plutonium deposition values are given for significant locations around the site. All important potential exposure pathways are examined. The most likely 50-year committed dose equivalents are given for the maximum-exposed individual and the population within a 50-mile radius of the plant. The maximum plutonium deposition values likely to occur offsite are also given. The most likely calculated 50-year collective committed dose equivalents are all much lower than the collective dose equivalent expected from 50 years of exposure to natural background radiation and medical x-rays. The most likely maximum residual plutonium contamination estimated to be deposited offsite following the earthquake and the 150-mph and 170-mph tornadoes is above the Environmental Protection Agency's (EPA) proposed guideline for plutonium in the general environment of 0.2 μCi/m². The deposition values following the 110-mph and the 130-mph tornadoes are below the EPA proposed guideline.

  13. Environmental consequences of postulated plutonium releases from General Electric Company Vallecitos Nuclear Center, Vallecitos, California, as a result of severe natural phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jamison, J.D.; Watson, E.C.

    1980-11-01

    Potential environmental consequences in terms of radiation dose to people are presented for postulated plutonium releases caused by severe natural phenomena at the General Electric Company Vallecitos Nuclear Center, Vallecitos, California. The severe natural phenomena considered are earthquakes, tornadoes, and high straight-line winds. Maximum plutonium deposition values are given for significant locations around the site. All important potential exposure pathways are examined. The most likely 50-year committed dose equivalents are given for the maximum-exposed individual and the population within a 50-mile radius of the plant. The maximum plutonium deposition values likely to occur offsite are also given. The most likely calculated 50-year collective committed dose equivalents are all much lower than the collective dose equivalent expected from 50 years of exposure to natural background radiation and medical x-rays. The most likely maximum residual plutonium contamination estimated to be deposited offsite following the earthquakes and the 180-mph and 230-mph tornadoes is above the Environmental Protection Agency's (EPA) proposed guideline for plutonium in the general environment of 0.2 μCi/m². The deposition values following the 135-mph tornado are below the EPA proposed guidelines.

  14. Nasal deposition of ciclesonide nasal aerosol and mometasone aqueous nasal spray in allergic rhinitis patients.

    PubMed

    Emanuel, Ivor A; Blaiss, Michael S; Meltzer, Eli O; Evans, Philip; Connor, Alyson

    2014-01-01

    Sensory attributes of intranasal corticosteroids, such as rundown to the back of the throat, may influence patient treatment preferences. This study compares the nasal deposition and nasal retention of a radiolabeled solution of ciclesonide nasal aerosol (CIC-hydrofluoroalkane [HFA]) with a radiolabeled suspension of mometasone furoate monohydrate aqueous nasal spray (MFNS) in subjects with either perennial allergic rhinitis (AR) or seasonal AR. In this open-label, single-dose, randomized, crossover scintigraphy study, 14 subjects with symptomatic AR received a single dose of radiolabeled 74-μg CIC-HFA (37 μg/spray, 1 spray/each nostril) via a nasal metered-dose inhaler or a single dose of radiolabeled 200-μg MFNS (50 μg/spray, 2 sprays/each nostril), with a minimum 5-day washout period between treatments. Initial deposition (2 minutes postdose) of radiolabeled CIC-HFA and MFNS in the nasal cavity, nasopharynx, and on nasal wipes, and retention of radioactivity in the nasal cavity and nasal run-out on nasal wipes at 2, 4, 6, 8, and 10 minutes postdose were quantified with scintigraphy. At 2 and 10 minutes postdose, deposition of radiolabeled CIC-HFA was significantly higher in the nasal cavity versus radiolabeled MFNS (99.42% versus 86.50% at 2 minutes, p = 0.0046; and 81.10% versus 54.31% at 10 minutes, p < 0.0001, respectively; p values unadjusted for multiplicity). Deposition of radioactivity on nasal wipes was significantly higher with MFNS versus CIC-HFA at all five time points, and posterior losses of radiolabeled formulation were significantly higher with MFNS at 6, 8, and 10 minutes postdose. In this scintigraphic study, significantly higher nasal deposition and retention of radiolabeled aerosol CIC-HFA were observed versus radiolabeled aqueous MFNS in subjects with AR.

  15. Detection of ochratoxin A contamination in stored wheat using near-infrared hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Senthilkumar, T.; Jayas, D. S.; White, N. D. G.; Fields, P. G.; Gräfenhan, T.

    2017-03-01

    A near-infrared (NIR) hyperspectral imaging system was used to detect five concentration levels of ochratoxin A (OTA) in contaminated wheat kernels. Wheat kernels artificially inoculated with two different OTA-producing Penicillium verrucosum strains, two different non-toxigenic P. verrucosum strains, and sterile control wheat kernels were subjected to NIR hyperspectral imaging. The acquired three-dimensional data were reshaped into readable two-dimensional data. Principal Component Analysis (PCA) was applied to the two-dimensional data to identify the key wavelengths that were most significant for detecting OTA contamination in wheat. Statistical and histogram features extracted at the key wavelengths were used in linear, quadratic, and Mahalanobis statistical discriminant models to differentiate between sterile control samples, five concentration levels of OTA contamination in wheat kernels, and five infection levels of non-OTA-producing P. verrucosum inoculated wheat kernels. The classification models differentiated sterile control samples from OTA-contaminated wheat kernels and non-OTA-producing P. verrucosum inoculated wheat kernels with 100% accuracy. The classification models also differentiated between the five concentration levels of OTA-contaminated wheat kernels and between the five infection levels of non-OTA-producing P. verrucosum inoculated wheat kernels with a correct classification rate of more than 98%. The non-OTA-producing P. verrucosum inoculated wheat kernels and OTA-contaminated wheat kernels subjected to hyperspectral imaging showed distinct spectral patterns.
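    The processing chain described above (unfold the 3-D hypercube into a pixels-by-bands matrix, apply PCA, keep the wavelengths with the largest loadings) can be sketched on synthetic data; the band count, noise level, and the two signal-carrying bands below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic NIR hypercube: 8 x 8 spatial pixels, 20 spectral bands.
# Bands 5 and 12 carry a simulated "contamination" signal (an assumption).
cube = rng.normal(0.0, 0.01, size=(8, 8, 20))
signal = rng.normal(0.0, 1.0, size=(8, 8))
cube[:, :, 5] += 0.5 * signal
cube[:, :, 12] += 0.4 * signal

# Reshape the three-dimensional cube into a two-dimensional (pixels x bands) matrix.
X = cube.reshape(-1, cube.shape[2])

# PCA via SVD of the mean-centred matrix.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)

# Key wavelengths = bands with the largest absolute loading on the first PC.
key_bands = np.argsort(np.abs(Vt[0]))[::-1][:2]
print(sorted(key_bands.tolist()))
```

    With this synthetic signal, the two selected bands are exactly the two that were spiked, which is the sense in which PCA loadings flag "key wavelengths".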

  16. Application of kernel method in fluorescence molecular tomography

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Baikejiang, Reheman; Li, Changqing

    2017-02-01

    Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem, and anatomical guidance can make FMT reconstruction more efficient and robust. We have developed a kernel method that introduces such anatomical guidance into FMT robustly and easily. The kernel method comes from machine learning for pattern analysis and is an efficient way to represent anatomical features. For finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. The fluorophore concentration at each node is then represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we obtain a new system matrix by multiplying the sensitivity matrix with the kernel matrix; the kernel coefficient vector thus becomes the unknown to be reconstructed through a standard iterative reconstruction process. In this way, the FMT reconstruction problem is converted into a kernel coefficient reconstruction problem, and the desired fluorophore concentration at each node can be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance is obtained directly from the anatomical image and is included in the forward modeling; one advantage is that we do not need to segment the anatomical image into targets and background.
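    A toy version of the kernelised forward model y = (A K) α with x = K α can be written as follows; the scalar anatomical feature, the Gaussian kernel, the matrix sizes, and the plain least-squares solve are all illustrative stand-ins for the paper's finite-element setting.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12                                   # finite-element nodes (toy size)

# Scalar anatomical feature per node (e.g., micro-CT intensity; assumed).
feat = np.linspace(0.0, 1.0, n)

# Gaussian kernel matrix built from the anatomical-image features.
sigma = 0.1
K = np.exp(-(feat[:, None] - feat[None, :]) ** 2 / (2 * sigma ** 2))

x_true = 1.0 + np.sin(2 * np.pi * feat)  # fluorophore concentration (toy)
A = rng.normal(size=(n, n))              # sensitivity matrix (stand-in)
y = A @ x_true                           # simulated fluorescence measurements

# Kernelised forward model: y = (A K) alpha. Reconstruct alpha, then x = K alpha.
AK = A @ K
alpha, *_ = np.linalg.lstsq(AK, y, rcond=None)
x_rec = K @ alpha

print(float(np.max(np.abs(x_rec - x_true))))
```

    The reconstruction is expressed entirely through the kernel coefficients α, so any prior encoded in K (here, similarity of anatomical intensities) shapes the recovered concentration.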

  17. Fine and ultrafine particle doses in the respiratory tract from digital printing operations.

    PubMed

    Voliotis, Aristeidis; Karali, Irene; Kouras, Athanasios; Samara, Constantini

    2017-01-01

    In this study, we report for the first time particle number doses in different parts of the human respiratory tract and real-time deposition rates for particles in the 10 nm to 10 μm size range emitted by digital printing operations. Particle number concentrations (PNCs) and size distributions were measured in a typical small-sized printing house using a NanoScan scanning mobility particle sizer and an optical particle sizer. Particle doses in the human lung were estimated by applying a multiple-path particle dosimetry model under two different breathing scenarios. PNC was dominated by the ultrafine particle fraction (UFPs, i.e., particles smaller than 100 nm), exhibiting almost nine times higher levels in comparison to the background values. The average deposition rate for each scenario in the whole lung was estimated at 2.0 × 10(7) and 2.9 × 10(7) particles min(-1), while the respective highest particle doses in the tracheobronchial tree (2.0 × 10(9) and 2.9 × 10(9) particles) were found for a particle diameter of 50 nm. The majority of particles appeared to deposit in the acinar region, and most of them were in the UFP size range. For both scenarios, the maximum deposition density (9.5 × 10(7) and 1.5 × 10(8) particles cm(-2)) was observed at the lobar bronchi. Overall, the differences in the estimated particle doses between the two scenarios were 30-40% for both size ranges.

  18. Low-dose patterning of platinum nanoclusters on carbon nanotubes by focused-electron-beam-induced deposition as studied by TEM

    PubMed Central

    Bittencourt, Carla; Bals, Sara; Van Tendeloo, Gustaaf

    2013-01-01

    Focused-electron-beam-induced deposition (FEBID) is used as a direct-write approach to decorate ultrasmall Pt nanoclusters on carbon nanotubes at selected sites in a straightforward maskless manner. The as-deposited nanostructures are studied by transmission electron microscopy (TEM) in 2D and 3D, demonstrating that the Pt nanoclusters are well-dispersed, covering the selected areas of the CNT surface completely. The ability of FEBID to graft nanoclusters on multiple sides, through an electron-transparent target within one step, is unique as a physical deposition method. Using high-resolution TEM we have shown that the CNT structure can be well preserved thanks to the low dose used in FEBID. By tuning the electron-beam parameters, the density and distribution of the nanoclusters can be controlled. The purity of as-deposited nanoclusters can be improved by low-energy electron irradiation at room temperature. PMID:23399584

  19. Antibody localization in the glomerular basement membrane may precede in situ immune deposit formation in rat glomeruli.

    PubMed

    Agodoa, L Y; Gauthier, V J; Mannik, M

    1985-02-01

    The administration of cationized antibodies, specific to human serum albumin, into the renal artery of rats caused transient presence of IgG in glomeruli by immunofluorescence microscopy. Intravenous infusion of appropriate doses of antigen after the injection of cationized antibodies resulted in immune deposit formation in glomeruli that persisted through 96 hr. By electron microscopy, these deposits were located in the subepithelial area. The injection of large doses of antigen produced immune deposits which were present in glomeruli for only a few hours, presumably due to formation of only small-latticed immune complexes. The presented data indicate that cationic antibodies bound to the fixed negative charges of the glomerular basement membrane can interact with circulating antigen to form immune deposits in glomeruli. This mechanism may be important because anionic antigens have been shown to induce the synthesis of cationic antibodies.

  20. Credit scoring analysis using kernel discriminant

    NASA Astrophysics Data System (ADS)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    A credit scoring model is an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, such as the normal, Epanechnikov, biweight, and triweight kernels. The models' accuracies were compared with one another using data from a financial institution in Indonesia. The results show that the kernel discriminant can be an alternative method for determining who is eligible for a credit loan. For the data we use, the normal kernel is the relevant choice for credit scoring with the kernel discriminant model; sensitivity and specificity reach 0.5556 and 0.5488, respectively.
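    A minimal sketch of such a non-parametric discriminant, assuming equal priors and one-dimensional inputs: estimate each class's density with a kernel density estimate (Epanechnikov here; the normal kernel drops in the same way) and assign a new applicant to the class with the larger estimated density. The score values and bandwidth below are invented for illustration.

```python
import numpy as np

def kde(x, sample, h, kernel="epanechnikov"):
    """1-D kernel density estimate at points x from a training sample."""
    u = (x[:, None] - sample[None, :]) / h
    if kernel == "normal":
        k = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    else:  # Epanechnikov kernel, compactly supported on [-1, 1]
        k = np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)
    return k.mean(axis=1) / h

rng = np.random.default_rng(2)
good = rng.normal(600, 40, 200)   # hypothetical scores of "good" payers
bad = rng.normal(520, 40, 200)    # hypothetical scores of "bad" payers

# Equal priors: assign each new applicant to the class with the larger density.
x_new = np.array([640.0, 500.0])
p_good = kde(x_new, good, h=25)
p_bad = kde(x_new, bad, h=25)
pred = np.where(p_good > p_bad, "good", "bad")
print(pred.tolist())
```

    Swapping the kernel argument changes only the density estimate, which is how the normal, Epanechnikov, biweight, and triweight variants in the paper differ.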

  1. Unified Heat Kernel Regression for Diffusion, Kernel Smoothing and Wavelets on Manifolds and Its Application to Mandible Growth Modeling in CT Images

    PubMed Central

    Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.

    2014-01-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435
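    In the standard notation for heat kernel smoothing (Laplace-Beltrami eigenvalues λ_j ≥ 0 with eigenfunctions ψ_j; this is the textbook form consistent with the abstract, not an excerpt from the paper), the construction reads:

```latex
K_t(p,q) \;=\; \sum_{j=0}^{\infty} e^{-\lambda_j t}\,\psi_j(p)\,\psi_j(q),
\qquad
\widehat{f}(p) \;=\; (K_t * f)(p) \;=\; \sum_{j=0}^{\infty} e^{-\lambda_j t}\,\beta_j\,\psi_j(p),
\quad
\beta_j = \langle f, \psi_j \rangle .
```

    Since the expansion satisfies the heat equation with initial condition f, it is simultaneously the analytic solution of isotropic diffusion, a kernel smoother with bandwidth t, and the building block of diffusion wavelets, which is the equivalence the abstract exploits.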

  2. Sub-second pencil beam dose calculation on GPU for adaptive proton therapy

    NASA Astrophysics Data System (ADS)

    da Silva, Joakim; Ansorge, Richard; Jena, Rajesh

    2015-06-01

    Although proton therapy delivered using scanned pencil beams has the potential to produce better dose conformity than conventional radiotherapy, the created dose distributions are more sensitive to anatomical changes and patient motion. Therefore, the introduction of adaptive treatment techniques where the dose can be monitored as it is being delivered is highly desirable. We present a GPU-based dose calculation engine relying on the widely used pencil beam algorithm, developed for on-line dose calculation. The calculation engine was implemented from scratch, with each step of the algorithm parallelized and adapted to run efficiently on the GPU architecture. To ensure fast calculation, it employs several application-specific modifications and simplifications, and a fast scatter-based implementation of the computationally expensive kernel superposition step. The calculation time for a skull base treatment plan using two beam directions was 0.22 s on an Nvidia Tesla K40 GPU, whereas a test case of a cubic target in water from the literature took 0.14 s to calculate. The accuracy of the patient dose distributions was assessed by calculating the γ-index with respect to a gold standard Monte Carlo simulation. The passing rates were 99.2% and 96.7%, respectively, for the 3%/3 mm and 2%/2 mm criteria, matching those produced by a clinical treatment planning system.
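    The kernel-superposition step at the heart of such engines can be illustrated with a crude CPU-side numpy sketch: each pencil beam "scatters" its energy onto the dose grid as a depth-dose curve times a depth-dependent lateral kernel. The depth-dose shape, lateral-spread model, and beam weights below are invented for illustration; the paper's GPU engine is far more elaborate.

```python
import numpy as np

nx, nz = 64, 50
x = np.arange(nx, dtype=float)
depth = np.arange(nz, dtype=float)

# Assumed depth-dose curve: a crude proton-like shape peaking at z = 40.
dd = np.exp(-((depth - 40.0) ** 2) / (2 * 3.0 ** 2)) + 0.3

# Lateral spread grows with depth (a stand-in for multiple Coulomb scattering).
sigma = 1.0 + 0.05 * depth

beams = {20.0: 1.0, 30.0: 0.8, 40.0: 1.0}   # beam position -> weight (assumed)

dose = np.zeros((nz, nx))
for xb, w in beams.items():
    # Scatter each pencil beam's energy onto the grid, depth slice by depth slice.
    lateral = np.exp(-((x[None, :] - xb) ** 2) / (2 * sigma[:, None] ** 2))
    lateral /= lateral.sum(axis=1, keepdims=True)   # conserve energy per depth
    dose += w * dd[:, None] * lateral

print(int(np.argmax(dose.sum(axis=1))))   # depth of the summed-dose maximum
```

    On a GPU the same scatter operation is parallelised over beams and voxels, which is the computationally expensive superposition step the abstract refers to.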

  3. Postlumpectomy Focal Brachytherapy for Simultaneous Treatment of Surgical Cavity and Draining Lymph Nodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hrycushko, Brian A.; Li Shihong; Shi Chengyu

    2011-03-01

    Purpose: The primary objective was to investigate a novel focal brachytherapy technique using lipid nanoparticle (liposome)-carried β-emitting radionuclides (rhenium-186 [(186)Re]/rhenium-188 [(188)Re]) to simultaneously treat the postlumpectomy surgical cavity and draining lymph nodes. Methods and Materials: Cumulative activity distributions in the lumpectomy cavity and lymph nodes were extrapolated from small animal imaging and human lymphoscintigraphy data. Absorbed dose calculations were performed for lumpectomy cavities with spherical and ellipsoidal shapes and lymph nodes within human subjects by use of the dose point kernel convolution method. Results: Dose calculations showed that therapeutic dose levels within the lumpectomy cavity wall can cover 2- and 5-mm depths for (186)Re and (188)Re liposomes, respectively. The absorbed doses at 1 cm sharply decreased to only 1.3% to 3.7% of the doses at 2 mm for (186)Re liposomes and 5 mm for (188)Re liposomes. Concurrently, the draining sentinel lymph nodes would receive a high focal therapeutic absorbed dose, whereas 1 cm of surrounding tissue received an average dose of less than 1% of that within the nodes. Conclusions: Focal brachytherapy by use of (186)Re/(188)Re liposomes was theoretically shown to be capable of simultaneously treating the lumpectomy cavity wall and draining sentinel lymph nodes with high absorbed doses while significantly lowering dose to surrounding healthy tissue. In turn, this allows for dose escalation to regions of higher probability of containing residual tumor cells after lumpectomy while reducing normal tissue complications.
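    The dose point kernel convolution used for these calculations can be sketched on a toy 3-D grid: dose = cumulated activity convolved with a radially symmetric point kernel. The kernel shape, voxel size, and spherical-shell source geometry below are assumptions for illustration, not the (186)Re/(188)Re kernels of the study.

```python
import numpy as np

n = 32                              # small 3-D grid (1 mm voxels, assumed)
ax = np.arange(n) - n // 2
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
r = np.sqrt(X ** 2 + Y ** 2 + Z ** 2)

# Assumed beta dose-point kernel: steep monotone fall-off, ~few-mm range.
kernel = np.exp(-r / 1.5) / np.maximum(r, 0.5) ** 2

# Cumulated activity: a 3-voxel-thick spherical shell (toy "cavity wall").
activity = ((r >= 8) & (r <= 10)).astype(float)

# Dose = activity convolved with the kernel, evaluated via FFTs
# (ifftshift puts the kernel origin at index [0,0,0] for circular convolution).
dose = np.real(np.fft.ifftn(np.fft.fftn(activity) * np.fft.fftn(np.fft.ifftshift(kernel))))

center = n // 2
profile = dose[center, center, center:]   # radial dose profile along +z
print(int(profile.argmax()))
```

    With a short-range kernel the dose maximum stays inside the source shell and falls off sharply outside it, which is the qualitative behaviour reported in the Results.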

  4. Correlation and classification of single kernel fluorescence hyperspectral data with aflatoxin concentration in corn kernels inoculated with Aspergillus flavus spores.

    PubMed

    Yao, H; Hruska, Z; Kincaid, R; Brown, R; Cleveland, T; Bhatnagar, D

    2010-05-01

    The objective of this study was to examine the relationship between fluorescence emissions of corn kernels inoculated with Aspergillus flavus and aflatoxin contamination levels within the kernels. Aflatoxin contamination in corn has been a long-standing problem plaguing the grain industry, with potentially devastating consequences for corn growers. In this study, aflatoxin-contaminated corn kernels were produced through artificial inoculation of corn ears in the field with toxigenic A. flavus spores. The kernel fluorescence emission data were taken with a fluorescence hyperspectral imaging system when corn kernels were excited with ultraviolet light. Raw fluorescence image data were preprocessed, and regions of interest in each image were created for all kernels. The regions of interest were used to extract spectral signatures and statistical information. The aflatoxin contamination level of single corn kernels was then chemically measured using affinity column chromatography. A fluorescence peak shift phenomenon was noted among groups of kernels with different aflatoxin contamination levels: the fluorescence peak moved toward longer wavelengths in the blue region for the highly contaminated kernels and toward shorter wavelengths for the clean kernels. Highly contaminated kernels also had a lower fluorescence peak magnitude than the less contaminated kernels. A general negative correlation was noted between measured aflatoxin and the fluorescence image bands in the blue and green regions. The coefficient of determination, r(2), was 0.72 for the multiple linear regression model. The multivariate analysis of variance found that the fluorescence means of the four aflatoxin groups, <1, 1-20, 20-100, and ≥100 ng g(-1) (parts per billion), were significantly different from each other at the 0.01 level of alpha. Classification accuracy under a two-class schema ranged from 0.84 to 0.91 when a threshold of either 20 or 100 ng g(-1) was used. Overall, the results indicate that fluorescence hyperspectral imaging may be applicable in estimating aflatoxin content in individual corn kernels.

  5. Data Compilation for AGR-3/4 Designed-to-Fail (DTF) Fuel Particle Batch LEU04-02DTF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunn, John D; Miller, James Henry

    2008-10-01

    This document is a compilation of coating and characterization data for the AGR-3/4 designed-to-fail (DTF) particles. The DTF coating is a high-density, high-anisotropy pyrocarbon coating of nominal 20 μm thickness that is deposited directly on the kernel. The purpose of this coating is to fail early in the irradiation, resulting in a controlled release of fission products which can be analyzed to provide data on fission product transport. A small number of DTF particles will be included with standard TRISO driver fuel particles in the AGR-3 and AGR-4 compacts. The ORNL Coated Particle Fuel Development Laboratory 50-mm diameter fluidized bed coater was used to coat the DTF particles. The coatings were produced using procedures and process parameters that were developed in an earlier phase of the project as documented in 'Summary Report on the Development of Procedures for the Fabrication of AGR-3/4 Design-to-Fail Particles', ORNL/TM-2008/161. Two coating runs were conducted using the approved coating parameters. NUCO425-06DTF was a final process qualification batch using natural enrichment uranium carbide/uranium oxide (UCO) kernels. After the qualification run, LEU04-02DTF was produced using low enriched UCO kernels. Both runs were inspected and determined to meet the specifications for DTF particles in section 5 of the AGR-3 & 4 Fuel Product Specification (EDF-6638, Rev.1). Table 1 provides a summary of key properties of the DTF layer. For comparison purposes, an archive sample of DTF particles produced by General Atomics was characterized using identical methods. These data are also summarized in Table 1.

  6. Multi-thermal observations of the 2010 October 16 flare: heating of a ribbon via loops, or a blast wave?

    NASA Astrophysics Data System (ADS)

    Christe, Steven; Inglis, A.; Aschwanden, M.; Dennis, B.

    2011-05-01

    On 2010 October 16, SDO/AIA observed its first flare using automatic exposure control. Coincidentally, this flare also exhibited a large number of interesting features. First, a large ribbon well to the solar west of the flare kernel was ignited and was visible in all AIA wavelengths, posing the question of how this energy was deposited and how it relates to the main flare site. A faint blast wave also emanated from the flare kernel, visible in AIA and observed traveling to the solar west at an estimated speed of 1000 km/s. This blast wave is associated with a weak white-light CME observed with STEREO B and a Type II radio burst observed from Green Bank Observatory (GBSRBS). One possibility is that this blast wave is responsible for the heating of the ribbon. However, closer scrutiny reveals that the flare site and the ribbon are in fact connected magnetically via coronal loops which are heated during the main energy release. These loops are distinct from the expected hot, post-flare loops present within the main flare kernel. RHESSI spectra indicate that these loops are heated to approximately 10 MK in the immediate flare aftermath. Using the multi-temperature capabilities of AIA in combination with RHESSI, and by employing the cross-correlation mapping technique, we are able to measure the loop temperatures as a function of time over several post-flare hours and hence measure the loop cooling rate. We find that the time delay between the appearance of loops in the hottest channel, 131 Å, and the cool 171 Å channel, is 70 minutes. Yet the causality of this event remains unclear: is the ribbon heated via these interconnected loops or via a blast wave?

  7. Classification of Phylogenetic Profiles for Protein Function Prediction: An SVM Approach

    NASA Astrophysics Data System (ADS)

    Kotaru, Appala Raju; Joshi, Ramesh C.

    Predicting the function of an uncharacterized protein is a major challenge in the post-genomic era owing to the complexity and scale of the problem. Knowledge of protein function is a crucial link in the development of new drugs, better crops, and even biochemicals such as biofuels. Recently, numerous high-throughput experimental procedures have been invented to investigate the mechanisms leading to the accomplishment of a protein's function, and the phylogenetic profile is one of them. A phylogenetic profile is a representation of a protein that encodes the evolutionary history of the protein. In this paper we propose a method for the classification of phylogenetic profiles using a supervised machine learning method, support vector machine (SVM) classification with a radial basis function (RBF) kernel, for identifying functionally linked proteins. We experimentally evaluated the performance of the classifier with the linear and polynomial kernels and compared the results with the existing tree kernel. In our study we used proteins of the budding yeast Saccharomyces cerevisiae genome: we generated the phylogenetic profiles of 2465 yeast genes and used the functional annotations available in the MIPS database. Our experiments show that the performance of the radial basis kernel is similar to that of the polynomial kernel in some functional classes, both are better than the linear and tree kernels, and overall the radial basis kernel outperformed the polynomial, linear, and tree kernels. In analyzing these results, we show that it is feasible to use an SVM classifier with a radial basis function kernel to predict gene functionality from phylogenetic profiles.
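    As an illustration of kernel classification on phylogenetic profiles, the sketch below uses binary presence/absence vectors over 16 hypothetical genomes and an RBF kernel; for brevity, a kernel perceptron stands in for the SVM trained in the paper, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Phylogenetic profiles: presence/absence of each protein across 16 genomes.
# "Functionally linked" proteins (class +1) share a common pattern with ~10%
# noise; the negative class is random. All of this is synthetic.
pattern = rng.integers(0, 2, 16)
X1 = np.array([np.where(rng.random(16) < 0.1, 1 - pattern, pattern) for _ in range(20)])
X0 = rng.integers(0, 2, (20, 16))
X = np.vstack([X1, X0]).astype(float)
y = np.array([1] * 20 + [-1] * 20)

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel perceptron: a simple stand-in for the kernel SVM in the abstract.
K = rbf(X, X)
alpha = np.zeros(len(y))
for _ in range(50):                       # passes over the training set
    errors = 0
    for i in range(len(y)):
        if np.sign(K[i] @ (alpha * y)) != y[i]:
            alpha[i] += 1.0
            errors += 1
    if errors == 0:                       # converged: all points classified
        break

pred = np.sign(K @ (alpha * y))
acc = float((pred == y).mean())
print(acc)
```

    The decision function depends on the data only through the kernel matrix, so the linear, polynomial, RBF, and tree kernels compared in the paper all plug into the same machinery.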

  8. Intraear Compensation of Field Corn, Zea mays, from Simulated and Naturally Occurring Injury by Ear-Feeding Larvae.

    PubMed

    Steckel, S; Stewart, S D

    2015-06-01

    Ear-feeding larvae, such as corn earworm, Helicoverpa zea Boddie (Lepidoptera: Noctuidae), can be important insect pests of field corn, Zea mays L., by feeding on kernels. Recently introduced, stacked Bacillus thuringiensis (Bt) traits provide improved protection from ear-feeding larvae. Thus, our objective was to evaluate how injury to kernels in the ear tip might affect yield when this injury was inflicted at the blister and milk stages. In 2010, simulated corn earworm injury reduced total kernel weight (i.e., yield) at both the blister and milk stage. In 2011, injury to ear tips at the milk stage affected total kernel weight. No differences in total kernel weight were found in 2013, regardless of when or how much injury was inflicted. Our data suggested that kernels within the same ear could compensate for injury to ear tips by increasing in size, but this increase was not always statistically significant or sufficient to overcome high levels of kernel injury. For naturally occurring injury observed on multiple corn hybrids during 2011 and 2012, our analyses showed either no or a minimal relationship between number of kernels injured by ear-feeding larvae and the total number of kernels per ear, total kernel weight, or the size of individual kernels. The results indicate that intraear compensation for kernel injury to ear tips can occur under at least some conditions. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Evidence-based Kernels: Fundamental Units of Behavioral Influence

    PubMed Central

    Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600

  10. Integrating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Wilton, Donald R.

    2008-01-01

    A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different from those of the wire kernel, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integration of the gradient of the wire kernel needs to be calculated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form
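    For reference, the exact cylindrical-wire kernel that these integrals involve has the standard form below (wire radius a, wavenumber k); this is the textbook expression from the thin-wire literature, not a quotation from this report:

```latex
K(z - z') \;=\; \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{e^{-jkR}}{4\pi R}\, d\phi',
\qquad
R \;=\; \sqrt{(z - z')^{2} + 4a^{2}\sin^{2}(\phi'/2)} .
```

    Its gradient with respect to the observation point inherits a stronger singularity as R → 0, which is why the axial and radial components require the separate treatment described above.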

  11. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Dou, Yong

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and other fields. The ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms. PMID:28293256
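    The random Fourier feature idea mentioned in the abstract can be sketched directly: draw random frequencies matched to an RBF kernel and check that inner products of the explicit feature maps approximate the exact kernel, so a linear ranker on the features mimics the kernel machine. The input dimension, bandwidth, and feature count below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

d, D = 5, 4000     # input dimension, number of random features (assumed)
sigma = 1.0        # RBF bandwidth: k(x, y) = exp(-||x - y||^2 / (2 sigma^2))

# Frequencies sampled from the kernel's spectral density; random phase shifts.
W = rng.normal(0.0, 1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2 * np.pi, size=D)

def features(X):
    """Random Fourier feature map z(x) with E[z(x) . z(y)] = k(x, y)."""
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

X = rng.normal(size=(10, d))
K_exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / (2 * sigma ** 2))
K_approx = features(X) @ features(X).T

err = float(np.abs(K_exact - K_approx).max())
print(err)
```

    Because the feature map is explicit and finite-dimensional, the pairwise ranking objective can then be optimized with linear-model machinery (e.g., a truncated Newton method) without ever forming the full kernel matrix.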

  12. Ranking Support Vector Machine with Kernel Approximation.

    PubMed

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and other fields. The ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.

  13. Bioavailability of cyanide after consumption of a single meal of foods containing high levels of cyanogenic glycosides: a crossover study in humans.

    PubMed

    Abraham, Klaus; Buhrke, Thorsten; Lampen, Alfonso

    2016-03-01

    The acute toxicity of cyanide is determined by its peak levels reached in the body. Compared to the ingestion of free cyanide, lower peak levels may be expected after consumption of foods containing cyanogenic glycosides at the same equivalent dose of cyanide, owing to the possibly delayed and/or incomplete release of cyanide from the cyanogenic glycosides, which depends on many factors. Data on the bioavailability of cyanide after consumption of foods containing high levels of cyanogenic glycosides, as presented herein, were necessary to allow a meaningful risk assessment for these foods. A crossover study was carried out in 12 healthy adults who consumed persipan paste (equivalent total cyanide: 68 mg/kg), linseed (220 mg/kg), bitter apricot kernels (about 3250 mg/kg), and fresh cassava roots (76-150 mg/kg), with each "meal" containing equivalents of 6.8 mg cyanide. Cyanide levels were determined in whole blood using a GC-MS method with K(13)C(15)N as internal standard. Mean cyanide levels at the different time points were highest after consumption of cassava (15.4 µM, after 37.5 min) and bitter apricot kernels (14.3 µM, after 20 min), followed by linseed (5.7 µM, after 40 min) and 100 g persipan (1.3 µM, after 105 min). The double dose of 13.6 mg cyanide eaten with 200 g persipan paste resulted in a mean peak level of 2.9 µM (after 150 min). An acute reference dose of 0.075 mg/kg body weight was derived, valid for a single application/meal of cyanides or hydrocyanic acid as well as of unprocessed foods with cyanogenic glycosides that also contain the accompanying intact β-glucosidase. For some of these foods, this approach may be overly conservative due to delayed release of cyanide, as demonstrated for linseed. In the case of missing or inactivated β-glucosidase, the hazard potential is much lower.
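    The stated concentrations and the common 6.8 mg cyanide dose fix the "meal" sizes by simple arithmetic (grams of food = dose / concentration); the snippet below just checks this, using only numbers from the abstract.

```python
# Equivalent meal sizes implied by the stated total-cyanide concentrations:
# grams of food delivering the 6.8 mg cyanide dose = dose / concentration.
dose_mg = 6.8
conc_mg_per_kg = {
    "persipan paste": 68,
    "linseed": 220,
    "bitter apricot kernels": 3250,
}
for food, conc in conc_mg_per_kg.items():
    grams = dose_mg / conc * 1000.0
    print(f"{food}: {grams:.1f} g")
```

    The persipan figure comes out to 100 g, matching the "100 g persipan" serving in the abstract; the apricot-kernel meal is only about 2 g, which underlines how concentrated that source is.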

  14. Assessment of background gamma radiation levels using airborne gamma ray spectrometer data over uranium deposits, Cuddapah Basin, India - A comparative study of dose rates estimated by AGRS and PGRS.

    PubMed

    Srinivas, D; Ramesh Babu, V; Patra, I; Tripathi, Shailesh; Ramayya, M S; Chaturvedi, A K

    2017-02-01

    The Atomic Minerals Directorate for Exploration and Research (AMD) has conducted high-resolution airborne gamma ray spectrometer (AGRS), magnetometer and time domain electromagnetic (TDEM) surveys for uranium exploration, along the northern margins of Cuddapah Basin. The survey area includes well known uranium deposits such as Lambapur-Peddagattu, Chitrial and Koppunuru. The AGRS data collected for uranium exploration is utilised for estimating the average absorbed rates in air due to radio-elemental (potassium in %, uranium and thorium in ppm) distribution over these known deposit areas. Further, portable gamma ray spectrometer (PGRS) was used to acquire data over two nearby locations one from Lambapur deposit, and the other from known anomalous zone and subsequently average gamma dose rates were estimated. Representative in-situ rock samples were also collected from these two areas and subjected to radio-elemental concentration analysis by gamma ray spectrometer (GRS) in the laboratory and then dose rates were estimated. Analyses of these three sets of results complement one another, thereby providing a comprehensive picture of the radiation environment over these deposits. The average absorbed area wise dose rate level is estimated to be 130 ± 47 nGy h -1 in Lambapur-Peddagattu, 186 ± 77 nGy h -1 in Chitrial and 63 ± 22 nGy h -1 in Koppunuru. The obtained average dose levels are found to be higher than the world average value of 54 nGy h -1 . The gamma absorbed dose rates in nGy h -1 were converted to annual effective dose rates in mSv y -1 as proposed by the United Nations Scientific Committee on the Effect of Atomic Radiation (UNSCEAR). The annual average effective dose rates for the entire surveyed area is 0.12 mSv y -1 , which is much lower than the recommended limit of 1 mSv y -1 by International Commission on Radiation protection (ICRP). 
The present study thus establishes a reference (baseline) data set for these areas to assess any future changes in gamma radiation levels due to mining and milling activities. Copyright © 2016 Elsevier Ltd. All rights reserved.
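The dose-rate-to-effective-dose conversion described in this record follows the standard UNSCEAR outdoor convention; a minimal sketch, assuming the commonly used outdoor occupancy factor of 0.2 and the 0.7 Sv/Gy conversion coefficient (these constants are the UNSCEAR defaults, not values quoted in the abstract):

```python
# Convert an outdoor absorbed gamma dose rate in air (nGy/h) to an
# annual effective dose (mSv/y) using the UNSCEAR outdoor convention.
HOURS_PER_YEAR = 8760      # 24 h x 365 d
OUTDOOR_OCCUPANCY = 0.2    # fraction of time spent outdoors (UNSCEAR default)
DOSE_COEFF = 0.7           # Sv/Gy, effective dose per unit absorbed dose in air

def annual_effective_dose_msv(dose_rate_ngy_per_h):
    """Annual effective dose in mSv/y from an outdoor dose rate in nGy/h."""
    dose_ngy = dose_rate_ngy_per_h * HOURS_PER_YEAR * OUTDOOR_OCCUPANCY * DOSE_COEFF
    return dose_ngy * 1e-6  # nGy (Sv-weighted) -> mSv
```

The Koppunuru average of 63 nGy h-1, for example, maps to roughly 0.08 mSv/y under these assumptions, consistent in magnitude with the reported area-wide average of 0.12 mSv y-1.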

  15. Quantitative evaluation of local pulmonary distribution of TiO2 in rats following single or multiple intratracheal administrations of TiO2 nanoparticles using X-ray fluorescence microscopy.

    PubMed

    Zhang, Guihua; Shinohara, Naohide; Kano, Hirokazu; Senoh, Hideki; Suzuki, Masaaki; Sasaki, Takeshi; Fukushima, Shoji; Gamo, Masashi

    2016-10-01

    Uneven pulmonary nanoparticle (NP) distribution has been described when using single-dose intratracheal administration tests. Multiple-dose intratracheal administrations with small quantities of NPs are expected to improve the unevenness of each dose. The differences in local pulmonary NP distribution (called microdistribution) between single- and multiple-dose administrations may cause differential pulmonary responses; however, this has not been evaluated. Here, we quantitatively evaluated the pulmonary microdistribution (per mesh: 100 μm × 100 μm) of TiO2 in lung sections from rats following one, two, three, or four doses of TiO2 NPs at the same total dosage of 10 mg kg(-1), using X-ray fluorescence microscopy. The results indicate that: (i) multiple-dose administrations show lower variations in TiO2 content (ng mesh(-1)) for sections of each lobe; (ii) TiO2 appears to be deposited more in the right caudal and accessory lobes, located downstream of the administration direction of the NP suspensions, and less in the right middle lobes, irrespective of the number of doses; and (iii) there are no prominent differences in the pattern of pulmonary TiO2 microdistribution between rats following single and multiple doses of TiO2 NPs. Additionally, the estimation of pulmonary TiO2 deposition for multiple-dose administrations implies that every dose of TiO2 would be randomly deposited in only part of a fixed 30-50% of the lung area. The evidence suggests that multiple-dose administrations do not offer remarkable advantages over single-dose administration with respect to pulmonary NP microdistribution, although multiple-dose administrations may reduce variations in the TiO2 content for each lung lobe. Copyright © 2016 John Wiley & Sons, Ltd.

  16. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  17. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  18. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  19. Wigner functions defined with Laplace transform kernels.

    PubMed

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function using Laplace transform kernels, the Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the properties of the Laplace transform, a broader range of signals can be represented in complex phase space. We show that the Laplace kernel Wigner function exhibits properties similar to those of the traditional Wigner function in its marginals. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polaritons. © 2011 Optical Society of America

  20. Online learning control using adaptive critic designs with sparse kernel machines.

    PubMed

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent variants, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
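The approximately linear dependence (ALD) sparsification mentioned above admits a compact sketch: a new sample joins the kernel dictionary only if its feature-space image cannot be well approximated by a linear combination of the current dictionary elements. The Gaussian kernel and threshold `nu` below are illustrative choices, not the paper's settings:

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two sample vectors."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def ald_sparsify(samples, nu=0.1, gamma=1.0):
    """Build a sparse dictionary via approximate linear dependence (ALD).

    A sample x is added only if delta = k(x,x) - k^T K^{-1} k exceeds nu,
    i.e. x is not approximately linearly dependent on the dictionary.
    """
    dictionary = [samples[0]]
    K_inv = np.linalg.inv(np.array([[rbf(samples[0], samples[0], gamma)]]))
    for x in samples[1:]:
        k_vec = np.array([rbf(d, x, gamma) for d in dictionary])
        delta = rbf(x, x, gamma) - k_vec @ K_inv @ k_vec
        if delta > nu:
            dictionary.append(x)
            K = np.array([[rbf(a, b, gamma) for b in dictionary]
                          for a in dictionary])
            K_inv = np.linalg.inv(K)  # recompute (rank-1 updates also possible)
    return dictionary
```

Two tight clusters of inputs thus collapse to a two-element dictionary, which is what keeps the critic's kernel expansion small.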

  1. Influence of wheat kernel physical properties on the pulverizing process.

    PubMed

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with similar protein content (11.2-12.8% w.b.), obtained from an organic farming system, were used for analysis. The kernels (moisture content 10% w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ kg(-1) to 159 kJ kg(-1). On the basis of the data obtained, many significant correlations (p < 0.05) were found between wheat kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by uniaxial compression testing, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression model was proposed for predicting the average particle size of the pulverized kernel.
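A multiple linear regression of the kind used in the final step can be sketched with ordinary least squares; the predictors and data below are synthetic placeholders standing in for the study's measured kernel properties, not its actual dataset:

```python
import numpy as np

# Synthetic stand-ins for three kernel properties (e.g. hardness index,
# vitreousness, ash content) across 19 cultivars -- illustrative only.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(19, 3))
true_coef = np.array([0.30, 0.15, -0.05])
y = 0.4 + X @ true_coef              # "average particle size" (noise-free here)

# Fit y = b0 + b1*x1 + b2*x2 + b3*x3 by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

With noise-free synthetic data the fit recovers the generating coefficients exactly; on real measurements one would also report R² and p-values per predictor.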

  2. Relationship between processing score and kernel-fraction particle size in whole-plant corn silage.

    PubMed

    Dias Junior, G S; Ferraretto, L F; Salvati, G G S; de Resende, L C; Hoffman, P C; Pereira, M N; Shaver, R D

    2016-04-01

    Kernel processing increases starch digestibility in whole-plant corn silage (WPCS). Corn silage processing score (CSPS), the percentage of starch passing through a 4.75-mm sieve, is widely used to assess the degree of kernel breakage in WPCS. However, the geometric mean particle size (GMPS) of the kernel fraction that passes through the 4.75-mm sieve has not been well described. Therefore, the objectives of this study were (1) to evaluate the particle size distribution and digestibility of kernels cut to varied particle sizes; (2) to propose a method to measure GMPS in WPCS kernels; and (3) to evaluate the relationship between CSPS and GMPS of the kernel fraction in WPCS. Composite samples of unfermented, dried kernels from 110 corn hybrids commonly used for silage production were kept whole (WH) or manually cut into 2, 4, 8, 16, 32 or 64 pieces (2P, 4P, 8P, 16P, 32P, and 64P, respectively). Dry sieving to determine GMPS, surface area, and particle size distribution, using 9 sieves with nominal square apertures of 9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, and 0.59 mm and a pan, as well as ruminal in situ dry matter (DM) digestibility measurements, were performed for each kernel particle number treatment. Incubation times were 0, 3, 6, 12, and 24 h. The ruminal in situ DM disappearance of unfermented kernels increased with the reduction in particle size of corn kernels. Kernels kept whole had the lowest ruminal DM disappearance for all time points, with a maximum DM disappearance of 6.9% at 24 h, and the greatest disappearance was observed for 64P, followed by 32P and 16P. Samples of WPCS (n=80) from 3 studies representing varied theoretical length of cut settings and processor types and settings were also evaluated. Each WPCS sample was split in two and then dried at 60°C for 48 h. The CSPS was determined in duplicate on 1 of the split samples, whereas on the other split sample the kernel and stover fractions were separated using a hydrodynamic separation procedure. 
After separation, the kernel fraction was redried at 60°C for 48 h in a forced-air oven and dry sieved to determine GMPS and surface area. Linear relationships between CSPS from WPCS (n=80) and kernel fraction GMPS, surface area, and proportion passing through the 4.75-mm screen were poor. Strong quadratic relationships between proportion of kernel fraction passing through the 4.75-mm screen and kernel fraction GMPS and surface area were observed. These findings suggest that hydrodynamic separation and dry sieving of the kernel fraction may provide a better assessment of kernel breakage in WPCS than CSPS. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
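Geometric mean particle size from dry-sieving data is conventionally computed on a log scale (as in standards such as ASABE S319). A minimal sketch with made-up mass fractions, assuming each retained fraction is assigned the geometric mean of the apertures bounding it (the top-sieve and pan size assignments are illustrative assumptions):

```python
import numpy as np

# Sieve apertures (mm), largest to smallest, matching the study's stack.
apertures = [9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, 0.59]
mass_on_sieve = [0.0, 0.05, 0.15, 0.25, 0.25, 0.15, 0.10, 0.05]  # made-up fractions

def gmps(apertures, mass):
    """Geometric mean particle size (mm) from mass retained on each sieve."""
    sizes = []
    for i, a in enumerate(apertures):
        # Material on sieve i is between aperture i and the next-larger aperture;
        # for the top sieve we assume an upper bound of sqrt(2) * its aperture.
        upper = apertures[i - 1] if i > 0 else a * np.sqrt(2)
        sizes.append(np.sqrt(a * upper))
    w = np.asarray(mass, float)
    return float(np.exp(np.sum(w * np.log(sizes)) / np.sum(w)))
```

If all mass sits on a single sieve, the GMPS collapses to that fraction's assigned size, which is a quick sanity check on any implementation.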

  3. Energy deposition evaluation for ultra-low energy electron beam irradiation systems using calibrated thin radiochromic film and Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsui, S., E-mail: smatsui@gpi.ac.jp; Mori, Y.; Nonaka, T.

    2016-05-15

    For evaluation of on-site dosimetry and process design in the industrial use of ultra-low energy electron beam (ULEB) processes, we evaluate the energy deposition using a thin radiochromic film and a Monte Carlo simulation. The response of the film dosimeter was calibrated using a high energy electron beam with an acceleration voltage of 2 MV and alanine dosimeters with an uncertainty of 11% at a coverage factor of 2. Using this response function, the results of absorbed dose measurements for ULEB were evaluated from 10 kGy to 100 kGy as a relative dose. The deviation between the responses of deposited energy on the films and the Monte Carlo simulations was within 15%. Within this limitation, relative dose estimation using thin film dosimeters, with a response function obtained by high energy electron irradiation, together with simulation results is effective for the management of ULEB irradiation processes.

  4. Nanoscale dose deposition in cell structures under X-ray irradiation treatment assisted with nanoparticles: An analytical approach to the relative biological effectiveness.

    PubMed

    Melo-Bernal, W; Chernov, V; Chernov, G; Barboza-Flores, M

    2018-08-01

    In this study, an analytical model for the assessment of the modification of cell culture survival under ionizing radiation assisted with nanoparticles (NPs) is presented. The model starts from the radial dose deposition around a single NP, which is used to describe the dose deposition in a cell structure with embedded NPs and, in turn, to evaluate the number of lesions formed by ionizing radiation. The model is applied to the calculation of relative biological effectiveness values for cells exposed to 0.5 mg/g of uniformly dispersed NPs with a radius of 10 nm made of Fe, I, Gd, Hf, Pt and Au and irradiated with X-rays of energies 20 keV higher than the element's K-shell binding energy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Energy deposition evaluation for ultra-low energy electron beam irradiation systems using calibrated thin radiochromic film and Monte Carlo simulations.

    PubMed

    Matsui, S; Mori, Y; Nonaka, T; Hattori, T; Kasamatsu, Y; Haraguchi, D; Watanabe, Y; Uchiyama, K; Ishikawa, M

    2016-05-01

    For evaluation of on-site dosimetry and process design in the industrial use of ultra-low energy electron beam (ULEB) processes, we evaluate the energy deposition using a thin radiochromic film and a Monte Carlo simulation. The response of the film dosimeter was calibrated using a high energy electron beam with an acceleration voltage of 2 MV and alanine dosimeters with an uncertainty of 11% at a coverage factor of 2. Using this response function, the results of absorbed dose measurements for ULEB were evaluated from 10 kGy to 100 kGy as a relative dose. The deviation between the responses of deposited energy on the films and the Monte Carlo simulations was within 15%. Within this limitation, relative dose estimation using thin film dosimeters, with a response function obtained by high energy electron irradiation, together with simulation results is effective for the management of ULEB irradiation processes.

  6. Evaluation of human body irradiation caused by radionuclides deposited in the filtration unit of gas mask.

    PubMed

    Cerny, R; Johnova, K; Otahal, P; Thinova, L; Kluson, J

    2017-12-01

    Radioactive aerosol particles represent a serious risk for people facing the consequences of a nuclear accident of any kind. The first responders to an emergency situation need to be protected by personal protective equipment, which includes a radiation protection suit supplemented with a gas mask. The purpose of this work is to estimate the dose to the organs of a responder's body as a result of radionuclide deposition in the filtration unit of the gas mask. The problem was analyzed using Monte Carlo simulations. The dose absorbed by different organs for five representative radionuclides and the dose distribution over the responder's body are presented in this paper. Based on the presented MC simulations, we suggest a method for evaluating the irradiation of the responder by the radionuclides deposited in the filtration unit of the gas mask. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. Classification of corn kernels contaminated with aflatoxins using fluorescence and reflectance hyperspectral images analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Fengle; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Brown, Robert; Bhatnagar, Deepak; Cleveland, Thomas

    2015-05-01

    Aflatoxins are secondary metabolites produced by certain fungal species of the Aspergillus genus. Aflatoxin contamination remains a problem in agricultural products due to its toxic and carcinogenic properties. Conventional chemical methods for aflatoxin detection are time-consuming and destructive. This study employed fluorescence and reflectance visible near-infrared (VNIR) hyperspectral images to classify aflatoxin-contaminated corn kernels rapidly and non-destructively. Corn ears were artificially inoculated in the field with toxigenic A. flavus spores at the early dough stage of kernel development. After harvest, a total of 300 kernels were collected from the inoculated ears. Fluorescence hyperspectral imagery with UV excitation and reflectance hyperspectral imagery with halogen illumination were acquired on both the endosperm and germ sides of the kernels. All kernels were then subjected to chemical analysis individually to determine aflatoxin concentrations. A region of interest (ROI) was created for each kernel to extract averaged spectra. Compared with healthy kernels, fluorescence spectral peaks for contaminated kernels shifted to longer wavelengths with lower intensity, and reflectance values for contaminated kernels were lower, with a different spectral shape in the 700-800 nm region. Principal component analysis was applied for data compression before classifying kernels as contaminated or healthy, based on a 20 ppb threshold, using the K-nearest neighbors algorithm. The best overall accuracy achieved was 92.67% for the germ side in the fluorescence data analysis. The germ side generally performed better than the endosperm side. Fluorescence and reflectance image data achieved similar accuracy.
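The PCA-then-KNN pipeline described above can be sketched end to end with numpy; the two-class synthetic "spectra" below are placeholders for the real hyperspectral ROI averages, and the class offset is an assumption made purely so the example separates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in spectra: 40 "healthy" and 40 "contaminated" kernels,
# 50 spectral bands, with a class-dependent reflectance offset (illustrative).
healthy = rng.normal(1.0, 0.1, size=(40, 50))
contaminated = rng.normal(0.7, 0.1, size=(40, 50))
X = np.vstack([healthy, contaminated])
y = np.array([0] * 40 + [1] * 40)          # 0 = healthy, 1 = contaminated

# PCA via SVD on mean-centred data, keeping 3 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T                           # compressed representation

def knn_predict(query, Z, y, k=5):
    """Classify a projected spectrum by majority vote of its k nearest neighbours."""
    d = np.linalg.norm(Z - query, axis=1)
    votes = y[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())
```

A new spectrum is classified by centring it with the training mean, projecting onto `Vt[:3]`, and calling `knn_predict`.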

  8. Influence of Kernel Age on Fumonisin B1 Production in Maize by Fusarium moniliforme

    PubMed Central

    Warfield, Colleen Y.; Gilchrist, David G.

    1999-01-01

    Production of fumonisins by Fusarium moniliforme on naturally infected maize ears is an important food safety concern due to the toxic nature of this class of mycotoxins. Assessing the potential risk of fumonisin production in developing maize ears prior to harvest requires an understanding of the regulation of toxin biosynthesis during kernel maturation. We investigated the developmental-stage-dependent relationship between maize kernels and fumonisin B1 production by using kernels collected at the blister (R2), milk (R3), dough (R4), and dent (R5) stages following inoculation in culture at their respective field moisture contents with F. moniliforme. Highly significant differences (P ≤ 0.001) in fumonisin B1 production were found among kernels at the different developmental stages. The highest levels of fumonisin B1 were produced on the dent stage kernels, and the lowest levels were produced on the blister stage kernels. The differences in fumonisin B1 production among kernels at the different developmental stages remained significant (P ≤ 0.001) when the moisture contents of the kernels were adjusted to the same level prior to inoculation. We concluded that toxin production is affected by substrate composition as well as by moisture content. Our study also demonstrated that fumonisin B1 biosynthesis on maize kernels is influenced by factors which vary with the developmental age of the tissue. The risk of fumonisin contamination may begin early in maize ear development and increases as the kernels reach physiological maturity. PMID:10388675

  9. Towards quantitative imaging: stability of fully automated nodule segmentation across varied dose levels and reconstruction parameters in a low-dose CT screening patient cohort

    NASA Astrophysics Data System (ADS)

    Wahi-Anwar, M. Wasil; Emaminejad, Nastaran; Hoffman, John; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael F.

    2018-02-01

    Quantitative imaging in lung cancer CT seeks to characterize nodules through quantitative features, usually from a region of interest delineating the nodule. The segmentation, however, can vary depending on the segmentation approach and image quality, which can affect the extracted feature values. In this study, we utilize a fully automated nodule segmentation method - to avoid reader-influenced inconsistencies - to explore the effects of varied dose levels and reconstruction parameters on segmentation. Raw projection CT images from a low-dose screening patient cohort (N=59) were reconstructed at multiple dose levels (100%, 50%, 25%, 10%), two slice thicknesses (1.0 mm, 0.6 mm), and a medium kernel. Fully automated nodule detection and segmentation were then applied, from which 12 nodules were selected. The Dice similarity coefficient (DSC) was used to assess the similarity of the segmentation ROIs of the same nodule across different reconstruction and dose conditions. Nodules at 1.0 mm slice thickness and dose levels of 25% and 50% resulted in DSC values greater than 0.85 when compared to 100% dose, with lower dose leading to a lower average and wider spread of DSC values. At 0.6 mm, the increased bias and wider spread of DSC values from lowering dose were more pronounced. The effects of dose reduction on DSC for CAD-segmented nodules were similar in magnitude to reducing the slice thickness from 1.0 mm to 0.6 mm. In conclusion, variation of dose and slice thickness can result in very different segmentations because of noise and image quality. However, there exists some stability in segmentation overlap: even at 1.0 mm slice thickness, an image at 25% of the low-dose scan's dose still yields segmentations similar to those of the full-dose scan.
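The Dice similarity coefficient used above has a one-line definition, DSC = 2|A∩B| / (|A| + |B|); a minimal implementation on binary masks (the empty-mask convention is an assumption, since the abstract does not address it):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

Two 4-voxel masks sharing one voxel give DSC = 2·1 / (2+2) = 0.5; identical masks give 1.0, which is why 0.85 reads as high overlap.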

  10. Global transport of Fukushima-derived radionuclides from Japan to Asia, North America and Europe. Estimated doses and expected health effects

    NASA Astrophysics Data System (ADS)

    Evangeliou, Nikolaos; Stohl, Andreas; Balkanski, Yves

    2017-04-01

    The earthquake and the subsequent tsunami that occurred offshore of Japan resulted in a serious accident at the nuclear facility of Fukushima. A large number of fission products were released and transported worldwide. We estimate that around 23% of the released 137Cs remained in Japan, while 76% was deposited in the oceans. Around 163 TBq was deposited over North America (95 TBq over the USA, 40 TBq over Canada and 5 TBq over Greenland). About 14 TBq was deposited over Europe (mostly in the European part of Russia, Sweden and Norway) and 47 TBq over Asia (mostly in the Asian part of Russia, the Philippines and South Korea), while traces were observed over Africa, Oceania and Antarctica. Since the radioactive plume followed a northward direction before arriving in the USA and then Europe, a significant amount, about 69 TBq, was deposited in the Arctic as well. An attempt to assess the exposure of the population and the environment showed that the effective dose from gamma irradiation during the first 3 months was estimated at between 1-5 mSv in Fukushima and the neighbouring prefectures. In the rest of Japan, the respective doses were found to be less than 0.5 mSv, whereas in the rest of the world they were less than 0.1 mSv. Such doses are equivalent to the dose obtained from a simple X-ray; for the highly contaminated regions, they are close to the dose limit for exposure due to radon inhalation (10 mSv). The calculated dose rates from radiocesium exposure on reference organisms ranged from 0.03 to 0.18 μGy h-1, which is 2 orders of magnitude below the screening dose limit (10 μGy h-1) that could result in obvious effects on the population. However, monitoring data have shown that much higher dose rates were committed to organisms, raising ecological risks for small mammals and reptiles in terms of cytogenetic damage and reproduction.

  11. Computation of mean and variance of the radiotherapy dose for PCA-modeled random shape and position variations of the target.

    PubMed

    Budiarto, E; Keijzer, M; Storchi, P R M; Heemink, A W; Breedveld, S; Heijmen, B J M

    2014-01-20

    Radiotherapy dose delivery in the tumor and surrounding healthy tissues is affected by movements and deformations of the corresponding organs between fractions. The random variations may be characterized by non-rigid, anisotropic principal component analysis (PCA) modes. In this article new dynamic dose deposition matrices, based on established PCA modes, are introduced as a tool to evaluate the mean and the variance of the dose at each target point resulting from any given set of fluence profiles. The method is tested for a simple cubic geometry and for a prostate case. The movements spread out the distributions of the mean dose and cause the variance of the dose to be highest near the edges of the beams. The non-rigidity and anisotropy of the movements are reflected in both quantities. The dynamic dose deposition matrices facilitate the inclusion of the mean and the variance of the dose in the existing fluence-profile optimizer for radiotherapy planning, to ensure robust plans with respect to the movements.
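The effect described above, that organ motion spreads the mean dose and concentrates the variance near beam edges, can be illustrated with a brute-force Monte Carlo estimate. The 1D Gaussian "beam" and the single rigid-shift mode below are illustrative assumptions; they stand in for, and are much simpler than, the paper's PCA modes and dynamic dose deposition matrices:

```python
import numpy as np

rng = np.random.default_rng(2)

def beam_dose(x):
    """Illustrative static 1D dose profile: a Gaussian 'beam' centred at 0."""
    return np.exp(-0.5 * x ** 2)

points = np.linspace(-3, 3, 61)   # target points
mode = np.ones_like(points)       # one rigid PCA-like mode: a uniform shift
sigma = 0.5                       # std of the random mode coefficient

# Sample mode coefficients, displace the points, accumulate dose statistics.
doses = np.array([beam_dose(points + c * mode)
                  for c in rng.normal(0.0, sigma, size=5000)])
mean_dose = doses.mean(axis=0)    # motion-blurred mean dose per point
var_dose = doses.var(axis=0)      # variance peaks where the dose gradient is steep
```

As in the paper, the mean dose is spread out (the peak drops below the static maximum) and the variance is largest near the beam edges, where a small shift changes the dose the most.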

  12. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel-based techniques depends on the selection of kernel parameters; suitable parameter selection is therefore an important problem for many kernel-based techniques. This article presents a novel technique to learn the kernel parameters in a kernel Fukunaga-Koontz Transform based (KFKT) classifier. The proposed approach determines the appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of the KFKT. For this purpose we have utilized the differential evolution algorithm (DEA). The new technique overcomes some disadvantages, such as the high time consumption of the traditional cross-validation method, and it can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
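Differential evolution itself is straightforward to sketch. Here it tunes a single kernel parameter against a toy quadratic objective, which stands in for the KFKT discrimination criterion (the abstract does not specify that criterion in detail, so the objective, bounds, and DE hyperparameters below are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(gamma):
    """Toy stand-in for a kernel-quality criterion, minimised at gamma = 2."""
    return (gamma - 2.0) ** 2

def differential_evolution(obj, bounds=(0.0, 5.0), pop_size=12, gens=80,
                           F=0.8, CR=0.9):
    """Minimal DE/rand/1 for a single scalar parameter."""
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=pop_size)
    fit = np.array([obj(p) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            trial = pop[a] + F * (pop[b] - pop[c])   # mutation
            if rng.random() > CR:                    # crossover (scalar case)
                trial = pop[i]
            trial = np.clip(trial, lo, hi)
            f = obj(trial)
            if f <= fit[i]:                          # greedy selection
                pop[i], fit[i] = trial, f
    return pop[fit.argmin()]
```

In the paper's setting, `objective` would evaluate KFKT discrimination for a candidate kernel parameter; the DE loop itself is unchanged.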

  13. Design of a multiple kernel learning algorithm for LS-SVM by convex programming.

    PubMed

    Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou

    2011-06-01

    As a kernel-based method, the performance of the least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient for selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible enough to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Novel near-infrared sampling apparatus for single kernel analysis of oil content in maize.

    PubMed

    Janni, James; Weinstock, B André; Hagen, Lisa; Wright, Steve

    2008-04-01

    A method of rapid, nondestructive chemical and physical analysis of individual maize (Zea mays L.) kernels is needed for the development of high value food, feed, and fuel traits. Near-infrared (NIR) spectroscopy offers a robust nondestructive method of trait determination. However, traditional NIR bulk sampling techniques cannot be applied successfully to individual kernels. Obtaining optimized single kernel NIR spectra for applied chemometric predictive analysis requires a novel sampling technique that can account for the heterogeneous forms, morphologies, and opacities exhibited in individual maize kernels. In this study such a novel technique is described and compared to less effective means of single kernel NIR analysis. Results of the application of a partial least squares (PLS) derived model for predictive determination of percent oil content per individual kernel are shown.
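A one-component PLS regression of the kind underlying such oil-content models can be sketched in a few lines of NIPALS-style algebra. The synthetic "spectra" and response below are placeholders (real single-kernel models use many components and spectral preprocessing, which this sketch omits):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-ins: 30 kernels x 20 spectral bands, with an oil response
# that loads mostly on a single spectral direction (illustrative only).
X = rng.normal(size=(30, 20))
direction = rng.normal(size=20)
y = X @ direction + 0.1 * rng.normal(size=30)

# Centre the data, then fit a single PLS component.
Xc, yc = X - X.mean(axis=0), y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)        # weight vector: covariance-maximising direction
t = Xc @ w                    # scores: projection of each spectrum onto w
q = (t @ yc) / (t @ t)        # regress the response on the scores
y_hat = t * q + y.mean()      # fitted oil content per kernel
```

Even a single latent component explains part of the response here; additional components would be fitted on the deflated residuals in a full PLS implementation.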

  15. Computed tomography coronary stent imaging with iterative reconstruction: a trade-off study between medium kernel and sharp kernel.

    PubMed

    Zhou, Qijing; Jiang, Biao; Dong, Fei; Huang, Peiyu; Liu, Hongtao; Zhang, Minming

    2014-01-01

    To evaluate the improvement offered by the iterative reconstruction in image space (IRIS) technique in computed tomographic (CT) coronary stent imaging with a sharp kernel, and to perform a trade-off analysis. Fifty-six patients with 105 stents were examined by 128-slice dual-source CT coronary angiography (CTCA). Images were reconstructed using standard filtered back projection (FBP) and IRIS, with both medium and sharp kernels applied. Image noise and the stent diameter were investigated. Image noise was measured both in the background vessel and in the in-stent lumen as objective image evaluation. An image noise score and a stent score were used as subjective image evaluation. The CTCA images reconstructed with IRIS were associated with significant noise reduction compared to the CTCA images reconstructed using the FBP technique, in both the background vessel and the in-stent lumen (the background noise decreased by approximately 25.4% ± 8.2% with the medium kernel (P

  16. Multiple Kernel Sparse Representation based Orthogonal Discriminative Projection and Its Cost-Sensitive Extension.

    PubMed

    Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen

    2016-07-07

    Sparse representation based classification (SRC) has been developed and shown great potential for real-world applications. Based on SRC, Yang et al. [10] devised an SRC-steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle data with a highly nonlinear distribution. The kernel sparse representation-based classifier (KSRC) is a nonlinear extension of SRC that can remedy this drawback. KSRC requires a predetermined kernel function, and selecting the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC considers only the within-class reconstruction residual while ignoring the between-class relationship when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and we then use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims at learning a projection matrix and a corresponding kernel from the given base kernels such that in the low-dimensional subspace the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss by performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. The solutions for the proposed method can be found efficiently based on the trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm when compared with state-of-the-art methods.

  17. Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.

    PubMed

    Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe

    2018-02-19

    Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Therefore, several computational methods have been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well to predict complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is, however, an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel. We then combine the Min kernel, or its normalized form, with one of the pairwise kernels. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to predicting heterodimers. We then evaluate our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, which had been the best existing method so far. We propose new methods to predict heterodimers using a machine learning-based approach. We train a support vector machine (SVM) to discriminate interacting vs non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state of the art.
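The kernels named above have simple closed forms: the Min kernel sums element-wise minima, TPPK is k(a,c)k(b,d) + k(a,d)k(b,c), and MLPK is (k(a,c) - k(a,d) - k(b,c) + k(b,d))². A sketch using a linear base kernel on toy feature vectors (the actual study builds base kernels from PPI, domain, phylogenetic-profile, and localization features):

```python
import numpy as np

def min_kernel(x, y):
    """Min (histogram intersection) kernel between two feature vectors."""
    return float(np.sum(np.minimum(x, y)))

def k(a, c):
    """Illustrative base kernel on individual proteins: linear kernel."""
    return float(np.dot(a, c))

def tppk(a, b, c, d):
    """Tensor Product Pairwise Kernel between protein pairs (a,b) and (c,d)."""
    return k(a, c) * k(b, d) + k(a, d) * k(b, c)

def mlpk(a, b, c, d):
    """Metric Learning Pairwise Kernel between protein pairs (a,b) and (c,d)."""
    return (k(a, c) - k(a, d) - k(b, c) + k(b, d)) ** 2
```

Both pairwise kernels are symmetric in the order of the proteins within a pair, which is what makes them suitable for unordered heterodimer candidates.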

  18. Mapping QTLs controlling kernel dimensions in a wheat inter-varietal RIL mapping population.

    PubMed

    Cheng, Ruiru; Kong, Zhongxin; Zhang, Liwei; Xie, Quan; Jia, Haiyan; Yu, Dong; Huang, Yulong; Ma, Zhengqiang

    2017-07-01

Seven kernel dimension QTLs were identified in wheat, and kernel thickness was found to be the most important dimension for grain weight improvement. Kernel morphology and weight of wheat (Triticum aestivum L.) affect both yield and quality; however, the genetic basis of these traits and their interactions has not been fully understood. In this study, to investigate the genetic factors affecting kernel morphology and the association of kernel morphology traits with kernel weight, kernel length (KL), width (KW) and thickness (KT) were evaluated, together with hundred-grain weight (HGW), in a recombinant inbred line population derived from Nanda 2419 × Wangshuibai, with data from five trials (two locations over 3 years). The results showed that HGW was more closely correlated with KT and KW than with KL. A whole-genome scan revealed four QTLs for KL, one for KW and two for KT, distributed on five different chromosomes. Of these, QKl.nau-2D for KL, and QKt.nau-4B and QKt.nau-5A for KT, were newly identified major QTLs for the respective traits, explaining up to 32.6 and 41.5% of the phenotypic variation, respectively. Increases in KW and KT and reductions in the KL/KT and KW/KT ratios always resulted in significantly higher grain weight. Lines combining the Nanda 2419 alleles of the 4B and 5A intervals had wider, thicker, rounder kernels and a 14% higher grain weight in the genotype-based analysis. A strong, negative linear relationship of the KW/KT ratio with grain weight was observed. It thus appears that kernel thickness is the most important kernel dimension factor in wheat improvement for higher yield. Mapping and marker identification of the kernel dimension-related QTLs will help realize these breeding goals.

  19. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
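The first-level idea can be caricatured in code. This is a hedged sketch, not the paper's algorithm: the penalized criterion, the ARD parameterization, and the constants `gamma` and `mu` are illustrative assumptions; a closed-form LS-SVM solve is nested inside a generic optimizer over the kernel parameters, so only the two regularization constants would remain for model selection.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_kernel(X, log_eta):
    """ARD Gaussian kernel with per-feature scales eta (log-parameterized)."""
    Xs = X * np.exp(log_eta)
    sq = ((Xs[:, None, :] - Xs[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq)

def objective(log_eta, X, y, gamma=10.0, mu=0.1):
    """Illustrative regularized training criterion: the LS-SVM dual is
    solved in closed form, then a penalty acts on the kernel parameters."""
    K = gaussian_kernel(X, log_eta)
    alpha = np.linalg.solve(K + np.eye(len(y)) / gamma, y)
    resid = y - K @ alpha
    return gamma * (resid ** 2).sum() + alpha @ K @ alpha + mu * np.exp(log_eta).sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = np.sign(X[:, 0])          # only feature 0 is informative
res = minimize(objective, np.zeros(3), args=(X, y), method="Nelder-Mead")
eta = np.exp(res.x)           # learned ARD scales
```

In the paper the joint optimization is done more carefully (and for classification), but the structure — coefficients and kernel parameters determined together at the first level — is the same.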

  20. Effect of the route of administration on the biodistribution of radioiodinated OV-TL 3 F(ab')2 in experimental ovarian cancer.

    PubMed

    Tibben, J G; Massuger, L F; Boerman, O C; Borm, G F; Claessens, R A; Corstens, F H

    1994-11-01

The effect of the route of administration on the distribution of radioiodinated OV-TL 3 F(ab')2 was studied in Balb/c female mice with intraperitoneal or subcutaneous ovarian carcinoma xenografts. In the intraperitoneal tumour model, in which both ascites and solid tumour deposits were present, intraperitoneal administration resulted in a lower estimated radiation dose to blood compared with intravenous administration. In this model, normalization to equal estimated radiation doses to blood for both routes of administration indicated that twice the estimated radiation dose can be delivered to solid intraperitoneal tumour deposits following intraperitoneal administration. Evacuation of ascitic tumour cells prior to monoclonal antibody injection further increased the estimated radiation dose to solid intraperitoneal tumour deposits following intraperitoneal delivery. Following simultaneous intravenous and intraperitoneal injection of the monoclonal antibody, tissue uptake showed no relevant differences in the subcutaneous tumour model. Overall, the intraperitoneal route of administration was found to be the best choice for therapeutic delivery of iodine-131 labelled monoclonal antibodies.

  1. Adaptive kernel function using line transect sampling

    NASA Astrophysics Data System (ADS)

    Albadareen, Baker; Ismail, Noriszura

    2018-04-01

The estimation of f(0) is crucial in the line transect method, which is used for estimating population abundance in wildlife surveys. The classical kernel estimator of f(0) has a high negative bias. Our study proposes an adapted kernel function which is shown to be more efficient than the usual kernel estimator. A simulation study is conducted to compare the performance of the proposed estimators with the classical kernel estimators.
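For concreteness, the classical boundary-corrected kernel estimate of f(0) might be sketched as below. The half-normal detection function, Gaussian kernel, and rule-of-thumb bandwidth are illustrative choices, not the adapted estimator proposed in the study.

```python
import numpy as np

def f0_kernel_estimate(distances, h=None):
    """Classical kernel estimate of f(0) for line transect data,
    using reflection at the zero boundary and a Gaussian kernel."""
    x = np.asarray(distances, dtype=float)
    n = len(x)
    if h is None:
        # normal-reference bandwidth (a common rule-of-thumb choice)
        h = 1.06 * x.std() * n ** (-1 / 5)
    # reflecting the data about 0 doubles each kernel's mass at the boundary
    u = x / h
    return 2.0 / (n * h) * (np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)).sum()

# half-normal detections with sigma = 1: true f(0) = sqrt(2/pi) ≈ 0.798
rng = np.random.default_rng(1)
sample = np.abs(rng.normal(0.0, 1.0, size=2000))
est = f0_kernel_estimate(sample)
```

With a boundary at zero and a peaked detection function, estimators of this form tend to underestimate f(0), which is the negative bias the study's adapted kernel targets.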

  2. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

  3. Impact of droplet evaporation rate on resulting in vitro performance parameters of pressurized metered dose inhalers.

    PubMed

    Sheth, Poonam; Grimes, Matthew R; Stein, Stephen W; Myrdal, Paul B

    2017-08-07

Pressurized metered dose inhalers (pMDIs) are widely used for the treatment of pulmonary diseases. The overall efficiency of pMDI drug delivery may be defined by in vitro parameters such as the amount of drug that deposits on the model throat and the proportion of the emitted dose comprising particles sufficiently small to deposit in the lung (i.e., the fine particle fraction, FPF). This study examines the product performance of ten solution pMDI formulations containing a variety of cosolvents with diverse chemical characteristics by cascade impaction with three inlets (USP induction port, Alberta Idealized Throat, and a large volume chamber). The data generated in this study demonstrated that throat deposition, cascade impactor deposition, FPF, and mass median aerodynamic diameter of solution pMDIs depend on the concentration and vapor pressure of the cosolvent, as well as on the selection of model throat. Theoretical droplet lifetimes were calculated for each formulation using a discrete two-stage evaporation process model, and the droplet lifetime was found to be highly correlated with throat deposition and FPF, indicating that evaporation kinetics significantly influence pMDI drug delivery. Copyright © 2017 Elsevier B.V. All rights reserved.
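The two-stage droplet-lifetime idea can be illustrated with a simple d²-law sketch. The staging rule, units, and rate constants here are hypothetical placeholders, not the discrete model used in the study; the point is only that a slower-evaporating cosolvent residue dominates the total lifetime.

```python
def droplet_lifetime(d0_um, k_prop, k_cosolvent, propellant_frac):
    """Two-stage d^2-law sketch of pMDI droplet evaporation.

    Stage 1: the volatile propellant flashes off at a fast evaporation
    constant k_prop; stage 2: the cosolvent-rich residue evaporates at
    the slower constant k_cosolvent. Constants are in um^2/ms, and the
    d^2 split by propellant fraction is an illustrative simplification.
    """
    d0_sq = d0_um ** 2
    t1 = d0_sq * propellant_frac / k_prop                # fast propellant stage
    t2 = d0_sq * (1.0 - propellant_frac) / k_cosolvent   # slow cosolvent stage
    return t1 + t2

# a higher-vapor-pressure (faster-evaporating) cosolvent shortens lifetime
slow = droplet_lifetime(10.0, 5.0, 0.5, 0.8)   # low-volatility cosolvent
fast = droplet_lifetime(10.0, 5.0, 2.0, 0.8)   # high-volatility cosolvent
```

Longer lifetimes leave larger droplets at the throat, consistent with the reported correlation between droplet lifetime and throat deposition.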

  4. Pollen source effects on growth of kernel structures and embryo chemical compounds in maize.

    PubMed

    Tanaka, W; Mantese, A I; Maddonni, G A

    2009-08-01

Previous studies have reported effects of pollen source on the oil concentration of maize (Zea mays) kernels through modifications to both the embryo/kernel ratio and embryo oil concentration. The present study expands upon previous analyses by addressing pollen source effects on the growth of kernel structures (i.e. pericarp, endosperm and embryo), allocation of embryo chemical constituents (i.e. oil, protein, starch and soluble sugars), and the anatomy and histology of the embryos. Maize kernels with different oil concentrations were obtained from pollinations with two parental genotypes of contrasting oil concentration. The dynamics of the growth of kernel structures and allocation of embryo chemical constituents were analysed during the post-flowering period. Mature kernels were dissected to study the anatomy (embryonic axis and scutellum) and histology [cell number and cell size of the scutellums, presence of sub-cellular structures in scutellum tissue (starch granules, oil and protein bodies)] of the embryos. Plants of all crosses exhibited a similar kernel number and kernel weight. Pollen source modified neither the growth period of kernel structures nor the pericarp growth rate. By contrast, pollen source determined a trade-off between embryo and endosperm growth rates, which impacted the embryo/kernel ratio of mature kernels. Modifications to embryo size were mediated by scutellum cell number. Pollen source also affected (P < 0.01) the allocation of embryo chemical compounds. Negative correlations between embryo oil concentration and the concentrations of starch (r = -0.98, P < 0.01) and soluble sugars (r = -0.95, P < 0.05) were found. Coincidently, embryos with low oil concentration had an increased (P < 0.05-0.10) scutellum cell area occupied by starch granules and fewer oil bodies. The effects of pollen source on both the embryo/kernel ratio and the allocation of embryo chemicals seem to be related to the early established sink strength (i.e. sink size and sink activity) of the embryos.

  5. Pulmonary deposition of fluticasone propionate/formoterol in healthy volunteers, asthmatics and COPD patients with a novel breath-triggered inhaler.

    PubMed

    Kappeler, Dominik; Sommerer, Knut; Kietzig, Claudius; Huber, Bärbel; Woodward, Jo; Lomax, Mark; Dalvi, Prashant

    2018-05-01

A combination of fluticasone propionate/formoterol fumarate (FP/FORM) has been incorporated within a novel, breath-triggered device, named K-haler ® . This low resistance device requires a gentle inspiratory effort to actuate it, triggering at an inspiratory flow rate of approximately 30 L/min, thus avoiding the need for coordination of inhalation with manual canister depression. The aim of the study was to evaluate total and regional pulmonary deposition of FP/FORM when administered via the K-haler device. Twelve healthy subjects, 12 asthmatics, and 12 COPD patients each received a single dose of 2 puffs of 99m technetium-labelled FP/FORM 125/5 μg. A gamma camera was used to obtain anterior and posterior two-dimensional images of drug deposition. Prior transmission scans (using a 99m technetium flood source) allowed the definition of regions of interest and calculation of attenuation correction factors. Image analysis was performed according to standardised methods. Of 36 subjects, 35 provided evaluable post-dose scintigraphic data. Mean subject ages were 35.7 (healthy), 44.5 (asthma) and 61.7 years (COPD); mean FEV 1 % predicted values were 109.8%, 77.4% and 43.2%, respectively. Mean pulmonary deposition was 26.6% (healthy), 44.7% (asthma), 39.0% (COPD) of the delivered dose. The respective mean penetration indices (peripheral:central ratio normalised to a transmission lung scan) were 0.44, 0.31 and 0.30. FP/FORM administration via the K-haler device resulted in high lung deposition in patients with obstructive lung disease but somewhat lesser deposition in healthy subjects. Regional deposition data demonstrated drug deposition in both the central and peripheral regions in all subject populations. 2015-000744-42. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. 7 CFR 868.254 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.254 Section 868.254 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Governing Application of Standards § 868.254 Broken kernels determination. Broken kernels shall be...

  7. 7 CFR 51.2090 - Serious damage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... defect which makes a kernel or piece of kernel unsuitable for human consumption, and includes decay...: Shriveling when the kernel is seriously withered, shrunken, leathery, tough or only partially developed: Provided, that partially developed kernels are not considered seriously damaged if more than one-fourth of...

  8. Anisotropic hydrodynamics with a scalar collisional kernel

    NASA Astrophysics Data System (ADS)

    Almaalol, Dekrayat; Strickland, Michael

    2018-04-01

Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2 ↔ 2 scattering kernel in scalar λφ⁴ theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.

  9. Ideal regularization for learning kernels from labels.

    PubMed

    Pan, Binbin; Lai, Jianhuang; Shen, Lixin

    2014-08-01

    In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization to some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently. Copyright © 2014 Elsevier Ltd. All rights reserved.
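A minimal sketch of the first application: for binary labels y ∈ {−1, +1}, the "ideal" kernel yyᵀ is +1 for same-class pairs and −1 for different-class pairs, and a label-derived term can be blended into the labeled block of a base kernel. The blending rule and `lam` below are illustrative assumptions, not the paper's exact regularizer.

```python
import numpy as np

def idealized_kernel(K, y_labeled, lam=0.5):
    """Add a label-derived 'ideal' term to a base kernel matrix.

    y_labeled holds +/-1 labels for the first m points; the ideal kernel
    yy^T pulls same-class pairs together and pushes different-class
    pairs apart in the resulting similarity.
    """
    K_new = K.copy()
    m = len(y_labeled)
    K_new[:m, :m] += lam * np.outer(y_labeled, y_labeled)
    return K_new

# toy base kernel (linear) over 4 points, of which the first 3 are labeled
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.5, 0.5]])
K = X @ X.T
y = np.array([1.0, 1.0, -1.0])
K_ideal = idealized_kernel(K, y, lam=0.5)
```

Same-class similarities increase and cross-class similarities decrease, which is the effect the ideal regularization exploits when making a standard kernel more appropriate for a learning task.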

  10. Straight-chain halocarbon forming fluids for TRISO fuel kernel production - Tests with yttria-stabilized zirconia microspheres

    NASA Astrophysics Data System (ADS)

    Baker, M. P.; King, J. C.; Gorman, B. P.; Braley, J. C.

    2015-03-01

    Current methods of TRISO fuel kernel production in the United States use a sol-gel process with trichloroethylene (TCE) as the forming fluid. After contact with radioactive materials, the spent TCE becomes a mixed hazardous waste, and high costs are associated with its recycling or disposal. Reducing or eliminating this mixed waste stream would not only benefit the environment, but would also enhance the economics of kernel production. Previous research yielded three candidates for testing as alternatives to TCE: 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane. This study considers the production of yttria-stabilized zirconia (YSZ) kernels in silicone oil and the three chosen alternative formation fluids, with subsequent characterization of the produced kernels and used forming fluid. Kernels formed in silicone oil and bromotetradecane were comparable to those produced by previous kernel production efforts, while those produced in chlorooctadecane and iodododecane experienced gelation issues leading to poor kernel formation and geometry.

  11. Numerical study of the ignition behavior of a post-discharge kernel injected into a turbulent stratified cross-flow

    NASA Astrophysics Data System (ADS)

    Jaravel, Thomas; Labahn, Jeffrey; Ihme, Matthias

    2017-11-01

The reliable initiation of flame ignition by high-energy spark kernels is critical for the operability of aviation gas turbines. The evolution of a spark kernel ejected by an igniter into a turbulent stratified environment is investigated using detailed numerical simulations with complex chemistry. At early times post ejection, comparisons of simulation results with high-speed Schlieren data show that the initial trajectory of the kernel is well reproduced, with a significant amount of air entrainment from the surrounding flow that is induced by the kernel ejection. After transiting through a non-flammable mixture, the kernel reaches a second stream of flammable methane-air mixture, where the success of kernel ignition was found to depend on the local flow state and operating conditions. By performing parametric studies, the probability of kernel ignition was identified and compared with experimental observations. The ignition behavior is characterized by analyzing the local chemical structure, and its stochastic variability is also investigated.

  12. The site, size, spatial stability, and energetics of an X-ray flare kernel

    NASA Technical Reports Server (NTRS)

    Petrasso, R.; Gerassimenko, M.; Nolte, J.

    1979-01-01

The site, size evolution, and energetics of an X-ray kernel that dominated a solar flare during its rise and somewhat during its peak are investigated. The position of the kernel remained stationary to within about 3 arc sec over the 30-min interval of observations, despite pulsations in the kernel X-ray brightness in excess of a factor of 10. This suggests a tightly bound, deeply rooted magnetic structure, more plausibly associated with the near chromosphere or low corona rather than with the high corona. The H-alpha flare onset coincided with the appearance of the kernel, again suggesting a close spatial and temporal coupling between the chromospheric H-alpha event and the X-ray kernel. At the first kernel brightness peak its size was no larger than about 2 arc sec, when it accounted for about 40% of the total flare flux. In the second rise phase of the kernel, a source power input of order 2 × 10^24 erg/s is minimally required.

  13. SU-E-I-15: Quantitative Evaluation of Dose Distributions From Axial, Helical and Cone-Beam CT Imaging by Measurement Using a Two-Dimensional Diode-Array Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chacko, M; Aldoohan, S; Sonnad, J

    2015-06-15

Purpose: To quantitatively evaluate dose distributions from helical, axial and cone-beam CT clinical imaging techniques by measurement using a two-dimensional (2D) diode-array detector. Methods: 2D dose distributions from selected clinical protocols used for axial, helical and cone-beam CT imaging were measured using a diode-array detector (MapCheck2). The MapCheck2 is composed of solid-state diode detectors arranged in horizontal and vertical lines with a spacing of 10 mm. A GE LightSpeed CT simulator was used to acquire axial and helical CT images, and a kV on-board imager integrated with a Varian TrueBeam-STx machine was used to acquire cone-beam CT (CBCT) images. Results: The dose distributions from axial, helical and cone-beam CT were non-uniform over the region of interest, with strong spatial and angular dependence. In axial CT, a large dose gradient was measured that decreased from the lateral sides to the middle of the phantom, due to the large superficial dose at the sides of the phantom in comparison with the larger beam attenuation at the center. The dose decreased at the superior and inferior regions in comparison to the center of the phantom in axial CT. An asymmetry was found between the right-left and superior-inferior sides of the phantom, possibly due to angular dependence in the dose distributions. The dose level and distribution varied from one imaging technique to another. For the pelvis technique, axial CT deposited a mean dose of 3.67 cGy, helical CT deposited a mean dose of 1.59 cGy, and CBCT deposited a mean dose of 1.62 cGy. Conclusions: MapCheck2 provides a robust tool to directly measure 2D dose distributions for CT imaging with high-spatial-resolution detectors, in comparison with an ionization chamber that provides a single-point measurement or an average dose to the phantom. The dose distributions measured with MapCheck2 account for medium heterogeneity and can represent patient-specific dose.

  14. The pre-image problem in kernel methods.

    PubMed

    Kwok, James Tin-yau; Tsang, Ivor Wai-hung

    2004-11-01

In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method, which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra and does not suffer from numerical instability or local minimum problems. Evaluations on performing kernel PCA and kernel clustering on the USPS data set show much improved performance.
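The distance-constraint idea can be sketched for a Gaussian (RBF) kernel: feature-space distances are converted back to input-space distances using k(x, z) = exp(−‖x−z‖²/2σ²), and the pre-image is then recovered by solving a small linear system, much like an MDS embedding step. This is a simplified sanity check with exact distances, not the full kernel-PCA denoising pipeline.

```python
import numpy as np

def rbf(X, Y, sigma):
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def preimage_from_distances(Z, d2):
    """Noniterative pre-image: find x with ||x - z_i||^2 ≈ d2[i] for
    neighbors z_i by solving the linear system obtained from pairwise
    differences of the distance constraints."""
    A = 2 * (Z[1:] - Z[0])
    b = (Z[1:] ** 2).sum(1) - (Z[0] ** 2).sum() - (d2[1:] - d2[0])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# sanity check: recover a known point from exact feature-space distances
rng = np.random.default_rng(2)
Z = rng.normal(size=(10, 3))          # training neighbors
x_true = np.array([0.2, -0.1, 0.3])
sigma = 1.5
k = rbf(x_true[None, :], Z, sigma)[0]
d2_feat = 2.0 - 2.0 * k               # feature-space squared distances
d2_input = -2 * sigma ** 2 * np.log(1.0 - d2_feat / 2.0)  # invert the RBF
x_hat = preimage_from_distances(Z, d2_input)
```

In the denoising application the feature-space distances come from the PCA-projected point rather than an exact pre-image, so the least-squares solve yields an approximation instead of an exact recovery.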

  15. Effects of Amygdaline from Apricot Kernel on Transplanted Tumors in Mice.

    PubMed

    Yamshanov, V A; Kovan'ko, E G; Pustovalov, Yu I

    2016-03-01

    The effects of amygdaline from apricot kernel added to fodder on the growth of transplanted LYO-1 and Ehrlich carcinoma were studied in mice. Apricot kernels inhibited the growth of both tumors. Apricot kernels, raw and after thermal processing, given 2 days before transplantation produced a pronounced antitumor effect. Heat-processed apricot kernels given in 3 days after transplantation modified the tumor growth and prolonged animal lifespan. Thermal treatment did not considerably reduce the antitumor effect of apricot kernels. It was hypothesized that the antitumor effect of amygdaline on Ehrlich carcinoma and LYO-1 lymphosarcoma was associated with the presence of bacterial genome in the tumor.

  16. Development of a kernel function for clinical data.

    PubMed

    Daemen, Anneleen; De Moor, Bart

    2009-01-01

For most diseases and examinations, clinical data such as age, gender and medical history guide clinical management, despite the rise of high-throughput technologies. To fully exploit such clinical information, appropriate modeling of relevant parameters is required. As the widely used linear kernel function has several disadvantages when applied to clinical data, we propose a new kernel function specifically developed for such data. This "clinical kernel function" more accurately represents similarities between patients. Three data sets were studied, and significantly better performance was obtained with a Least Squares Support Vector Machine based on the clinical kernel function compared to the linear kernel function.
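One commonly described form of such a clinical kernel averages per-feature similarities: for a continuous variable with observed range r, the similarity is (r − |x − z|)/r; for a categorical variable it is an equality indicator. The sketch below follows that form; the toy patient features and ranges are hypothetical.

```python
import numpy as np

def clinical_kernel(x, z, ranges, categorical):
    """Per-feature similarity averaged over features: continuous
    variables use (range - |x - z|) / range, categorical variables use
    an equality indicator, so each feature contributes a value in [0, 1]."""
    sims = []
    for xi, zi, r, cat in zip(x, z, ranges, categorical):
        if cat:
            sims.append(1.0 if xi == zi else 0.0)
        else:
            sims.append((r - abs(xi - zi)) / r)
    return float(np.mean(sims))

# hypothetical patients: (age, gender, tumour grade), age range 50, grade range 3
a = (62, 1, 2)
b = (55, 1, 3)
k_ab = clinical_kernel(a, b, ranges=[50, None, 3], categorical=[False, True, False])
```

Unlike a linear kernel on raw values, this construction keeps each variable on a comparable [0, 1] scale regardless of its units, which is the stated motivation for a clinical-data-specific kernel.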

  17. Manycore Performance-Portability: Kokkos Multidimensional Array Library

    DOE PAGES

    Edwards, H. Carter; Sunderland, Daniel; Porter, Vicki; ...

    2012-01-01

Large, complex scientific and engineering application codes have a significant investment in computational kernels that implement their mathematical models. Porting these computational kernels to the collection of modern manycore accelerator devices is a major challenge, in that these devices have diverse programming models, application programming interfaces (APIs), and performance requirements. The Kokkos Array programming model provides a library-based approach to implementing computational kernels that are performance-portable to CPU-multicore and GPGPU accelerator devices. This programming model is based upon three fundamental concepts: (1) manycore compute devices, each with its own memory space, (2) data-parallel kernels and (3) multidimensional arrays. Kernel execution performance is, especially for NVIDIA® devices, extremely dependent on data access patterns. The optimal data access pattern can be different for different manycore devices, potentially leading to different implementations of computational kernels specialized for different devices. The Kokkos Array programming model supports performance-portable kernels by (1) separating data access patterns from computational kernels through a multidimensional array API and (2) introducing device-specific data access mappings when a kernel is compiled. An implementation of Kokkos Array is available through Trilinos [Trilinos website, http://trilinos.sandia.gov/, August 2011].

  18. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    PubMed

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. This paper therefore introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the principle that the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized for suitable kernel parameters. Experiments with various standard data sets for protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than that without KDA. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method can produce an optimal parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.
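As background, a two-class Gaussian-kernel discriminant (kernel Fisher discriminant) can be sketched as below; this illustrates where the scale parameter sigma enters, not the paper's reconstruction-error selection method, and the regularizer `reg` is an assumption needed to keep the within-class scatter invertible.

```python
import numpy as np

def gaussian_gram(X, Y, sigma):
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def kfda_fit(X, y, sigma, reg=1e-3):
    """Two-class kernel Fisher discriminant: alpha ∝ N^{-1}(M1 - M0),
    with N the within-class scatter in the kernel-induced feature space."""
    K = gaussian_gram(X, X, sigma)
    n = len(y)
    M, N = [], np.zeros((n, n))
    for c in (0, 1):
        Kc = K[:, y == c]
        nc = Kc.shape[1]
        M.append(Kc.mean(axis=1))
        N += Kc @ (np.eye(nc) - np.ones((nc, nc)) / nc) @ Kc.T
    return np.linalg.solve(N + reg * np.eye(n), M[1] - M[0])

def kfda_project(alpha, X_train, X_new, sigma):
    return gaussian_gram(X_new, X_train, sigma) @ alpha

# toy data: projections of the two classes should separate for a suitable sigma
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
alpha = kfda_fit(X, y, sigma=1.0)
scores = kfda_project(alpha, X, X, sigma=1.0)
```

A poor sigma collapses or entangles the projected classes, which is why data-driven selection of the Gaussian scale parameter matters.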

  19. Metabolic network prediction through pairwise rational kernels.

    PubMed

    Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian

    2014-09-26

Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are series of biochemical reactions, in which the product (output) from one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models are dependent on the annotation of the genes; this propagates error accumulation when pathways are predicted from incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pairs of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amounts of sequence information, such as protein essentiality, natural language processing and machine translation. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernels, PRKs) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVMs to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy values are improved, while maintaining lower construction and execution times. The power of using kernels is that almost any sort of data can be represented using kernels; therefore, completely disparate types of data can be combined to add power to kernel-based machine learning methods. When we compared our proposal using PRKs with other similar kernels, the execution times were decreased with no compromise of accuracy. We also showed that by combining PRKs with other kernels that include evolutionary information, the accuracy can be further improved. As our proposal can use any type of sequence data, genes do not need to be properly annotated, avoiding the error accumulation caused by incorrect previous annotations.
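Rational kernels proper require weighted finite-state transducer machinery (e.g., OpenFst). As a simplified stand-in, a k-mer spectrum kernel computes the same kind of sequence similarity, and a tensor-product construction lifts it to pairs of sequences, as sketched below with hypothetical toy sequences.

```python
from collections import Counter

def spectrum_kernel(s, t, k=2):
    """k-mer spectrum kernel: inner product of k-mer count vectors.
    (A stand-in for rational kernels, which compute such similarity
    measures with weighted finite-state transducers.)"""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[w] * ct[w] for w in cs.keys() & ct.keys())

def pairwise_kernel(a, b, c, d, base=spectrum_kernel):
    """Tensor-product pairwise kernel between entity pairs (a,b) and (c,d)."""
    return base(a, c) * base(b, d) + base(a, d) * base(b, c)

# toy protein sequences: how similar is pair (e1,e2) to pair (e3,e4)?
e1, e2, e3, e4 = "MKTAYIAK", "MKTAYLAK", "GGSGGSGG", "GGTGGSGG"
val = pairwise_kernel(e1, e2, e3, e4)
```

Because it works on raw sequences, a construction of this shape needs no gene annotations, which is the property the PRK approach exploits.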

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, C A; Clarke, S D; Pozzi, S A

    Purpose: To develop an instrument for measuring neutron and photon dose rates from mixed fields with a single device. Methods: Stilbene organic scintillators can be used to detect fast neutrons and photons. Stilbene was used to measure emission from the mixed-particle sources californium-252 (Cf-252) and plutonium-beryllium (PuBe). Many source-detector configurations were used, along with varying amounts of shielding. Collected spectra were analyzed using pulse shape discrimination software to separate neutron and photon interactions. With a measured light-output-to-energy relationship, the pulse height spectrum was converted to energy deposited in the detector. Energy deposited was converted to dose with a variety of standard dose factors, for comparison to current methods. For validation, all measurements and processing were repeated using an EJ-309 liquid scintillator detector. Dose rates were also measured in the same configuration with commercially available dose meters for further validation. Results: Measurements of dose rates will show agreement across all methods. Higher accuracy of pulse shape discrimination at lower energies with stilbene leads to more accurate measurement of neutron and photon deposited dose. In strong mixed-particle fields, discrimination can be performed well at a very low energy threshold. This shows accurate dose measurements over a large range of incident particle energies. Conclusion: Stilbene shows promise as a material for dose rate measurements due to its strong ability to separate neutron and photon pulses and its agreement with current methods. A dual-particle dose meter would simplify methods which are currently limited to the measurement of only one particle type. Future work will investigate the use of a silicon photomultiplier to reduce the size and required voltage of the assembly, for practical use as a handheld survey meter, room monitor, or phantom installation.
Funding from the United States Department of Energy and the National Nuclear Security Administration.
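The processing chain this abstract describes (pulse-height spectrum → deposited energy → absorbed dose) can be sketched roughly as below. The linear light-output calibration, its constants, and the toy spectrum are illustrative assumptions, not values from the study.

```python
import numpy as np

def light_to_energy(light_out, a=1.0, b=0.0):
    """Convert scintillation light output (MeVee) to deposited energy (MeV)
    via an assumed linear calibration L = a*E + b (illustrative only)."""
    return (light_out - b) / a

def spectrum_to_dose_rate(counts, bin_light, mass_kg, live_time_s, a=1.0, b=0.0):
    """Fold a pulse-height spectrum into an absorbed dose rate (Gy/s).

    counts      -- counts per bin over the acquisition
    bin_light   -- bin centers in light-output units (MeVee)
    mass_kg     -- active mass of the scintillator
    live_time_s -- live acquisition time
    """
    e_dep = light_to_energy(bin_light, a, b)   # MeV deposited per event
    total_mev = np.sum(counts * e_dep)         # total deposited energy
    joules = total_mev * 1.602e-13             # MeV -> J
    return joules / (mass_kg * live_time_s)    # Gy/s = J/(kg*s)

# toy spectrum: 1000 events at 1 MeVee in a 10 g crystal over 60 s
rate = spectrum_to_dose_rate(np.array([1000.0]), np.array([1.0]), 0.010, 60.0)
```

A real workflow would apply particle-specific dose conversion factors after pulse shape discrimination has split the spectrum into neutron and photon components; here both steps are folded into a single absorbed-dose estimate for brevity.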

  1. Differential metabolome analysis of field-grown maize kernels in response to drought stress

    USDA-ARS?s Scientific Manuscript database

    Drought stress constrains maize kernel development and can exacerbate aflatoxin contamination. In order to identify drought responsive metabolites and explore pathways involved in kernel responses, a metabolomics analysis was conducted on kernels from a drought tolerant line, Lo964, and a sensitive ...

  2. Occurrence of 'super soft' wheat kernel texture in hexaploid and tetraploid wheats

    USDA-ARS?s Scientific Manuscript database

    Wheat kernel texture is a key trait that governs milling performance, flour starch damage, flour particle size, flour hydration properties, and baking quality. Kernel texture is commonly measured using the Perten Single Kernel Characterization System (SKCS). The SKCS returns texture values (Hardness...

  3. 7 CFR 868.203 - Basis of determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Rough Rice Principles Governing..., heat-damaged kernels, red rice and damaged kernels, chalky kernels, other types, color, and the special grade Parboiled rough rice shall be on the basis of the whole and large broken kernels of milled rice...

  4. 7 CFR 868.203 - Basis of determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Rough Rice Principles Governing..., heat-damaged kernels, red rice and damaged kernels, chalky kernels, other types, color, and the special grade Parboiled rough rice shall be on the basis of the whole and large broken kernels of milled rice...

  5. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 7 2011-01-01 2011-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the use...

  6. 7 CFR 868.304 - Broken kernels determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Broken kernels determination. 868.304 Section 868.304 Agriculture Regulations of the Department of Agriculture (Continued) GRAIN INSPECTION, PACKERS AND STOCKYARD... Application of Standards § 868.304 Broken kernels determination. Broken kernels shall be determined by the use...

  7. Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, L.L.; Hendricks, J.S.

    1983-01-01

    The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation, but leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays.
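The distance-to-collision biasing that the exponential transform performs can be shown with a minimal sketch. The stretching parameter p and the weight bookkeeping below follow the standard textbook form of the transform, not the authors' specific implementation.

```python
import math
import random

def sample_biased_flight(sigma_t, p, rng=random.random):
    """Sample a distance-to-collision from a biased exponential kernel.

    True kernel:   f(s)  = sigma_t * exp(-sigma_t * s)
    Biased kernel: f*(s) = sigma_b * exp(-sigma_b * s), sigma_b = sigma_t*(1-p)
    with 0 < p < 1 stretching flights (deeper penetration per step).
    Returns (s, weight) where weight = f(s)/f*(s) keeps the estimator unbiased.
    """
    sigma_b = sigma_t * (1.0 - p)
    s = -math.log(rng()) / sigma_b  # inverse-CDF sampling of the biased kernel
    weight = (sigma_t / sigma_b) * math.exp(-(sigma_t - sigma_b) * s)
    return s, weight
```

Biasing the scattering kernel as well, as Dwivedi did for isotropic scattering and as this paper extends to the forward-peaked Klein-Nishina distribution, applies the same weight-correction idea to the sampled scattering angle.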

  8. Performance Characteristics of a Kernel-Space Packet Capture Module

    DTIC Science & Technology

    2010-03-01

    Defense, or the United States Government. AFIT/GCO/ENG/10-03 PERFORMANCE CHARACTERISTICS OF A KERNEL-SPACE PACKET CAPTURE MODULE THESIS Presented to the...3.1.2.3 Prototype. The proof of concept for this research is the design, development, and comparative performance analysis of a kernel-level N2d capture...changes to kernel code 5. Can be used for both user-space and kernel-space capture applications in order to control comparative performance analysis to

  9. Model-derived dose rates per unit concentration of radon in air in a generic plant geometry.

    PubMed

    Vives i Batlle, J; Smith, A; Vives-Lynch, S; Copplestone, D; Pröhl, G; Strand, T

    2011-11-01

    A model for the derivation of dose rates per unit radon concentration in plants was developed in line with the activities of a Task Group of the International Commission on Radiological Protection (ICRP), aimed at developing more realistic dosimetry for non-human biota. The model considers interception of the unattached and attached fractions of the airborne radon daughters by plant stomata, diffusion of radon gas through stomata, permeation through the plant's epidermis and translocation of deposited activity to plant interior. The endpoint of the model is the derivation of dose conversion coefficients relative to radon gas concentration at ground level. The model predicts that the main contributor to dose is deposition of (214)Po α-activity on the plant surface and that diffusion of radon daughters through the stomata is of relatively minor importance; hence, daily variations have a small effect on total dose.

  10. High-throughput method for ear phenotyping and kernel weight estimation in maize using ear digital imaging.

    PubMed

    Makanza, R; Zaman-Allah, M; Cairns, J E; Eyre, J; Burgueño, J; Pacheco, Ángela; Diepenbrock, C; Magorokosho, C; Tarekegne, A; Olsen, M; Prasanna, B M

    2018-01-01

    Grain yield and ear and kernel attributes can help in understanding the performance of the maize plant under different environmental conditions and can be used in the variety development process to address farmers' preferences. These parameters are, however, still laborious and expensive to measure. A low-cost ear digital imaging method was developed that provides estimates of ear and kernel attributes, i.e., ear number and size, kernel number and size, as well as kernel weight, from photos of ears harvested from field trial plots. The image processing method uses a script that runs in batch mode on ImageJ, an open-source software package. Kernel weight was estimated using the total kernel number, derived from the number of kernels visible on the image, and the average kernel size. Data showed good agreement in terms of accuracy and precision between ground truth measurements and data generated through image processing. Broad-sense heritability of the estimated parameters was in the range of, or higher than, that for measured grain weight. Limitations of the method for kernel weight estimation are discussed. The method developed in this work provides an opportunity to significantly reduce the cost of selection in the breeding process, especially for resource-constrained crop improvement programs, and can be used to learn more about the genetic basis of grain yield determinants.
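The kernel-weight estimate described (kernels visible in the photo, scaled to a total count, times average kernel size) might look like the following. The visibility factor and the weight-per-area constant are hypothetical placeholders; the paper's actual calibration is not given in this snippet.

```python
def estimate_kernel_weight(visible_kernels, mean_kernel_area_mm2,
                           visibility_factor=2.0, weight_per_mm2_g=0.0014):
    """Estimate total kernel weight (g) for one ear from a single photo.

    visible_kernels      -- kernels counted on the imaged face of the ear
    mean_kernel_area_mm2 -- average kernel area from image segmentation
    visibility_factor    -- assumed ratio of total to visible kernels
    weight_per_mm2_g     -- assumed weight per unit projected kernel area
    """
    total_kernels = visible_kernels * visibility_factor
    return total_kernels * mean_kernel_area_mm2 * weight_per_mm2_g

# e.g. 150 visible kernels averaging 50 mm^2 each
weight_g = estimate_kernel_weight(150, 50.0)
```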

  11. A Kernel-based Lagrangian method for imperfectly-mixed chemical reactions

    NASA Astrophysics Data System (ADS)

    Schmidt, Michael J.; Pankavich, Stephen; Benson, David A.

    2017-05-01

    Current Lagrangian (particle-tracking) algorithms used to simulate diffusion-reaction equations must employ a certain number of particles to properly emulate the system dynamics, particularly for imperfectly mixed systems. The number of particles is tied to the statistics of the initial concentration fields of the system at hand. Systems with shorter-range correlation and/or smaller concentration variance require more particles, potentially limiting the computational feasibility of the method. For the well-known problem of bimolecular reaction, we show that using kernel-based, rather than Dirac delta, particles can significantly reduce the required number of particles. We derive the fixed width of a Gaussian kernel for a given reduced number of particles that analytically eliminates the error between kernel and Dirac solutions at any specified time. We also show how to solve for the fixed kernel size by minimizing the squared differences between solutions over any given time interval. Numerical results show that the width of the kernel should be kept below about 12% of the domain size, and that the analytic equations used to derive kernel width suffer significantly from the neglect of higher-order moments. The simulations with a kernel width given by least squares minimization perform better than those made to match at one specific time. A heuristic time-variable kernel size, based on the previous results, performs on par with the least squares fixed kernel size.
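A minimal 1D sketch of the kernel-particle reaction step: each particle carries a Gaussian kernel instead of a Dirac delta, so the pairwise reaction probability depends on a co-location density with a variance combining the two kernel widths and the diffusive spread over the time step. The constants are illustrative, and the paper's exact formulation may differ in detail.

```python
import math

def colocation_density(separation, kernel_width, D, dt):
    """Density (1/m) that an A-B particle pair co-locates during dt, in 1D.

    Convolving two Gaussian kernels of std kernel_width with the diffusive
    displacement over dt gives a Gaussian of combined variance:
        var = 2*kernel_width^2 + 4*D*dt
    """
    var = 2.0 * kernel_width**2 + 4.0 * D * dt
    return math.exp(-separation**2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def reaction_probability(separation, kf, dt, kernel_width, D):
    """P(react) for one A-B pair over dt (well-mixed rate kf), capped at 1."""
    return min(1.0, kf * dt * colocation_density(separation, kernel_width, D, dt))
```

In a full simulation this probability would be evaluated for nearby pairs each step (typically via a kd-tree search), removing both reactants when a uniform draw falls below it.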

  12. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
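The entropy-based sorting that distinguishes KECA from kernel PCA can be sketched as follows. This implements the standard KECA ranking, where the quadratic Renyi entropy estimate decomposes over kernel eigenpairs as contributions lambda_i * (1^T e_i)^2; it is not the OKECA rotation itself.

```python
import numpy as np

def keca_ranking(X, sigma):
    """Rank kernel eigen-directions by Renyi entropy contribution (KECA).

    The entropy estimate V = (1/N^2) * 1^T K 1 decomposes as
        V = (1/N^2) * sum_i lambda_i * (1^T e_i)^2,
    so KECA sorts components by lambda_i * (1^T e_i)^2 instead of by
    lambda_i alone, as kernel PCA would.
    """
    # Gaussian (RBF) kernel matrix
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-d2 / (2.0 * sigma**2))
    lam, E = np.linalg.eigh(K)              # eigenvalues in ascending order
    contrib = lam * (E.sum(axis=0) ** 2)    # entropy contribution per axis
    order = np.argsort(contrib)[::-1]       # most informative first
    return order, contrib[order]
```

Note that the top entropy component need not be the top variance component; when the two orderings differ, KECA and kernel PCA select different subspaces.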

  13. Brain tumor image segmentation using kernel dictionary learning.

    PubMed

    Jeon Lee; Seung-Jun Kim; Rong Chen; Herskovits, Edward H

    2015-08-01

    Automated brain tumor image segmentation with high accuracy and reproducibility holds great potential to enhance current clinical practice. Dictionary learning (DL) techniques have recently been applied successfully to various image processing tasks. In this work, kernel extensions of the DL approach are adopted. Both reconstructive and discriminative versions of the kernel DL technique are considered, which can efficiently incorporate multi-modal nonlinear feature mappings based on the kernel trick. Our novel discriminative kernel DL formulation allows joint learning of a task-driven kernel-based dictionary and a linear classifier using a K-SVD-type algorithm. The proposed approaches were tested using real brain magnetic resonance (MR) images of patients with high-grade glioma. The obtained preliminary performances are competitive with the state of the art. The discriminative kernel DL approach is seen to reduce computational burden without much sacrifice in performance.

  14. SEMI-SUPERVISED OBJECT RECOGNITION USING STRUCTURE KERNEL

    PubMed Central

    Wang, Botao; Xiong, Hongkai; Jiang, Xiaoqian; Ling, Fan

    2013-01-01

    Object recognition is a fundamental problem in computer vision. Part-based models offer a sparse, flexible representation of objects, but suffer from difficulties in training and often use standard kernels. In this paper, we propose a positive definite kernel called the “structure kernel”, which measures the similarity of two part-based represented objects. The structure kernel has three terms: 1) the global term that measures the global visual similarity of two objects; 2) the part term that measures the visual similarity of corresponding parts; 3) the spatial term that measures the spatial similarity of the geometric configuration of parts. The contribution of this paper is to generalize the discriminant capability of local kernels to complex part-based object models. Experimental results show that the proposed kernel exhibits higher accuracy than state-of-the-art approaches using standard kernels. PMID:23666108
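One plausible reading of the three-term construction is a weighted sum of positive-definite similarities, which stays positive definite because a sum of PSD kernels is PSD. The feature layout, the per-term RBF similarity, and the weights below are assumptions for illustration, not the paper's exact definition.

```python
import math

def rbf(x, y, gamma=1.0):
    """Gaussian similarity between two feature vectors (PSD kernel)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def structure_kernel(obj1, obj2, w_global=1.0, w_part=1.0, w_spatial=1.0):
    """Three-term similarity for part-based objects (hedged sketch).

    obj = {"global": feats, "parts": [feats, ...], "layout": [(x, y), ...]}
    assuming parts are already put in correspondence by index.
    """
    k_global = rbf(obj1["global"], obj2["global"])              # whole object
    k_part = sum(rbf(p, q) for p, q in zip(obj1["parts"], obj2["parts"]))
    k_spatial = sum(rbf(a, b) for a, b in zip(obj1["layout"], obj2["layout"]))
    return w_global * k_global + w_part * k_part + w_spatial * k_spatial
```

Such a kernel can be dropped directly into an SVM, since symmetry and positive definiteness are preserved by the weighted sum.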

  15. Burrower bugs (Heteroptera: Cydnidae) in peanut: seasonal species abundance, tillage effects, grade reduction effects, insecticide efficacy, and management.

    PubMed

    Chapin, Jay W; Thomas, James S

    2003-08-01

    Pitfall traps placed in South Carolina peanut, Arachis hypogaea (L.), fields collected three species of burrower bugs (Cydnidae): Cyrtomenus ciliatus (Palisot de Beauvois), Sehirus cinctus cinctus (Palisot de Beauvois), and Pangaeus bilineatus (Say). Cyrtomenus ciliatus was rarely collected. Sehirus cinctus produced a nymphal cohort in peanut during May and June, probably because of abundant henbit seeds, Lamium amplexicaule L., in strip-till production systems. No S. cinctus were present during peanut pod formation. Pangaeus bilineatus was the most abundant species collected and the only species associated with peanut kernel feeding injury. Overwintering P. bilineatus adults were present in a conservation tillage peanut field before planting and two to three subsequent generations were observed. Few nymphs were collected until the R6 (full seed) growth stage. Tillage and choice of cover crop affected P. bilineatus populations. Peanuts strip-tilled into corn or wheat residue had greater P. bilineatus populations and kernel-feeding than conventional tillage or strip-tillage into rye residue. Fall tillage before planting a wheat cover crop also reduced burrower bug feeding on peanut. At-pegging (early July) granular chlorpyrifos treatments were most consistent in suppressing kernel feeding. Kernels fed on by P. bilineatus were on average 10% lighter than unfed on kernels. Pangaeus bilineatus feeding reduced peanut grade by reducing individual kernel weight, and increasing the percentage damaged kernels. Each 10% increase in kernels fed on by P. bilineatus was associated with a 1.7% decrease in total sound mature kernels, and kernel feeding levels above 30% increase the risk of damaged kernel grade penalties.

  16. Imaging and automated detection of Sitophilus oryzae (Coleoptera: Curculionidae) pupae in hard red winter wheat.

    PubMed

    Toews, Michael D; Pearson, Tom C; Campbell, James F

    2006-04-01

    Computed tomography, an imaging technique commonly used for diagnosing internal human health ailments, uses multiple x-rays and sophisticated software to recreate a cross-sectional representation of a subject. The use of this technique to image hard red winter wheat, Triticum aestivum L., samples infested with pupae of Sitophilus oryzae (L.) was investigated. A software program was developed to rapidly recognize and quantify the infested kernels. Samples were imaged in a 7.6-cm (o.d.) plastic tube containing 0, 50, or 100 infested kernels per kg of wheat. Interkernel spaces were filled with corn oil so as to increase the contrast between voids inside kernels and voids among kernels. Automated image processing, using a custom C language software program, was conducted separately on each 100 g portion of the prepared samples. The average detection accuracy in the five infested kernels per 100-g samples was 94.4 +/- 7.3% (mean +/- SD, n = 10), whereas the average detection accuracy in the 10 infested kernels per 100-g sample was 87.3 +/- 7.9% (n = 10). Detection accuracy in the 10 infested kernels per 100-g samples was slightly less than the five infested kernels per 100-g samples because of some infested kernels overlapping with each other or air bubbles in the oil. A mean of 1.2 +/- 0.9 (n = 10) bubbles (per tube) was incorrectly classed as infested kernels in replicates containing no infested kernels. In light of these positive results, future studies should be conducted using additional grains, insect species, and life stages.

  17. Relationship of source and sink in determining kernel composition of maize

    PubMed Central

    Seebauer, Juliann R.; Singletary, George W.; Krumpelman, Paulette M.; Ruffo, Matías L.; Below, Frederick E.

    2010-01-01

    The relative role of the maternal source and the filial sink in controlling the composition of maize (Zea mays L.) kernels is unclear and may be influenced by the genotype and the N supply. The objective of this study was to determine the influence of assimilate supply from the vegetative source and utilization of assimilates by the grain sink on the final composition of maize kernels. Intermated B73×Mo17 recombinant inbred lines (IBM RILs) which displayed contrasting concentrations of endosperm starch were grown in the field with deficient or sufficient N, and the source supply altered by ear truncation (45% reduction) at 15 d after pollination (DAP). The assimilate supply into the kernels was determined at 19 DAP using the agar trap technique, and the final kernel composition was measured. The influence of N supply and kernel ear position on final kernel composition was also determined for a commercial hybrid. Concentrations of kernel protein and starch could be altered by genotype or the N supply, but remained fairly constant along the length of the ear. Ear truncation also produced a range of variation in endosperm starch and protein concentrations. The C/N ratio of the assimilate supply at 19 DAP was directly related to the final kernel composition, with an inverse relationship between the concentrations of starch and protein in the mature endosperm. The accumulation of kernel starch and protein in maize is uniform along the ear, yet adaptable within genotypic limits, suggesting that kernel composition is source limited in maize. PMID:19917600

  18. Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.

    PubMed

    Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan

    2016-11-01

    In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat (Triticum aestivum L.) and maize (Zea mays L.) data sets. For single-environment analyses of the wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60 to 68% superiority over the corresponding single-environment models for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, more complex marker main effects and marker-specific interaction effects. Copyright © 2016 Crop Science Society of America.
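A sketch of a Gaussian genomic kernel and of kernel averaging (KA) in the spirit described: the kernel is built from squared Euclidean marker distances, and KA averages kernels over a grid of bandwidths rather than tuning a single one. The bandwidth grid and the median-distance scaling are common RKHS conventions, not necessarily the study's exact choices.

```python
import numpy as np

def gaussian_grm(M, h):
    """Gaussian genomic kernel from a marker matrix M (individuals x markers).

    d2 is the squared Euclidean distance scaled by its median off-diagonal
    value, a common normalization in RKHS genomic prediction; h is the
    bandwidth parameter.
    """
    sq = np.sum(M**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * M @ M.T
    d2 = np.maximum(d2, 0.0)  # guard against tiny negative round-off
    med = np.median(d2[np.triu_indices_from(d2, k=1)])
    return np.exp(-h * d2 / med)

def kernel_average(M, bandwidths=(0.2, 1.0, 5.0)):
    """Kernel averaging (KA): average Gaussian kernels over several
    bandwidths instead of estimating a single optimal h."""
    return sum(gaussian_grm(M, h) for h in bandwidths) / len(bandwidths)
```

The resulting matrix plays the role of the genomic relationship matrix in a GBLUP-style mixed model; the empirical Bayesian variant (EB) would instead place a prior on h and estimate it from the data.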

  19. Genetic dissection of the maize kernel development process via conditional QTL mapping for three developing kernel-related traits in an immortalized F2 population.

    PubMed

    Zhang, Zhanhui; Wu, Xiangyuan; Shi, Chaonan; Wang, Rongna; Li, Shengfei; Wang, Zhaohui; Liu, Zonghua; Xue, Yadong; Tang, Guiliang; Tang, Jihua

    2016-02-01

    Kernel development is an important dynamic trait that determines the final grain yield in maize. To dissect the genetic basis of the maize kernel development process, a conditional quantitative trait locus (QTL) analysis was conducted using an immortalized F2 (IF2) population comprising 243 single crosses at two locations over 2 years. Volume (KV) and density (KD) of dried developing kernels, together with kernel weight (KW) at different developmental stages, were used to describe dynamic changes during kernel development. Phenotypic analysis revealed that final KW and KD were determined at 22 d after pollination (DAP22) and KV at DAP29. Unconditional QTL mapping for KW, KV and KD uncovered 97 QTLs at different kernel development stages, of which qKW6b, qKW7a, qKW7b, qKW10b, qKW10c, qKV10a, qKV10b and qKV7 were identified under multiple kernel developmental stages and environments. Among the 26 QTLs detected by conditional QTL mapping, conqKW7a, conqKV7a, conqKV10a, conqKD2, conqKD7 and conqKD8a were conserved between the two mapping methodologies. Furthermore, most of these QTLs were consistent with QTLs and genes for kernel development/grain filling reported in previous studies. These QTLs probably contain major genes associated with the kernel development process, and can be used to improve grain yield and quality through marker-assisted selection.

  20. Image quality of mixed convolution kernel in thoracic computed tomography.

    PubMed

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel alters its properties spatially according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large- and medium-sized pulmonary vessels, and abdomen (P < 0.004), but a lower image quality for the trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  1. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 3 2014-04-01 2014-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing, manufacturing, packing, processing, preparing, treating...

  2. Local Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  3. 7 CFR 51.1241 - Damage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... which have been broken to the extent that the kernel within is plainly visible without minute... discoloration beneath, but the peanut shall be judged as it appears with the talc. (c) Kernels which are rancid or decayed. (d) Moldy kernels. (e) Kernels showing sprouts extending more than one-eighth inch from...

  4. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds...

  5. 7 CFR 981.60 - Determination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  6. Genome-wide Association Analysis of Kernel Weight in Hard Winter Wheat

    USDA-ARS?s Scientific Manuscript database

    Wheat kernel weight is an important and heritable component of wheat grain yield and a key predictor of flour extraction. Genome-wide association analysis was conducted to identify genomic regions associated with kernel weight and kernel weight environmental response in 8 trials of 299 hard winter ...

  7. 7 CFR 999.400 - Regulation governing the importation of filberts.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Definitions. (1) Filberts means filberts or hazelnuts. (2) Inshell filberts means filberts, the kernels or edible portions of which are contained in the shell. (3) Shelled filberts means the kernels of filberts... Filbert kernels or portions of filbert kernels shall meet the following requirements: (1) Well dried and...

  8. 7 CFR 51.1404 - Tolerances.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    .... (2) For kernel defects, by count. (i) 12 percent for pecans with kernels which fail to meet the... kernels which are seriously damaged: Provided, That not more than six-sevenths of this amount, or 6 percent, shall be allowed for kernels which are rancid, moldy, decayed or injured by insects: And provided...

  9. Enhanced gluten properties in soft kernel durum wheat

    USDA-ARS?s Scientific Manuscript database

    Soft kernel durum wheat is a relatively recent development (Morris et al. 2011 Crop Sci. 51:114). The soft kernel trait exerts profound effects on kernel texture, flour milling including break flour yield, milling energy, and starch damage, and dough water absorption (DWA). With the caveat of reduce...

  10. End-use quality of soft kernel durum wheat

    USDA-ARS?s Scientific Manuscript database

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...

  11. 7 CFR 51.2560 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... are excessively thin kernels and can have black, brown or gray surface with a dark interior color and the immaturity has adversely affected the flavor of the kernel. (2) Kernel spotting refers to dark brown or dark gray spots aggregating more than one-eighth of the surface of the kernel. (g) Serious...

  12. 7 CFR 51.2560 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... are excessively thin kernels and can have black, brown or gray surface with a dark interior color and the immaturity has adversely affected the flavor of the kernel. (2) Kernel spotting refers to dark brown or dark gray spots aggregating more than one-eighth of the surface of the kernel. (g) Serious...

  13. 7 CFR 51.1416 - Optional determinations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... throughout the lot. (a) Edible kernel content. A minimum sample of at least 500 grams of in-shell pecans shall be used for determination of edible kernel content. After the sample is weighed and shelled... determine edible kernel content for the lot. (b) Poorly developed kernel content. A minimum sample of at...

  14. Monte Carlo calculation of the sensitivity of a commercial dose calibrator to gamma and beta radiation.

    PubMed

    Laedermann, Jean-Pascal; Valley, Jean-François; Bulling, Shelley; Bochud, François O

    2004-06-01

    The detection process used in a commercial dose calibrator was modeled using the GEANT 3 Monte Carlo code. Dose calibrator efficiency for gamma and beta emitters, and the response to monoenergetic photons and electrons was calculated. The model shows that beta emitters below 2.5 MeV deposit energy indirectly in the detector through bremsstrahlung produced in the chamber wall or in the source itself. Higher energy beta emitters (E > 2.5 MeV) deposit energy directly in the chamber sensitive volume, and dose calibrator sensitivity increases abruptly for these radionuclides. The Monte Carlo calculations were compared with gamma and beta emitter measurements. The calculations show that the variation in dose calibrator efficiency with measuring conditions (source volume, container diameter, container wall thickness and material, position of the source within the calibrator) is relatively small and can be considered insignificant for routine measurement applications. However, dose calibrator efficiency depends strongly on the inner-wall thickness of the detector.

  15. Angular dose anisotropy around gold nanoparticles exposed to X-rays.

    PubMed

    Gadoue, Sherif M; Toomeh, Dolla; Zygmanski, Piotr; Sajo, Erno

    2017-07-01

    Gold nanoparticle (GNP) radiotherapy has recently emerged as a promising modality in cancer treatment. The use of high atomic number nanoparticles can lead to enhanced radiation dose in tumors due to low-energy leakage electrons depositing their energy in the vicinity of the GNP. A single metric, the dose enhancement ratio, has been used in the literature, often in substantial disagreement, to quantify the GNP's capacity to increase local energy deposition. This 1D approach neglects known sources of dose anisotropy and assumes that one average value is representative of the dose enhancement. Whether this assumption is correct, and within what accuracy limits it could be trusted, have not been studied due to computational difficulties at the nanoscale. Using a next-generation deterministic computational method, we show that significant dose anisotropy exists which may have radiobiological consequences, and can impact the treatment outcome as well as the development of treatment planning computational methods. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. ESTIMATION OF EFFECTIVE DOSE FROM EXTERNAL EXPOSURE DUE TO SHORT-LIVED NUCLIDES IN THE PREFECTURES SURROUNDING FUKUSHIMA.

    PubMed

    Miyatake, Hirokazu; Yoshizawa, Nobuaki; Suzuki, Gen

    2018-05-11

    The Fukushima Daiichi Nuclear Power Plant (FDNPP) accident resulted in a release of radionuclides into the environment. Since the accident, measurements of radiation in the environment, such as air dose rate and deposition density of radionuclides, have been performed by various organizations and universities. In particular, the Japan Atomic Energy Agency (JAEA) has been performing observations of air dose rate using a car-borne survey system continuously over widespread areas. Based on the data measured by JAEA, we estimated the effective dose from external exposure in the prefectures surrounding Fukushima. Since the car-borne survey started a few months after the accident, the main contribution to the measured data comes from 137Cs and 134Cs, whose half-lives are relatively long. Using the air dose rates of 137Cs and 134Cs and the ratio of the deposition density of short-lived nuclides to that of 137Cs and 134Cs, we also estimated the contributions to the effective dose from other short-lived nuclides.
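The estimation step (scaling the measured Cs air dose rate by deposition-density ratios and integrating each short-lived nuclide's contribution over its decay) can be sketched as follows. The conversion coefficient and occupancy factor are illustrative assumptions, not the study's values, and the Cs dose rate itself is treated as constant over the short integration window for simplicity.

```python
import math

def external_dose(d0_cs, ratios, half_lives_d, days, conv=0.7, occupancy=0.6):
    """Integrate external effective dose (uSv) from short-lived nuclides
    scaled off a measured 134Cs+137Cs air dose rate.

    d0_cs        -- Cs air dose rate at the reference time, uSv/h
    ratios       -- {nuclide: dose-rate ratio to Cs at t=0} (illustrative)
    half_lives_d -- {nuclide: half-life in days}
    days         -- integration period after the reference time
    conv         -- ambient-to-effective dose conversion factor (assumed)
    occupancy    -- fraction of time spent exposed outdoors (assumed)
    """
    dose = 0.0
    for nuc, r in ratios.items():
        lam = math.log(2.0) / half_lives_d[nuc]  # decay constant, 1/d
        # integral of r * d0 * exp(-lam*t) over [0, days], converted to hours
        dose += r * d0_cs * 24.0 * (1.0 - math.exp(-lam * days)) / lam
    return dose * conv * occupancy
```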

  17. Testicular Doses in Image-Guided Radiotherapy of Prostate Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng Jun, E-mail: jun.deng@yale.edu; Chen Zhe; Yu, James B.

    Purpose: To investigate testicular doses contributed by kilovoltage cone-beam computed tomography (kVCBCT) during image-guided radiotherapy (IGRT) of prostate cancer. Methods and Materials: An EGS4 Monte Carlo code was used to calculate three-dimensional dose distributions from kVCBCT for 3 prostate cancer patients. Absorbed doses to various organs were compared between intensity-modulated radiotherapy (IMRT) treatments and kVCBCT scans. The impact of CBCT scanning mode, kilovoltage peak energy (kVp), and CBCT field span on dose deposition to the testes and other organs was investigated. Results: In comparison with one 10-MV IMRT treatment, a 125-kV half-fan CBCT scan delivered 3.4, 3.8, 4.1, and 5.7 cGy to the prostate, rectum, bladder, and femoral heads, respectively, accounting for 1.7%, 3.2%, 3.2%, and 8.4% of megavoltage photon dose contributions. However, the testes received 2.9 cGy from the same CBCT scan, a threefold increase compared with the 0.7 cGy received during IMRT. With the same kVp, full-fan mode deposited much less dose to organs than half-fan mode, ranging from 9% less for the prostate to 69% less for the testes, except for the rectum, where full-fan mode delivered 34% more dose. As photon beam energy increased from 60 to 125 kV, kVCBCT-contributed doses increased exponentially for all organs, irrespective of scanning mode. Reducing the CBCT field span from 30 to 10 cm in the superior-inferior direction cut testicular doses from 5.7 to 0.2 cGy in half-fan mode and from 1.5 to 0.1 cGy in full-fan mode. Conclusions: Compared with IMRT, kVCBCT-contributed doses to the prostate, rectum, bladder, and femoral heads are clinically insignificant, whereas the dose to the testes is threefold higher. Full-fan CBCT usually deposits much less dose to organs (except the rectum) than half-fan mode in prostate patients. Kilovoltage CBCT-contributed doses increase exponentially with photon beam energy. Reducing the CBCT field significantly cuts doses to the testes and other organs.

  18. 3DRT-MPASS

    NASA Technical Reports Server (NTRS)

    Lickly, Ben

    2005-01-01

    Data from all current JPL missions are stored in files called SPICE kernels. At present, animators who want to use data from these kernels have to either read through the kernels looking for the desired data, or write programs themselves to retrieve information about all the needed objects for their animations. In this project, methods of automating the process of importing the data from the SPICE kernels were researched. In particular, tools were developed for creating basic scenes in Maya, a 3D computer graphics software package, from SPICE kernels.

  19. Experimental determination of the respiratory tract deposition of diesel combustion particles in patients with chronic obstructive pulmonary disease

    PubMed Central

    2012-01-01

    Background Air pollution, mainly from combustion, is one of the leading global health risk factors. A susceptible group is the more than 200 million people worldwide suffering from chronic obstructive pulmonary disease (COPD). There are few data on lung deposition of airborne particles in patients with COPD and none for combustion particles. Objectives To determine the respiratory tract deposition of diesel combustion particles in patients with COPD during spontaneous breathing. Methods Ten COPD patients and seven healthy subjects inhaled diesel exhaust particles generated during idling and transient driving in an exposure chamber. The respiratory tract deposition of the particles was measured in the size range 10–500 nm during spontaneous breathing. Results The deposited dose rate increased with increasing severity of the disease. However, the deposition probability of the ultrafine combustion particles (<100 nm) was decreased in COPD patients. The deposition probability was associated with both breathing parameters and lung function, but could be predicted based on lung function alone. Conclusions The higher deposited dose rate of inhaled air pollution particles in COPD patients may be one of the factors contributing to their increased vulnerability. The strong correlations between lung function and particle deposition, especially in the size range of 20–30 nm, suggest that altered particle deposition could be used as an indicator of respiratory disease. PMID:22839109

  20. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    PubMed

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish an upper bound on the generalization error in terms of the complexity of the hypothesis spaces, which shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
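    The core construction, a kernel built from several scales at once, can be sketched as a weighted sum of RBF kernels with different bandwidths. This is an illustrative sketch with made-up bandwidths and data, not the paper's ranking algorithm:

```python
import numpy as np

def rbf_kernel(X, Y, sigma):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Y."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def multiscale_kernel(X, Y, sigmas, weights=None):
    """Convex combination of RBF kernels at several bandwidths (scales)."""
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)
    return sum(w * rbf_kernel(X, Y, s) for w, s in zip(weights, sigmas))

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))                       # toy data: 6 points in R^3
K = multiscale_kernel(X, X, sigmas=[0.5, 1.0, 2.0])
```

    Any kernel method (ranking, SVM, CCA) can consume `K` in place of a single-bandwidth Gram matrix; a convex combination of positive semidefinite kernels is itself a valid kernel.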

  1. Assessment of doses to game animals in Finland.

    PubMed

    Vetikko, Virve; Kostiainen, Eila

    2013-11-01

    A study was carried out to assess the dose rates to game animals in Finland affected by the radioactive caesium deposition that occurred after the accident at the Chernobyl nuclear power plant in Ukraine in 1986. The aim of this assessment was to obtain new information on the dose rates to mammals and birds under Finnish conditions. Dose rates were calculated using the ERICA Assessment Tool developed within the EC 6th Framework Programme. The input data consisted of measured activity concentrations of (137)Cs and (134)Cs in soil and lake water samples and in flesh samples of selected animal species obtained for environmental monitoring. The study sites were located in the municipality of Lammi, Southern Finland, where the average (137)Cs deposition was 46.5 kBq m(-2) (1 October 1987); they represented the areas receiving the highest deposition in Finland after the Chernobyl accident. The selected species included moose (Alces alces), arctic hare (Lepus timidus) and several bird species: black grouse (Tetrao tetrix), hazel hen (Bonasa bonasia), mallard (Anas platyrhynchos), goldeneye (Bucephala clangula) and teal (Anas crecca). For moose, dose rates were calculated for the years 1986-1990 and for the 2000s; for all other species, maximal measured activity concentrations were used. The results showed that the dose rates to these species did not exceed the default screening level of 10 μGy h(-1) used as a protection criterion. The highest total dose rate (internal and external summed), 3.7 μGy h(-1), was observed for the arctic hare in 1986. Although this dose rate cannot be considered negligible given the uncertainties involved in predicting dose rates, the possible harmful effects associated with it are too small to be assessed based on current knowledge of the biological effects of low doses in mammals. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. REGIONAL DEPOSITION DOSE OF INHALED NANO-SIZE PARTICLES IN HUMAN LUNGS DURING CONTROLLED NORMAL BREATHING

    EPA Science Inventory

    INTRODUCTION

    One of the key factors affecting respiratory deposition of particles is the breathing pattern of individual subjects. Although idealized breathing patterns (square or sine wave form) are frequently used for studying lung deposit...

  3. UNIVERSAL RELATIONSHIP OF TOTAL LUNG DEPOSITION OF PARTICLES IN NORMAL ADULTS WITH PARTICLE SIZE AND BREATHING PATTERN

    EPA Science Inventory

    Particulate matter in the air is known for causing adverse health effects and yet estimating lung deposition dose is difficult because exposure conditions vary widely. We measured total deposition fraction (TDF) of monodisperse aerosols in the size range of 0.04 - 5 micron in dia...

  4. Graph wavelet alignment kernels for drug virtual screening.

    PubMed

    Smalter, Aaron; Huan, Jun; Lushington, Gerald

    2009-06-01

    In this paper, we introduce a novel statistical modeling technique for target property prediction, with applications to virtual screening and drug design. In our method, we use graphs to model chemical structures and apply a wavelet analysis of graphs to summarize features capturing graph local topology. We design a novel graph kernel function to utilize the topology features to build predictive models for chemicals via Support Vector Machine classifier. We call the new graph kernel a graph wavelet-alignment kernel. We have evaluated the efficacy of the wavelet-alignment kernel using a set of chemical structure-activity prediction benchmarks. Our results indicate that the use of the kernel function yields performance profiles comparable to, and sometimes exceeding that of the existing state-of-the-art chemical classification approaches. In addition, our results also show that the use of wavelet functions significantly decreases the computational costs for graph kernel computation with more than ten fold speedup.

  5. Oecophylla longinoda (Hymenoptera: Formicidae) Lead to Increased Cashew Kernel Size and Kernel Quality.

    PubMed

    Anato, F M; Sinzogan, A A C; Offenberg, J; Adandonon, A; Wargui, R B; Deguenon, J M; Ayelo, P M; Vayssières, J-F; Kossou, D K

    2017-06-01

    Weaver ants, Oecophylla spp., are known to positively affect cashew, Anacardium occidentale L., raw nut yield, but their effects on the kernels have not been reported. We compared nut size and the proportion of marketable kernels between raw nuts collected from trees with and without ants. Raw nuts collected from trees with weaver ants were 2.9% larger than nuts from control trees (i.e., without weaver ants), leading to a 14% higher proportion of marketable kernels. On trees with ants, the kernel-to-raw-nut ratio of nuts damaged by formic acid was 4.8% lower than that of undamaged nuts from the same trees. Weaver ants thus provided three benefits to cashew production: higher yields, larger nuts, and a greater proportion of marketable kernel mass. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Kernel-aligned multi-view canonical correlation analysis for image recognition

    NASA Astrophysics Data System (ADS)

    Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao

    2016-09-01

    Existing kernel-based correlation analysis methods mainly adopt a single kernel for each view. However, a single kernel is usually insufficient to characterize the nonlinear distribution information of a view. To solve this problem, we transform each original feature vector into a 2-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that the correlation features learned by KAMCCA have strong discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on these datasets demonstrate the effectiveness of our proposed method.

  7. Small convolution kernels for high-fidelity image restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1991-01-01

    An algorithm is developed for computing the mean-square-optimal values of small image-restoration kernels. The algorithm is based on a comprehensive, end-to-end imaging system model that accounts for the important components of the imaging process: the statistics of the scene, the point-spread function of the image-gathering device, sampling effects, noise, and display reconstruction. Subject to constraints on the spatial support of the kernel, the algorithm generates the kernel values that restore the image with maximum fidelity; that is, the kernel minimizes the expected mean-square restoration error. The algorithm is consistent with the derivation of the spatially unconstrained Wiener filter, but leads to a small, spatially constrained kernel that, unlike the unconstrained filter, can be efficiently implemented by convolution. Simulation experiments demonstrate that for a wide range of imaging systems these small kernels can restore images with fidelity comparable to images restored with the unconstrained Wiener filter.
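    A toy 1-D sketch of the underlying idea (not the authors' constrained mean-square-optimal derivation): build the unconstrained Wiener restoration filter in the frequency domain, then truncate its impulse response to a small spatial kernel that can be applied by convolution. The PSF and SNR values here are made up:

```python
import numpy as np

n = 64
psf = np.zeros(n)
psf[[n - 1, 0, 1]] = [0.25, 0.5, 0.25]          # simple blur PSF, centered at index 0
H = np.fft.fft(psf)                              # blur frequency response
snr = 100.0                                      # assumed scene-to-noise power ratio
W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)    # unconstrained Wiener filter
w = np.real(np.fft.ifft(W))                      # full restoration impulse response
half = 3                                         # keep only a 7-tap spatial kernel
small = np.concatenate([w[n - half:], w[:half + 1]])
```

    Convolving the blurred signal with `small` approximates Wiener restoration at a fraction of the cost; the paper's algorithm instead optimizes the constrained kernel taps directly for minimum expected restoration error, rather than truncating the unconstrained filter.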

  8. Kernels, Degrees of Freedom, and Power Properties of Quadratic Distance Goodness-of-Fit Tests

    PubMed Central

    Lindsay, Bruce G.; Markatou, Marianthi; Ray, Surajit

    2014-01-01

    In this article, we study the power properties of quadratic-distance-based goodness-of-fit tests. First, we introduce the concept of a root kernel and discuss the considerations that enter the selection of this kernel. We derive an easy-to-use normal approximation to the power of quadratic distance goodness-of-fit tests and base the construction of a noncentrality index, an analogue of the traditional noncentrality parameter, on it. This leads to a method akin to the Neyman-Pearson lemma for constructing optimal kernels for specific alternatives. We then introduce a midpower analysis as a device for choosing optimal degrees of freedom for a family of alternatives of interest. Finally, we introduce a new diffusion kernel, called the Pearson-normal kernel, and study the extent to which the normal approximation to the power of tests based on this kernel is valid. Supplementary materials for this article are available online. PMID:24764609

  9. The quantitative properties of three soft X-ray flare kernels observed with the AS&E X-ray telescope on Skylab

    NASA Technical Reports Server (NTRS)

    Kahler, S. W.; Petrasso, R. D.; Kane, S. R.

    1976-01-01

    The physical parameters of the kernels of three solar X-ray flare events have been deduced using photographic data from the S-054 X-ray telescope on Skylab as the primary data source and 1-8 and 8-20 A fluxes from Solrad 9 as the secondary data source. The kernels had diameters of about 5-7 seconds of arc and, in two cases, electron densities of at least 0.3 trillion per cu cm. The lifetimes of the kernels were 5-10 min. The presence of thermal conduction during the decay phases is used to argue (1) that kernels are entire coronal loop structures, not small portions of them, and (2) that flare heating must continue during the decay phase. We suggest a simple geometric model for the role of kernels in flares, in which kernels are identified with emerging flux regions.

  10. Nonparametric methods for doubly robust estimation of continuous treatment effects.

    PubMed

    Kennedy, Edward H; Ma, Zongming; McHugh, Matthew D; Small, Dylan S

    2017-09-01

    Continuous treatments (e.g., doses) arise often in practice, but many available causal effect estimators are limited by either requiring parametric models for the effect curve, or by not allowing doubly robust covariate adjustment. We develop a novel kernel smoothing approach that requires only mild smoothness assumptions on the effect curve, and still allows for misspecification of either the treatment density or outcome regression. We derive asymptotic properties and give a procedure for data-driven bandwidth selection. The methods are illustrated via simulation and in a study of the effect of nurse staffing on hospital readmissions penalties.
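    The kernel-smoothing backbone of such an estimator can be illustrated with a plain Nadaraya-Watson regression of outcome on dose over synthetic data; the doubly robust covariate adjustment that is the paper's contribution is omitted, and all values below are made up:

```python
import numpy as np

def nadaraya_watson(a_grid, A, Y, h):
    """Gaussian-kernel regression estimate of E[Y | A = a] on a grid.

    A plain kernel smoother of the dose-response curve; the paper's
    doubly robust estimator additionally adjusts for covariates."""
    K = np.exp(-0.5 * ((a_grid[:, None] - A[None, :]) / h) ** 2)
    return (K * Y).sum(axis=1) / K.sum(axis=1)

rng = np.random.default_rng(1)
A = rng.uniform(0, 10, size=200)                    # continuous "dose"
Y = np.sin(A) + rng.normal(scale=0.3, size=200)     # noisy outcome
grid = np.linspace(0, 10, 50)
curve = nadaraya_watson(grid, A, Y, h=0.5)          # estimated effect curve
```

    The bandwidth `h` plays the role discussed in the abstract: the paper supplies a data-driven procedure for choosing it, whereas here it is fixed by hand.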

  11. Dual-resolution dose assessments for proton beamlet using MCNPX 2.6.0

    NASA Astrophysics Data System (ADS)

    Chao, T. C.; Wei, S. C.; Wu, S. W.; Tung, C. J.; Tu, S. J.; Cheng, H. W.; Lee, C. C.

    2015-11-01

    The purpose of this study is to assess proton dose distributions in dual-resolution phantoms using MCNPX 2.6.0. A dual-resolution phantom uses higher resolution at the Bragg peak, in areas near large dose gradients, or at heterogeneous interfaces, and lower resolution elsewhere. MCNPX 2.6.0 was installed in Ubuntu 10.04 with MPI for parallel computing. FMesh1 tallies, a mesh tally type designed for voxel phantoms that converts fluence to deposited dose, were used to record the energy deposition. Narrow 60 and 120 MeV proton beams were incident on coarse-, dual-, and fine-resolution phantoms with pure-water, water-bone-water, and water-air-water setups. The doses in the coarse-resolution phantoms were underestimated owing to the partial-volume effect. The dose distributions in the dual- and fine-resolution phantoms agreed well with each other, and the dual-resolution phantoms were at least 10 times more efficient than the fine-resolution one. Because the secondary-particle range is much longer in air than in water, the dose in a low-density region may be underestimated if the resolution or calculation grid is not fine enough.

  12. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 3 2011-04-01 2011-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  13. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 3 2012-04-01 2012-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  14. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  15. 21 CFR 176.350 - Tamarind seed kernel powder.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 3 2013-04-01 2013-04-01 false Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  16. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... generally conforms to the “light” or “light amber” classification, that color classification may be used to... 7 Agriculture 2 2013-01-01 2013-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be...

  17. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... generally conforms to the “light” or “light amber” classification, that color classification may be used to... 7 Agriculture 2 2014-01-01 2014-01-01 false Kernel color classification. 51.1403 Section 51.1403... Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be...

  18. Nutrition quality of extraction mannan residue from palm kernel cake on broiler chicken

    NASA Astrophysics Data System (ADS)

    Tafsin, M.; Hanafi, N. D.; Kejora, E.; Yusraini, E.

    2018-02-01

    This study aimed to determine the nutritional quality of palm kernel cake residue after mannan extraction, for broiler chickens, by evaluating physical quality (specific gravity, bulk density, and compacted bulk density), chemical quality (proximate analysis and Van Soest test), and a biological test (metabolizable energy). The treatments comprised T0: palm kernel cake extracted with aquadest (control); T1: palm kernel cake extracted with 1% acetic acid (CH3COOH); T2: palm kernel cake extracted with aquadest + 100 u/l mannanase enzyme; and T3: palm kernel cake extracted with 1% acetic acid (CH3COOH) + 100 u/l mannanase enzyme. The results showed that mannan extraction significantly improved (P<0.05) the physical quality, numerically increased the crude protein value, and decreased the NDF (neutral detergent fiber) value. The treatments had a highly significant effect (P<0.01) on the metabolizable energy value of palm kernel cake residue in broiler chickens. It can be concluded that extraction with aquadest + 100 u/l mannanase enzyme yields the best nutrient quality of palm kernel cake residue for broiler chickens.

  19. Oil point and mechanical behaviour of oil palm kernels in linear compression

    NASA Astrophysics Data System (ADS)

    Kabutey, Abraham; Herak, David; Choteborsky, Rostislav; Mizera, Čestmír; Sigalingging, Riswanti; Akangbe, Olaosebikan Layi

    2017-07-01

    The study described the oil point and mechanical properties of roasted and unroasted bulk oil palm kernels under compression loading; little information on this is available in the literature. A universal compression testing machine and a vessel 60 mm in diameter with a plunger were used, applying a maximum force of 100 kN at speeds ranging from 5 to 25 mm min-1. The initial pressing height of the bulk kernels was 40 mm. The oil point was determined by a litmus test at each deformation level of 5, 10, 15, 20, and 25 mm at the minimum speed of 5 mm min-1. The measured parameters were the deformation, deformation energy, oil yield, oil point strain, and oil point pressure. The roasted bulk kernels clearly required less deformation energy than the unroasted kernels to recover the kernel oil; however, neither type of kernel was permanently deformed. The average oil point strain was 0.57. The study is an essential contribution to pursuing innovative methods for processing palm kernel oil in rural areas of developing countries.

  20. Dynamic Changes in Phenolics and Antioxidant Capacity during Pecan (Carya illinoinensis) Kernel Ripening and Its Phenolics Profiles.

    PubMed

    Jia, Xiaodong; Luo, Huiting; Xu, Mengyang; Zhai, Min; Guo, Zhongren; Qiao, Yushan; Wang, Liangju

    2018-02-16

    Pecan (Carya illinoinensis) kernels have a high phenolics content and a high antioxidant capacity compared to other nuts, traits that have attracted great interest of late. Changes in the total phenolic content (TPC), condensed tannins (CT), total flavonoid content (TFC), five individual phenolics, and antioxidant capacity of five pecan cultivars were investigated during the process of kernel ripening. Ultra-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry (UPLC-Q/TOF-MS) was also used to analyze the phenolics profiles of mixed pecan kernels. TPC, CT, TFC, individual phenolics, and antioxidant capacity changed in similar patterns, with values highest at the water or milk stages, lowest at the milk or dough stages, and slightly varied at the kernel stages. Forty phenolics were tentatively identified in pecan kernels, of which two were first reported in the genus Carya, six were first reported in Carya illinoinensis, and one was first reported in its kernel. The findings on these new phenolic compounds provide proof of the high antioxidant capacity of pecan kernels.

  1. Multiscale Support Vector Learning With Projection Operator Wavelet Kernel for Nonlinear Dynamical System Identification.

    PubMed

    Lu, Zhao; Sun, Jing; Butts, Kenneth

    2016-02-03

    A giant leap has been made in the past couple of decades with the introduction of kernel-based learning as a mainstay for designing effective nonlinear computational learning algorithms. In view of the geometric interpretation of conditional expectation and the ubiquity of multiscale characteristics in highly complex nonlinear dynamic systems [1]-[3], this paper presents a new orthogonal projection operator wavelet kernel, aiming at developing an efficient computational learning approach for nonlinear dynamical system identification. In the framework of multiresolution analysis, the proposed projection operator wavelet kernel can fulfill the multiscale, multidimensional learning to estimate complex dependencies. The special advantage of the projection operator wavelet kernel developed in this paper lies in the fact that it has a closed-form expression, which greatly facilitates its application in kernel learning. To the best of our knowledge, it is the first closed-form orthogonal projection wavelet kernel reported in the literature. It provides a link between grid-based wavelets and mesh-free kernel-based methods. Simulation studies for identifying the parallel models of two benchmark nonlinear dynamical systems confirm its superiority in model accuracy and sparsity.

  2. Novel characterization method of impedance cardiography signals using time-frequency distributions.

    PubMed

    Escrivá Muñoz, Jesús; Pan, Y; Ge, S; Jensen, E W; Vallverdú, M

    2018-03-16

    The purpose of this document is to describe a methodology for selecting the most adequate time-frequency distribution (TFD) kernel for the characterization of impedance cardiography (ICG) signals. The predominant ICG beat was extracted from a patient and synthesized using time-frequency-variant Fourier approximations. These synthesized signals were used to optimize several TFD kernels by maximizing a performance measure. The optimized kernels were tested for noise resistance on a clinical database. The resulting optimized TFD kernels are presented with their performance calculated using newly proposed methods. The procedure explained in this work showcases a new method for selecting an appropriate kernel for ICG signals and compares the performance of different time-frequency kernels found in the literature for the case of ICG signals. We conclude that, for ICG signals, the performance (P) of the spectrogram with either a Hanning or Hamming window (P = 0.780) and of the extended modified beta distribution (P = 0.765) was similar, and higher than that of the rest of the analyzed kernels. Graphical abstract: Flowchart for the optimization of time-frequency distribution kernels for impedance cardiography signals.
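    For reference, the spectrogram singled out above is simply short-time Fourier power computed through a sliding window (its TFD kernel). A minimal numpy sketch on a synthetic two-tone signal; the sampling rate and tone frequencies are illustrative, not ICG data:

```python
import numpy as np

def spectrogram(x, fs, nperseg=128):
    """Minimal spectrogram: Hann-windowed short-time power spectra."""
    win = np.hanning(nperseg)
    hop = nperseg // 2                                 # 50% frame overlap
    frames = np.array([x[i:i + nperseg] * win
                       for i in range(0, len(x) - nperseg + 1, hop)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # frames x freq bins
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, power.T                              # freq bins x time frames

fs = 250.0                                   # assumed sampling rate, Hz
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 8.0 * t)
freqs, S = spectrogram(x, fs)
```

    Other TFDs compared in the study (e.g., the extended modified beta distribution) replace the window-induced kernel with more elaborate smoothing kernels in the ambiguity domain.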

  3. Comparison of modeled estimates of inhalation exposure to aerosols during use of consumer spray products.

    PubMed

    Park, Jihoon; Yoon, Chungsik; Lee, Kiyoung

    2018-05-30

    In the field of exposure science, various exposure assessment models have been developed to complement experimental measurements; however, few studies have been published on their validity. This study compares the estimated inhaled aerosol doses of several inhalation exposure models to experimental measurements of aerosols released from consumer spray products, and then compares deposited doses within different parts of the human respiratory tract according to deposition models. Exposure models, including the European Center for Ecotoxicology of Chemicals Targeted Risk Assessment (ECETOC TRA), the Consumer Exposure Model (CEM), SprayExpo, ConsExpo Web and ConsExpo Nano, were used to estimate the inhaled dose under various exposure scenarios, and modeled and experimental estimates were compared. The deposited dose in different respiratory regions was estimated using the International Commission on Radiological Protection model and multiple-path particle dosimetry models under the assumption of polydispersed particles. The modeled estimates of the inhaled doses were accurate in the short term, i.e., within 10 min of the initial spraying, with differences from experimental estimates ranging from 0% to 73% among the models. However, the estimates for long-term exposure, i.e., exposure times of several hours, deviated significantly from the experimental estimates in the absence of ventilation. The differences between the experimental and modeled estimates of particle number and surface area were constant over time under ventilated conditions. ConsExpo Nano, as a nano-scale model, showed stable estimates of short-term exposure, with a difference from the experimental estimates of less than 60% for all metrics. The deposited particle estimates were similar among the deposition models, particularly in the nanoparticle range for the head airway and alveolar regions. In conclusion, the results showed that the inhalation exposure models tested in this study are suitable for estimating short-term aerosol exposure (within half an hour), but not for estimating long-term exposure. Copyright © 2018 Elsevier GmbH. All rights reserved.

  4. Multiscale asymmetric orthogonal wavelet kernel for linear programming support vector learning and nonlinear dynamic systems identification.

    PubMed

    Lu, Zhao; Sun, Jing; Butts, Kenneth

    2014-05-01

    Support vector regression for approximating nonlinear dynamic systems is more delicate than the approximation of indicator functions in support vector classification, particularly for systems that involve multitudes of time scales in their sampled data. The kernel used for support vector learning determines the class of functions from which a support vector machine can draw its solution, and the choice of kernel significantly influences the performance of a support vector machine. In this paper, to bridge the gap between wavelet multiresolution analysis and kernel learning, the closed-form orthogonal wavelet is exploited to construct new multiscale asymmetric orthogonal wavelet kernels for linear programming support vector learning. The closed-form multiscale orthogonal wavelet kernel provides a systematic framework to implement multiscale kernel learning via dyadic dilations and also enables us to represent complex nonlinear dynamics effectively. To demonstrate the superiority of the proposed multiscale wavelet kernel in identifying complex nonlinear dynamic systems, two case studies are presented that aim at building parallel models on benchmark datasets. The development of parallel models that address the long-term/mid-term prediction issue is more intricate and challenging than the identification of series-parallel models where only one-step ahead prediction is required. Simulation results illustrate the effectiveness of the proposed multiscale kernel learning.

  5. New Fukui, dual and hyper-dual kernels as bond reactivity descriptors.

    PubMed

    Franco-Pérez, Marco; Polanco-Ramírez, Carlos-A; Ayers, Paul W; Gázquez, José L; Vela, Alberto

    2017-06-21

    We define three new linear response indices with promising applications for bond reactivity using the mathematical framework of τ-CRT (finite temperature chemical reactivity theory). The τ-Fukui kernel is defined as the ratio between the fluctuations of the average electron density at two different points in space and the fluctuations in the average electron number, and is designed to integrate to the finite-temperature definition of the electronic Fukui function. When this kernel is condensed, it can be interpreted as a site-reactivity descriptor of the boundary region between two atoms. The τ-dual kernel corresponds to the first-order response of the Fukui kernel and is designed to integrate to the finite-temperature definition of the dual descriptor; it indicates the ambiphilic reactivity of a specific bond and enriches the traditional dual descriptor by allowing one to distinguish between the electron-accepting and electron-donating processes. Finally, the τ-hyper-dual kernel is defined as the second-order derivative of the Fukui kernel and is proposed as a measure of the strength of ambiphilic bonding interactions. Although these quantities have not been proposed before, our results for the τ-Fukui kernel and the τ-dual kernel can be derived in the zero-temperature formulation of chemical reactivity theory with, among other things, the widely used parabolic interpolation model.

  6. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    PubMed

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

    Analysis of rare genetic variants has focused on region-based analysis wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT), which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test, which compares pair-wise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses some power compared with the best kernel for any particular scenario, but has much greater power than poor kernel choices.
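
    As a rough illustration of the kernel view described here, the sketch below computes a SKAT-style variance-component score statistic Q = r'Kr for two candidate kernels on simulated data. The data sizes, the simple MAF-based weighting, and the absence of covariate adjustment are all illustrative simplifications, not MK-SKAT's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n subjects, p rare variants (sizes are illustrative)
n, p = 100, 10
G = rng.binomial(2, 0.02, size=(n, p)).astype(float)  # rare-variant genotypes
y = rng.normal(size=n)                                # continuous trait
resid = y - y.mean()                                  # residuals under the null (no covariates)

def linear_kernel(G):
    """Unweighted linear kernel: similarity = genotype inner product."""
    return G @ G.T

def weighted_kernel(G, weights):
    """Weighted linear kernel; up-weighting rarer variants is one common choice."""
    Gw = G * weights
    return Gw @ Gw.T

def skat_score(K, resid):
    """SKAT-type variance-component score statistic Q = r' K r."""
    return float(resid @ K @ resid)

maf = G.mean(axis=0) / 2
Q_lin = skat_score(linear_kernel(G), resid)
Q_wtd = skat_score(weighted_kernel(G, 1.0 / np.sqrt(maf + 1e-6)), resid)
print(Q_lin >= 0 and Q_wtd >= 0)  # True: both are non-negative quadratic forms
```

    Choosing a kernel fixes Q; MK-SKAT's contribution is to evaluate many such Q statistics (different weights, different variant groupings) and combine them into one valid test rather than picking one in advance.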

  7. Examining Potential Boundary Bias Effects in Kernel Smoothing on Equating: An Introduction for the Adaptive and Epanechnikov Kernels.

    PubMed

    Cid, Jaime A; von Davier, Alina A

    2015-05-01

    Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
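
    As a sketch of the continuization step, the code below smooths a discrete score distribution with Gaussian and Epanechnikov kernels; the compact support of the Epanechnikov kernel is what limits its reach past the score boundaries. Note that KE's actual continuization also rescales scores to preserve the first two moments, which is omitted here; the bandwidth, grid, and score distribution are illustrative.

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def epanechnikov_kernel(u):
    # Compactly supported on [-1, 1]: zero weight beyond one bandwidth,
    # which is what limits boundary bias relative to the Gaussian.
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def continuize(scores, probs, grid, h, kernel):
    """Kernel-continuized density of a discrete score distribution:

        f_h(x) = sum_j p_j * K((x - x_j)/h) / h

    a generic kernel density mixture (KE's moment-preserving rescaling
    is omitted).
    """
    u = (grid[:, None] - scores[None, :]) / h
    return (kernel(u) * probs[None, :]).sum(axis=1) / h

# Toy discrete score distribution on 0..10
scores = np.arange(11.0)
probs = np.ones(11) / 11
grid = np.linspace(-2, 12, 281)
f_gauss = continuize(scores, probs, grid, h=0.8, kernel=gaussian_kernel)
f_epan = continuize(scores, probs, grid, h=0.8, kernel=epanechnikov_kernel)
dx = grid[1] - grid[0]
print(round(f_gauss.sum() * dx, 2), round(f_epan.sum() * dx, 2))  # both ≈ 1
```

    Evaluating both densities near the minimum and maximum score shows the practical difference: the Gaussian continuization leaks probability mass beyond the score range, while the Epanechnikov version stays within one bandwidth of it.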

  8. Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies

    PubMed Central

    Manitz, Juliane; Burger, Patricia; Amos, Christopher I.; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike

    2017-01-01

    The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility. PMID:28785300

  9. Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies.

    PubMed

    Friedrichs, Stefanie; Manitz, Juliane; Burger, Patricia; Amos, Christopher I; Risch, Angela; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike; Hofner, Benjamin

    2017-01-01

    The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility.
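
    A toy version of the boosting loop described above might look as follows: candidate pathways supply genetic-similarity kernels, each kernel yields a kernel-ridge base-learner, and each iteration greedily adds the base-learner that most reduces the residual sum of squares. The LKMT base-learners, logistic loss for case-control status, and stopping criteria of the real method are replaced here by a simple L2 sketch on made-up data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: 3 "pathways", each a block of SNPs; only pathway 0 is causal.
n = 120
pathways = [rng.binomial(2, 0.3, size=(n, 8)).astype(float) for _ in range(3)]
y = pathways[0] @ rng.normal(size=8) * 0.5 + rng.normal(size=n)

kernels = [G @ G.T for G in pathways]  # linear genetic-similarity kernels

def kernel_ridge_fit(K, r, lam=10.0):
    """Base-learner: kernel ridge fit of residuals r -> fitted values."""
    alpha = np.linalg.solve(K + lam * np.eye(len(r)), r)
    return K @ alpha

# Greedy L2 boosting: at each step pick the pathway kernel whose
# base-learner most reduces the residual sum of squares.
f = np.zeros(n)
selected = []
for step in range(10):
    r = y - f
    fits = [kernel_ridge_fit(K, r) for K in kernels]
    losses = [np.sum((r - 0.1 * g) ** 2) for g in fits]  # 0.1 = step length
    best = int(np.argmin(losses))
    selected.append(best)
    f += 0.1 * fits[best]

print(np.sum((y - f) ** 2) < np.sum(y ** 2))  # True: each step reduces the RSS
```

    The pathways recorded in `selected` play the role of the sparse model in the paper: pathways never selected drop out entirely, which is how boosting sidesteps per-pathway multiple testing.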

  10. The effects of slice thickness and radiation dose level variations on computer-aided diagnosis (CAD) nodule detection performance in pediatric chest CT scans

    NASA Astrophysics Data System (ADS)

    Emaminejad, Nastaran; Lo, Pechin; Ghahremani, Shahnaz; Kim, Grace H.; Brown, Matthew S.; McNitt-Gray, Michael F.

    2017-03-01

    For pediatric oncology patients, CT scans are performed to assess treatment response and disease progression. CAD may be used to detect lung nodules, which would reflect metastatic disease. The purpose of this study was to investigate the effects of reducing radiation dose and varying slice thickness on CAD performance in the detection of solid lung nodules in pediatric patients. The dataset consisted of CT scans of 58 pediatric chest cases, of which 7 cases had lung nodules detected by a radiologist, and a total of 28 nodules were marked. For each case, the original raw data (sinogram data) was collected and a noise addition model was used to simulate reduced-dose scans at 50%, 25% and 10% of the original dose. In addition, the original and reduced-dose raw data were reconstructed at slice thicknesses of 1.5 and 3 mm using a medium-sharp (B45) kernel; the result was eight datasets (4 dose levels x 2 thicknesses) for each case. An in-house CAD tool was applied to all reconstructed scans, and results were compared with the radiologist's markings. Patient-level mean sensitivities at 3 mm thickness were 24%, 26%, 25%, 27%, and at 1.5 mm thickness were 23%, 29%, 35%, 36% for the 10%, 25%, 50%, and 100% dose levels, respectively. Mean FP numbers were 1.5, 0.9, 0.8, 0.7 at 3 mm and 11.4, 3.5, 2.8, 2.8 at 1.5 mm thickness for the 10%, 25%, 50%, and 100% dose levels, respectively. CAD sensitivity did not change with dose level at 3 mm thickness, but did change with dose at 1.5 mm. False positives increased at low dose levels, where noise values were high.
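
    The noise addition model is not spelled out in the abstract; the sketch below shows one common simplification for simulating reduced-dose raw data, where the extra noise injected at dose fraction d has variance sigma_full^2 * (1/d - 1), since quantum noise variance scales roughly as 1/dose. The function name, the sigma value, and the use of Gaussian rather than properly Poisson-distributed noise are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_reduced_dose(sinogram, dose_fraction, sigma_full=1.0):
    """Add zero-mean Gaussian noise so total noise matches a lower dose.

    A scan at dose fraction d needs extra variance
    sigma_full^2 * (1/d - 1) on top of the noise already present at
    full dose. This is a simplified stand-in for the study's
    raw-data noise model.
    """
    extra_sigma = sigma_full * np.sqrt(1.0 / dose_fraction - 1.0)
    return sinogram + rng.normal(0.0, extra_sigma, size=sinogram.shape)

full = rng.normal(100.0, 1.0, size=(64, 64))       # toy full-dose sinogram
half = simulate_reduced_dose(full, 0.50)
tenth = simulate_reduced_dose(full, 0.10)
print(np.std(tenth - full) > np.std(half - full))  # True: lower dose, more added noise
```

    Working on the sinogram rather than the reconstructed image, as the study does, matters because the reconstruction kernel then shapes the injected noise exactly as it shapes real quantum noise.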

  11. Antioxidant and antimicrobial activities of bitter and sweet apricot (Prunus armeniaca L.) kernels.

    PubMed

    Yiğit, D; Yiğit, N; Mavi, A

    2009-04-01

    The present study describes the in vitro antimicrobial and antioxidant activity of methanol and water extracts of sweet and bitter apricot (Prunus armeniaca L.) kernels. The antioxidant properties of apricot kernels were evaluated by determining radical scavenging power, lipid peroxidation inhibition activity and total phenol content, measured with a DPPH test, the thiocyanate method and the Folin method, respectively. In contrast to extracts of the bitter kernels, both the water and methanol extracts of sweet kernels have antioxidant potential. The highest percent inhibition of lipid peroxidation (69%) and total phenolic content (7.9 +/- 0.2 microg/mL) were detected in the methanol extract of sweet kernels (Hasanbey) and in the water extract of the same cultivar, respectively. The antimicrobial activities of the above extracts were also tested against human pathogenic microorganisms using a disc-diffusion method, and the minimal inhibitory concentration (MIC) values of each active extract were determined. The most effective antibacterial activity was observed in the methanol and water extracts of bitter kernels and in the methanol extract of sweet kernels against the Gram-positive bacterium Staphylococcus aureus. Additionally, the methanol extracts of the bitter kernels were very potent against the Gram-negative bacterium Escherichia coli (0.312 mg/mL MIC value). Significant anti-Candida activity was also observed with the methanol extract of bitter apricot kernels against Candida albicans, with an inhibition zone 14 mm in diameter and a 0.625 mg/mL MIC value.

  12. The Influence of Reconstruction Kernel on Bone Mineral and Strength Estimates Using Quantitative Computed Tomography and Finite Element Analysis.

    PubMed

    Michalski, Andrew S; Edwards, W Brent; Boyd, Steven K

    2017-10-17

    Quantitative computed tomography has been posed as an alternative imaging modality to investigate osteoporosis. We examined the influence of computed tomography convolution back-projection reconstruction kernels on the analysis of bone quantity and estimated mechanical properties in the proximal femur. Eighteen computed tomography scans of the proximal femur were reconstructed using both a standard smoothing reconstruction kernel and a bone-sharpening reconstruction kernel. Following phantom-based density calibration, we calculated typical bone quantity outcomes of integral volumetric bone mineral density, bone volume, and bone mineral content. Additionally, we performed finite element analysis in a standard sideways fall on the hip loading configuration. Significant differences for all outcome measures, except integral bone volume, were observed between the 2 reconstruction kernels. Volumetric bone mineral density measured using images reconstructed by the standard kernel was significantly lower (6.7%, p < 0.001) when compared with images reconstructed using the bone-sharpening kernel. Furthermore, the whole-bone stiffness and the failure load measured in images reconstructed by the standard kernel were significantly lower (16.5%, p < 0.001, and 18.2%, p < 0.001, respectively) when compared with the image reconstructed by the bone-sharpening kernel. These data suggest that for future quantitative computed tomography studies, a standardized reconstruction kernel will maximize reproducibility, independent of the use of a quantitative calibration phantom. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.

  13. Detecting peanuts inoculated with toxigenic and atoxigenic Aspergillus flavus strains with fluorescence hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Xing, Fuguo; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Zhu, Fengle; Brown, Robert L.; Bhatnagar, Deepak; Liu, Yang

    2017-05-01

    Aflatoxin contamination in peanut products has been an important and long-standing problem around the world. Produced mainly by Aspergillus flavus and Aspergillus parasiticus, aflatoxins are among the most toxic and carcinogenic mycotoxins. This study investigated the application of fluorescence visible near-infrared (VNIR) hyperspectral images to assess the spectral difference between peanut kernels inoculated with toxigenic and atoxigenic inocula of A. flavus and healthy kernels. Peanut kernels were inoculated with NRRL3357, a toxigenic strain of A. flavus, and AF36, an atoxigenic strain of A. flavus, respectively. Fluorescence hyperspectral images under ultraviolet (UV) excitation were recorded on peanut kernels with and without skin. Contaminated kernels exhibited different fluorescence features compared with healthy kernels. For the kernels without skin, the inoculated kernels had fluorescence peaks shifted to longer wavelengths with lower intensity than healthy kernels. In addition, the fluorescence intensity of peanuts without skin was about 10 times higher than that of peanuts with skin. The fluorescence spectra of kernels with skin were significantly different from those of the control group (p<0.001). Furthermore, the fluorescence intensity of the toxigenic (AF3357) peanuts with skin was lower than that of the atoxigenic AF36 group. Discriminant analysis showed that the inoculation groups can be separated from the controls with 100% accuracy. However, the two inoculation groups (AF3357 vs AF36) can be separated with only ∼80% accuracy. This study demonstrated the potential of fluorescence hyperspectral imaging techniques for screening of peanut kernels contaminated with A. flavus, which could potentially lead to the production of rapid and non-destructive scanning-based detection technology for the peanut industry.

  14. Effect of different ripening stages on walnut kernel quality: antioxidant activities, lipid characterization and antibacterial properties.

    PubMed

    Amin, Furheen; Masoodi, F A; Baba, Waqas N; Khan, Asma Ashraf; Ganie, Bashir Ahmad

    2017-11-01

    Packing tissue between and around the kernel halves just turning brown (PTB) is a phenological indicator of kernel ripening at harvest in walnuts. The effect of three ripening stages (Pre-PTB, PTB and Post-PTB) on kernel quality characteristics, mineral composition, lipid characterization, sensory analysis, antioxidant activity and antibacterial activity was investigated in fresh kernels of the indigenous numbered walnut selection of the Kashmir valley, "SKAU-02". Proximate composition, physical properties and sensory analysis of walnut kernels showed better results for Pre-PTB and PTB, while higher mineral content was seen for kernels at the Post-PTB stage in comparison to the other stages of ripening. Kernels showed significantly higher levels of omega-3 PUFA (C18:3 n-3) and a low n-6/n-3 ratio when harvested at the Pre-PTB and PTB stages. The highest phenolic content and antioxidant activity were observed at the first stage of ripening, with a steady decrease at later stages. TBARS values increased as ripening advanced but did not show any significant difference in malonaldehyde formation during the early ripening stages, whereas a marked increase was seen in walnut kernels at the Post-PTB stage. Walnut extracts inhibited growth of Gram-positive bacteria (B. cereus, B. subtilis, and S. aureus) with respective MICs of 1, 1 and 5 mg/mL, and of Gram-negative bacteria (E. coli, P. and K. pneumoniae) with an MIC of 100 mg/mL. Inhibition zones obtained against all the bacterial strains from walnut kernel extracts increased with the stage of ripening. It is concluded that the Pre-PTB harvest stage, with higher antioxidant activities, a better fatty acid profile and consumer acceptability, could be the preferred harvesting stage for obtaining functionally superior walnut kernels.

  15. Salt stress reduces kernel number of corn by inhibiting plasma membrane H+-ATPase activity.

    PubMed

    Jung, Stephan; Hütsch, Birgit W; Schubert, Sven

    2017-04-01

    Salt stress affects yield formation of corn (Zea mays L.) at various physiological levels, resulting in an overall grain yield decrease. In this study we investigated how salt stress affects kernel development of two corn cultivars (cvs. Pioneer 3906 and Fabregas) at and shortly after pollination. In an earlier study, we found an accumulation of hexoses in the kernel tissue. Therefore, it was hypothesized that hexose uptake into the developing endosperm and embryo might be inhibited. Hexoses are transported into the developing endosperm by carriers localized in the plasma membrane (PM). The transport is driven by the pH gradient, which is built up by the PM H+-ATPase. It was investigated whether the PM H+-ATPase activity in developing corn kernels was inhibited by salt stress, which would cause a lower pH gradient, resulting in impaired hexose import and finally in kernel abortion. Corn grown under control and salt stress conditions was harvested 0 and 2 days after pollination (DAP). Under salt stress, sucrose and hexose concentrations in kernel tissue were higher at 0 and 2 DAP. Kernel PM H+-ATPase activity was not affected at 0 DAP, but it was reduced at 2 DAP. This is in agreement with the finding that kernel growth, and thus kernel setting, was not affected in the salt stress treatment at pollination, but was reduced 2 days later. It is concluded that inhibition of the PM H+-ATPase under salt stress impaired the energization of hexose transport into the cells, resulting in lower kernel growth and finally in kernel abortion. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  16. Three-Dimensional Sensitivity Kernels of Z/H Amplitude Ratios of Surface and Body Waves

    NASA Astrophysics Data System (ADS)

    Bao, X.; Shen, Y.

    2017-12-01

    The ellipticity of Rayleigh wave particle motion, or Z/H amplitude ratio, has received increasing attention in inversion for shallow Earth structures. Previous studies of the Z/H ratio assumed one-dimensional (1D) velocity structures beneath the receiver, ignoring the effects of three-dimensional (3D) heterogeneities on wave amplitudes. This simplification may introduce bias in the resulting models. Here we present 3D sensitivity kernels of the Z/H ratio to Vs, Vp, and density perturbations, based on finite-difference modeling of wave propagation in 3D structures and the scattering-integral method. Our full-wave approach overcomes two main issues in previous studies of Rayleigh wave ellipticity: (1) the finite-frequency effects of wave propagation in 3D Earth structures, and (2) isolation of the fundamental mode Rayleigh waves from Rayleigh wave overtones and converted Love waves. In contrast to the 1D depth sensitivity kernels in previous studies, our 3D sensitivity kernels exhibit patterns that vary with azimuths and distances to the receiver. The laterally-summed 3D sensitivity kernels and 1D depth sensitivity kernels, based on the same homogeneous reference model, are nearly identical with small differences that are attributable to the single period of the 1D kernels and a finite period range of the 3D kernels. We further verify the 3D sensitivity kernels by comparing the predictions from the kernels with the measurements from numerical simulations of wave propagation for models with various small-scale perturbations. We also calculate and verify the amplitude kernels for P waves. This study shows that both Rayleigh and body wave Z/H ratios provide vertical and lateral constraints on the structure near the receiver. With seismic arrays, the 3D kernels afford a powerful tool to use the Z/H ratios to obtain accurate and high-resolution Earth models.

  17. Considering causal genes in the genetic dissection of kernel traits in common wheat.

    PubMed

    Mohler, Volker; Albrecht, Theresa; Castell, Adelheid; Diethelm, Manuela; Schweizer, Günther; Hartl, Lorenz

    2016-11-01

    Genetic factors controlling thousand-kernel weight (TKW) were characterized for their association with other seed traits, including kernel width, kernel length, ratio of kernel width to kernel length (KW/KL), kernel area, and spike number per m² (SN). For this purpose, a genetic map was established utilizing a doubled haploid population derived from a cross between the German winter wheat cultivars Pamier and Format. Association studies in a diversity panel of elite cultivars supplemented the genetic analysis of kernel traits. In both populations, genomic signatures of 13 candidate genes for TKW and kernel size were analyzed. Major quantitative trait loci (QTL) for TKW were identified on chromosomes 1B, 2A, 2D, and 4D, and their locations coincided with major QTL for kernel size traits, supporting the common belief that TKW is a function of other kernel traits. The QTL on chromosome 2A was associated with the TKW candidate gene TaCwi-A1 and the QTL on chromosome 4D was associated with the dwarfing gene Rht-D1. A minor QTL for TKW on chromosome 6B coincided with TaGW2-6B. QTL for kernel dimensions that did not affect TKW were detected on eight chromosomes. A major QTL for KW/KL located at the distal tip of chromosome arm 5AS is reported for the first time. TaSus1-7A and TaSAP-A1, closely linked to each other on chromosome 7A, could be related to a minor QTL for KW/KL. Genetic analysis of SN confirmed its negative correlation with TKW in this cross. In the diversity panel, TaSus1-7A was associated with TKW. Compared to the Pamier/Format bi-parental population, where TaCwi-A1a was associated with higher TKW, the same allele reduced grain yield in the diversity panel, suggesting opposite effects of TaCwi-A1 on these two traits.

  18. Effect of Fungal Colonization of Wheat Grains with Fusarium spp. on Food Choice, Weight Gain and Mortality of Meal Beetle Larvae (Tenebrio molitor)

    PubMed Central

    Guo, Zhiqing; Döll, Katharina; Dastjerdi, Raana; Karlovsky, Petr; Dehne, Heinz-Wilhelm; Altincicek, Boran

    2014-01-01

    Species of Fusarium have significant agro-economical and human health-related impact by infecting diverse crop plants and synthesizing diverse mycotoxins. Here, we investigated interactions of grain-feeding Tenebrio molitor larvae with four grain-colonizing Fusarium species on wheat kernels. Since numerous metabolites produced by Fusarium spp. are toxic to insects, we tested the hypothesis that the insect senses and avoids Fusarium-colonized grains. We found that only kernels colonized with F. avenaceum or Beauveria bassiana (an insect-pathogenic fungal control) were avoided by the larvae as expected. Kernels colonized with F. proliferatum, F. poae or F. culmorum attracted T. molitor larvae significantly more than control kernels. The avoidance/preference correlated with larval feeding behaviors and weight gain. Interestingly, larvae that had consumed F. proliferatum- or F. poae-colonized kernels had survival rates similar to controls. Larvae fed on F. culmorum-, F. avenaceum- or B. bassiana-colonized kernels had elevated mortality rates. HPLC analyses confirmed the following mycotoxins produced by the fungal strains on the kernels: fumonisins, enniatins and beauvericin by F. proliferatum; enniatins and beauvericin by F. poae; enniatins by F. avenaceum; and deoxynivalenol and zearalenone by F. culmorum. Our results indicate that T. molitor larvae have the ability to sense potential survival threats of kernels colonized with F. avenaceum or B. bassiana, but not with F. culmorum. Volatiles, potentially along with gustatory cues, produced by these fungi may represent survival-threat signals for the larvae, resulting in their avoidance. Although F. proliferatum and F. poae produced fumonisins, enniatins and beauvericin during kernel colonization, the larvae were able to use those kernels as diet without exhibiting increased mortality, suggesting that T. molitor can tolerate or metabolize those toxins. Consumption of F. avenaceum-colonized kernels, however, increased larval mortality; these kernels had higher enniatin levels than F. proliferatum- or F. poae-colonized ones. PMID:24932485

  19. Coronary Stent Artifact Reduction with an Edge-Enhancing Reconstruction Kernel - A Prospective Cross-Sectional Study with 256-Slice CT.

    PubMed

    Tan, Stéphanie; Soulez, Gilles; Diez Martinez, Patricia; Larrivée, Sandra; Stevens, Louis-Mathieu; Goussard, Yves; Mansour, Samer; Chartrand-Lefebvre, Carl

    2016-01-01

    Metallic artifacts can result in an artificial thickening of the coronary stent wall, which can significantly impair computed tomography (CT) imaging in patients with coronary stents. The objective of this study is to assess in vivo visualization of the coronary stent wall and lumen with an edge-enhancing CT reconstruction kernel, as compared to a standard kernel. This is a prospective cross-sectional study involving the assessment of 71 coronary stents (24 patients), with blinded observers. After 256-slice CT angiography, image reconstruction was done with medium-smooth and edge-enhancing kernels. Stent wall thickness was measured with both orthogonal and circumference methods, averaging thickness from diameter and circumference measurements, respectively. Image quality was assessed quantitatively using objective parameters (noise, signal-to-noise (SNR) and contrast-to-noise (CNR) ratios), as well as visually using a 5-point Likert scale. Stent wall thickness was decreased with the edge-enhancing kernel in comparison to the standard kernel, with either the orthogonal (0.97 ± 0.02 versus 1.09 ± 0.03 mm, respectively; p<0.001) or the circumference method (1.13 ± 0.02 versus 1.21 ± 0.02 mm, respectively; p = 0.001). The edge-enhancing kernel generated less overestimation from nominal thickness compared to the standard kernel, with both the orthogonal (0.89 ± 0.19 versus 1.00 ± 0.26 mm, respectively; p<0.001) and the circumference (1.06 ± 0.26 versus 1.13 ± 0.31 mm, respectively; p = 0.005) methods. The edge-enhancing kernel was associated with lower SNR and CNR, as well as higher background noise (all p < 0.001), in comparison to the medium-smooth kernel. Stent visual scores were higher with the edge-enhancing kernel (p<0.001). In vivo 256-slice CT assessment of coronary stents shows that the edge-enhancing CT reconstruction kernel generates thinner stent walls, less overestimation from nominal thickness, and better image quality scores than the standard kernel.
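
    The two thickness definitions can be illustrated with a toy circular stent. The formulas below are the natural reading of "thickness from diameter and circumference measurements" and are an assumption for illustration, not the paper's stated equations; all measurement values are hypothetical.

```python
import math

# Hypothetical outer/inner stent wall measurements in mm (illustrative only)
outer_diameter, inner_diameter = 4.0, 2.0
outer_circumference = math.pi * outer_diameter
inner_circumference = math.pi * inner_diameter

# Orthogonal method: wall thickness from the diameter difference
t_orthogonal = (outer_diameter - inner_diameter) / 2.0

# Circumference method: back out the thickness from traced circumferences
t_circumference = (outer_circumference - inner_circumference) / (2.0 * math.pi)

print(t_orthogonal, t_circumference)  # 1.0 1.0 -- identical for a perfect circle
```

    For a perfectly circular cross-section the two estimates agree; they diverge when blooming artifacts distort the wall non-uniformly, which is presumably why the study reports both.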

  20. Temporal Effects on Internal Fluorescence Emissions Associated with Aflatoxin Contamination from Corn Kernel Cross-Sections Inoculated with Toxigenic and Atoxigenic Aspergillus flavus.

    PubMed

    Hruska, Zuzana; Yao, Haibo; Kincaid, Russell; Brown, Robert L; Bhatnagar, Deepak; Cleveland, Thomas E

    2017-01-01

    Non-invasive, easy-to-use and cost-effective technology offers a valuable alternative for rapid detection of carcinogenic fungal metabolites, namely aflatoxins, in commodities. One relatively recent development in this area is the use of spectral technology. Fluorescence hyperspectral imaging, in particular, offers a potential rapid and non-invasive method for detecting the presence of aflatoxins in maize infected with the toxigenic fungus Aspergillus flavus. Earlier studies have shown that whole maize kernels contaminated with aflatoxins exhibit different spectral signatures from uncontaminated kernels, based on the external fluorescence emission of the whole kernels. Here, the effect of time on the internal fluorescence spectral emissions from cross-sections of kernels infected with toxigenic and atoxigenic A. flavus was examined in order to elucidate the interaction between the fluorescence signals emitted by some aflatoxin-contaminated maize kernels and the fungal invasion resulting in the production of aflatoxins. First, the difference in internal fluorescence emissions between cross-sections of kernels incubated in toxigenic and atoxigenic inoculum was assessed. Kernels were inoculated with each strain for 5, 7, and 9 days before cross-sectioning and imaging. There were 270 kernels (540 halves) imaged, including controls. Second, in a different set of kernels (15 kernels/group; 135 total), the germ of each kernel was separated from the endosperm to determine the major areas of aflatoxin accumulation and progression over nine growth days. Kernels were inoculated with toxigenic and atoxigenic fungal strains for 5, 7, and 9 days before the endosperm and germ were separated, followed by fluorescence hyperspectral imaging and chemical aflatoxin determination. A marked difference in fluorescence intensity was shown between the toxigenic and atoxigenic strains on day nine post-inoculation, which may be a useful indicator of the location of aflatoxin contamination. This finding suggests that the fluorescence peak shift and intensity, as well as timing, may be essential in distinguishing toxigenic and atoxigenic fungi based on spectral features. Results also reveal a possible preferential difference in the internal colonization of maize kernels between the toxigenic and atoxigenic strains of A. flavus, suggesting a potential window for differentiating the strains based on fluorescence spectra at specific time points.

  1. Temporal Effects on Internal Fluorescence Emissions Associated with Aflatoxin Contamination from Corn Kernel Cross-Sections Inoculated with Toxigenic and Atoxigenic Aspergillus flavus

    PubMed Central

    Hruska, Zuzana; Yao, Haibo; Kincaid, Russell; Brown, Robert L.; Bhatnagar, Deepak; Cleveland, Thomas E.

    2017-01-01

    Non-invasive, easy to use and cost-effective technology offers a valuable alternative for rapid detection of carcinogenic fungal metabolites, namely aflatoxins, in commodities. One relatively recent development in this area is the use of spectral technology. Fluorescence hyperspectral imaging, in particular, offers a potential rapid and non-invasive method for detecting the presence of aflatoxins in maize infected with the toxigenic fungus Aspergillus flavus. Earlier studies have shown that whole maize kernels contaminated with aflatoxins exhibit different spectral signatures from uncontaminated kernels based on the external fluorescence emission of the whole kernels. Here, the effect of time on the internal fluorescence spectral emissions from cross-sections of kernels infected with toxigenic and atoxigenic A. flavus, were examined in order to elucidate the interaction between the fluorescence signals emitted by some aflatoxin contaminated maize kernels and the fungal invasion resulting in the production of aflatoxins. First, the difference in internal fluorescence emissions between cross-sections of kernels incubated in toxigenic and atoxigenic inoculum was assessed. Kernels were inoculated with each strain for 5, 7, and 9 days before cross-sectioning and imaging. There were 270 kernels (540 halves) imaged, including controls. Second, in a different set of kernels (15 kernels/group; 135 total), the germ of each kernel was separated from the endosperm to determine the major areas of aflatoxin accumulation and progression over nine growth days. Kernels were inoculated with toxigenic and atoxigenic fungal strains for 5, 7, and 9 days before the endosperm and germ were separated, followed by fluorescence hyperspectral imaging and chemical aflatoxin determination. 
A marked difference in fluorescence intensity was shown between the toxigenic and atoxigenic strains on day nine post-inoculation, which may be a useful indicator of the location of aflatoxin contamination. This finding suggests that both the fluorescence peak shift and intensity, as well as timing, may be essential in distinguishing toxigenic and atoxigenic fungi based on spectral features. Results also reveal a possible preferential difference in the internal colonization of maize kernels between the toxigenic and atoxigenic strains of A. flavus, suggesting a potential window for differentiating the strains based on fluorescence spectra at specific time points. PMID:28966606

  2. The energy dependence of the lateral dose response functions of detectors with various densities in photon-beam dosimetry.

    PubMed

    Looe, Hui Khee; Harder, Dietrich; Poppe, Björn

    2017-02-07

    The lateral dose response function is a general characteristic of the volume effect of a detector used for photon dosimetry in a water phantom. It serves as the convolution kernel transforming the true absorbed-dose-to-water profile, which would be produced within the undisturbed water phantom, into the detector-measured signal profile. The shape of the lateral dose response function characterizes (i) the volume averaging attributable to the detector's size and (ii) the disturbance of the secondary electron field associated with the deviation of the electron density of the detector material from the surrounding water. In previous work, the characteristic dependence of the shape of the lateral dose response function upon the electron density of the detector material was studied for 6 MV photons by Monte Carlo simulation of a wall-less voxel-sized detector (Looe et al 2015 Phys. Med. Biol. 60 6585-607). This study is here continued for ⁶⁰Co gamma rays and 15 MV photons in comparison with 6 MV photons. It is found (1) that throughout these photon spectra the shapes of the lateral dose response functions retain their characteristic dependence on the detector's electron density, and (2) that their energy-dependent changes are only moderate. This appears as a practical advantage because the lateral dose response function can then be treated as practically invariant across a clinical photon beam in spite of the known changes of the photon spectrum with increasing distance from the beam axis.
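    The convolution relation described in this abstract (measured signal profile = true dose profile convolved with the lateral dose response function) can be sketched numerically. All parameters below, such as the 10 mm field width and the 1.5 mm Gaussian kernel width, are illustrative assumptions, not values from the study.

    ```python
    import numpy as np

    # Position grid across the beam (mm); 0.1 mm resolution (illustrative).
    x = np.arange(-30.0, 30.0, 0.1)

    # Assumed true dose-to-water profile: a 10 mm wide field with ideal edges.
    true_dose = np.where(np.abs(x) <= 5.0, 1.0, 0.0)

    # Assumed lateral dose response function: a normalized Gaussian whose width
    # (sigma = 1.5 mm, purely illustrative) models the detector volume effect.
    sigma = 1.5
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()

    # The detector-measured signal profile is the convolution of the two.
    measured = np.convolve(true_dose, kernel, mode="same")

    # Volume averaging broadens the penumbra but leaves the central-axis signal
    # essentially unchanged for a field much wider than the kernel.
    print(f"central-axis signal: {measured[np.abs(x).argmin()]:.3f}")
    print(f"signal at the field edge: {measured[np.abs(x - 5.0).argmin()]:.3f}")
    ```

    The field-edge value falling to roughly half the central-axis signal is the familiar penumbra-broadening signature of the detector volume effect.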

  3. Energy deposition at the bone-tissue interface from nuclear fragments produced by high-energy nucleons

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Hajnal, Ferenc; Wilson, John W.

    1990-01-01

    The transport of nuclear fragmentation recoils produced by high-energy nucleons in the region of the bone-tissue interface is considered. Results for the differential flux and absorbed dose from recoils produced by 1 GeV protons are presented using a bidirectional transport model. The energy deposition in marrow cavities is seen to be enhanced by recoils produced in bone. Approximate analytic formulae for the absorbed dose near the interface region are also presented for a simplified range-energy model.

  4. High resolution digital autoradiographic and dosimetric analysis of heterogeneous radioactivity distribution in xenografted prostate tumors.

    PubMed

    Timmermand, Oskar V; Nilsson, Jenny; Strand, Sven-Erik; Elgqvist, Jörgen

    2016-12-01

    The first main aim of this study was to illustrate the absorbed dose rate distribution from ¹⁷⁷Lu in sections of xenografted prostate cancer (PCa) tumors using high resolution digital autoradiography (DAR) and compare it with hypothetical identical radioactivity distributions of ⁹⁰Y or 7 MeV alpha-particles. Three dosimetry models based on either dose point kernels or Monte Carlo simulations were used and evaluated. The second and overlapping aim was to perform DAR imaging and dosimetric analysis of the distribution of radioactivity, and hence the absorbed dose rate, in tumor sections at an early time point after injection during radioimmunotherapy using ¹⁷⁷Lu-h11B6, directed against the human kallikrein 2 antigen. Male immunodeficient BALB/c nude mice, aged 6-8 w, were inoculated by subcutaneous injection of ∼10⁷ LNCaP cells in a 200 μl suspension of a 1:1 mixture of medium and Matrigel. The antibody h11B6 was conjugated with the chelator CHX-A″-DTPA, after which conjugated h11B6 was mixed with ¹⁷⁷LuCl₃. The incubation was performed at room temperature for 2 h, after which the labeling was terminated and the solution was purified on a NAP-5 column. About 20 MBq ¹⁷⁷Lu-h11B6 was injected intravenously in the tail vein. At approximately 10 h postinjection (hpi), the mice were sacrificed and one tumor was collected from each of the five animals and cryosectioned into 10 μm thick slices. The tumor slices were measured and imaged using the DAR MicroImager system and the M3Vision software. Then the absorbed dose rate was calculated using a dose point kernel generated with the Monte Carlo code GATE v7.0. The DAR system produced high resolution images of the radioactivity distribution, close to the resolution of single PCa cells. 
The DAR images revealed a pronounced heterogeneous radioactivity distribution, i.e., count rate per area, in the tumors, indicated by the normalized intensity variations along cross sections as mean ± SD: 0.15 ± 0.15, 0.20 ± 0.18, 0.12 ± 0.17, 0.15 ± 0.16, and 0.23 ± 0.22, for each tumor section, respectively. The absorbed dose rate distribution for ¹⁷⁷Lu at the time of dissection 10 hpi showed a maximum value of 2.9 ± 0.4 Gy/h (mean ± SD), compared to 6.0 ± 0.9 and 159 ± 25 Gy/h for the hypothetical ⁹⁰Y and 7 MeV alpha-particle cases assuming the same count rate densities. Mean absorbed dose rate values were 0.13, 0.53, and 6.43 Gy/h for ¹⁷⁷Lu, ⁹⁰Y, and alpha-particles, respectively. The initial uptake of ¹⁷⁷Lu-h11B6 produces a high absorbed dose rate, which is important for a successful therapeutic outcome. The hypothetical ⁹⁰Y case indicates a less heterogeneous absorbed dose rate distribution and a higher mean absorbed dose rate compared to ¹⁷⁷Lu, although with a potentially increased irradiation of surrounding healthy tissue. The hypothetical alpha-particle case indicates the possibility of a higher maximum absorbed dose rate, although with a more heterogeneous absorbed dose rate distribution.
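The dose-point-kernel calculation described above, in which the absorbed dose rate map is obtained by convolving the measured activity distribution with a radially symmetric kernel, can be sketched as follows. The activity map and the 1/(1+r²) kernel shape are toy assumptions, not GATE-generated data.

```python
import numpy as np

# Toy 2D activity map (counts per pixel) with a heterogeneous "hot" region;
# values are illustrative, standing in for a DAR count-rate image.
activity = np.zeros((64, 64))
activity[20:30, 20:30] = 5.0   # hot subregion
activity[40:50, 10:40] = 1.0   # diffuse uptake

# Radially symmetric dose point kernel on an 11x11 stencil: dose per unit
# activity falls off with distance from the decay site (arbitrary units).
r = np.hypot(*np.meshgrid(np.arange(-5, 6), np.arange(-5, 6)))
kernel = 1.0 / (1.0 + r**2)
kernel /= kernel.sum()

# Absorbed dose rate map = activity map convolved with the point kernel.
pad = 5
padded = np.pad(activity, pad)
dose = np.zeros_like(activity)
for i in range(activity.shape[0]):
    for j in range(activity.shape[1]):
        dose[i, j] = np.sum(padded[i:i + 11, j:j + 11] * kernel)

print(f"max dose rate (a.u.): {dose.max():.2f}")
```

Because the kernel is normalized, the convolution smooths the heterogeneous uptake: hot spots are blurred outward and the maximum dose rate stays at or below the maximum activity level.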

  5. Endotoxin-induced intravascular coagulation in rabbits: effect of tissue plasminogen activator vs urokinase on PAI generation, fibrin deposits and mortality.

    PubMed

    Paloma, M J; Páramo, J A; Rocha, E

    1995-12-01

    We have evaluated the effect of plasminogen activators (t-PA and urokinase) on an experimental model of disseminated intravascular coagulation (DIC) in rabbits induced by injection of 20 micrograms/kg/h of E. coli lipopolysaccharide during 6 h. t-PA (0.2 mg/kg and 0.7 mg/kg), urokinase (3000 U/kg/h) and saline (control) were given simultaneously with endotoxin. Results indicated that urokinase and the low dose of t-PA significantly reduced the increase of plasminogen activator inhibitor (PAI) activity observed 2 h after endotoxin (p < 0.001). The high t-PA dose also diminished the PAI levels at 6 h (p < 0.001). A significant reduction of fibrin deposits in kidneys was observed in both t-PA treated groups as compared with findings in the group of rabbits infused with saline solution (p < 0.005), whereas urokinase had no significant effect on the extent of fibrin deposition. Finally, the mortality rate in the control group (70%) was reduced to 50% in rabbits receiving high doses of t-PA. In conclusion, treatment with t-PA resulted in reduced PAI generation, fibrin deposits and mortality in endotoxin-treated rabbits.

  6. The preclinical set-up at the ID17 biomedical beamline to achieve high local dose deposition using interlaced microbeams

    NASA Astrophysics Data System (ADS)

    Bräuer-Krisch, E.; Nemoz, C.; Brochard, Th; Berruyer, G.; Renier, M.; Pouyatos, B.; Serduc, R.

    2013-03-01

    Microbeam Radiation Therapy (MRT) uses a spatially fractionated "white beam" (energies 50-350 keV) irradiation from a synchrotron source. The typical microbeams used at ID17 are 25-100 μm thick, spaced by 200-400 μm, and carry extremely high dose rates (up to about 16 kGy/s). These microbeams are well tolerated by biological tissue, up to several hundred Gy in the peaks. When valley doses, caused by Compton scattering between two microbeams, remain within a dose regime similar to conventional RT, superior tumour control can be achieved with MRT compared with conventional RT. The normal tissue tolerance of these microscopically small beams is outstanding and well documented in the literature. The hypothesis of a differential effect, in particular on the vasculature of normal versus tumoral tissue, might best be proven by using large animal models with spontaneous tumors instead of small laboratory animals with transplantable tumors, an ongoing project at ID17. An alternative approach to deposit a high dose while preserving the feature of the spatial separation of these microbeams outside the target has opened up new applications in preclinical research. The instrumentation of this method to produce such interlaced beams is presented, with an outlook on the challenges of building a treatment platform for human patients. Dose measurements using Gafchromic films exposed in interlaced geometries, with their steep profiles, highlight the potential to deposit radiotoxic doses in the vicinity of radiosensitive tissues.
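    The peak-and-valley geometry described above can be illustrated with a toy microbeam lattice; the beam width, pitch, and peak/valley dose levels below are hypothetical, not measured ID17 values.

    ```python
    import numpy as np

    # Illustrative microbeam lattice: 50 um wide peaks on a 200 um pitch.
    # Peak and valley dose levels are assumptions for the sketch only.
    x = np.arange(0.0, 2000.0, 1.0)         # position across the field (um)
    peak_dose, valley_dose = 400.0, 8.0     # Gy; hypothetical values
    in_peak = (x % 200.0) < 50.0
    dose = np.where(in_peak, peak_dose, valley_dose)

    # Peak-to-valley dose ratio (PVDR), a key figure of merit in MRT:
    # high peaks for tumour control, low valleys for normal-tissue sparing.
    pvdr = dose.max() / dose.min()
    print(f"PVDR = {pvdr:.0f}")  # 400/8 = 50
    ```

    Interlacing shifts the lattices of several ports so the peaks from different directions fill in the valleys only inside the target, raising the dose there while keeping the spatial separation elsewhere.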

  7. Using the Intel Math Kernel Library on Peregrine | High-Performance

    Science.gov Websites

    Learn how to use the Intel Math Kernel Library (MKL) with Peregrine system software. Core math functions in MKL include BLAS, LAPACK, ScaLAPACK, sparse solvers, and fast Fourier transforms.

  8. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... the Act, are as follows: Common name Botanical name of plant source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed Cydonia oblonga Miller. [42 FR 14640, Mar...

  9. 7 CFR 51.2954 - Tolerances for grade defects.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... chart. Tolerances for Grade Defects Grade External (shell) defects Internal (kernel) defects Color of kernel U.S. No. 1. 10 pct, by count for splits. 5 pct. by count, for other shell defects, including not... tolerance to reduce the required 70 pct of “light amber” kernels or the required 40 pct of “light” kernels...

  10. 7 CFR 51.2284 - Size classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...: “Halves”, “Pieces and Halves”, “Pieces” or “Small Pieces”. The size of portions of kernels in the lot... consists of 85 percent or more, by weight, half kernels, and the remainder three-fourths half kernels. (See § 51.2285.) (b) Pieces and halves. Lot consists of 20 percent or more, by weight, half kernels, and the...

  11. Measurements of oleic acid among individual kernels harvested from test plots of purified runner and spanish high oleic seed

    USDA-ARS?s Scientific Manuscript database

    Normal oleic peanuts are often found within commercial lots of high oleic peanuts when sampling among individual kernels. Kernels not meeting high oleic threshold could be true contamination with normal oleic peanuts introduced via poor handling, or kernels not meeting threshold could be immature a...

  12. THERMOS. 30-Group ENDF/B Scattered Kernels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCrosson, F.J.; Finch, D.R.

    1973-12-01

    These data are 30-group THERMOS thermal scattering kernels for P0 to P5 Legendre orders for every temperature of every material from s(alpha,beta) data stored in the ENDF/B library. These scattering kernels were generated using the FLANGE2 computer code. To test the kernels, the integral properties of each set of kernels were determined by a precision integration of the diffusion length equation and compared to experimental measurements of these properties. In general, the agreement was very good. Details of the methods used and results obtained are contained in the reference. The scattering kernels are organized into a two-volume magnetic tape library from which they may be retrieved easily for use in any 30-group THERMOS library.
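    The kind of integral consistency check described, verifying that each group's kernel integrates to the expected scattering cross section, can be sketched on a toy 30-group transfer matrix; the random entries and the 4.0 barn target stand in for FLANGE2-processed s(alpha,beta) data and are not real nuclear data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_groups = 30

    # Toy P0 group-to-group transfer matrix sigma_s[g -> g'] (barns);
    # random values standing in for processed s(alpha,beta) kernels.
    transfer = rng.random((n_groups, n_groups))

    # Total scattering cross section per group implied by the kernel rows.
    sigma_s = transfer.sum(axis=1)

    # Renormalize so each row integrates to an assumed target cross section
    # of 4.0 barns -- an integral property one could then compare against
    # experimental measurements, as the abstract describes.
    target = 4.0
    transfer *= (target / sigma_s)[:, None]

    assert np.allclose(transfer.sum(axis=1), target)
    print("all 30 group rows integrate to the target cross section")
    ```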

  13. Novel multi-functional europium-doped gadolinium oxide nanoparticle aerosols facilitate the study of deposition in the developing rat lung

    NASA Astrophysics Data System (ADS)

    Das, Gautom K.; Anderson, Donald S.; Wallis, Chris D.; Carratt, Sarah A.; Kennedy, Ian M.; van Winkle, Laura S.

    2016-06-01

    Ambient ultrafine particulate matter (UPM), less than 100 nm in size, has been linked to the development and exacerbation of pulmonary diseases. Age differences in susceptibility to UPM may be due to a difference in delivered dose as well as age-dependent differences in lung biology and clearance. In this study, we developed and characterized aerosol exposures to novel metal oxide nanoparticles containing lanthanides to study particle deposition in the developing postnatal rat lung. Neonatal, juvenile and adult rats (1, 3 and 12 weeks old) were nose only exposed to 380 μg m-3 of ~30 nm europium doped gadolinium oxide nanoparticles (Gd2O3:Eu3+) for 1 h. The deposited dose in the nose, extrapulmonary airways and lungs was determined using inductively-coupled plasma mass spectroscopy. The dose of deposited particles was significantly greater in the juvenile rats at 2.22 ng per g body weight compared to 1.47 ng per g and 0.097 ng per g for the adult and neonate rats, respectively. Toxicity was investigated in bronchoalveolar lavage fluid (BALF) by quantifying recovered cell types, and measuring lactate dehydrogenase activity and total protein. The toxicity data suggests that the lanthanide particles were not acutely toxic or inflammatory with no increase in neutrophils or lactate dehydrogenase activity at any age. Juvenile and adult rats had the same mass of deposited NPs per gram of lung tissue, while neonatal rats had significantly less NPs deposited per gram of lung tissue. The current study demonstrates the utility of novel lanthanide-based nanoparticles to study inhaled particle deposition in vivo and has important implications for nanoparticles delivery to the developing lung either as therapies or as a portion of particulate matter air pollution. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr00897f

  14. Novel multi-functional europium-doped gadolinium oxide nanoparticle aerosols facilitate the study of deposition in the developing rat lung.

    PubMed

    Das, Gautom K; Anderson, Donald S; Wallis, Chris D; Carratt, Sarah A; Kennedy, Ian M; Van Winkle, Laura S

    2016-06-02

    Ambient ultrafine particulate matter (UPM), less than 100 nm in size, has been linked to the development and exacerbation of pulmonary diseases. Age differences in susceptibility to UPM may be due to a difference in delivered dose as well as age-dependent differences in lung biology and clearance. In this study, we developed and characterized aerosol exposures to novel metal oxide nanoparticles containing lanthanides to study particle deposition in the developing postnatal rat lung. Neonatal, juvenile and adult rats (1, 3 and 12 weeks old) were nose only exposed to 380 μg m(-3) of ∼30 nm europium doped gadolinium oxide nanoparticles (Gd2O3:Eu(3+)) for 1 h. The deposited dose in the nose, extrapulmonary airways and lungs was determined using inductively-coupled plasma mass spectroscopy. The dose of deposited particles was significantly greater in the juvenile rats at 2.22 ng per g body weight compared to 1.47 ng per g and 0.097 ng per g for the adult and neonate rats, respectively. Toxicity was investigated in bronchoalveolar lavage fluid (BALF) by quantifying recovered cell types, and measuring lactate dehydrogenase activity and total protein. The toxicity data suggests that the lanthanide particles were not acutely toxic or inflammatory with no increase in neutrophils or lactate dehydrogenase activity at any age. Juvenile and adult rats had the same mass of deposited NPs per gram of lung tissue, while neonatal rats had significantly less NPs deposited per gram of lung tissue. The current study demonstrates the utility of novel lanthanide-based nanoparticles to study inhaled particle deposition in vivo and has important implications for nanoparticles delivery to the developing lung either as therapies or as a portion of particulate matter air pollution.

  15. Treatment with proteolytic enzymes decreases glomerular immune complex deposits in passive serum sickness in rats and mice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emancipator, S.N.; Nakazawa, M.; Lamm, M.E.

    1986-03-05

    This study assessed the effect of protease treatment on glomerular immune complex (IC) deposition in passive serum sickness. IC containing 2.2 mg of specific rabbit antibovine gammaglobulin (Ab) and cationic bovine gammaglobulin (CBGG) at 5-fold antigen excess were given via tail vein to 140 g Sprague-Dawley rats; some rats received IC containing ¹²⁵I-Ab. After maximal glomerular IC deposition (1 h) a single intravenous dose of either 4 mg chymopapain plus 2 mg subtilisin (T), or saline (C) was given. By immunofluorescence (IF) 1 h later, 1/13 T rats had bright capillary wall deposits of CBGG vs 10/11 C rats (χ² = 13.4, p < .001); 6/13 T rats had Ab vs. 10/11 C rats (χ² = 4.05, p < .05). Isolated glomeruli from T rats given ¹²⁵I-IC had 25% less Ab (3267 +/- 293 cpm/mg glomerular protein) than C rats (4327 +/- 530, p < .005). 20 g BALB/c mice given IC with CBGG and 0.3 mg Ab, or IC with native BGG (nBGG) and 1 mg Ab via tail vein received 0.5 mg chymopapain and 0.25 mg subtilisin in 5 divided intraperitoneal doses q 10 min beginning 1 h later. 20 min after the last dose, 2/15 T mice given CBGG-IC had capillary wall Ab deposits by IF vs 13/16 C mice (χ² = 11.7, p < .001). 1/16 T mice given nBGG-IC had mesangial Ab deposits vs. 11/15 C mice (χ² = 10.8, p < .001). The authors conclude that protease treatment can remove glomerular IC deposits.
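    The χ² values quoted appear consistent with a Yates-corrected 2×2 contingency test; for example, the first comparison (bright CBGG deposits in 1/13 treated vs 10/11 control rats) reproduces the reported 13.4. A minimal sketch, assuming that table layout:

    ```python
    def yates_chi2(a, b, c, d):
        """Yates-corrected chi-square for a 2x2 table [[a, b], [c, d]]."""
        n = a + b + c + d
        diff = max(0.0, abs(a * d - b * c) - n / 2.0)  # continuity correction
        return n * diff ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

    # Bright CBGG capillary-wall deposits: 1 of 13 treated vs 10 of 11 control.
    chi2 = yates_chi2(1, 12, 10, 1)
    print(f"chi-square = {chi2:.1f}")  # ~13.4, matching the reported value
    ```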

  16. An electron-beam dose deposition experiment: TIGER 1-D simulation code versus thermoluminescent dosimetry

    NASA Astrophysics Data System (ADS)

    Murrill, Steven R.; Tipton, Charles W.; Self, Charles T.

    1991-03-01

    The dose absorbed in an integrated circuit (IC) die exposed to a pulse of low-energy electrons is a strong function of both electron energy and surrounding packaging materials. This report describes an experiment designed to measure how well the Integrated TIGER Series one-dimensional (1-D) electron transport simulation program predicts dose correction factors for a state-of-the-art IC package and package/printed circuit board (PCB) combination. These derived factors are compared with data obtained experimentally using thermoluminescent dosimeters (TLDs) and the FX-45 flash x-ray machine (operated in electron-beam (e-beam) mode). The results of this experiment show that the TIGER 1-D simulation code can be used to accurately predict FX-45 e-beam dose deposition correction factors for reasonably complex IC packaging configurations.

  17. Development and application of a complex numerical model and software for the computation of dose conversion factors for radon progenies.

    PubMed

    Farkas, Árpád; Balásházy, Imre

    2015-04-01

    A more exact determination of dose conversion factors associated with radon progeny inhalation was possible due to the advancements in epidemiological health risk estimates in the last years. The enhancement of computational power and the development of numerical techniques allow computing dose conversion factors with increasing reliability. The objective of this study was to develop an integrated model and software based on a self-developed airway deposition code, the authors' own bronchial dosimetry model and the computational methods accepted by the International Commission on Radiological Protection (ICRP) to calculate dose conversion coefficients for different exposure conditions. The model was tested by its application for exposure and breathing conditions characteristic of mines and homes. The dose conversion factors were 8 and 16 mSv WLM⁻¹ for homes and mines when applying a stochastic deposition model combined with the ICRP dosimetry model (named PM-A model), and 9 and 17 mSv WLM⁻¹ when applying the same deposition model combined with the authors' bronchial dosimetry model and the ICRP bronchiolar and alveolar-interstitial dosimetry model (called PM-B model). User friendly software for the computation of dose conversion factors has also been developed. The software allows one to compute conversion factors for a large range of exposure and breathing parameters and to perform sensitivity analyses. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  18. 7 CFR 810.2003 - Basis of determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Basis of determination. Each determination of heat-damaged kernels, damaged kernels, material other than... shrunken and broken kernels. Other determinations not specifically provided for under the general...

  19. Stochastic simulation of radium-223 dichloride therapy at the sub-cellular level

    NASA Astrophysics Data System (ADS)

    Gholami, Y.; Zhu, X.; Fulton, R.; Meikle, S.; El-Fakhri, G.; Kuncic, Z.

    2015-08-01

    Radium-223 dichloride (²²³Ra) is an alpha particle emitter and a natural bone-seeking radionuclide that is currently used for treating osteoblastic bone metastases associated with prostate cancer. The stochastic nature of alpha emission, hits and energy deposition poses some challenges for estimating radiation damage. In this paper we investigate the distribution of hits to cells by multiple alpha particles corresponding to a typical clinically delivered dose, using a Monte Carlo model to simulate the stochastic effects. The number of hits and dose deposition were recorded in the cytoplasm and nucleus of each cell. Alpha particle tracks were also visualized. We found that the stochastic variation in dose deposited in cell nuclei (≃ 40%) can be attributed in part to the variation in LET with pathlength. We also found that ≃ 18% of cell nuclei receive a dose more than one standard deviation below the average dose per cell (≃ 15.4 Gy). One possible implication of this is that the efficacy of cell kill in alpha particle therapy need not rely solely on ionization clustering on DNA but possibly also on indirect DNA damage through the production of free radicals and ensuing intracellular signaling.
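    The stochastic spread described, Poisson-distributed hits combined with variable dose per traversal, can be sketched with a toy Monte Carlo model. The hit rate and dose-per-hit parameters below are assumptions chosen to give roughly the reported mean (~15.4 Gy) and spread (~40%); this is not the paper's track-structure simulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_cells = 10_000

    # Illustrative assumptions (not the paper's model): ~20 alpha traversals
    # per nucleus on average, Poisson-distributed, and a normally distributed
    # dose per traversal (mean 0.77 Gy, sd 0.25 Gy) standing in for the
    # LET/pathlength variation. A single per-nucleus draw crudely models
    # correlated chord lengths through that nucleus.
    hits = rng.poisson(lam=20, size=n_cells)
    dose_per_hit = rng.normal(loc=0.77, scale=0.25, size=n_cells).clip(min=0.0)
    dose = hits * dose_per_hit

    mean, sd = dose.mean(), dose.std()
    frac_low = (dose < mean - sd).mean()
    print(f"mean nucleus dose ~{mean:.1f} Gy, coefficient of variation ~{sd / mean:.0%}")
    print(f"fraction of nuclei more than one sigma below the mean: {frac_low:.0%}")
    ```

    Even at a clinically relevant mean dose, a sizeable fraction of nuclei land well below the average, which is the dosimetric point the abstract is making.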

  20. Impact of azadirachtin, an insecticidal allelochemical from neem on soil microflora, enzyme and respiratory activities.

    PubMed

    Gopal, Murali; Gupta, Alka; Arunachalam, V; Magu, S P

    2007-11-01

    The effect of 10% azadirachtin granules (alcoholic extract of neem seed kernel mixed with China clay) was studied on the populations of bacteria, actinomycetes, fungi, Azotobacter and nitrifying bacteria, and on soil dehydrogenase, phosphatase and respiratory activities, on days 0, 15, 30, 60 and 90 after application in sandy loam soil collected from the fields. It was observed that, barring Azotobacter sp., azadirachtin at all doses exerted a suppressive effect on the rest of the microbial communities and enzyme activities in the initial 15-day period. The populations of bacteria and actinomycetes, besides phosphatase and respiratory activities, recovered after day 60 and subsequently increased significantly. The fungi and nitrifiers were the most sensitive groups, as their numbers were reduced significantly throughout the study. The two-times and five-times recommended doses of azadirachtin had very strong biocidal effects on the soil microorganisms and their activities. However, analysis of the data by the Shannon-Weaver index showed that azadirachtin reduces both the structural and functional microbial diversity at all doses.
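    The Shannon-Weaver index used in that analysis is H' = -Σ pᵢ ln pᵢ over the relative abundances pᵢ; a lower value indicates a community concentrated in fewer groups. A minimal sketch with hypothetical colony counts (not data from the study):

    ```python
    import math

    def shannon_index(counts):
        """Shannon-Weaver diversity index H' = -sum(p_i * ln p_i)."""
        total = sum(counts)
        return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

    # Hypothetical colony counts (CFU/g) for four microbial groups before and
    # after a suppressive treatment; the numbers are illustrative only.
    before = [500, 300, 150, 50]
    after = [800, 100, 20, 5]    # suppression concentrates the community
    print(f"H' before: {shannon_index(before):.2f}")
    print(f"H' after:  {shannon_index(after):.2f}")  # lower => reduced diversity
    ```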

  1. Caffeine Increases the Linearity of the Visual BOLD Response

    PubMed Central

    Liu, Thomas T.; Liau, Joy

    2009-01-01

    Although the blood oxygenation level dependent (BOLD) signal used in most functional magnetic resonance imaging (fMRI) studies has been shown to exhibit nonlinear characteristics, most analyses assume that the BOLD signal responds in a linear fashion to a stimulus. This assumption of linearity can lead to errors in the estimation of the BOLD response, especially for rapid event-related fMRI studies. In this study, we used a rapid event-related design and Volterra kernel analysis to assess the effect of a 200 mg oral dose of caffeine on the linearity of the visual BOLD response. The caffeine dose significantly (p < 0.02) increased the linearity of the BOLD response in a sample of 11 healthy volunteers studied on a 3 Tesla MRI system. In addition, the agreement between nonlinear and linear estimates of the hemodynamic response function was significantly increased (p = 0.013) with the caffeine dose. These findings indicate that differences in caffeine usage should be considered as a potential source of bias in the analysis of rapid event-related fMRI studies. PMID:19854278
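    The linearity question can be sketched by simulating a rapid event-related design, generating a mildly compressive (nonlinear) response, and measuring how much variance a purely linear convolution model explains. All waveform parameters below are illustrative; this is not the study's Volterra kernel analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 600   # time points; all design parameters here are illustrative

    # Rapid event-related stimulus train (~15% of time points carry an event).
    stim = (rng.random(n) < 0.15).astype(float)

    # A gamma-shaped stand-in for the hemodynamic response function (HRF).
    t = np.arange(0.0, 20.0, 1.0)
    hrf = t**5 * np.exp(-t)
    hrf /= hrf.max()

    # Linear prediction plus a compressive (saturating) term standing in for
    # BOLD nonlinearity at short inter-stimulus intervals.
    linear = np.convolve(stim, hrf)[:n]
    linear /= linear.max()
    bold = linear - 0.5 * linear**2

    # Variance explained by the best-fitting purely linear model (squared
    # correlation); the shortfall from 1.0 quantifies the nonlinearity.
    lc = linear - linear.mean()
    bc = bold - bold.mean()
    beta = (lc @ bc) / (lc @ lc)
    r2 = 1.0 - ((bc - beta * lc) ** 2).mean() / bc.var()
    print(f"variance explained by the linear model: {r2:.3f}")
    ```

    Weakening the compressive term moves r2 toward 1, which is the direction of change the study reports for the caffeinated condition.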

  2. Modeling Deposition of Inhaled Particles

    EPA Science Inventory

    The mathematical modeling of the deposition and distribution of inhaled aerosols within human lungs is an invaluable tool in predicting both the health risks associated with inhaled environmental aerosols and the therapeutic dose delivered by inhaled pharmacological drugs. Howeve...

  3. DNA sequence+shape kernel enables alignment-free modeling of transcription factor binding.

    PubMed

    Ma, Wenxiu; Yang, Lin; Rohs, Remo; Noble, William Stafford

    2017-10-01

    Transcription factors (TFs) bind to specific DNA sequence motifs. Several lines of evidence suggest that TF-DNA binding is mediated in part by properties of the local DNA shape: the width of the minor groove, the relative orientations of adjacent base pairs, etc. Several methods have been developed to jointly account for DNA sequence and shape properties in predicting TF binding affinity. However, a limitation of these methods is that they typically require a training set of aligned TF binding sites. We describe a sequence + shape kernel that leverages DNA sequence and shape information to better understand protein-DNA binding preference and affinity. This kernel extends an existing class of k-mer based sequence kernels, based on the recently described di-mismatch kernel. Using three in vitro benchmark datasets, derived from universal protein binding microarrays (uPBMs), genomic context PBMs (gcPBMs) and SELEX-seq data, we demonstrate that incorporating DNA shape information improves our ability to predict protein-DNA binding affinity. In particular, we observe that (i) the k-spectrum + shape model performs better than the classical k-spectrum kernel, particularly for small k values; (ii) the di-mismatch kernel performs better than the k-mer kernel, for larger k; and (iii) the di-mismatch + shape kernel performs better than the di-mismatch kernel for intermediate k values. The software is available at https://bitbucket.org/wenxiu/sequence-shape.git. rohs@usc.edu or william-noble@uw.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
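    The classical k-spectrum kernel that the di-mismatch and sequence+shape kernels extend is simply the inner product of k-mer count vectors. A minimal sketch with hypothetical sequences (not sequences from the benchmark datasets):

    ```python
    from collections import Counter

    def k_spectrum_kernel(s, t, k=3):
        """Classical k-spectrum kernel: inner product of k-mer count vectors."""
        cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
        ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
        return sum(count * ct[kmer] for kmer, count in cs.items())

    # Hypothetical binding-site sequences; shared k-mers raise the kernel value.
    print(k_spectrum_kernel("GGGATTACCA", "TTGATTACGT"))  # shares GAT, ATT, TTA, TAC -> 4
    print(k_spectrum_kernel("GGGATTACCA", "CCCCCCCCCC"))  # no shared 3-mers -> 0
    ```

    The di-mismatch variant scores k-mer pairs by their number of matching positions rather than exact identity, and the shape-augmented kernels add per-position structural features on top of these counts.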

  4. Environmental consequences of postulated plutonium releases from Westinghouse PFDL, Cheswick, Pennsylvania, as a result of severe natural phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McPherson, R.B.; Watson, E.C.

    1979-06-01

    Potential environmental consequences in terms of radiation dose to people are presented for postulated accidents due to earthquakes, tornadoes, high straight-line winds, and floods. Maximum plutonium deposition values are given for significant locations around the site. All important potential exposure pathways are examined. The most likely calculated 50-year collective committed dose equivalents are all much lower than the collective dose equivalent expected from 50 years of exposure to natural background radiation and medical x-rays, except for Earthquake No. 4 and the 260-mph tornado. The most likely maximum residual plutonium contamination estimated to be deposited offsite following Earthquake No. 4 and the 200-mph and 260-mph tornadoes is above the Environmental Protection Agency's (EPA) proposed guideline for plutonium in the general environment of 0.2 μCi/m². The deposition values following the other severe natural phenomena are below the EPA proposed guideline.

  5. Searching Remote Homology with Spectral Clustering with Symmetry in Neighborhood Cluster Kernels

    PubMed Central

    Maulik, Ujjwal; Sarkar, Anasua

    2013-01-01

    Remote homology detection among proteins utilizing only the unlabelled sequences is a central problem in comparative genomics. The existing cluster kernel methods based on neighborhoods and profiles and the Markov clustering algorithms are currently the most popular methods for protein family recognition. The deviation from random walks with inflation, or the dependency on a hard threshold in the similarity measure, in those methods requires an enhancement for homology detection among multi-domain proteins. We propose to combine spectral clustering with neighborhood kernels in Markov similarity for enhancing sensitivity in detecting homology independent of "recent" paralogs. The spectral clustering approach with new combined local alignment kernels more effectively exploits the unsupervised protein sequences globally, reducing inter-cluster walks. When combined with corrections based on a modified symmetry-based proximity norm that deemphasizes outliers, the technique proposed in this article outperforms other state-of-the-art cluster kernels among all twelve implemented kernels. The comparison with the state-of-the-art string and mismatch kernels also shows the superior performance scores provided by the proposed kernels. A similar performance improvement is also found on an existing large dataset. Therefore the proposed spectral clustering framework over combined local alignment kernels with modified symmetry-based correction achieves superior performance for unsupervised remote homolog detection, even in multi-domain and promiscuous-domain proteins from Genolevures database families, with better biological relevance. Source code available upon request. Contact: sarkar@labri.fr. PMID:23457439
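    The spectral step over a kernel matrix can be sketched as follows: build the normalized Laplacian of a similarity matrix and split sequences by the sign of the Fiedler vector. The block-structured toy matrix stands in for a combined local-alignment kernel; this is not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 20   # 10 sequences in each of two hypothetical protein families

    # Toy similarity (kernel) matrix: strong within-family blocks, weak
    # between-family entries -- a stand-in for a local-alignment kernel.
    K = rng.uniform(0.0, 0.2, (n, n))
    K[:10, :10] += 0.8
    K[10:, 10:] += 0.8
    K = (K + K.T) / 2                 # symmetrize

    # Normalized graph Laplacian of the kernel matrix.
    d = K.sum(axis=1)
    L = np.eye(n) - K / np.sqrt(np.outer(d, d))

    # Eigenvectors come back in ascending eigenvalue order; the second-smallest
    # (Fiedler) vector separates the two families by sign.
    vals, vecs = np.linalg.eigh(L)
    labels = (vecs[:, 1] > 0).astype(int)
    print(labels)
    ```

    With more than two families one would cluster the rows of several leading eigenvectors (e.g. with k-means) instead of thresholding a single one.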

  6. Fine-mapping of qGW4.05, a major QTL for kernel weight and size in maize.

    PubMed

    Chen, Lin; Li, Yong-xiang; Li, Chunhui; Wu, Xun; Qin, Weiwei; Li, Xin; Jiao, Fuchao; Zhang, Xiaojing; Zhang, Dengfeng; Shi, Yunsu; Song, Yanchun; Li, Yu; Wang, Tianyu

    2016-04-12

    Kernel weight and size are important components of grain yield in cereals. Although some information is available concerning the map positions of quantitative trait loci (QTL) for kernel weight and size in maize, little is known about the molecular mechanisms of these QTLs. qGW4.05 is a major QTL that is associated with kernel weight and size in maize. We combined linkage analysis and association mapping to fine-map and identify candidate gene(s) at qGW4.05. QTL qGW4.05 was fine-mapped to a 279.6-kb interval in a segregating population derived from a cross of Huangzaosi with LV28. By combining the results of regional association mapping and linkage analysis, we identified GRMZM2G039934 as a candidate gene responsible for qGW4.05. Candidate gene-based association mapping was conducted using a panel of 184 inbred lines with variable kernel weights and kernel sizes. Six polymorphic sites in the gene GRMZM2G039934 were significantly associated with kernel weight and kernel size. The results of linkage analysis and association mapping revealed that GRMZM2G039934 is the most likely candidate gene for qGW4.05. These results will improve our understanding of the genetic architecture and molecular mechanisms underlying kernel development in maize.

  7. Abiotic stress growth conditions induce different responses in kernel iron concentration across genotypically distinct maize inbred varieties

    PubMed Central

    Kandianis, Catherine B.; Michenfelder, Abigail S.; Simmons, Susan J.; Grusak, Michael A.; Stapleton, Ann E.

    2013-01-01

    The improvement of grain nutrient profiles for essential minerals and vitamins through breeding strategies is a target important for agricultural regions where nutrient poor crops like maize contribute a large proportion of the daily caloric intake. Kernel iron concentration in maize exhibits a broad range. However, the magnitude of genotype by environment (GxE) effects on this trait reduces the efficacy and predictability of selection programs, particularly when challenged with abiotic stress such as water and nitrogen limitations. Selection has also been limited by an inverse correlation between kernel iron concentration and the yield component of kernel size in target environments. Using 25 maize inbred lines for which extensive genome sequence data is publicly available, we evaluated the response of kernel iron density and kernel mass to water and nitrogen limitation in a managed field stress experiment using a factorial design. To further understand GxE interactions we used partition analysis to characterize response of kernel iron and weight to abiotic stressors among all genotypes, and observed two patterns: one characterized by higher kernel iron concentrations in control over stress conditions, and another with higher kernel iron concentration under drought and combined stress conditions. Breeding efforts for this nutritional trait could exploit these complementary responses through combinations of favorable allelic variation from these already well-characterized genetic stocks. PMID:24363659

  9. Single kernel ionomic profiles are highly heritable indicators of genetic and environmental influences on elemental accumulation in maize grain (Zea mays)

    USDA-ARS?s Scientific Manuscript database

    The ionome, or elemental profile, of a maize kernel represents at least two distinct ideas. First, the collection of elements within the kernel are food, feed and feedstocks for people, animals and industrial processes. Second, the ionome of the kernel represents a developmental end point that can s...

  10. Detection of aflatoxin B1 (AFB1) in individual maize kernels using short wave infrared (SWIR) hyperspectral imaging

    USDA-ARS?s Scientific Manuscript database

    Short wave infrared hyperspectral imaging (SWIR) (1000-2500 nm) was used to detect aflatoxin B1 (AFB1) in individual maize kernels. A total of 120 kernels of four varieties (or 30 kernels per variety) that had been artificially inoculated with a toxigenic strain of Aspergillus flavus and harvested f...

  11. New durum wheat with soft kernel texture: end-use quality analysis of the Hardness locus in Triticum turgidum ssp. durum

    USDA-ARS?s Scientific Manuscript database

    Wheat kernel texture dictates U.S. wheat market class. Durum wheat has limited demand and culinary end-uses compared to bread wheat because of its extremely hard kernel texture which precludes conventional milling. ‘Soft Svevo’, a new durum cultivar with soft kernel texture comparable to a soft whit...

  12. Chemical components of cold pressed kernel oils from different Torreya grandis cultivars.

    PubMed

    He, Zhiyong; Zhu, Haidong; Li, Wangling; Zeng, Maomao; Wu, Shengfang; Chen, Shangwei; Qin, Fang; Chen, Jie

    2016-10-15

    The chemical compositions of cold pressed kernel oils of seven Torreya grandis cultivars from China were analyzed in this study. The contents of the chemical components of T. grandis kernels and kernel oils varied to different extents with the cultivar. The T. grandis kernels contained relatively high oil and protein contents (45.80-53.16% and 10.34-14.29%, respectively). The kernel oils were rich in unsaturated fatty acids, including linoleic (39.39-47.77%), oleic (30.47-37.54%) and eicosatrienoic acid (6.78-8.37%). The kernel oils contained abundant bioactive substances such as tocopherols (0.64-1.77 mg/g) consisting of α-, β-, γ- and δ-isomers; sterols, including β-sitosterol (0.90-1.29 mg/g), campesterol (0.06-0.32 mg/g) and stigmasterol (0.04-0.18 mg/g); and polyphenols (9.22-22.16 μg GAE/g). The results revealed that T. grandis kernel oils possess potentially important nutritional and health benefits and could be used as edible oils or as functional ingredients in the food industry. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Effect of delayed pMDI actuation on the lung deposition of a fixed-dose combination aerosol drug.

    PubMed

    Farkas, Árpád; Horváth, Alpár; Kerekes, Attila; Nagy, Attila; Kugler, Szilvia; Tamási, Lilla; Tomisa, Gábor

    2018-06-07

    Lack of coordination between the beginning of the inhalation and device triggering is one of the most frequently reported errors in connection with the use of pMDI devices. Earlier results suggested a significant loss in lung deposition as a consequence of late actuation. However, most of our knowledge on the effect of poor synchronization is based on earlier work on CFC devices emitting large particles with high initial velocities. The aim of this study was to apply numerical techniques to analyse the effect of late device actuation on the lung dose of an HFA pMDI drug emitting a high fraction of extrafine particles, as used in current asthma and COPD therapy. A computational fluid and particle dynamics model was combined with a stochastic whole-lung model to quantify the amount of drug depositing in the extrathoracic airways and in the lungs. High-speed camera measurements were also performed to characterize the emitted spray plume. Our results show that, for the studied pMDI drug, late actuation leads to only a modest loss in lung dose unless it happens in the second half of the inhalation period. Device actuation at the middle of the inhalation caused less than 25% lung dose reduction relative to the value characterizing perfect coordination, provided the inhalation time was between 2 and 5 s and the inhalation flow rate between 30 and 150 L/min. This dose loss is lower than the previously known values for CFC devices and further supports the practice of triggering the device shortly after the beginning of the inhalation, rather than forcing perfect synchronization and risking mishandling and poor drug deposition. Copyright © 2018. Published by Elsevier B.V.

  14. Gaussian processes with optimal kernel construction for neuro-degenerative clinical onset prediction

    NASA Astrophysics Data System (ADS)

    Canas, Liane S.; Yvernault, Benjamin; Cash, David M.; Molteni, Erika; Veale, Tom; Benzinger, Tammie; Ourselin, Sébastien; Mead, Simon; Modat, Marc

    2018-02-01

    Gaussian Processes (GPs) are a powerful tool to capture the complex time-variations of a dataset. In the context of medical imaging analysis, they allow robust modelling even in the case of highly uncertain or incomplete datasets. Predictions from a GP depend on the covariance kernel function selected to explain the data variance. To overcome this limitation, we propose a framework to identify the optimal covariance kernel function to model the data. The optimal kernel is defined as a composition of base kernel functions used to identify correlation patterns between data points. Our approach includes a modified version of the Compositional Kernel Learning (CKL) algorithm, in which we score the kernel families using a new energy function that depends on both the Bayesian Information Criterion (BIC) and the explained variance score. We applied the proposed framework to model the progression of neurodegenerative diseases over time, in particular the progression of autosomal dominantly inherited Alzheimer's disease, and used it to predict the time to clinical onset of subjects carrying the genetic mutation.
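
    The core idea of scoring candidate kernel compositions can be sketched as follows. This is a simplified illustration, not the authors' CKL variant: it scores a few hand-built kernel matrices on toy 1-D data by a plain BIC over the GP log marginal likelihood, without the explained-variance term or the compositional search.

```python
import numpy as np

def rbf(x, xp, ell):             # squared-exponential base kernel
    return np.exp(-0.5 * (x[:, None] - xp[None, :])**2 / ell**2)

def linear(x, xp, c):            # linear base kernel
    return (x[:, None] - c) * (xp[None, :] - c)

def log_marginal_likelihood(K, y, noise=1e-2):
    # Standard GP evidence via a Cholesky factorization of K + noise*I.
    n = len(y)
    L = np.linalg.cholesky(K + noise * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * n * np.log(2.0 * np.pi))

def bic(K, y, n_params):
    return n_params * np.log(len(y)) - 2.0 * log_marginal_likelihood(K, y)

x = np.linspace(0.0, 4.0, 40)
y = np.sin(2.0 * x)              # clearly nonlinear target

candidates = {                   # (kernel matrix, number of hyperparameters)
    "linear": (linear(x, x, 0.0), 1),
    "rbf": (rbf(x, x, 0.5), 1),
    "rbf+linear": (rbf(x, x, 0.5) + linear(x, x, 0.0), 2),
}
scores = {name: bic(K, y, p) for name, (K, p) in candidates.items()}
best = min(scores, key=scores.get)
```

    On this data the purely linear kernel fits poorly and is penalized accordingly, while compositions containing the RBF term score well.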

  15. Searching for efficient Markov chain Monte Carlo proposal kernels

    PubMed Central

    Yang, Ziheng; Rodríguez, Carlos E.

    2013-01-01

    Markov chain Monte Carlo (MCMC) or the Metropolis–Hastings algorithm is a simulation algorithm that has made modern Bayesian statistical inference possible. Nevertheless, the efficiency of different Metropolis–Hastings proposal kernels has rarely been studied except for the Gaussian proposal. Here we propose a unique class of Bactrian kernels, which avoid proposing values that are very close to the current value, and compare their efficiency with a number of proposals for simulating different target distributions, with efficiency measured by the asymptotic variance of a parameter estimate. The uniform kernel is found to be more efficient than the Gaussian kernel, whereas the Bactrian kernel is even better. When optimal scales are used for both, the Bactrian kernel is at least 50% more efficient than the Gaussian. Implementation in a Bayesian program for molecular clock dating confirms the general applicability of our results to generic MCMC algorithms. Our results refute a previous claim that all proposals had nearly identical performance and will prompt further research into efficient MCMC proposals. PMID:24218600
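
    A Bactrian proposal can be sketched in a few lines: the increment is drawn from a two-mode mixture of normals centred at ±mσ (total variance σ²), so values very close to the current state are rarely proposed. The sketch below uses a standard-normal target and illustrative settings (σ = 1, m = 0.95); it is a toy Metropolis sampler, not the paper's molecular-clock implementation.

```python
import math
import random

def bactrian_step(x, sigma=1.0, m=0.95):
    # Bimodal "Bactrian" increment: mixture of N(+m*sigma, s2) and
    # N(-m*sigma, s2) with s2 chosen so the total variance is sigma^2.
    mu = m * sigma if random.random() < 0.5 else -m * sigma
    return x + random.gauss(mu, sigma * math.sqrt(1.0 - m * m))

def metropolis(log_target, x0, steps, proposal):
    # Plain Metropolis sampler; the proposal is symmetric, so the
    # Hastings ratio reduces to the target density ratio.
    x, lp = x0, log_target(x0)
    chain = []
    for _ in range(steps):
        xp = proposal(x)
        lpp = log_target(xp)
        if lpp - lp > math.log(random.random()):
            x, lp = xp, lpp
        chain.append(x)
    return chain

random.seed(1)
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 20000, bactrian_step)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

    For the N(0, 1) target the chain's sample mean and variance should settle near 0 and 1.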

  16. An iterative kernel based method for fourth order nonlinear equation with nonlinear boundary condition

    NASA Astrophysics Data System (ADS)

    Azarnavid, Babak; Parand, Kourosh; Abbasbandy, Saeid

    2018-06-01

    This article discusses an iterative reproducing kernel method with respect to its effectiveness and capability of solving a fourth-order boundary value problem with nonlinear boundary conditions modeling beams on elastic foundations. Since there is no method of obtaining a reproducing kernel which satisfies nonlinear boundary conditions, the standard reproducing kernel methods cannot be used directly to solve boundary value problems with nonlinear boundary conditions, as there is no knowledge about the existence and uniqueness of the solution. The aim of this paper is, therefore, to construct an iterative method combining the reproducing kernel Hilbert space method with a shooting-like technique to solve the mentioned problems. Error estimation for reproducing kernel Hilbert space methods for nonlinear boundary value problems has yet to be discussed in the literature. In this paper, we present error estimation for the reproducing kernel method applied to nonlinear boundary value problems, probably for the first time. Some numerical results are given to demonstrate the applicability of the method.
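
    The shooting-like ingredient can be illustrated on its own. The sketch below is not the authors' reproducing kernel scheme: it solves a toy clamped beam on an elastic foundation, u'''' = q − k·u with u(0) = u'(0) = 0, closed by an invented nonlinear right-end condition u''(1) = 0, u'''(1) = u(1)³, by guessing the unknown left-end curvature and shear and root-finding on the boundary mismatch.

```python
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

K_FOUND, LOAD = 1.0, 1.0     # foundation stiffness and distributed load

def rhs(x, y):
    # State y = (u, u', u'', u'''); beam on an elastic foundation:
    # u'''' = LOAD - K_FOUND * u
    return [y[1], y[2], y[3], LOAD - K_FOUND * y[0]]

def shoot(s):
    # Integrate from the clamped end with guessed curvature/shear s,
    # then return the mismatch in the (nonlinear) right-end conditions
    # u''(1) = 0 and u'''(1) = u(1)**3.
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0, s[0], s[1]],
                    rtol=1e-10, atol=1e-10)
    u, up, upp, uppp = sol.y[:, -1]
    return [upp, uppp - u**3]

s_star = fsolve(shoot, [0.0, 0.0])   # unknown u''(0), u'''(0)
residual = shoot(s_star)
```

    The root-finder drives both boundary residuals to zero, which is exactly the role the shooting-like step plays alongside the reproducing kernel solver in the paper.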

  17. Achillea millefolium L. extract mediated green synthesis of waste peach kernel shell supported silver nanoparticles: Application of the nanoparticles for catalytic reduction of a variety of dyes in water.

    PubMed

    Khodadadi, Bahar; Bordbar, Maryam; Nasrollahzadeh, Mahmoud

    2017-05-01

    In this paper, silver nanoparticles (Ag NPs) are synthesized using Achillea millefolium L. extract as a reducing and stabilizing agent and peach kernel shell as an environmentally benign support. FT-IR spectroscopy, UV-Vis spectroscopy, X-ray Diffraction (XRD), Field Emission Scanning Electron Microscopy (FESEM), Energy Dispersive X-ray Spectroscopy (EDS), Thermogravimetric-Differential Thermal Analysis (TG-DTA) and Transmission Electron Microscopy (TEM) were used to characterize peach kernel shell, Ag NPs, and Ag NPs/peach kernel shell. The catalytic activity of the Ag NPs/peach kernel shell was investigated for the reduction of 4-nitrophenol (4-NP), Methyl Orange (MO), and Methylene Blue (MB) at room temperature. Ag NPs/peach kernel shell was found to be a highly active catalyst. In addition, Ag NPs/peach kernel shell can be recovered and reused several times with no significant loss of its catalytic activity. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Integrating semantic information into multiple kernels for protein-protein interaction extraction from biomedical literatures.

    PubMed

    Li, Lishuang; Zhang, Panpan; Zheng, Tianfu; Zhang, Hongying; Jiang, Zhenchao; Huang, Degen

    2014-01-01

    Protein-Protein Interaction (PPI) extraction is an important task in biomedical information extraction. Presently, many machine learning methods for PPI extraction have achieved promising results. However, the performance is still not satisfactory. One reason is that semantic resources have largely been ignored. In this paper, we propose a multiple-kernel learning-based approach to extract PPIs, combining a feature-based kernel, a tree kernel and a semantic kernel. In particular, we extend the shortest path-enclosed tree kernel (SPT) by a dynamic extension strategy to retrieve richer syntactic information. Our semantic kernel calculates the protein-protein pair similarity and the context similarity based on two semantic resources: WordNet and Medical Subject Headings (MeSH). We evaluate our method with a Support Vector Machine (SVM) and achieve an F-score of 69.40% and an AUC of 92.00%, which shows that our method outperforms most of the state-of-the-art systems by integrating semantic information.
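
    The basic mechanics of combining multiple kernels are simple: valid kernels stay valid under non-negative weighted sums, so a composite Gram matrix can feed any kernelized classifier. The sketch below is illustrative only; fixed weights and toy feature vectors stand in for the paper's feature, tree and semantic kernels, and a dual (kernel) perceptron stands in for the SVM.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy feature vectors standing in for representations of candidate
# protein pairs; the label depends on the first two features.
X = rng.normal(size=(60, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def linear_k(A, B):
    return A @ B.T

def rbf_k(A, B, gamma=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Fixed-weight combination of base kernels; MKL methods learn these
# weights, here they are illustrative constants.
w = (0.5, 0.5)
K = w[0] * linear_k(X, X) + w[1] * rbf_k(X, X)

def kernel_perceptron(K, y, epochs=50):
    # Dual perceptron: f(i) = sum_j alpha_j * ys_j * K[j, i];
    # bump alpha_i whenever example i is misclassified.
    ys = 2 * y - 1
    alpha = np.zeros(len(y))
    for _ in range(epochs):
        for i in range(len(y)):
            if ys[i] * ((alpha * ys) @ K[:, i]) <= 0:
                alpha[i] += 1.0
    return alpha

alpha = kernel_perceptron(K, y)
pred = ((alpha * (2 * y - 1)) @ K > 0).astype(int)
acc = float((pred == y).mean())
```

    Because the combined matrix is just another positive semidefinite kernel, the learner never needs the individual feature spaces explicitly.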

  19. Vis-NIR spectrometric determination of Brix and sucrose in sugar production samples using kernel partial least squares with interval selection based on the successive projections algorithm.

    PubMed

    de Almeida, Valber Elias; de Araújo Gomes, Adriano; de Sousa Fernandes, David Douglas; Goicoechea, Héctor Casimiro; Galvão, Roberto Kawakami Harrop; Araújo, Mario Cesar Ugulino

    2018-05-01

    This paper proposes a new variable selection method for nonlinear multivariate calibration, combining the Successive Projections Algorithm for interval selection (iSPA) with the Kernel Partial Least Squares (Kernel-PLS) modelling technique. The proposed iSPA-Kernel-PLS algorithm is employed in a case study involving a Vis-NIR spectrometric dataset with complex nonlinear features. The analytical problem consists of determining Brix and sucrose content in samples from a sugar production system, on the basis of transflectance spectra. As compared to full-spectrum Kernel-PLS, the iSPA-Kernel-PLS models involve a smaller number of variables and display statistically significant superiority in terms of accuracy and/or bias in the predictions. Published by Elsevier B.V.
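
    One common formulation of Kernel-PLS applies ordinary PLS1 to a (double-centred) Gram matrix, which is enough to capture nonlinear input-output relations. The sketch below illustrates that formulation on invented 1-D data with an RBF kernel; it is not the iSPA-Kernel-PLS algorithm and performs no interval selection.

```python
import numpy as np

def rbf_gram(X, gamma=0.5):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def pls1(X, y, n_comp=2):
    # NIPALS-style PLS1: extract score directions that maximise
    # covariance with y, deflating X and y after each component.
    Xr, yr = X.copy(), y.astype(float).copy()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)
        t = Xr @ w
        p = Xr.T @ t / (t @ t)
        c = (yr @ t) / (t @ t)
        Xr -= np.outer(t, p)
        yr -= c * t
        W.append(w); P.append(p); q.append(c)
    W, P = np.array(W).T, np.array(P).T
    return W @ np.linalg.inv(P.T @ W) @ np.array(q)   # regression coefs

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, (80, 1))
y = np.sin(1.5 * x[:, 0])                 # nonlinear relation

K = rbf_gram(x)
Kc = K - K.mean(0) - K.mean(1)[:, None] + K.mean()    # double centring
B = pls1(Kc, y - y.mean(), n_comp=5)
pred = Kc @ B + y.mean()
r2 = 1.0 - ((pred - y) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

    A handful of latent components over the kernel features is enough here to fit the sine relation that a linear-in-x model would miss.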

  20. A Robustness Testing Campaign for IMA-SP Partitioning Kernels

    NASA Astrophysics Data System (ADS)

    Grixti, Stephen; Lopez Trecastro, Jorge; Sammut, Nicholas; Zammit-Mangion, David

    2015-09-01

    With time and space partitioned architectures becoming increasingly appealing to the European space sector, the dependability of partitioning kernel technology is a key factor to its applicability in European Space Agency projects. This paper explores the potential of the data type fault model, which injects faults through the Application Program Interface, in partitioning kernel robustness testing. This fault injection methodology has been tailored to investigate its relevance in uncovering vulnerabilities within partitioning kernels and potentially contributing towards fault removal campaigns within this domain. This is demonstrated through a robustness testing case study of the XtratuM partitioning kernel for SPARC LEON3 processors. The robustness campaign exposed a number of vulnerabilities in XtratuM, exhibiting the potential benefits of using such a methodology for the robustness assessment of partitioning kernels.
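
    The data type fault model amounts to probing an interface with boundary and type-invalid values and recording how each call fails. The harness below is a toy illustration of that idea; `api_under_test` is a hypothetical stand-in, not a XtratuM call, and the fault value list is illustrative.

```python
import math

def api_under_test(x):
    # Hypothetical stand-in for a partitioning-kernel system call that
    # expects a non-negative integer identifier.
    if not isinstance(x, int):
        raise TypeError("expected int")
    if x < 0:
        raise ValueError("negative id")
    return x % 16

# Data-type fault model: boundary and invalid values for the
# parameter's declared type (illustrative selection).
FAULT_VALUES = [0, 1, -1, 2**31 - 1, -2**31, 2**63, None, 1.5, "x", math.nan]

def robustness_campaign(fn, values):
    # Call the interface with each fault value and log either "ok" or
    # the exception class it raised.
    results = {}
    for v in values:
        try:
            fn(v)
            results[repr(v)] = "ok"
        except Exception as e:
            results[repr(v)] = type(e).__name__
    return results

report = robustness_campaign(api_under_test, FAULT_VALUES)
```

    A real campaign would look for calls that crash or corrupt state rather than fail cleanly; any value whose outcome is neither "ok" nor a documented error is a candidate robustness vulnerability.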
