Sample records for kernel dose calculation

  1. Investigation of various energy deposition kernel refinements for the convolution/superposition method

    PubMed Central

    Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.

    2013-01-01

    Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). For our spatially variant polyenergetic kernels, we found that depth was the most dominant factor affecting the pattern of energy deposition; however, the effects of field size and off-axis distance were not negligible. For the material-specific kernels, we found that as the density of the material increased, more energy was deposited laterally by charged particles, as opposed to in the forward direction. Thus, density scaling of water kernels becomes a worse approximation as the density and the effective atomic number of the material differ more from water. Implementation of spatially variant, polyenergetic kernels increased the percent depth dose value at 25 cm depth by 2.1%–5.8% depending on the field size, while implementation of titanium kernels gave 4.9% higher dose upstream of the metal cavity (i.e., higher backscatter dose) and 8.2% lower dose downstream of the cavity. Conclusions: Of the various kernel refinements investigated, inclusion of depth-dependent and metal-specific kernels into the C/S method has the greatest potential to improve dose calculation accuracy. Implementation of spatially variant polyenergetic kernels resulted in a harder depth dose curve and thus has the potential to affect beam modeling parameters obtained in the commissioning process. For metal implants, the C/S algorithms generally underestimate the dose upstream and overestimate the dose downstream of the implant. Implementation of a metal-specific kernel mitigated both of these errors. PMID:24320507
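
    As a rough illustration of the convolution step that underlies the C/S method (not the authors' collapsed cone implementation), the sketch below convolves a TERMA grid with a single polyenergetic EDK in a homogeneous water phantom. The grid size, voxel spacing, and kernel shape are hypothetical placeholders rather than Monte Carlo data.

        import numpy as np
        from scipy.signal import fftconvolve

        # Hypothetical 3D TERMA grid (total energy released per unit mass) in a
        # homogeneous water phantom, 2 mm voxels. In a real C/S engine TERMA is
        # obtained by ray tracing the primary fluence through the CT volume.
        terma = np.zeros((64, 64, 64))
        terma[28:36, 28:36, :] = 1.0  # crude 16 x 16 mm "beam" along the z axis

        # Hypothetical polyenergetic energy deposition kernel (EDK), normalized to
        # unit integral. A simple forward-peaked exponential stand-in, not a
        # Monte Carlo kernel such as those generated with EGSnrc/EDKnrc.
        zk, yk, xk = np.mgrid[-15:16, -15:16, -15:16].astype(float)
        r = np.sqrt(xk**2 + yk**2 + (zk - 3.0)**2) + 0.5
        kernel = np.exp(-r / 4.0) / r**2
        kernel /= kernel.sum()

        # In a homogeneous medium convolution/superposition reduces to a single
        # convolution: dose = TERMA (*) EDK. Heterogeneous media additionally
        # require density scaling of the kernel, the approximation the paper
        # replaces with material-specific kernels.
        dose = fftconvolve(terma, kernel, mode="same")
        print("max dose (arbitrary units):", dose.max())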

  2. Evaluation of the influence of double and triple Gaussian proton kernel models on accuracy of dose calculations for spot scanning technique.

    PubMed

    Hirayama, Shusuke; Takayanagi, Taisuke; Fujii, Yusuke; Fujimoto, Rintaro; Fujitaka, Shinichiro; Umezawa, Masumi; Nagamine, Yoshihiko; Hosaka, Masahiro; Yasui, Keisuke; Omachi, Chihiro; Toshito, Toshiyuki

    2016-03-01

    The main purpose of this study was to present the results of beam modeling and how the authors systematically investigated the influence of double and triple Gaussian proton kernel models on the accuracy of dose calculations for the spot scanning technique. The accuracy of calculations was important for treatment planning software (TPS) because the energy, spot position, and absolute dose had to be determined by the TPS for the spot scanning technique. The dose distribution was calculated by convolving in-air fluence with the dose kernel. The dose kernel was the in-water 3D dose distribution of an infinitesimal pencil beam and consisted of an integral depth dose (IDD) and a lateral distribution. Accurate modeling of the low-dose region was important for the spot scanning technique because the dose distribution was formed by accumulating hundreds or thousands of delivered beams. The authors employed a double Gaussian function as the in-air fluence model of an individual beam. Double and triple Gaussian kernel models were also prepared for comparison. The parameters of the kernel lateral model were derived by fitting a simulated in-water lateral dose profile induced by an infinitesimal proton beam, whose emittance was zero, at various depths using Monte Carlo (MC) simulation. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table. These stored parameters for each energy and depth in water were acquired from the look-up table when incorporating them into the TPS. The modeling process for the in-air fluence and IDD was based on the method proposed in the literature. These were derived using MC simulation and measured data. The authors compared the measured and calculated absolute doses at the center of the spread-out Bragg peak (SOBP) under various volumetric irradiation conditions to systematically investigate the influence of the two types of kernel models on the dose calculations. The authors investigated the difference between double and triple Gaussian kernel models. The authors found that the difference between the two studied kernel models appeared at mid-depths and that the predictive accuracy of the double Gaussian model deteriorated at the low-dose bump that appeared at mid-depths. When the authors employed the double Gaussian kernel model, the accuracy of calculations for the absolute dose at the center of the SOBP varied with irradiation conditions and the maximum difference was 3.4%. In contrast, the results obtained from calculations with the triple Gaussian kernel model indicated good agreement with the measurements within ±1.1%, regardless of the irradiation conditions. The difference between the results obtained with the two types of studied kernel models was distinct in the high energy region. The accuracy of calculations with the double Gaussian kernel model varied with the field size and SOBP width because the accuracy of prediction with the double Gaussian model was insufficient at the low-dose bump. The evaluation was only qualitative under limited volumetric irradiation conditions. Further accumulation of measured data would be needed to quantitatively comprehend what influence the double and triple Gaussian kernel models had on the accuracy of dose calculations.
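
    A minimal sketch of the two lateral parameterizations being compared: radially symmetric double and triple Gaussian mixtures with weights summing to one. The sigma values and weights below are arbitrary placeholders, not the fitted, depth- and energy-dependent parameters stored in the authors' look-up tables.

        import numpy as np

        def gaussian_mixture_lateral(r_mm, sigmas_mm, weights):
            """Radially symmetric lateral dose kernel as a sum of 2D Gaussians.

            Each component is normalized so its 2D integral is 1, so the mixture
            integrates to sum(weights) (taken to be 1)."""
            w = np.asarray(weights, dtype=float)
            s = np.asarray(sigmas_mm, dtype=float)
            r = np.asarray(r_mm, dtype=float)[..., None]
            comps = w / (2.0 * np.pi * s**2) * np.exp(-r**2 / (2.0 * s**2))
            return comps.sum(axis=-1)

        r = np.linspace(0.0, 50.0, 501)  # radial distance from the pencil beam axis [mm]

        # Placeholder parameters: a narrow "core" plus broader "halo" terms mimicking
        # large-angle (nuclear) scattering. Real values depend on energy and depth.
        double = gaussian_mixture_lateral(r, sigmas_mm=[4.0, 12.0], weights=[0.92, 0.08])
        triple = gaussian_mixture_lateral(r, sigmas_mm=[4.0, 12.0, 30.0],
                                          weights=[0.90, 0.07, 0.03])

        # The models differ mainly in the low-dose tail, which accumulates over
        # hundreds or thousands of spots and shifts the absolute dose in the SOBP.
        idx = np.argmin(np.abs(r - 40.0))
        print("tail ratio (triple/double) at r = 40 mm:", triple[idx] / double[idx])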

  3. SU-E-T-329: Dosimetric Impact of Implementing Metal Artifact Reduction Methods and Metal Energy Deposition Kernels for Photon Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, J; Followill, D; Howell, R

    2015-06-15

    Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline, uncorrected images and metal artifact reduction methods: Philips O-MAR, GE’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals. Thus, these two strategies do have the potential to improve accuracy for patients with metal implants in certain scenarios. This work was supported by Public Health Service grants CA 180803 and CA 10953 awarded by the National Cancer Institute, United States Department of Health and Human Services, and in part by Mobius Medical Systems.

  4. TH-C-BRD-04: Beam Modeling and Validation with Triple and Double Gaussian Dose Kernel for Spot Scanning Proton Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, S; Takayanagi, T; Fujii, Y

    2014-06-15

    Purpose: To present the validity of our beam modeling with double and triple Gaussian dose kernels for spot scanning proton beams at the Nagoya Proton Therapy Center. This study investigates the conformance between the measurements and calculation results in absolute dose with two types of beam kernel. Methods: A dose kernel is one of the important input data required for the treatment planning software. The dose kernel is the 3D dose distribution of an infinitesimal pencil beam of protons in water and consists of integral depth doses and lateral distributions. We have adopted double and triple Gaussian models as the lateral distribution in order to take into account the large-angle scattering due to nuclear reactions, by fitting simulated in-water lateral dose profiles for a needle proton beam at various depths. The fitted parameters were interpolated as a function of depth in water and were stored as a separate look-up table for each beam energy. The process of beam modeling is based on the method of MDACC [X.R.Zhu 2013]. Results: From the comparison between the absolute doses calculated by the double Gaussian model and those measured at the center of the SOBP, the difference increases up to 3.5% in the high-energy region because large-angle scattering due to nuclear reactions is not sufficiently accounted for at intermediate depths in the double Gaussian model. When triple Gaussian dose kernels are employed, the measured absolute dose at the center of the SOBP agrees with the calculation within ±1% regardless of the SOBP width and maximum range. Conclusion: We have demonstrated beam modeling results for dose distributions employing double and triple Gaussian dose kernels. The treatment planning system with the triple Gaussian dose kernel has been successfully verified and applied to patient treatment with the spot scanning technique at the Nagoya Proton Therapy Center.

  5. SU-F-SPS-09: Parallel MC Kernel Calculations for VMAT Plan Improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamberlain, S; Roswell Park Cancer Institute, Buffalo, NY; French, S

    Purpose: Adding kernels (small perturbations in leaf positions) to the existing apertures of VMAT control points may improve plan quality. We investigate the calculation of kernel doses using a parallelized Monte Carlo (MC) method. Methods: A clinical prostate VMAT DICOM plan was exported from Eclipse. An arbitrary control point and leaf were chosen, and a modified MLC file was created, corresponding to the leaf position offset by 0.5 cm. The additional dose produced by this 0.5 cm × 0.5 cm kernel was calculated using the DOSXYZnrc component module of BEAMnrc. A range of particle history counts was run (varying from 3 × 10⁶ to 3 × 10⁷); each job was split among 1, 10, or 100 parallel processes. A particle count of 3 × 10⁶ was established as the lower bound because it provided the minimum acceptable accuracy level. Results: As expected, an increase in particle counts linearly increases run time. For the lowest particle count, the time varied from 30 hours for the single-processor run, to 0.30 hours for the 100-processor run. Conclusion: Parallel processing of MC calculations in the EGS framework significantly decreases the time necessary for each kernel dose calculation. Particle counts lower than 1 × 10⁶ have too large of an error to output accurate dose for a Monte Carlo kernel calculation. Future work will investigate increasing the number of parallel processes and optimizing run times for multiple kernel calculations.
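
    The sketch below illustrates the history-splitting idea with a toy tally in place of DOSXYZnrc/BEAMnrc: a fixed particle-history budget is divided among worker processes and the tallies are recombined. The history and process counts mirror those quoted in the abstract; everything else is a placeholder.

        import numpy as np
        from multiprocessing import Pool

        def run_histories(args):
            """Toy 'Monte Carlo' job: deposit energy from n_hist random histories
            along a 1D depth axis. A stand-in for one DOSXYZnrc job, not EGSnrc."""
            n_hist, seed = args
            rng = np.random.default_rng(seed)
            depth_bins = np.zeros(100)                           # 100 depth voxels
            depths = rng.exponential(scale=20.0, size=n_hist)    # crude interaction depths
            np.add.at(depth_bins, np.clip(depths.astype(int), 0, 99), 1.0)
            return depth_bins

        def parallel_kernel_dose(total_histories, n_procs):
            """Split the history budget among n_procs jobs and sum the tallies."""
            per_job = total_histories // n_procs
            jobs = [(per_job, seed) for seed in range(n_procs)]
            with Pool(processes=n_procs) as pool:
                tallies = pool.map(run_histories, jobs)
            return np.sum(tallies, axis=0) / total_histories     # dose per history

        if __name__ == "__main__":
            # 3e6 histories split across 10 processes, as in one of the reported runs.
            dose = parallel_kernel_dose(total_histories=3_000_000, n_procs=10)
            print("relative dose in first 5 voxels:", dose[:5])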

  6. A point kernel algorithm for microbeam radiation therapy

    NASA Astrophysics Data System (ADS)

    Debus, Charlotte; Oelfke, Uwe; Bartzsch, Stefan

    2017-11-01

    Microbeam radiation therapy (MRT) is a treatment approach in radiation therapy where the treatment field is spatially fractionated into arrays of planar beams a few tens of micrometres wide, with unusually high peak doses, separated by low-dose regions several hundred micrometres wide. In preclinical studies, this treatment approach has proven to spare normal tissue more effectively than conventional radiation therapy, while being equally efficient in tumour control. So far, dose calculations in MRT, a prerequisite for future clinical applications, are based on Monte Carlo simulations. However, they are computationally expensive, since scoring volumes have to be small. In this article a kernel based dose calculation algorithm is presented that splits the calculation into photon and electron mediated energy transport, and performs the calculation of peak and valley doses in typical MRT treatment fields within a few minutes. Kernels are analytically calculated depending on the energy spectrum and material composition. In various homogeneous materials, peak doses, valley doses and microbeam profiles are calculated and compared to Monte Carlo simulations. For a microbeam exposure of an anthropomorphic head phantom, calculated dose values are compared to measurements and Monte Carlo calculations. Except for regions close to material interfaces, calculated peak dose values match Monte Carlo results within 4% and valley dose values within 8% deviation. No significant differences are observed between profiles calculated by the kernel algorithm and Monte Carlo simulations. Measurements in the head phantom agree within 4% in the peak and within 10% in the valley region. The presented algorithm is attached to the treatment planning platform VIRTUOS. It was and is used for dose calculations in preclinical and pet-clinical trials at the biomedical beamline ID17 of the European Synchrotron Radiation Facility in Grenoble, France.

  7. SU-E-T-423: Fast Photon Convolution Calculation with a 3D-Ideal Kernel On the GPU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriya, S; Sato, M; Tachibana, H

    Purpose: Calculation time is the trade-off for improving the accuracy of convolution dose calculation with fine calculation spacing of the KERMA kernel. We investigated accelerating the convolution calculation using an ideal kernel on a graphics processing unit (GPU). Methods: The calculation was performed on AMD graphics hardware (dual FirePro D700) and our algorithm was implemented using Aparapi, which converts Java bytecode to OpenCL. The dose calculation process was separated into TERMA and KERMA steps. The dose deposited at the coordinate (x, y, z) was determined in the process. In the dose calculation running on an Intel Xeon E5 central processing unit (CPU), the calculation loops were performed over all calculation points. For the GPU computation, all of the calculations for the points were sent to the GPU and the multi-threaded computation was done. In this study, the dose calculation was performed in a water-equivalent homogeneous phantom with 150³ voxels (2 mm calculation grid), and the calculation speed on the GPU relative to the CPU and the accuracy of the PDD were compared. Results: The calculation times for the GPU and the CPU were 3.3 s and 4.4 h, respectively. The calculation speed for the GPU was 4800 times faster than that for the CPU. The PDD curve for the GPU matched that for the CPU exactly. Conclusion: The convolution calculation with the ideal kernel on the GPU was clinically acceptable in terms of calculation time and may be more accurate in inhomogeneous regions. Intensity modulated arc therapy needs dose calculations for different gantry angles at many control points. Thus, it would be more practical for the kernel to use a coarser spacing technique if the calculation is faster while keeping accuracy similar to a current treatment planning system.

  8. SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Tian, Z; Song, T

    Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to the dose calculation accuracy and hence the plan quality. Yet due to the large number of beamlets, FSPB commissioning could be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MATLAB to facilitate the commissioning. Methods: An FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and 2D scaling factors accounting for the longitudinal and off-axis corrections. The former were fitted using the penumbra of a reference broad beam’s dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernel of each beamlet calculated with the fitted profile parameters and scaled using the scaling factors, these factors could be determined by solving an optimization problem which minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned an FSPB algorithm for three linac photon beams (6 MV, 15 MV, and 6 MV FFF). Doses for four field sizes (6×6 cm², 10×10 cm², 15×15 cm², and 20×20 cm²) were calculated and compared with the reference dose exported from the Eclipse TPS. For depth dose curves, the differences are less than 1% of the maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of the central dose in inner-beam regions. The differences in output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms to facilitate the commissioning, providing sufficient accuracy of beamlet dose calculation for IMRT optimization.
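
    A minimal sketch of the first commissioning step described above: fitting profile parameters to the penumbra of a reference broad-beam profile with the Levenberg-Marquardt algorithm (scipy's default for unbounded curve_fit). The error-function penumbra model and all numbers are illustrative assumptions, not the authors' kernel parameterization; the scaling-factor step would follow as a linear least-squares problem.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.special import erf

        def penumbra_model(x_mm, edge_mm, sigma_mm, dose_max):
            """Broad-beam crossline profile edge modeled with an error function.
            sigma_mm plays the role of a kernel 'profile parameter' controlling
            the lateral spread of the finite size pencil beam."""
            return 0.5 * dose_max * (1.0 - erf((x_mm - edge_mm) / (np.sqrt(2) * sigma_mm)))

        # Hypothetical reference penumbra, e.g. sampled from a TPS-exported profile.
        x = np.linspace(40.0, 60.0, 81)                      # off-axis position [mm]
        reference = penumbra_model(x, 50.0, 3.2, 1.0)
        reference += np.random.default_rng(0).normal(0.0, 0.002, x.size)  # noise

        # Levenberg-Marquardt fit (curve_fit defaults to 'lm' when no bounds are given).
        popt, _ = curve_fit(penumbra_model, x, reference, p0=[48.0, 2.0, 0.9])
        print("fitted field edge, sigma, dose_max:", popt)

        # The second commissioning step (solving for the 2D scaling factors) would
        # then be a linear least-squares problem, e.g. via np.linalg.lstsq, once
        # beamlet doses are computed with the fitted profile parameters.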

  9. SU-E-T-510: Calculation of High Resolution and Material-Specific Photon Energy Deposition Kernels.

    PubMed

    Huang, J; Childress, N; Kry, S

    2012-06-01

    To calculate photon energy deposition kernels (EDKs) used for convolution/superposition dose calculation at a higher resolution than the original Mackie et al. 1988 kernels and to calculate material-specific kernels that describe how energy is transported and deposited by secondary particles when the incident photon interacts in a material other than water. The high resolution EDKs for various incident photon energies were generated using the EGSnrc user-code EDKnrc, which forces incident photons to interact at the center of a 60 cm radius sphere of water. The simulation geometry is essentially the same as the original Mackie calculation but with a greater number of scoring voxels (48 radial, 144 angular bins). For the material-specific EDKs, incident photons were forced to interact at the center of a 1 mm radius sphere of material (lung, cortical bone, silver, or titanium) surrounded by a 60 cm radius water sphere, using the original scoring voxel geometry implemented by Mackie et al. 1988 (24 radial, 48 angular bins). Our Monte Carlo-calculated high resolution EDKs showed excellent agreement with the Mackie kernels, with our kernels providing more information about energy deposition close to the interaction site. Furthermore, our EDKs resulted in smoother dose deposition functions due to the finer resolution and greater number of simulation histories. The material-specific EDK results show that the angular distribution of energy deposition is different for incident photons interacting in different materials. Calculated from the angular dose distribution for 300 keV incident photons, the expected polar angle for dose deposition is 28.6° for water, 33.3° for lung, 36.0° for cortical bone, 44.6° for titanium, and 58.1° for silver, showing a dependence on the material in which the primary photon interacts. These high resolution and material-specific EDKs have implications for convolution/superposition dose calculations in heterogeneous patient geometries, especially at material interfaces. © 2012 American Association of Physicists in Medicine.

  10. A new concept of pencil beam dose calculation for 40-200 keV photons using analytical dose kernels.

    PubMed

    Bartzsch, Stefan; Oelfke, Uwe

    2013-11-01

    The advent of widespread kV cone-beam computed tomography in image-guided radiation therapy and special therapeutic applications of keV photons, e.g., in microbeam radiation therapy (MRT), require accurate and fast dose calculations for photon beams with energies between 40 and 200 keV. Multiple photon scattering originating from Compton scattering and the strong dependence of the photoelectric cross section on the atomic number of the interacting tissue render these dose calculations far more challenging than those established for corresponding MeV beams. That is why the analytical models of kV photon dose calculation developed so far fail to provide the required accuracy, and one has to rely on time-consuming Monte Carlo simulation techniques. In this paper, the authors introduce a novel analytical approach for kV photon dose calculations with an accuracy that is almost comparable to that of Monte Carlo simulations. First, analytical point dose and pencil beam kernels are derived for homogeneous media and compared to Monte Carlo simulations performed with the Geant4 toolkit. The dose contributions are systematically separated into contributions from the relevant orders of multiple photon scattering. Moreover, approximate scaling laws for the extension of the algorithm to inhomogeneous media are derived. The comparison of the analytically derived dose kernels in water showed an excellent agreement with the Monte Carlo method. Calculated values deviate less than 5% from Monte Carlo derived dose values, for doses above 1% of the maximum dose. The analytical structure of the kernels allows adaptation to arbitrary materials and photon spectra in the given energy range of 40-200 keV. The presented analytical methods can be employed in a fast treatment planning system for MRT. In convolution-based algorithms, dose calculation times can be reduced to a few minutes.

  11. Hybrid dose calculation: a dose calculation algorithm for microbeam radiation therapy

    NASA Astrophysics Data System (ADS)

    Donzelli, Mattia; Bräuer-Krisch, Elke; Oelfke, Uwe; Wilkens, Jan J.; Bartzsch, Stefan

    2018-02-01

    Microbeam radiation therapy (MRT) is still a preclinical approach in radiation oncology that uses planar, micrometre-wide beamlets with extremely high peak doses, separated by a few hundred micrometre wide low-dose regions. Abundant preclinical evidence demonstrates that MRT spares normal tissue more effectively than conventional radiation therapy, at equivalent tumour control. In order to launch the first clinical trials, accurate and efficient dose calculation methods are an indispensable prerequisite. In this work a hybrid dose calculation approach is presented that is based on a combination of Monte Carlo and kernel based dose calculation. In various examples the performance of the algorithm is compared to purely Monte Carlo and purely kernel based dose calculations. The accuracy of the developed algorithm is comparable to conventional pure Monte Carlo calculations. In particular, for inhomogeneous materials the hybrid dose calculation algorithm outperforms purely convolution based dose calculation approaches. It is demonstrated that the hybrid algorithm can efficiently calculate even complicated pencil beam and cross firing beam geometries. The required calculation times are substantially lower than for pure Monte Carlo calculations.

  12. Fast dose kernel interpolation using Fourier transform with application to permanent prostate brachytherapy dosimetry.

    PubMed

    Liu, Derek; Sloboda, Ron S

    2014-05-01

    Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. The iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as for Boyer's method. An FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
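
    A 1D sketch of the interpolation idea, under the stated convention that the integer part of a seed's offset is a whole-grid shift (standing in for the Fourier-domain unit-impulse convolution) and the remaining fraction is handled by a 4-tap third-order Lagrange filter. The kernel below is an analytic placeholder, not the sampled TG-43 iodine-125 kernel.

        import numpy as np

        def lagrange3_taps(d):
            """Third-order (4-tap) Lagrange weights for nodes at offsets [-1, 0, 1, 2],
            evaluated at position d (0 <= d <= 1) between nodes 0 and 1."""
            return np.array([
                -d * (d - 1) * (d - 2) / 6.0,
                (d + 1) * (d - 1) * (d - 2) / 2.0,
                -(d + 1) * d * (d - 2) / 2.0,
                (d + 1) * d * (d - 1) / 6.0,
            ])

        def shift_kernel(kernel, shift):
            """Return y with y[n] ~ kernel[n - shift]: the sampled kernel translated by
            an arbitrary number of grid spacings. The integer part is a circular shift
            (standing in for the Fourier-domain unit-impulse convolution); the
            fractional part uses the piecewise third-order Lagrange filter."""
            n_int = int(np.floor(shift))
            d = 1.0 - (shift - n_int)          # evaluation point between base nodes
            taps = lagrange3_taps(d)
            return sum(w * np.roll(kernel, n_int + 1 - j)
                       for j, w in zip([-1, 0, 1, 2], taps))

        # Placeholder 1D "dose kernel" sampled on a 1 mm grid (not the TG-43 kernel).
        x = np.arange(200.0)
        kernel = 1.0 / ((np.abs(x - 100.0) + 1.0) ** 2)

        # A seed sitting 3.4 grid spacings away from the nearest grid node.
        shifted = shift_kernel(kernel, 3.4)
        print(shifted[100:106])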

  13. Approaches to reducing photon dose calculation errors near metal implants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Jessie Y.; Followill, David S.; Howell, Rebecca M.

    Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact reduction methods investigated, the authors found that O-MAR was the most consistent method, resulting in either improved dose calculation accuracy (dental case) or little impact on calculation accuracy (spine case). GSI was unsuccessful at reducing the severe artifacts caused by dental fillings and had very little impact on calculation accuracy. GSI with MARS on the other hand gave mixed results, sometimes introducing metal distortion and increasing calculation errors (titanium rectangular implant and titanium spinal hardware) but other times very successfully reducing artifacts (Cerrobend rectangular implant and dental fillings). Conclusions: Though successful at improving dose calculation accuracy upstream of metal implants, metal kernels were not found to substantially improve accuracy for clinical cases. Of the commercial artifact reduction methods investigated, O-MAR was found to be the most consistent candidate for all-purpose CT simulation imaging. The MARS algorithm for GSI should be used with caution for titanium implants, larger implants, and implants located near heterogeneities as it can distort the size and shape of implants and increase calculation errors.

  14. Comparison of GATE/GEANT4 with EGSnrc and MCNP for electron dose calculations at energies between 15 keV and 20 MeV.

    PubMed

    Maigne, L; Perrot, Y; Schaart, D R; Donnarieix, D; Breton, V

    2011-02-07

    The GATE Monte Carlo simulation platform based on the GEANT4 toolkit has come into widespread use for simulating positron emission tomography (PET) and single photon emission computed tomography (SPECT) imaging devices. Here, we explore its use for calculating electron dose distributions in water. Mono-energetic electron dose point kernels and pencil beam kernels in water are calculated for different energies between 15 keV and 20 MeV by means of GATE 6.0, which makes use of the GEANT4 version 9.2 Standard Electromagnetic Physics Package. The results are compared to the well-validated codes EGSnrc and MCNP4C. It is shown that recent improvements made to the GEANT4/GATE software result in significantly better agreement with the other codes. We furthermore illustrate several issues of general interest to GATE and GEANT4 users who wish to perform accurate simulations involving electrons. Provided that the electron step size is sufficiently restricted, GATE 6.0 and EGSnrc dose point kernels are shown to agree to within less than 3% of the maximum dose between 50 keV and 4 MeV, while pencil beam kernels are found to agree to within less than 4% of the maximum dose between 15 keV and 20 MeV.

  15. Experimental pencil beam kernels derivation for 3D dose calculation in flattening filter free modulated fields

    NASA Astrophysics Data System (ADS)

    Diego Azcona, Juan; Barbés, Benigno; Wang, Lilie; Burguete, Javier

    2016-01-01

    This paper presents a method to obtain the pencil-beam kernels that characterize a megavoltage photon beam generated in a flattening filter free (FFF) linear accelerator (linac) by deconvolution from experimental measurements at different depths. The formalism is applied to perform independent dose calculations in modulated fields. In our previous work a formalism was developed for ideal flat fluences exiting the linac’s head. That framework could not deal with spatially varying energy fluences, so any deviation from the ideal flat fluence was treated as a perturbation. The present work addresses the necessity of implementing an exact analysis where any spatially varying fluence can be used, such as those encountered in FFF beams. A major improvement introduced here is to handle the actual fluence in the deconvolution procedure. We studied the uncertainties associated with the kernel derivation with this method. Several Kodak EDR2 radiographic films were irradiated with a 10 MV FFF photon beam from two linacs from different vendors, at depths of 5, 10, 15, and 20 cm in polystyrene (RW3 water-equivalent phantom, PTW Freiburg, Germany). The irradiation field was a 50 mm diameter circular field, collimated with a lead block. The 3D kernel for a FFF beam was obtained by deconvolution using the Hankel transform. A correction on the low dose part of the kernel was performed to reproduce accurately the experimental output factors. Error uncertainty in the kernel derivation procedure was estimated to be within 0.2%. Eighteen modulated fields used clinically in different treatment localizations were irradiated at four measurement depths (a total of fifty-four film measurements). Comparison through the gamma index to their corresponding calculated absolute dose distributions showed percentages of passing points (3%, 3 mm) mostly above 99%. This new procedure is more reliable and robust than the previous one. Its ability to perform accurate independent dose calculations was demonstrated.

  16. Absorbed dose kernel and self-shielding calculations for a novel radiopaque glass microsphere for transarterial radioembolization.

    PubMed

    Church, Cody; Mawko, George; Archambault, John Paul; Lewandowski, Robert; Liu, David; Kehoe, Sharon; Boyd, Daniel; Abraham, Robert; Syme, Alasdair

    2018-02-01

    Radiopaque microspheres may provide intraprocedural and postprocedural feedback during transarterial radioembolization (TARE). Furthermore, the potential to use higher resolution x-ray imaging techniques as opposed to nuclear medicine imaging suggests that significant improvements in the accuracy and precision of radiation dosimetry calculations could be realized for this type of therapy. This study investigates the absorbed dose kernel for novel radiopaque microspheres including contributions of both short and long-lived contaminant radionuclides while concurrently quantifying the self-shielding of the glass network. Monte Carlo simulations using EGSnrc were performed to determine the dose kernels for all monoenergetic electron emissions and all beta spectra for radionuclides reported in a neutron activation study of the microspheres. Simulations were benchmarked against an accepted ⁹⁰Y dose point kernel. Self-shielding was quantified for the microspheres by simulating an isotropically emitting, uniformly distributed source, in glass and in water. The ratio of the absorbed doses was scored as a function of distance from a microsphere. The absorbed dose kernel for the microspheres was calculated for (a) two bead formulations following (b) two different durations of neutron activation, at (c) various time points following activation. Self-shielding varies with time postremoval from the reactor. At early time points, it is less pronounced due to the higher energies of the emissions. It is on the order of 0.4-2.8% at a radial distance of 5.43 mm as the microsphere diameter increases from 10 to 50 μm, during the time that the microspheres would be administered to a patient. At long time points, self-shielding is more pronounced and can reach values in excess of 20% near the end of the range of the emissions. Absorbed dose kernels for ⁹⁰Y, ⁹⁰ᵐY, ⁸⁵ᵐSr, ⁸⁵Sr, ⁸⁷ᵐSr, ⁸⁹Sr, ⁷⁰Ga, ⁷²Ga, and ³¹Si are presented and used to determine an overall kernel for the microspheres based on weighted activities. The shapes of the absorbed dose kernels are dominated at short times postactivation by the contributions of ⁷⁰Ga and ⁷²Ga. Following decay of the short-lived contaminants, the absorbed dose kernel is effectively that of ⁹⁰Y. After approximately 1000 h postactivation, the contributions of ⁸⁵Sr and ⁸⁹Sr become increasingly dominant, though the absorbed dose-rate around the beads drops by roughly four orders of magnitude. The introduction of high atomic number elements for the purpose of increasing radiopacity necessarily leads to the production of radionuclides other than ⁹⁰Y in the microspheres. Most of the radionuclides in this study are short-lived and are likely not of any significant concern for this therapeutic agent. The presence of small quantities of longer lived radionuclides will change the shape of the absorbed dose kernel around a microsphere at long time points postadministration when activity levels are significantly reduced. © 2017 American Association of Physicists in Medicine.
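
    A small sketch of the weighting step described above: per-radionuclide kernels are combined into an overall microsphere kernel using their activities at a chosen time postactivation. The half-lives are standard published values; the per-decay kernels and initial activities are hypothetical placeholders rather than the EGSnrc results.

        import numpy as np

        # Approximate physical half-lives in hours (standard published values).
        HALF_LIFE_H = {"Y-90": 64.1, "Ga-72": 14.1,
                       "Ga-70": 21.14 / 60.0, "Sr-89": 50.5 * 24.0}

        def overall_kernel(kernels, initial_activity_bq, t_hours):
            """Activity-weighted combination of per-radionuclide dose kernels.

            kernels: dict of nuclide -> dose kernel on a common radial grid
                     (dose per decay vs distance); in the study these come from
                     EGSnrc simulations.
            initial_activity_bq: dict of nuclide -> activity at t = 0.
            Returns the normalized overall kernel at t_hours postactivation."""
            weights = {}
            for nuc, a0 in initial_activity_bq.items():
                lam = np.log(2.0) / HALF_LIFE_H[nuc]
                weights[nuc] = a0 * np.exp(-lam * t_hours)   # activity at time t
            total = sum(weights.values())
            return sum(w / total * kernels[nuc] for nuc, w in weights.items())

        # Hypothetical placeholder kernels and activities (arbitrary units).
        r = np.linspace(0.1, 12.0, 120)                      # radial distance [mm]
        kernels = {"Y-90": np.exp(-r / 4.0), "Ga-72": np.exp(-r / 3.0),
                   "Ga-70": np.exp(-r / 1.0), "Sr-89": np.exp(-r / 5.0)}
        activities = {"Y-90": 1.0e9, "Ga-72": 5.0e7, "Ga-70": 2.0e7, "Sr-89": 1.0e5}

        # Shortly after activation the Ga isotopes shape the kernel; much later the
        # long-lived Sr contaminants dominate the (far weaker) residual kernel.
        for t in (1.0, 100.0, 1000.0):
            k = overall_kernel(kernels, activities, t)
            print(f"t = {t:6.0f} h, kernel value at r = 5 mm: {np.interp(5.0, r, k):.3e}")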

  17. Generation of a novel phase-space-based cylindrical dose kernel for IMRT optimization.

    PubMed

    Zhong, Hualiang; Chetty, Indrin J

    2012-05-01

    Improving dose calculation accuracy is crucial in intensity-modulated radiation therapy (IMRT). We have developed a method for generating a phase-space-based dose kernel for IMRT planning of lung cancer patients. Particle transport in the linear accelerator treatment head of a 21EX, 6 MV photon beam (Varian Medical Systems, Palo Alto, CA) was simulated using the EGSnrc/BEAMnrc code system. The phase space information was recorded under the secondary jaws. Each particle in the phase space file was associated with a beamlet whose index was calculated and saved in the particle's LATCH variable. The DOSXYZnrc code was modified to accumulate the energy deposited by each particle based on its beamlet index. Furthermore, the central axis of each beamlet was calculated from the orientation of all the particles in this beamlet. A cylinder was then defined around the central axis so that only the energy deposited within the cylinder was counted. A look-up table was established for each cylinder during the tallying process. The efficiency and accuracy of the cylindrical beamlet energy deposition approach was evaluated using a treatment plan developed on a simulated lung phantom. Profile and percentage depth doses computed in a water phantom for an open, square field size were within 1.5% of measurements. Dose optimized with the cylindrical dose kernel was found to be within 0.6% of that computed with the nontruncated 3D kernel. The cylindrical truncation reduced optimization time by approximately 80%. A method for generating a phase-space-based dose kernel, using a truncated cylinder for scoring dose, in beamlet-based optimization of lung treatment planning was developed and found to be in good agreement with the standard, nontruncated scoring approach. Compared to previous techniques, our method significantly reduces computational time and memory requirements, which may be useful for Monte-Carlo-based 4D IMRT or IMAT treatment planning.

  18. Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate

    NASA Astrophysics Data System (ADS)

    Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.

    2008-08-01

    The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. For this reason, we compare the results from heterogeneous optical data with those obtained from average homogeneous optical properties. The optimized treatment plans are also compared with the reference clinical plan, defined as the plan with sources of equal strength, distributed regularly in space, which delivers a mean value of prescribed fluence at detector locations within the treatment region. The study suggests that comprehensive optimization of source parameters (i.e. strengths, lengths and locations) is feasible, thus allowing acceptable dose coverage in a heterogeneous prostate PDT within the time constraints of the PDT procedure.
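
    The Cimmino feasibility step mentioned above can be sketched as a simultaneous projection onto the half-spaces defined by minimum-fluence constraints at the detector points, with source strengths kept non-negative. The kernel matrix, prescription, and relaxation parameter below are hypothetical placeholders, not the clinical PDT data or the exact formulation used in the study.

        import numpy as np

        def cimmino_source_weights(K, prescription, iterations=500, relax=1.0):
            """Cimmino-style simultaneous projection for the inverse problem
            K @ w >= prescription, w >= 0, where K[i, j] is the fluence delivered
            to detector point i by source j at unit strength (the 'kernel').

            Each violated constraint contributes its orthogonal projection; the
            weighted sum of projections is applied at every iteration."""
            n_pts, n_src = K.shape
            w = np.zeros(n_src)
            row_norm_sq = np.sum(K**2, axis=1)
            weights = np.full(n_pts, 1.0 / n_pts)       # equal constraint weights
            for _ in range(iterations):
                residual = prescription - K @ w          # > 0 where under-dosed
                violated = residual > 0.0
                if not np.any(violated):
                    break
                corrections = (weights[violated] * residual[violated]
                               / row_norm_sq[violated])[:, None] * K[violated]
                w = np.maximum(w + relax * corrections.sum(axis=0), 0.0)  # keep w >= 0
            return w

        # Hypothetical 100 detector points x 8 interstitial sources.
        rng = np.random.default_rng(1)
        K = rng.uniform(0.05, 1.0, size=(100, 8))        # placeholder fluence kernel
        prescription = np.full(100, 5.0)                 # prescribed fluence (a.u.)

        w = cimmino_source_weights(K, prescription)
        print("source strengths:", np.round(w, 2))
        print("fraction of points meeting prescription:", np.mean(K @ w >= prescription))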

  19. Technical Note: Dose gradients and prescription isodose in orthovoltage stereotactic radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fagerstrom, Jessica M., E-mail: fagerstrom@wisc.edu; Bender, Edward T.; Culberson, Wesley S.

    Purpose: The purpose of this work is to examine the trade-off between prescription isodose and dose gradients in orthovoltage stereotactic radiosurgery. Methods: Point energy deposition kernels (EDKs) describing photon and electron transport were calculated using Monte Carlo methods. EDKs were generated from 10 to 250 keV, in 10 keV increments. The EDKs were converted to pencil beam kernels and used to calculate dose profiles through isocenter from a 4π isotropic delivery from all angles of circularly collimated beams. Monoenergetic beams and an orthovoltage polyenergetic spectrum were analyzed. The dose gradient index (DGI) is the ratio of the 50% prescription isodose volume to the 100% prescription isodose volume and represents a metric by which dose gradients in stereotactic radiosurgery (SRS) may be evaluated. Results: Using the 4π dose profiles calculated using pencil beam kernels, the relationship between DGI and prescription isodose was examined for circular cones ranging from 4 to 18 mm in diameter and monoenergetic photon beams with energies ranging from 20 to 250 keV. Prescription isodose values were found to exist that optimize DGI. Conclusions: The relationship between DGI and prescription isodose was found to be dependent on both field size and energy. Examining this trade-off is an important consideration for designing optimal SRS systems.
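
    A short sketch of the dose gradient index as defined in the abstract: the ratio of the volume enclosed by 50% of the prescription isodose to the volume enclosed by the prescription isodose itself. The spherically symmetric dose model and voxel size are placeholders chosen only to show how DGI shifts with the chosen prescription isodose.

        import numpy as np

        def dose_gradient_index(dose, rx_dose, voxel_volume_cc):
            """DGI = V(50% of prescription isodose) / V(100% of prescription isodose)."""
            v100 = np.count_nonzero(dose >= rx_dose) * voxel_volume_cc
            v50 = np.count_nonzero(dose >= 0.5 * rx_dose) * voxel_volume_cc
            return v50 / v100

        # Placeholder dose grid: a spherically symmetric fall-off loosely mimicking a
        # 4-pi delivery of small circular fields (1 mm voxels, dose in % of maximum).
        z, y, x = np.mgrid[-40:41, -40:41, -40:41].astype(float)
        r = np.sqrt(x**2 + y**2 + z**2)
        dose = 100.0 * np.exp(-(r / 8.0) ** 2)

        # Changing the prescription isodose changes V100 and V50 at different rates,
        # so the gradient trade-off depends on the prescription isodose chosen.
        for rx in (80.0, 50.0, 30.0):
            print(f"prescription isodose {rx:.0f}%: DGI = "
                  f"{dose_gradient_index(dose, rx, voxel_volume_cc=0.001):.2f}")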

  20. Dosimetric verification of radiation therapy including intensity modulated treatments, using an amorphous-silicon electronic portal imaging device

    NASA Astrophysics Data System (ADS)

    Chytyk-Praznik, Krista Joy

    Radiation therapy is continuously increasing in complexity due to technological innovation in delivery techniques, necessitating thorough dosimetric verification. Comparing accurately predicted portal dose images to measured images obtained during patient treatment can determine if a particular treatment was delivered correctly. The goal of this thesis was to create a method to predict portal dose images that was versatile and accurate enough to use in a clinical setting. All measured images in this work were obtained with an amorphous silicon electronic portal imaging device (a-Si EPID), but the technique is applicable to any planar imager. A detailed, physics-motivated fluence model was developed to characterize fluence exiting the linear accelerator head. The model was further refined using results from Monte Carlo simulations and schematics of the linear accelerator. The fluence incident on the EPID was converted to a portal dose image through a superposition of Monte Carlo-generated, monoenergetic dose kernels specific to the a-Si EPID. Predictions of clinical IMRT fields with no patient present agreed with measured portal dose images within 3% and 3 mm. The dose kernels were applied ignoring the geometrically divergent nature of incident fluence on the EPID. A computational investigation into this parallel dose kernel assumption determined its validity under clinically relevant situations. Introducing a patient or phantom into the beam required the portal image prediction algorithm to account for patient scatter and attenuation. Primary fluence was calculated by attenuating raylines cast through the patient CT dataset, while scatter fluence was determined through the superposition of pre-calculated scatter fluence kernels. Total dose in the EPID was calculated by convolving the total predicted incident fluence with the EPID-specific dose kernels. The algorithm was tested on water slabs with square fields, agreeing with measurement within 3% and 3 mm. The method was then applied to five prostate and six head-and-neck IMRT treatment courses (˜1900 clinical images). Deviations between the predicted and measured images were quantified. The portal dose image prediction model developed in this thesis work has been shown to be accurate, and it was demonstrated to be able to verify patients' delivered radiation treatments.

  1. Suitability of point kernel dose calculation techniques in brachytherapy treatment planning

    PubMed Central

    Lakshminarayanan, Thilagam; Subbaiah, K. V.; Thayalan, K.; Kannan, S. E.

    2010-01-01

    Brachytherapy treatment planning system (TPS) is necessary to estimate the dose to target volume and organ at risk (OAR). A TPS is always recommended to account for the effects of tissue, applicator, and shielding material heterogeneities that exist in applicators. However, most brachytherapy TPS software packages estimate the absorbed dose at a point, taking care of only the contributions of individual sources and the source distribution, neglecting the dose perturbations arising from the applicator design and construction. There is thus some degree of uncertainty in dose rate estimation under realistic clinical conditions. In this regard, an attempt is made to explore the suitability of point kernels for brachytherapy dose rate calculations and to develop a new interactive brachytherapy package, named BrachyTPS, to suit clinical conditions. BrachyTPS is an interactive point kernel code package developed to perform independent dose rate calculations by taking into account the effect of these heterogeneities, using the two-region build-up factors proposed by Kalos. The primary aim of this study is to validate the developed point kernel code package integrated with treatment planning computational systems against Monte Carlo (MC) results. In the present work, three brachytherapy applicators commonly used in the treatment of uterine cervical carcinoma, namely (i) the Board of Radiation Isotope and Technology (BRIT) low dose rate (LDR) applicator, (ii) the Fletcher Green type LDR applicator, and (iii) the Fletcher-Williamson high dose rate (HDR) applicator, are studied to test the accuracy of the software. Dose rates computed using the developed code are compared with the relevant results of the MC simulations. Further, attempts are also made to study the dose rate distribution around the commercially available shielded vaginal applicator set (Nucletron). The percentage deviations of BrachyTPS computed dose rate values from the MC results are observed to be within ±5.5% for the BRIT LDR applicator, vary from 2.6% to 5.1% for the Fletcher Green type LDR applicator, and are up to −4.7% for the Fletcher-Williamson HDR applicator. The isodose distribution plots also show good agreement with previously published results. The isodose distributions around the shielded vaginal cylinder computed using the BrachyTPS code show better agreement (less than 2% deviation) with MC results in the unshielded region than in the shielded region, where deviations of up to 5% are observed. The present study implies that the accurate and fast validation of complicated treatment planning calculations is possible with the point kernel code package. PMID:20589118
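
    A minimal sketch of the kind of point-kernel dose-rate estimate BrachyTPS performs: each source contributes an inverse-square term attenuated exponentially and corrected by a buildup factor. The linear buildup model, attenuation coefficient, and source arrangement are illustrative assumptions, not the Kalos two-region formulation or clinical source data.

        import numpy as np

        def point_kernel_dose_rate(point, sources, mu_cm, buildup_slope):
            """Point-kernel dose rate at 'point' from a set of point sources.

            Each source contributes strength / (4*pi*r^2) * exp(-mu*r) * B(mu*r),
            with a simple linear buildup factor B(x) = 1 + buildup_slope * x
            standing in for the two-region build-up factors used by BrachyTPS."""
            total = 0.0
            for pos, strength in sources:
                r = np.linalg.norm(np.asarray(point) - np.asarray(pos))  # cm
                mfp = mu_cm * r                                          # mean free paths
                buildup = 1.0 + buildup_slope * mfp
                total += strength * np.exp(-mfp) * buildup / (4.0 * np.pi * r**2)
            return total

        # Hypothetical linear arrangement of three sources (positions in cm, strengths
        # in arbitrary units), roughly mimicking an intracavitary applicator.
        sources = [((0.0, 0.0, z), 1.0) for z in (-1.0, 0.0, 1.0)]

        # Water-like attenuation for a few-hundred-keV emitter (placeholder value).
        for d in (1.0, 2.0, 5.0):
            rate = point_kernel_dose_rate((d, 0.0, 0.0), sources,
                                          mu_cm=0.09, buildup_slope=0.6)
            print(f"relative dose rate at {d:.0f} cm off-axis: {rate:.4f}")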

  2. SU-F-J-133: Adaptive Radiation Therapy with a Four-Dimensional Dose Calculation Algorithm That Optimizes Dose Distribution Considering Breathing Motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, I; Algan, O; Ahmad, S

    Purpose: To model patient motion and produce four-dimensional (4D) optimized dose distributions that consider motion artifacts in the dose calculation during the treatment planning process. Methods: An algorithm for dose calculation is developed where patient motion is considered at the treatment planning stage. First, optimal dose distributions are calculated for the stationary target volume where the dose distributions are optimized considering intensity-modulated radiation therapy (IMRT). Second, a convolution kernel is produced from the best-fitting curve that matches the motion trajectory of the patient. Third, the motion kernel is deconvolved with the initial dose distribution optimized for the stationary target to produce a dose distribution that is optimized in four dimensions. This algorithm is tested with measured doses using a mobile phantom that moves with controlled motion patterns. Results: A motion-optimized dose distribution is obtained from the initial dose distribution of the stationary target by deconvolution with the motion kernel of the mobile target. This motion-optimized dose distribution is equivalent to that optimized for the stationary target using IMRT. The motion-optimized and measured dose distributions are tested with the gamma index with a passing rate of >95% considering 3% dose difference and 3 mm distance-to-agreement. If the dose delivery per beam takes place over several respiratory cycles, then the spread of the dose distributions depends only on the motion amplitude and is not affected by motion frequency and phase. This algorithm is limited to motion amplitudes that are smaller than the length of the target along the direction of motion. Conclusion: An algorithm is developed to optimize dose in 4D. Beyond IMRT, which provides optimal dose coverage for a stationary target, it extends dose optimization to 4D by considering target motion. This algorithm provides an alternative to motion management techniques such as beam gating or breath-holding and has potential applications in adaptive radiation therapy.
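
    The deconvolution step described above can be sketched in 1D: the profile optimized for the stationary target is divided, in the Fourier domain, by the spectrum of the motion probability kernel (with a small regularization term), so that blurring the resulting delivery profile by the motion reproduces the static-optimal profile. The sinusoidal-motion kernel, profile shape, and regularization constant are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def motion_kernel(positions_mm, amplitude_mm):
            """Probability density of target displacement for a simple sinusoidal
            breathing trace (arcsine distribution), normalized to unit sum."""
            p = np.zeros_like(positions_mm)
            inside = np.abs(positions_mm) < amplitude_mm
            p[inside] = 1.0 / np.sqrt(amplitude_mm**2 - positions_mm[inside] ** 2)
            return p / p.sum()

        x = np.arange(-60.0, 60.0, 1.0)                    # position along motion axis [mm]
        static_opt = ((x > -25) & (x < 25)).astype(float)  # profile optimized for a
                                                           # stationary 50 mm target (a.u.)
        kernel = motion_kernel(x, amplitude_mm=8.0)

        # Fourier-domain deconvolution with a Wiener/Tikhonov-style regularization
        # to avoid dividing by near-zero spectral components.
        eps = 1e-3
        K = np.fft.fft(np.fft.ifftshift(kernel))
        delivery = np.real(np.fft.ifft(np.fft.fft(static_opt) * np.conj(K)
                                       / (np.abs(K) ** 2 + eps)))

        # Check: blurring the motion-compensated delivery with the motion kernel
        # should approximately restore the static-optimal profile.
        blurred = np.real(np.fft.ifft(np.fft.fft(delivery) * K))
        print("max |blurred - static_opt|:", np.abs(blurred - static_opt).max())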

  3. A dose assessment method for arbitrary geometries with virtual reality in the nuclear facilities decommissioning

    NASA Astrophysics Data System (ADS)

    Chao, Nan; Liu, Yong-kuo; Xia, Hong; Ayodeji, Abiodun; Bai, Lu

    2018-03-01

    During the decommissioning of nuclear facilities, a large number of cutting and demolition activities are performed, which results in frequent changes in the structure and produces many irregular objects. In order to assess dose rates during the cutting and demolition process, a flexible dose assessment method for arbitrary geometries and radiation sources was proposed based on virtual reality technology and the point-kernel method. The initial geometry is designed with three-dimensional computer-aided design tools. An approximate model is built automatically in the process of geometric modeling via three procedures, namely space division, rough modeling of the body, and fine modeling of the surface, all in combination with the collision detection of virtual reality technology. Then point kernels are generated by sampling within the approximate model, and once the material and radiometric attributes are specified, dose rates can be calculated with the point-kernel method. To account for radiation scattering effects, buildup factors are calculated with the geometric-progression fitting formula. The effectiveness and accuracy of the proposed method were verified by means of simulations using different geometries, and the dose rate results were compared with those derived from the CIDEC code, the MCNP code, and experimental measurements.
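
    The buildup factors mentioned above follow a geometric-progression (GP) fitting form; the sketch below uses the widely tabulated ANSI/ANS-6.4.3 parameterization, with coefficient values that are hypothetical placeholders rather than data for a specific material and energy.

        import math

        def gp_buildup(mfp, a, b, c, d, xk):
            """Geometric-progression buildup factor B(x) for x mean free paths,
            using the ANSI/ANS-6.4.3 form: B = 1 + (b - 1) * (K**x - 1) / (K - 1)
            (or 1 + (b - 1) * x when K == 1), with
            K(x) = c * x**a + d * (tanh(x / xk - 2) - tanh(-2)) / (1 - tanh(-2))."""
            k = (c * mfp**a
                 + d * (math.tanh(mfp / xk - 2.0) - math.tanh(-2.0))
                 / (1.0 - math.tanh(-2.0)))
            if abs(k - 1.0) < 1e-9:
                return 1.0 + (b - 1.0) * mfp
            return 1.0 + (b - 1.0) * (k**mfp - 1.0) / (k - 1.0)

        # Hypothetical GP coefficients (a, b, c, d, Xk) for one material/energy bin;
        # real values would come from the tabulations used by the dose assessment code.
        coeffs = dict(a=0.10, b=2.0, c=1.2, d=-0.05, xk=14.0)

        for x in (1.0, 2.0, 5.0, 10.0):
            print(f"buildup at {x:4.1f} mfp: {gp_buildup(x, **coeffs):.2f}")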

  4. Dose calculation algorithm of fast fine-heterogeneity correction for heavy charged particle radiotherapy.

    PubMed

    Kanematsu, Nobuyuki

    2011-04-01

    This work addresses computing techniques for dose calculations in treatment planning with proton and ion beams, based on an efficient kernel-convolution method referred to as grid-dose spreading (GDS) and an accurate heterogeneity-correction method referred to as Gaussian beam splitting. The original GDS algorithm suffered from distortion of the dose distribution for beams tilted with respect to the dose-grid axes. Use of intermediate grids normal to the beam field has solved the beam-tilting distortion. The interplay between beam and grid arrangement was found to be another intrinsic source of artifact. Inclusion of rectangular-kernel convolution in beam transport, to share the beam contribution among the nearest grids in a regulated manner, has solved the interplay problem. This algorithmic framework was applied to a tilted proton pencil beam and a broad carbon-ion beam. In these cases, while the elementary pencil beams individually split into several tens of beams, the calculation time increased only several-fold with the GDS algorithm. The GDS and beam-splitting methods will complementarily enable accurate and efficient dose calculations for radiotherapy with protons and ions. Copyright © 2010 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  5. SU-F-SPS-06: Implementation of a Back-Projection Algorithm for 2D in Vivo Dosimetry with An EPID System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez Reyes, B; Rodriguez Perez, E; Sosa Aquino, M

    Purpose: To implement a back-projection algorithm for 2D dose reconstructions for in vivo dosimetry in radiation therapy using an Electronic Portal Imaging Device (EPID) based on amorphous silicon. Methods: An EPID system was used to determine the dose-response function, pixel sensitivity map, exponential scatter kernels, and beam hardening correction for the back-projection algorithm. All measurements were done with a 6 MV beam. A 2D dose reconstruction for an irradiated water phantom (30×30×30 cm³) was done to verify the algorithm implementation. A gamma index evaluation between the 2D reconstructed dose and the dose calculated with a treatment planning system (TPS) was performed. Results: A linear fit was found for the dose-response function. The pixel sensitivity map has radial symmetry and was calculated with a profile of the pixel sensitivity variation. The parameters for the scatter kernels were determined only for a 6 MV beam. The primary dose was estimated by applying the scatter kernel within the EPID and the scatter kernel within the patient. The beam hardening coefficient is σBH = 3.788×10⁻⁴ cm² and the effective linear attenuation coefficient is μAC = 0.06084 cm⁻¹. 95% of the evaluated points had γ values no larger than unity, with gamma criteria of ΔD = 3% and Δd = 3 mm, within the 50% isodose surface. Conclusion: The use of EPID systems proved to be a fast tool for in vivo dosimetry, but the implementation is more complex than that required for pre-treatment dose verification; therefore, a simpler method should be investigated. The accuracy of this method could be improved by modifying the algorithm in order to compare lower isodose curves.
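
    A brief 1D sketch of the gamma evaluation quoted above (ΔD = 3%, Δd = 3 mm): for each reference point, gamma is the minimum combined dose-difference/distance metric over the evaluated profile. The profiles and grid spacing are placeholders, and a clinical gamma tool would also handle 2D/3D grids, resampling, and dose thresholds.

        import numpy as np

        def gamma_1d(ref_dose, eval_dose, spacing_mm, dose_crit=0.03, dist_crit_mm=3.0):
            """Global 1D gamma index: dose criterion as a fraction of the reference
            maximum, distance criterion in mm. Returns one gamma value per point."""
            x = np.arange(len(ref_dose)) * spacing_mm
            dose_norm = dose_crit * ref_dose.max()
            gammas = np.empty(len(ref_dose))
            for i, (xi, di) in enumerate(zip(x, ref_dose)):
                dist_term = ((x - xi) / dist_crit_mm) ** 2
                dose_term = ((eval_dose - di) / dose_norm) ** 2
                gammas[i] = np.sqrt(np.min(dist_term + dose_term))
            return gammas

        # Placeholder reconstructed (EPID) and TPS-calculated profiles on a 1 mm grid.
        x = np.arange(0.0, 100.0, 1.0)
        tps = 100.0 / (1.0 + np.exp((x - 70.0) / 3.0))          # profile with a falloff
        epid = 1.02 * 100.0 / (1.0 + np.exp((x - 70.5) / 3.0))  # 2% and 0.5 mm deviation

        g = gamma_1d(tps, epid, spacing_mm=1.0)
        print("gamma passing rate (gamma <= 1):", np.mean(g <= 1.0))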

  6. Evaluation of the accuracy of mono-energetic electron and beta-emitting isotope dose-point kernels using particle and heavy ion transport code system: PHITS.

    PubMed

    Shiiba, Takuro; Kuga, Naoya; Kuroiwa, Yasuyoshi; Sato, Tatsuhiko

    2017-10-01

    We assessed the accuracy of mono-energetic electron and beta-emitting isotope dose-point kernels (DPKs) calculated using the particle and heavy ion transport code system (PHITS) for patient-specific dosimetry in targeted radionuclide treatment (TRT) and compared our data with published data. All mono-energetic and beta-emitting isotope DPKs calculated using PHITS, both in water and compact bone, were in good agreement with those reported in the literature using other MC codes. PHITS provided reliable mono-energetic electron and beta-emitting isotope scaled DPKs for patient-specific dosimetry. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. SU-E-T-02: 90Y Microspheres Dosimetry Calculation with Voxel-S-Value Method: A Simple Use in the Clinic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maneru, F; Gracia, M; Gallardo, N

    2015-06-15

    Purpose: To present a simple and feasible method of voxel-S-value (VSV) dosimetry calculation for daily clinical use in radioembolization (RE) with 90Y microspheres. Dose distributions are obtained and visualized over CT images. Methods: Spatial dose distributions and doses in liver and tumor are calculated for RE patients treated with Sirtex Medical microspheres at our center. Data obtained from the pre-treatment simulation were the basis for the calculations: a Tc-99m macroaggregated albumin SPECT-CT study acquired on a gamma camera (Infinia, General Electric Healthcare). Attenuation correction and an ordered-subsets expectation maximization (OSEM) algorithm were applied. For VSV calculations, both SPECT and CT were exported from the gamma camera workstation and registered with the radiotherapy treatment planning system (Eclipse, Varian Medical Systems). Convolution of the activity matrix with the local dose deposition kernel (S values) was implemented with in-house software written in Python. The kernel was downloaded from www.medphys.it. The final dose distribution was evaluated with the free software Dicompyler. Results: The liver mean dose is consistent with Partition method calculations (accepted as a good standard). Tumor dose has not been evaluated due to the high dependence on its contouring. Small lesion size, hot spots in healthy tissue and blurred margins can strongly affect the dose distribution in tumors. The extra work includes export and import of images and other DICOM files, creation and calculation of a dummy external radiotherapy plan, the convolution calculation, and evaluation of the dose distribution with Dicompyler. The total time spent is less than 2 hours. Conclusion: VSV calculations do not require any extra appointment or any uncomfortable process for the patient. The total process is short enough to be carried out on the same day as the simulation and to contribute to prescription decisions prior to treatment. Three-dimensional dose knowledge provides much more information than other methods of dose calculation usually applied in the clinic.
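
    The core of a VSV calculation of this kind is a 3D convolution of the cumulated-decay map with a voxel S-value kernel tabulated on the same grid. A minimal sketch using an FFT-based convolution is shown below; the function name, the toy grid and the kernel values in the commented example are hypothetical, and a real calculation would use a published 90Y kernel matched to the 4.8 mm SPECT voxel size.

```python
import numpy as np
from scipy.signal import fftconvolve

def vsv_dose(cumulated_decays, s_kernel):
    """Voxel-S-value dosimetry: absorbed dose = (decays per voxel) convolved
    with the S-value kernel.

    cumulated_decays : 3D array of total decays per voxel (from the quantified
                       SPECT study and the 90Y physical decay)
    s_kernel         : 3D array of S values [Gy/decay] tabulated on the same
                       voxel size as the SPECT grid
    """
    # FFT-based convolution; mode="same" keeps the dose grid aligned with SPECT
    return fftconvolve(cumulated_decays, s_kernel, mode="same")

# Toy example (placeholder numbers, not a published kernel):
# decays = np.zeros((64, 64, 64)); decays[32, 32, 32] = 1e9
# kernel = np.zeros((11, 11, 11)); kernel[5, 5, 5] = 2.4e-10
# dose = vsv_dose(decays, kernel)
```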

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kieselmann, J; Bartzsch, S; Oelfke, U

    Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging due to the arising dose gradients and the therapeutically important dose ranges. Monte Carlo (MC) simulations, often used as the gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While the peak doses of all dose calculation methods agreed within less than 4% deviation, the proposed approach surpassed a simple convolution algorithm in accuracy by a factor of up to 3 in the scatter dose. In a treatment geometry similar to possible future clinical situations, differences between Monte Carlo and the differential geometry algorithm were less than 3%. At the same time the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve the dose calculation based on the CA method with respect to accuracy, especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20×20 mm² field on a 3.4 GHz processor with 8 GByte RAM remained below 15 minutes. Parallelisation and optimisation of the algorithm could lead to further significant calculation time reductions.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Liu, B; Liang, B

    Purpose: The current CyberKnife treatment planning system (TPS) provides two dose calculation algorithms: Ray-tracing and Monte Carlo. The Ray-tracing algorithm is fast but less accurate, and cannot handle the irregular fields made possible by the multi-leaf collimator system recently introduced on the CyberKnife M6 system. The Monte Carlo method has well-known accuracy, but the current version still takes a long time to finish dose calculations. The purpose of this paper is to develop a fast GPU-based C/S dose engine for the CyberKnife system to achieve both accuracy and efficiency. Methods: The TERMA distribution from a poly-energetic source was calculated in a beam's-eye-view coordinate system, which is GPU friendly and has linear complexity. The dose distribution was then computed by inversely collecting the energy depositions from all TERMA points along 192 collapsed-cone directions. An EGSnrc user code was used to pre-calculate energy deposition kernels (EDKs) for a series of mono-energetic photons. The energy spectrum was reconstructed from the measured tissue maximum ratio (TMR) curve, and the TERMA-averaged cumulative kernels were then calculated. Beam hardening parameters and intensity profiles were optimized based on measurement data from the CyberKnife system. Results: The differences between measured and calculated TMR are less than 1% for all collimators except in the build-up regions. The calculated profiles also showed good agreement with the measured doses, within 1% except in the penumbra regions. The developed C/S dose engine was also used to evaluate four clinical CyberKnife treatment plans; the results showed better dose calculation accuracy than the Ray-tracing algorithm, taking the Monte Carlo method as reference, for heterogeneous cases. The dose calculation time is a few seconds per beam, depending on the collimator size and the dose calculation grid. Conclusion: A GPU-based C/S dose engine has been developed for the CyberKnife system, which was proven to be efficient and accurate for clinical purposes, and can be easily implemented in a TPS.
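
    The first stage of a collapsed-cone C/S engine like the one described is the TERMA calculation, i.e. ray tracing the attenuated energy fluence through the density grid and multiplying by the mass attenuation coefficient. The fragment below is a single-ray, mono-energetic sketch of that step, with spectral hardening, beam divergence and the subsequent cone-collection stage omitted; all parameter names and values are illustrative rather than taken from the abstract.

```python
import numpy as np

def terma_along_ray(fluence_0, mu_over_rho, mu, step_cm, n_steps, density):
    """TERMA along a single ray in beam's-eye-view geometry (mono-energetic,
    hardening and divergence omitted for brevity).

    fluence_0   : energy fluence entering the ray [MeV/cm^2]
    mu_over_rho : mass attenuation coefficient [cm^2/g]
    mu          : linear attenuation coefficient in water [1/cm]
    step_cm     : ray-marching step [cm]
    n_steps     : number of voxels along the ray
    density     : array of voxel densities relative to water
    """
    terma = np.zeros(n_steps)
    radiological_depth = 0.0
    for i in range(n_steps):
        radiological_depth += density[i] * step_cm           # density scaling
        fluence = fluence_0 * np.exp(-mu * radiological_depth)
        terma[i] = mu_over_rho * fluence                     # energy released per unit mass
    return terma
```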

  10. Calculation of electron Dose Point Kernel in water with GEANT4 for medical application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guimaraes, C. C.; Sene, F. F.; Martinelli, J. R.

    2009-06-03

    The rapid insertion of new technologies into medical physics in recent years, especially in nuclear medicine, has been accompanied by the development of ever faster Monte Carlo algorithms. GEANT4 is a Monte Carlo toolkit that contains the tools needed to simulate particle transport through matter. In this work, GEANT4 was used to calculate the dose-point-kernel (DPK) for monoenergetic electrons in water, which is an important reference medium for nuclear medicine. The three different physical models of electromagnetic interactions provided by GEANT4 - Low Energy, Penelope and Standard - were employed. To verify the adequacy of these models, the results were compared with references from the literature. For all energies and physical models, the agreement between the calculated DPKs and reported values is satisfactory.

  11. An efficient method to determine double Gaussian fluence parameters in the eclipse™ proton pencil beam model.

    PubMed

    Shen, Jiajian; Liu, Wei; Stoker, Joshua; Ding, Xiaoning; Anand, Aman; Hu, Yanle; Herman, Michael G; Bues, Martin

    2016-12-01

    To find an efficient method to configure the proton fluence for a commercial proton pencil beam scanning (PBS) treatment planning system (TPS). An in-water dose kernel was developed to mimic the dose kernel of the pencil beam convolution superposition algorithm, which is part of the commercial proton beam therapy planning software, eclipse™ (Varian Medical Systems, Palo Alto, CA). The field size factor (FSF) was calculated based on the spot profile reconstructed by the in-house dose kernel. The workflow of using FSFs to find the desirable proton fluence is presented. The in-house derived spot profile and FSF were validated by a direct comparison with those calculated by the eclipse TPS. The validation included 420 comparisons of the FSFs from 14 proton energies, various field sizes from 2 to 20 cm and various depths from 20% to 80% of the proton range. The relative in-water lateral profiles from the in-house calculation and the eclipse TPS agree very well, even at the 10⁻⁴ level. The FSFs from the in-house calculation and the eclipse TPS also agree well. The maximum deviation is within 0.5%, and the standard deviation is less than 0.1%. The authors' method significantly reduced the time needed to find the desirable proton fluences of the clinical energies. The method is extensively validated and can be applied to any proton center using PBS and the eclipse TPS.
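
    A field size factor of the kind validated above is the central-axis dose of a uniformly scanned field relative to a reference field, with each spot described by a double-Gaussian lateral fluence. The sketch below computes such an FSF by direct spot summation; the spot spacing, Gaussian widths and halo weight are made-up illustrative numbers, not the fitted eclipse™ parameters.

```python
import numpy as np

def gauss2d(r2, sigma):
    """Normalised 2D Gaussian evaluated at squared radius r2."""
    return np.exp(-r2 / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)

def central_axis_dose(field_size_cm, spot_spacing_cm, w, sigma1_cm, sigma2_cm):
    """Dose at the centre of a square scanned field built from a uniform spot
    grid, each spot having a double-Gaussian lateral profile."""
    half = field_size_cm / 2.0
    xs = np.arange(-half, half + 1e-9, spot_spacing_cm)
    xx, yy = np.meshgrid(xs, xs)
    r2 = xx**2 + yy**2
    profile = (1.0 - w) * gauss2d(r2, sigma1_cm) + w * gauss2d(r2, sigma2_cm)
    return profile.sum()

def field_size_factor(field_size_cm, ref_size_cm=10.0, spot_spacing_cm=0.25,
                      w=0.05, sigma1_cm=0.5, sigma2_cm=2.0):
    """FSF = central-axis dose of the field of interest divided by that of the
    reference field, all other parameters kept equal (illustrative values)."""
    args = (spot_spacing_cm, w, sigma1_cm, sigma2_cm)
    return central_axis_dose(field_size_cm, *args) / central_axis_dose(ref_size_cm, *args)

print(field_size_factor(4.0))   # e.g. FSF for a 4 cm field relative to 10 cm
```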

  12. An analytical dose-averaged LET calculation algorithm considering the off-axis LET enhancement by secondary protons for spot-scanning proton therapy.

    PubMed

    Hirayama, Shusuke; Matsuura, Taeko; Ueda, Hideaki; Fujii, Yusuke; Fujii, Takaaki; Takao, Seishin; Miyamoto, Naoki; Shimizu, Shinichi; Fujimoto, Rintaro; Umegaki, Kikuo; Shirato, Hiroki

    2018-05-22

    To evaluate the biological effects of proton beams as part of the daily clinical routine, fast and accurate calculation of the dose-averaged linear energy transfer (LET_d) is required. In this study, we have developed an analytical LET_d calculation method based on the pencil-beam algorithm (PBA), considering the off-axis enhancement by secondary protons. This algorithm (PBA-dLET) was then validated using Monte Carlo simulation (MCS) results. In PBA-dLET, LET values were assigned separately to each individual dose kernel of the PBA. For the dose kernel, we employed a triple Gaussian model which consists of the primary component (protons that undergo multiple Coulomb scattering) and the halo component (protons that undergo inelastic, nonelastic and elastic nuclear reactions); the primary and halo components were represented by a single Gaussian and the sum of two Gaussian distributions, respectively. Although previous analytical approaches assumed a constant LET_d value across the lateral distribution of a pencil beam, the actual LET_d increases away from the beam axis, because there are more scattered, and therefore lower energy, protons with higher stopping powers. To reflect this LET_d behavior, we have assumed that the LETs of the primary and halo components can take different values (LET_p and LET_halo), which vary only along the depth direction. The values of the dual-LET kernels were determined such that the PBA-dLET reproduced the MCS-generated LET_d distribution in both small and large fields. These values were generated at intervals of 1 mm in depth for 96 energies from 70.2 to 220 MeV and collected in a look-up table. Finally, we compared the LET_d distributions and mean LET_d (LET_d,mean) values of targets and organs at risk between PBA-dLET and MCS. Both homogeneous phantom and patient geometries (prostate, liver, and lung cases) were used to validate the present method. In the homogeneous phantom, the LET_d profiles obtained with the dual-LET kernels agree well with the MCS results except in the low-dose region of the lateral penumbra, where the actual dose was below 10% of the maximum dose. In the patient geometries, the LET_d profiles calculated with the developed method reproduce the MCS with accuracy similar to that in the homogeneous phantom. The maximum differences in LET_d,mean for each structure between the PBA-dLET and the MCS were 0.06 keV/μm in the homogeneous phantoms and 0.08 keV/μm in the patient geometries under all tested conditions. We confirmed that the dual-LET-kernel model reproduces the MCS well, not only in the homogeneous phantom but also in complex patient geometries. The accuracy of the LET_d was largely improved over the single-LET-kernel model, especially in the lateral penumbra. The model is expected to be useful, especially for proper recognition of the risk of side effects when the target is next to critical organs. © 2018 American Association of Physicists in Medicine.
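
    In a dual-LET-kernel model of this type, each pencil-beam dose component carries its own depth-dependent LET, and the voxel LET_d is recovered as the dose-weighted average of the two. A minimal sketch of that mixing step is shown below; the function and argument names are hypothetical, and the handling of zero-dose voxels is an implementation choice rather than something specified in the abstract.

```python
import numpy as np

def dual_kernel_let_d(dose_primary, dose_halo, let_primary, let_halo):
    """Dose-averaged LET from a dual-LET-kernel model: each dose component of
    a pencil beam carries its own depth-dependent LET, and LET_d is their
    dose-weighted average.

    dose_primary, dose_halo : arrays of dose from the primary (single-Gaussian)
                              and halo (two-Gaussian) components [Gy]
    let_primary, let_halo   : LET values assigned to those components [keV/um],
                              varying with depth only
    """
    total = dose_primary + dose_halo
    with np.errstate(invalid="ignore", divide="ignore"):
        let_d = (dose_primary * let_primary + dose_halo * let_halo) / total
    # Voxels with no dose are set to zero here (an arbitrary convention).
    return np.where(total > 0, let_d, 0.0)
```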

  13. Ford Motor Company NDE facility shielding design.

    PubMed

    Metzger, Robert L; Van Riper, Kenneth A; Jones, Martin H

    2005-01-01

    Ford Motor Company proposed the construction of a large non-destructive evaluation laboratory for radiography of automotive power train components. The authors were commissioned to design the shielding and to survey the completed facility for compliance with radiation dose limits for occupationally and non-occupationally exposed personnel. The two X-ray sources are Varian Linatron 3000 accelerators operating at 9-11 MV. One performs computed tomography of automotive transmissions, while the other does real-time radiography of operating engines and transmissions. The shield thicknesses for the primary barrier and all secondary barriers were determined by point-kernel techniques. Point-kernel techniques did not work well for skyshine calculations or for locations where multiple sources (e.g. tube head leakage and various scatter fields) affected doses. Shielding for these areas was determined using transport calculations. A number of MCNP [Briesmeister, J. F. MCNP - A general Monte Carlo N-particle transport code, version 4B. Los Alamos National Laboratory Manual (1997)] calculations focused on skyshine estimates and the office areas. Measurements on the operational facility confirmed the shielding calculations.

  14. Total Ambient Dose Equivalent Buildup Factor Determination for Nbs04 Concrete.

    PubMed

    Duckic, Paulina; Hayes, Robert B

    2018-06-01

    Buildup factors are dimensionless multiplicative factors required by the point kernel method to account for scattered radiation through a shielding material. The accuracy of the point kernel method is strongly affected by how well the analyzed parameters correspond to the experimental configuration, a correspondence this work attempts to simplify. The point kernel method has not found widespread practical use for neutron shielding calculations due to the complex neutron transport behavior through shielding materials (i.e. the variety of interaction mechanisms that neutrons may undergo while traversing the shield) as well as the non-linear energy dependence of the neutron total cross section. In this work, total ambient dose buildup factors for NBS04 concrete are calculated in terms of neutron and secondary gamma ray transmission factors. The neutron and secondary gamma ray transmission factors are calculated using the MCNP6™ code with updated cross sections. Both transmission factors and buildup factors are given in tabulated form. Practical use of neutron transmission and buildup factors warrants rigorously calculated results with all associated uncertainties. In this work, a sensitivity analysis of neutron transmission factors and total buildup factors with varying water content has been conducted. The analysis showed a significant impact of varying water content in the concrete on both the neutron transmission factors and the total buildup factors. Finally, support vector regression, a machine learning technique, has been employed to build a model from the calculated data for calculation of the buildup factors. The developed model can predict most of the data within 20% relative error.
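
    A support-vector-regression surrogate of the kind mentioned can be assembled in a few lines with scikit-learn once the transmission/buildup data have been tabulated. The sketch below is illustrative only: the feature choice (shield thickness and water content), the training values and the hyperparameters are placeholders, not the data or model settings reported in the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical training set: (shield thickness [cm], water content [%]) versus
# calculated total buildup factor.  Real data would come from the MCNP6
# transmission calculations described in the abstract.
X = np.array([[10.0, 4.0], [20.0, 4.0], [30.0, 4.0],
              [10.0, 6.0], [20.0, 6.0], [30.0, 6.0]])
y = np.array([2.1, 3.4, 5.0, 1.9, 3.0, 4.4])        # placeholder buildup factors

# Scale the features, then fit an RBF-kernel support vector regressor.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X, y)

# Predict the buildup factor for an intermediate thickness / water content.
print(model.predict([[25.0, 5.0]]))
```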

  15. Statistic and dosimetric criteria to assess the shift of the prescribed dose for lung radiotherapy plans when integrating point kernel models in medical physics: are we ready?

    PubMed

    Chaikh, Abdulhamid; Balosso, Jacques

    2016-12-01

    To apply statistical bootstrap analysis and dosimetric criteria to assess the change of prescribed dose (PD) for lung cancer needed to maintain the same clinical results when using new generations of dose calculation algorithms. Nine lung cancer cases were studied. For each patient, three treatment plans were generated using exactly the same beam arrangements. In plan 1, the dose was calculated using the pencil beam convolution (PBC) algorithm with the modified Batho heterogeneity correction turned on (PBC-MB). In plan 2, the dose was calculated using the anisotropic analytical algorithm (AAA) and the same PD as plan 1. In plan 3, the dose was calculated using AAA with the monitor units (MUs) obtained from PBC-MB as input. The dosimetric criteria include MUs, delivered dose at the isocentre (Diso) and calculated dose to 95% of the target volume (D95). The bootstrap method was used to assess the significance of the dose differences and to accurately estimate the 95% confidence interval (95% CI). Wilcoxon and Spearman's rank tests were used to calculate P values and the correlation coefficient (ρ). A statistically significant dose difference was found with the point kernel model. A good correlation was observed between both algorithm types, with ρ>0.9. Using AAA instead of PBC-MB, an adjustment of the PD at the isocentre is suggested. For a given set of patients, we assessed the need to readjust the PD for lung cancer using dosimetric indices and the bootstrap statistical method. Thus, if the goal is to maintain the same clinical results, the PD for lung tumors has to be adjusted with AAA. According to our simulation, we suggest readjusting the PD by 5% and optimizing the beam arrangements to better protect the organs at risk (OARs).
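
    Bootstrap estimation of a 95% CI for a mean dose difference, as used above, amounts to resampling the per-patient differences with replacement and taking percentiles of the resampled means. A short sketch follows; the nine difference values in the example are invented for illustration and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(differences, n_resamples=10000, alpha=0.05):
    """Bootstrap confidence interval for the mean dose difference between two
    algorithms (e.g. per-patient D95 from AAA minus D95 from PBC-MB)."""
    differences = np.asarray(differences, dtype=float)
    means = np.array([
        rng.choice(differences, size=differences.size, replace=True).mean()
        for _ in range(n_resamples)
    ])
    low, high = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return differences.mean(), (low, high)

# Hypothetical per-patient dose differences in % for nine lung cases
print(bootstrap_ci([4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.8, 5.0]))
```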

  16. MO-G-17A-06: Kernel Based Dosimetry for 90Y Microsphere Liver Therapy Using 90Y Bremsstrahlung SPECT/CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikell, J; Siman, W; Kappadath, S

    2014-06-15

    Purpose: 90Y microsphere therapy in the liver presents a situation where beta transport is dominant and the tissue is relatively homogeneous. We compare voxel-based absorbed doses from a 90Y kernel to Monte Carlo (MC) using quantitative 90Y bremsstrahlung SPECT/CT as the source distribution. Methods: Liver, normal liver, and tumors were delineated by an interventional radiologist using contrast-enhanced CT registered with 90Y SPECT/CT scans for 14 therapies. The right lung was segmented via region growing. The kernel was generated with 1.04 g/cc soft tissue for 4.8 mm voxels matching the SPECT. MC simulation materials included air, lung, soft tissue, and bone with varying densities. We report the percent difference between kernel and MC (%Δ(K,MC)) for mean absorbed dose, D70, and V20Gy in total liver, normal liver, tumors, and right lung. We also report %Δ(K,MC) for heterogeneity metrics: coefficient of variation (COV) and D10/D90. The impact of spatial resolution (0, 10, 20 mm FWHM) and lung shunt fraction (LSF) (1, 5, 10, 20%) on the accuracy of MC and kernel doses near the liver-lung interface was modeled in 1D. We report the distance from the interface where errors become <10% of unblurred MC as d10(side of interface, dose calculation, FWHM blurring, LSF). Results: The %Δ(K,MC) for mean, D70, and V20Gy in tumor and liver was <7%, while right lung differences varied from 60–90%. The %Δ(K,MC) for COV was <4.8% for tumor and liver and <54% for the right lung. The %Δ(K,MC) for D10/D90 was <5% for 22/23 tumors. d10(liver,MC,10,1–20) were <9 mm and d10(liver,MC,20,1–20) were <15 mm; both agreed within 3 mm with the kernel. d10(lung,MC,10,20), d10(lung,MC,10,1), d10(lung,MC,20,20), and d10(lung,MC,20,1) were 6, 25, 15, and 34 mm, respectively. Kernel calculations on blurred distributions in lung had errors >10%. Conclusions: Liver and tumor voxel doses with the 90Y kernel and MC agree within 7%. Large differences exist between the two methods in the right lung. Research reported in this publication was supported by the National Cancer Institute of the National Institutes of Health under Award Number R01CA138986. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

  17. Neutron dose rate analysis on HTGR-10 reactor using Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Suwoto; Adrial, H.; Hamzah, A.; Zuhair; Bakhri, S.; Sunaryo, G. R.

    2018-02-01

    The HTGR-10 reactor has a cylinder-shaped core fuelled with TRISO coated fuel particle kernels in spherical pebbles and a helium cooling system. The helium coolant outlet temperature from the reactor core is designed to be 700 °C. One advantage of the HTGR type reactor is its co-generation capability: in addition to generating electricity, the reactor is designed to produce high-temperature heat that can be used for other processes. Each spherical fuel pebble contains 8335 TRISO coated UO2 kernel particles, with enrichments of 10% and 17%, dispersed in a graphite matrix. The main purpose of this study was to analyse the distribution of neutron dose rates generated by the HTGR-10 reactor. The calculation and analysis of the neutron dose rate in the HTGR-10 reactor core were performed using the Monte Carlo MCNP5v1.6 code. The double heterogeneity of the TRISO coated fuel particle kernels and the spherical fuel pebbles in the HTGR-10 core is modelled well with the MCNP5v1.6 code. Neutron flux-to-dose conversion factors taken from the International Commission on Radiological Protection (ICRP-74) were used to determine the dose rate passing through the active core, reflectors, core barrel, reactor pressure vessel (RPV) and biological shield. The neutron dose rates calculated with the MCNP5v1.6 code using the ICRP-74 (2009) conversion factors for radiation workers, in the radial direction outside the RPV (radial position = 220 cm from the center of the HTGR-10 core), are 9.22E-4 μSv/h and 9.58E-4 μSv/h for 10% and 17% enrichment, respectively. The calculated values of the neutron dose rates comply with BAPETEN Chairman’s Regulation Number 4 Year 2013 on Radiation Protection and Safety in Nuclear Energy Utilization, which sets the limit for the average effective dose for radiation workers at 20 mSv/year, or 10 μSv/h. Thus, the protection and safety requirements for radiation workers with respect to this radiation source have been fulfilled. From this analysis, it can be concluded that the calculated neutron dose rates for the HTGR-10 core meet the required radiation safety standards.

  18. Handling Density Conversion in TPS.

    PubMed

    Isobe, Tomonori; Mori, Yutaro; Takei, Hideyuki; Sato, Eisuke; Tadano, Kiichi; Kobayashi, Daisuke; Tomita, Tetsuya; Sakae, Takeji

    2016-01-01

    Conversion from CT value to density is essential to a radiation treatment planning system. Generally, the CT value is converted to electron density in photon therapy. In the therapeutic photon energy range, interactions between photons and materials are dominated by Compton scattering, whose cross-section depends on the electron density. The dose distribution is obtained by calculating TERMA and the kernel using the electron density, where TERMA is the energy transferred from primary photons and the kernel describes the volume over which the released electrons spread. Recently, a new method was introduced which uses the physical density. This method is expected to be faster and more accurate than that using the electron density. For particle therapy, dose can be calculated with a CT-to-stopping-power conversion, since the stopping power depends on the electron density. The CT-to-stopping-power conversion table is also called the CT-to-water-equivalent-range table and is an essential concept for particle therapy.
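
    Whichever density quantity the TPS uses, the conversion itself is typically a piecewise-linear lookup from CT number. The following sketch shows such a lookup; the table values are generic illustrative numbers and any real table must come from the scanner's own calibration.

```python
import numpy as np

# Illustrative CT-to-density table (placeholder values, not a vendor calibration).
HU_POINTS      = np.array([-1000.0, 0.0, 1000.0, 3000.0])
DENSITY_POINTS = np.array([0.001,   1.0, 1.6,    2.9])    # physical density [g/cm^3]

def hu_to_density(hu):
    """Piecewise-linear conversion from CT number to physical density."""
    return np.interp(hu, HU_POINTS, DENSITY_POINTS)

# Example: lung (-750 HU), soft tissue (40 HU) and dense bone (1200 HU)
print(hu_to_density(np.array([-750.0, 40.0, 1200.0])))
```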

  19. MO-FG-CAMPUS-TeP1-05: Rapid and Efficient 3D Dosimetry for End-To-End Patient-Specific QA of Rotational SBRT Deliveries Using a High-Resolution EPID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Han, B; Xing, L

    2016-06-15

    Purpose: EPID-based patient-specific quality assurance provides verification of the planning setup and delivery process that phantomless QA and log-file based virtual dosimetry methods cannot achieve. We present a method for EPID-based QA utilizing spatially-variant EPID response kernels that allows for direct calculation of the entrance fluence and 3D phantom dose. Methods: An EPID dosimetry system was utilized for 3D dose reconstruction in a cylindrical phantom for the purposes of end-to-end QA. Monte Carlo (MC) methods were used to generate pixel-specific point-spread functions (PSFs) characterizing the spatially non-uniform EPID portal response in the presence of phantom scatter. The spatially-variant PSFs were decomposed into spatially-invariant basis PSFs, with the symmetric central-axis kernel as the primary basis kernel and the off-axis kernels representing orthogonal perturbations in pixel space. This compact and accurate characterization enables the use of a modified Richardson-Lucy deconvolution algorithm to directly reconstruct the entrance fluence from EPID images without iterative scatter subtraction. High-resolution phantom dose kernels were co-generated in MC with the PSFs, enabling direct recalculation of the resulting phantom dose by rapid forward convolution once the entrance fluence was calculated. A Delta4 QA phantom was used to validate the dose reconstructed in this approach. Results: The spatially-invariant representation of the EPID response accurately reproduced the entrance fluence with >99.5% fidelity and a simultaneous reduction of >60% in computational overhead. 3D dose for 10⁶ voxels was reconstructed for the entire phantom geometry. A 3D global gamma analysis demonstrated a >95% pass rate at 3%/3mm. Conclusion: Our approach demonstrates the capabilities of an EPID-based end-to-end QA methodology that is more efficient than traditional EPID dosimetry methods. Displacing the point of measurement external to the QA phantom reduces the necessary complexity of the phantom itself while offering a method that is highly scalable and inherently generalizable to rotational and trajectory based deliveries. This research was partially supported by Varian.
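
    The fluence-reconstruction step rests on Richardson-Lucy deconvolution; the abstract's spatially variant, basis-decomposed version can be thought of as repeated applications of the plain update below. This sketch assumes a single spatially invariant PSF and is meant only to show the iteration itself, not the authors' full algorithm.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(epid_image, psf, n_iter=30):
    """Plain Richardson-Lucy deconvolution of an EPID image with one
    spatially invariant point-spread function.

    epid_image : 2D measured portal image (non-negative)
    psf        : 2D point-spread function, normalised to unit sum
    """
    psf = psf / psf.sum()
    psf_flipped = psf[::-1, ::-1]                 # adjoint of the blurring operator
    estimate = np.full_like(epid_image, epid_image.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = epid_image / np.maximum(blurred, 1e-12)   # avoid division by zero
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate
```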

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klüter, Sebastian, E-mail: sebastian.klueter@med.uni-heidelberg.de; Schubert, Kai; Lissner, Steffen

    Purpose: The dosimetric verification of treatment plans in helical tomotherapy is usually carried out via verification measurements. In this study, a method for independent dose calculation of tomotherapy treatment plans is presented that uses a conventional treatment planning system with a pencil kernel dose calculation algorithm for generation of verification dose distributions based on patient CT data. Methods: A pencil beam algorithm that directly uses measured beam data was configured for dose calculation for a tomotherapy machine. Tomotherapy treatment plans were converted into a format readable by an in-house treatment planning system by assigning each projection to one static treatment field and shifting the calculation isocenter for each field in order to account for the couch movement. The modulation of the fluence for each projection is read out of the delivery sinogram, and with the kernel-based dose calculation, this information can directly be used for dose calculation without the need for decomposition of the sinogram. The sinogram values are only corrected for leaf output and leaf latency. Using the converted treatment plans, dose was recalculated with the independent treatment planning system. Multiple treatment plans ranging from simple static fields to real patient treatment plans were calculated using the new approach and either compared to actual measurements or to the 3D dose distribution calculated by the tomotherapy treatment planning system. In addition, dose–volume histograms were calculated for the patient plans. Results: Except for minor deviations at the maximum field size, the pencil beam dose calculation for static beams agreed with measurements in a water tank within 2%/2 mm. A mean deviation from point dose measurements in the cheese phantom of 0.89% ± 0.81% was found for unmodulated helical plans. A mean voxel-based deviation of −0.67% ± 1.11% for all voxels in the respective high dose region (dose values >80%), and a mean local voxel-based deviation of −2.41% ± 0.75% for all voxels with dose values >20%, were found for 11 modulated plans in the cheese phantom. Averaged over nine patient plans, the deviations amounted to −0.14% ± 1.97% (voxels >80%) and −0.95% ± 2.27% (>20%, local deviations). For a lung case, mean voxel-based deviations of more than 4% were found, while for all other patient plans, all mean voxel-based deviations were within ±2.4%. Conclusions: The presented method is suitable for independent dose calculation for helical tomotherapy within the known limitations of the pencil beam algorithm. It can serve as verification of the primary dose calculation and thereby reduce the need for time-consuming measurements. By using the patient anatomy and generating full 3D dose data, and combined with measurements of additional machine parameters, it can substantially contribute to overall patient safety.

  1. Effect of ultra-low doses, ASIR and MBIR on density and noise levels of MDCT images of dental implant sites.

    PubMed

    Widmann, Gerlig; Al-Shawaf, Reema; Schullian, Peter; Al-Sadhan, Ra'ed; Hörmann, Romed; Al-Ekrish, Asma'a A

    2017-05-01

    Differences in noise and density values in MDCT images obtained using ultra-low doses with FBP, ASIR, and MBIR may possibly affect implant site density analysis. The aim of this study was to compare density and noise measurements recorded from dental implant sites using ultra-low doses combined with FBP, ASIR, and MBIR. Cadavers were scanned using a standard protocol and four low-dose protocols. Scans were reconstructed using FBP, ASIR-50, ASIR-100, and MBIR, and either a bone or standard reconstruction kernel. Density (mean Hounsfield units [HUs]) of alveolar bone and noise levels (mean standard deviation of HUs) were recorded from all datasets and the measurements were compared by paired t tests and two-way ANOVA with repeated measures. Significant differences in density and noise were found between the reference dose/FBP protocol and almost all test combinations. Maximum mean differences in HU were 178.35 (bone kernel) and 273.74 (standard kernel), and in noise were 243.73 (bone kernel) and 153.88 (standard kernel). Decreasing radiation dose increased density and noise regardless of reconstruction technique and kernel. The effect of reconstruction technique on density and noise depends on the reconstruction kernel used. • Ultra-low-dose MDCT protocols allowed more than 90 % reductions in dose. • Decreasing the dose generally increased density and noise. • Effect of IRT on density and noise varies with reconstruction kernel. • Accuracy of low-dose protocols for interpretation of bony anatomy not known. • Effect of low doses on accuracy of computer-aided design models unknown.

  2. Virtual reality based adaptive dose assessment method for arbitrary geometries in nuclear facility decommissioning.

    PubMed

    Liu, Yong-Kuo; Chao, Nan; Xia, Hong; Peng, Min-Jun; Ayodeji, Abiodun

    2018-05-17

    This paper presents an improved and efficient virtual reality-based adaptive dose assessment method (VRBAM) applicable to the cutting and dismantling tasks in nuclear facility decommissioning. The method combines the modeling strength of virtual reality with the flexibility of adaptive technology. The initial geometry is designed with the three-dimensional computer-aided design tools, and a hybrid model composed of cuboids and a point-cloud is generated automatically according to the virtual model of the object. In order to improve the efficiency of dose calculation while retaining accuracy, the hybrid model is converted to a weighted point-cloud model, and the point kernels are generated by adaptively simplifying the weighted point-cloud model according to the detector position, an approach that is suitable for arbitrary geometries. The dose rates are calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression formula in the fitting function. The geometric modeling capability of VRBAM was verified by simulating basic geometries, which included a convex surface, a concave surface, a flat surface and their combination. The simulation results show that the VRBAM is more flexible and superior to other approaches in modeling complex geometries. In this paper, the computation time and dose rate results obtained from the proposed method were also compared with those obtained using the MCNP code and an earlier virtual reality-based method (VRBM) developed by the same authors. © 2018 IOP Publishing Ltd.

  3. Upgrade to iterative image reconstruction (IR) in MDCT imaging: a clinical study for detailed parameter optimization beyond vendor recommendations using the adaptive statistical iterative reconstruction environment (ASIR) Part2: The chest.

    PubMed

    Mueck, F G; Michael, L; Deak, Z; Scherr, M K; Maxien, D; Geyer, L L; Reiser, M; Wirth, S

    2013-07-01

    To compare the image quality in dose-reduced 64-row CT of the chest at different levels of adaptive statistical iterative reconstruction (ASIR) to full-dose baseline examinations reconstructed solely with filtered back projection (FBP) in a realistic upgrade scenario. A waiver of consent was granted by the institutional review board (IRB). The noise index (NI) relates to the standard deviation of Hounsfield units in a water phantom. Baseline exams of the chest (NI = 29; LightSpeed VCT XT, GE Healthcare) were intra-individually compared to follow-up studies on a CT with ASIR after system upgrade (NI = 45; Discovery HD750, GE Healthcare), n = 46. Images were calculated in slice and volume mode with ASIR levels of 0 - 100 % in the standard and lung kernel. Three radiologists independently compared the image quality to the corresponding full-dose baseline examinations (-2: diagnostically inferior, -1: inferior, 0: equal, + 1: superior, + 2: diagnostically superior). Statistical analysis used Wilcoxon's test, Mann-Whitney U test and the intraclass correlation coefficient (ICC). The mean CTDIvol decreased by 53 % from the FBP baseline to 8.0 ± 2.3 mGy for ASIR follow-ups; p < 0.001. The ICC was 0.70. Regarding the standard kernel, the image quality in dose-reduced studies was comparable to the baseline at ASIR 70 % in volume mode (-0.07 ± 0.29, p = 0.29). Concerning the lung kernel, every ASIR level outperformed the baseline image quality (p < 0.001), with ASIR 30 % rated best (slice: 0.70 ± 0.6, volume: 0.74 ± 0.61). Vendors' recommendation of 50 % ASIR is fair. In detail, the ASIR 70 % in volume mode for the standard kernel and ASIR 30 % for the lung kernel performed best, allowing for a dose reduction of approximately 50 %. © Georg Thieme Verlag KG Stuttgart · New York.

  4. Effective dose rate coefficients for exposure to contaminated soil

    DOE PAGES

    Veinot, Kenneth G.; Eckerman, Keith F.; Bellamy, Michael B.; ...

    2017-05-10

    The Oak Ridge National Laboratory Center for Radiation Protection Knowledge has undertaken calculations related to various environmental exposure scenarios. A previous paper reported the results for submersion in radioactive air and immersion in water using age-specific mathematical phantoms. This paper presents age-specific effective dose rate coefficients derived using stylized mathematical phantoms for exposure to contaminated soils. Dose rate coefficients for photons, electrons, and positrons of discrete energies were calculated and folded with the emissions of 1252 radionuclides addressed in ICRP Publication 107 to determine equivalent and effective dose rate coefficients. The MCNP6 radiation transport code was used for organ dose rate calculations for photons, and the contribution of electrons to the skin dose rate was derived using point-kernels. Bremsstrahlung and annihilation photons of positron emission were evaluated as discrete photons. As a result, the coefficients calculated in this work compare favorably to those reported in the US Federal Guidance Report 12 as well as by other authors who employed voxel phantoms for similar exposure scenarios.

  5. Effective dose rate coefficients for exposure to contaminated soil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veinot, Kenneth G.; Eckerman, Keith F.; Bellamy, Michael B.

    The Oak Ridge National Laboratory Center for Radiation Protection Knowledge has undertaken calculations related to various environmental exposure scenarios. A previous paper reported the results for submersion in radioactive air and immersion in water using age-specific mathematical phantoms. This paper presents age-specific effective dose rate coefficients derived using stylized mathematical phantoms for exposure to contaminated soils. Dose rate coefficients for photons, electrons, and positrons of discrete energies were calculated and folded with the emissions of 1252 radionuclides addressed in ICRP Publication 107 to determine equivalent and effective dose rate coefficients. The MCNP6 radiation transport code was used for organ dose rate calculations for photons, and the contribution of electrons to the skin dose rate was derived using point-kernels. Bremsstrahlung and annihilation photons of positron emission were evaluated as discrete photons. As a result, the coefficients calculated in this work compare favorably to those reported in the US Federal Guidance Report 12 as well as by other authors who employed voxel phantoms for similar exposure scenarios.

  6. Development of a Spect-Based Three-Dimensional Treatment Planner for Radionuclide Therapy with Iodine -131.

    NASA Astrophysics Data System (ADS)

    Giap, Huan Bosco

    Accurate calculation of absorbed dose to target tumors and normal tissues in the body is an important requirement for establishing fundamental dose-response relationships for radioimmunotherapy. Two major obstacles have been the difficulty in obtaining an accurate patient-specific 3-D activity map in-vivo and calculating the resulting absorbed dose. This study investigated a methodology for 3-D internal dosimetry, which integrates the 3-D biodistribution of the radionuclide acquired from SPECT with a dose-point kernel convolution technique to provide the 3-D distribution of absorbed dose. Accurate SPECT images were reconstructed with appropriate methods for noise filtering, attenuation correction, and Compton scatter correction. The SPECT images were converted into activity maps using a calibration phantom. The activity map was convolved with an ^{131}I dose-point kernel using a 3-D fast Fourier transform to yield a 3-D distribution of absorbed dose. The 3-D absorbed dose map was then processed to provide the absorbed dose distribution in regions of interest. This methodology can provide heterogeneous distributions of absorbed dose in volumes of any size and shape with nonuniform distributions of activity. Comparison of the activities quantitated by our SPECT methodology to true activities in an Alderson abdominal phantom (with spleen, liver, and spherical tumor) yielded errors of -16.3% to 4.4%. Volume quantitation errors ranged from -4.0 to 5.9% for volumes greater than 88 ml. The percentage differences between the average absorbed dose rates calculated by this methodology and the MIRD S-values were 9.1% for the liver, 13.7% for the spleen, and 0.9% for the tumor. Good agreement (percent differences less than 8%) was found between the absorbed dose due to penetrating radiation calculated from this methodology and TLD measurements. More accurate estimates of the 3-D distribution of absorbed dose can be used as a guide in specifying the minimum activity to be administered to patients to deliver a prescribed absorbed dose to tumor without exceeding the toxicity limits of normal tissues.

  7. Use of convolution/superposition-based treatment planning system for dose calculations in the kilovoltage energy range

    NASA Astrophysics Data System (ADS)

    Alaei, Parham

    2000-11-01

    A number of procedures in diagnostic radiology and cardiology make use of long exposures to x rays from fluoroscopy units. Adverse effects of these long exposure times on the patients' skin have been documented in recent years. These include epilation, erythema, and, in severe cases, moist desquamation and tissue necrosis. Potential biological effects from these exposures to other organs include radiation-induced cataracts and pneumonitis. Although there have been numerous studies to measure or calculate the dose to skin from these procedures, there have only been a handful of studies to determine the dose to other organs. Therefore, there is a need for accurate methods to measure the dose in tissues and organs other than the skin. This research was concentrated in devising a method to determine accurately the radiation dose to these tissues and organs. The work was performed in several stages: First, a three dimensional (3D) treatment planning system used in radiation oncology was modified and complemented to make it usable with the low energies of x rays used in diagnostic radiology. Using the system for low energies required generation of energy deposition kernels using Monte Carlo methods. These kernels were generated using the EGS4 Monte Carlo system of codes and added to the treatment planning system. Following modification, the treatment planning system was evaluated for its accuracy of calculations in low energies within homogeneous and heterogeneous media. A study of the effects of lungs and bones on the dose distribution was also performed. The next step was the calculation of dose distributions in humanoid phantoms using this modified system. The system was used to calculate organ doses in these phantoms and the results were compared to those obtained from other methods. These dose distributions can subsequently be used to create dose-volume histograms (DVHs) for internal organs irradiated by these beams. Using this data and the concept of normal tissue complication probability (NTCP) developed for radiation oncology, the risk of future complications in a particular organ can be estimated.

  8. SU-F-T-672: A Novel Kernel-Based Dose Engine for KeV Photon Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reinhart, M; Fast, M F; Nill, S

    2016-06-15

    Purpose: Mimicking state-of-the-art patient radiotherapy with high precision irradiators for small animals allows advanced dose-effect studies and radiobiological investigations. One example is the implementation of pre-clinical IMRT-like irradiations, which requires the development of inverse planning for keV photon beams. As a first step, we present a novel kernel-based dose calculation engine for keV x-rays with explicit consideration of energy and material dependencies. Methods: We follow a superposition-convolution approach adapted to keV x-rays, based on previously published work on micro-beam therapy. In small animal radiotherapy, we assume local energy deposition at the photon interaction point, since the electron ranges in tissue are of the same order of magnitude as the voxel size. This allows us to use photon-only kernel sets generated by MC simulations, which are pre-calculated for six energy windows and ten base materials. We validate our stand-alone dose engine against Geant4 MC simulations for various beam configurations in water, slab phantoms with bone and lung inserts, and on a mouse CT with (0.275 mm)³ voxels. Results: We observe good agreement for all cases. For field sizes of 1 mm² to 1 cm² in water, the depth dose curves agree within 1% (mean), with the largest deviations in the first voxel (4%) and at depths >5 cm (<2.5%). The out-of-field doses at 1 cm depth agree within 8% (mean) for all but the smallest field size. In slab geometries, the mean agreement was within 3%, with maximum deviations of 8% at water-bone interfaces. The γ-index (1mm/1%) passing rate for a single-field mouse irradiation is 71%. Conclusion: The presented dose engine yields an accurate representation of keV-photon doses suitable for inverse treatment planning for IMRT. It has the potential to become a significantly faster yet sufficiently accurate alternative to full MC simulations. Further investigations will focus on energy sampling as well as calculation times. Research at ICR is also supported by Cancer Research UK under Programme C33589/A19727 and NHS funding to the NIHR Biomedical Research Centre at RMH and ICR. MFF is supported by Cancer Research UK under Programme C33589/A19908.

  9. Sub-second pencil beam dose calculation on GPU for adaptive proton therapy.

    PubMed

    da Silva, Joakim; Ansorge, Richard; Jena, Rajesh

    2015-06-21

    Although proton therapy delivered using scanned pencil beams has the potential to produce better dose conformity than conventional radiotherapy, the created dose distributions are more sensitive to anatomical changes and patient motion. Therefore, the introduction of adaptive treatment techniques where the dose can be monitored as it is being delivered is highly desirable. We present a GPU-based dose calculation engine relying on the widely used pencil beam algorithm, developed for on-line dose calculation. The calculation engine was implemented from scratch, with each step of the algorithm parallelized and adapted to run efficiently on the GPU architecture. To ensure fast calculation, it employs several application-specific modifications and simplifications, and a fast scatter-based implementation of the computationally expensive kernel superposition step. The calculation time for a skull base treatment plan using two beam directions was 0.22 s on an Nvidia Tesla K40 GPU, whereas a test case of a cubic target in water from the literature took 0.14 s to calculate. The accuracy of the patient dose distributions was assessed by calculating the γ-index with respect to a gold standard Monte Carlo simulation. The passing rates were 99.2% and 96.7%, respectively, for the 3%/3 mm and 2%/2 mm criteria, matching those produced by a clinical treatment planning system.

  10. Spatial frequency performance limitations of radiation dose optimization and beam positioning

    NASA Astrophysics Data System (ADS)

    Stewart, James M. P.; Stapleton, Shawn; Chaudary, Naz; Lindsay, Patricia E.; Jaffray, David A.

    2018-06-01

    The flexibility and sophistication of modern radiotherapy treatment planning and delivery methods have advanced techniques to improve the therapeutic ratio. Contemporary dose optimization and calculation algorithms facilitate radiotherapy plans which closely conform the three-dimensional dose distribution to the target, with beam shaping devices and image guided field targeting ensuring the fidelity and accuracy of treatment delivery. Ultimately, dose distribution conformity is limited by the maximum deliverable dose gradient; shallow dose gradients challenge techniques to deliver a tumoricidal radiation dose while minimizing dose to surrounding tissue. In this work, this ‘dose delivery resolution’ observation is rigorously formalized for a general dose delivery model based on the superposition of dose kernel primitives. It is proven that the spatial resolution of a delivered dose is bounded by the spatial frequency content of the underlying dose kernel, which in turn defines a lower bound in the minimization of a dose optimization objective function. In addition, it is shown that this optimization is penalized by a dose deposition strategy which enforces a constant relative phase (or constant spacing) between individual radiation beams. These results are further refined to provide a direct, analytic method to estimate the dose distribution arising from the minimization of such an optimization function. The efficacy of the overall framework is demonstrated on an image guided small animal microirradiator for a set of two-dimensional hypoxia guided dose prescriptions.
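
    The central claim, that the deliverable dose spectrum is bounded by the kernel spectrum, follows directly from writing the dose as a convolution of beam weights with the kernel. The short numerical illustration below makes that visible in 1D; the Gaussian kernel width, grid and cutoff definition are arbitrary choices for demonstration, not quantities from the paper.

```python
import numpy as np

# 1D illustration of the resolution bound: a delivered dose profile is a
# superposition of shifted dose kernels, so its spectrum is the product of the
# beam-weight (fluence) spectrum and the kernel spectrum.  Frequencies that the
# kernel suppresses cannot be restored by any arrangement of beams.
n, dx = 2048, 0.05                           # samples, grid spacing [mm]
x = (np.arange(n) - n // 2) * dx             # positions [mm], x = 0 at index n//2
sigma = 2.0                                  # assumed dose-kernel width [mm]
kernel = np.exp(-x**2 / (2.0 * sigma**2))
kernel /= kernel.sum()                       # unit-sum discrete kernel

fluence = (np.abs(x) < 10.0).astype(float)   # idealised sharp-edged prescription

K = np.fft.rfft(np.fft.ifftshift(kernel))    # kernel spectrum (kernel centred at x=0)
F = np.fft.rfft(fluence)                     # fluence spectrum
dose = np.fft.irfft(K * F, n=n)              # delivered dose = fluence convolved with kernel

freqs = np.fft.rfftfreq(n, d=dx)             # spatial frequency [cycles/mm]
# |K| rolls off as a Gaussian; beyond its pass-band the dose spectrum K*F is
# negligible no matter how the fluence is chosen, which caps the achievable
# dose gradient.
cutoff = freqs[np.abs(K) > 0.5 * np.abs(K[0])][-1]
print(f"kernel half-amplitude frequency: {cutoff:.3f} cycles/mm")
```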

  11. SU-E-T-439: An Improved Formula of Scatter-To-Primary Ratio for Photon Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, T

    2014-06-01

    Purpose: The scatter-to-primary ratio (SPR) is an important dosimetric quantity that describes the contribution from scattered photons in an external photon beam. The purpose of this study is to develop an improved analytical formula to describe SPR as a function of circular field size (r) and depth (d) using Monte Carlo (MC) simulation. Methods: MC simulation was performed for the Mohan photon spectra (Co-60, 4, 6, 10, 15, 23 MV) using the EGSnrc code. Point-spread scatter dose kernels in water are generated. The scatter-to-primary ratio (SPR) is also calculated using MC simulation as a function of field size for circular fields with radius r and depth d. The doses from forward-scattered and backscattered photons are calculated using a convolution of the point-spread scatter dose kernel, accounting for scatter photons that contribute to dose from locations before (z' < d) or after (z' > d) the depth of interest, d, where z' is the location of the scatter photons, respectively. The depth dependence of the ratio of the forward scatter and backscatter doses is determined as a function of depth and field size. Results: We are able to improve the existing 3-parameter (a, w, d0) empirical formula for SPR by introducing a depth dependence for one of the parameters, d0, which becomes 0 at deeper depths. The depth dependence of d0 can be directly calculated as the ratio of backscatter to forward scatter doses for otherwise the same field and depth. With the improved empirical formula, we can fit the SPR for all megavoltage photon beams to within 2%. The existing 3-parameter formula cannot fit the SPR data for Co-60 to better than 3.1%. Conclusion: An improved empirical formula is developed that fits the SPR for all megavoltage photon energies to within 2%.

  12. Theoretical study of the influence of a heterogeneous activity distribution on intratumoral absorbed dose distribution.

    PubMed

    Bao, Ande; Zhao, Xia; Phillips, William T; Woolley, F Ross; Otto, Randal A; Goins, Beth; Hevezi, James M

    2005-01-01

    Radioimmunotherapy of hematopoietic cancers and micrometastases has been shown to have significant therapeutic benefit. The treatment of solid tumors with radionuclide therapy has been less successful. Previous investigations of intratumoral activity distribution and studies on intratumoral drug delivery suggest that a probable reason for the disappointing results in solid tumor treatment is a nonuniform intratumoral distribution coupled with restricted intratumoral drug penetrance, thus inhibiting antineoplastic agents from reaching the tumor's center. This paper describes a nonuniform intratumoral activity distribution characterized by limited radiolabeled tracer diffusion from the tumor surface to the tumor center. This activity was simulated using techniques that allowed the absorbed dose distributions to be estimated for different intratumoral diffusion capabilities and calculated for tumors of varying diameters. The influences of these absorbed dose distributions on solid tumor radionuclide therapy are also discussed. The absorbed dose distribution was calculated using the dose point kernel method, which provided for the application of a three-dimensional (3D) convolution between a dose rate kernel function and an activity distribution function. These functions were incorporated into 3D matrices with voxels measuring 0.10 × 0.10 × 0.10 mm³. A fast Fourier transform (FFT), multiplication in the frequency domain, and an inverse FFT (iFFT) were then used to carry out this phase of the dose calculation process. The absorbed dose distributions for tumors of 1, 3, 5, 10, and 15 mm in diameter were studied. Using the therapeutic radionuclides 131I, 186Re, 188Re, and 90Y, the total average dose, center dose, and surface dose for each of the different tumor diameters were reported. The absorbed dose in the nearby normal tissue was also evaluated. When the tumor diameter exceeds 15 mm, a much lower tumor center dose is delivered compared with tumors between 3 and 5 mm in diameter. Based on these findings, the use of higher beta-energy radionuclides, such as 188Re and 90Y, is more effective in delivering a higher absorbed dose to the tumor center at tumor diameters around 10 mm.

  13. Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment

    NASA Astrophysics Data System (ADS)

    Nigg, D. W.; Wheeler, F. J.

    1981-01-01

    A calculational model is presented to estimate the radiation dose, due to the skyshine effect, in the control room and at the site boundary of the Poloidal Diverter Experiment (PDX) facility at Princeton University which requires substantial radiation shielding. The required composition and thickness of a water-filled roof shield that would reduce this effect to an acceptable level is computed, using an efficient one-dimensional model with an Sn calculation in slab geometry. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab Sn calculation, and the capture gamma dose is computed using a simple point-kernel single-scatter method. It is maintained that the slab model provides the exact probability of leakage out the top surface of the roof and that it is nearly as accurate as and much less costly than multi-dimensional techniques.

  14. Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nigg, D.W.; Wheeler, F.J.

    1981-01-01

    A calculational model is presented to estimate the radiation dose, due to the skyshine effect, in the control room and at the site boundary of the Poloidal Diverter Experiment (PDX) facility at Princeton University which requires substantial radiation shielding. The required composition and thickness of a water-filled roof shield that would reduce this effect to an acceptable level is computed, using an efficient one-dimensional model with an Sn calculation in slab geometry. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab Sn calculation, and the capture gamma dose is computed using a simple point-kernel single-scatter method. It is maintained that the slab model provides the exact probability of leakage out the top surface of the roof and that it is nearly as accurate as and much less costly than multi-dimensional techniques.

  15. Organ-specific SPECT activity calibration using 3D printed phantoms for molecular radiotherapy dosimetry.

    PubMed

    Robinson, Andrew P; Tipping, Jill; Cullen, David M; Hamilton, David; Brown, Richard; Flynn, Alex; Oldfield, Christopher; Page, Emma; Price, Emlyn; Smith, Andrew; Snee, Richard

    2016-12-01

    Patient-specific absorbed dose calculations for molecular radiotherapy require accurate activity quantification. This is commonly derived from Single-Photon Emission Computed Tomography (SPECT) imaging using a calibration factor relating detected counts to known activity in a phantom insert. A series of phantom inserts, based on the mathematical models underlying many clinical dosimetry calculations, have been produced using 3D printing techniques. SPECT/CT data for the phantom inserts has been used to calculate new organ-specific calibration factors for (99m)Tc and (177)Lu. The measured calibration factors are compared to predicted values from calculations using a Gaussian kernel. Measured SPECT calibration factors for 3D printed organs display a clear dependence on organ shape for (99m)Tc and (177)Lu. The observed variation in calibration factor is reproduced using a Gaussian kernel-based calculation over two orders of magnitude change in insert volume for (99m)Tc and (177)Lu. These new organ-specific calibration factors show a 24, 11 and 8% reduction in absorbed dose for the liver, spleen and kidneys, respectively. Non-spherical calibration factors from 3D printed phantom inserts can significantly improve the accuracy of whole organ activity quantification for molecular radiotherapy, providing a crucial step towards individualised activity quantification and patient-specific dosimetry. 3D printed inserts are found to provide a cost effective and efficient way for clinical centres to access more realistic phantom data.
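
    The Gaussian-kernel prediction referenced above can be approximated by blurring a uniformly filled organ mask with the system resolution and asking what fraction of the counts stays inside the organ volume; the organ-specific calibration factor scales with that recovery. The sketch below follows that idea; the function name, FWHM value and the use of scipy's isotropic Gaussian filter are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def predicted_recovery(organ_mask, fwhm_mm, voxel_mm):
    """Fraction of counts recovered inside an organ VOI when a uniform activity
    distribution is blurred by a Gaussian system PSF; the organ-specific
    calibration factor scales with this recovery coefficient.

    organ_mask : 3D boolean array of the organ shape (e.g. segmented from the
                 CT of a 3D printed insert)
    fwhm_mm    : assumed reconstructed spatial resolution [mm]
    voxel_mm   : isotropic voxel size [mm]
    """
    sigma_vox = fwhm_mm / (2.355 * voxel_mm)          # FWHM -> sigma in voxels
    activity = organ_mask.astype(float)               # uniform filling
    blurred = gaussian_filter(activity, sigma_vox)    # Gaussian-kernel model
    return blurred[organ_mask].sum() / activity.sum()

# Comparing, say, a sphere with a thin slab of equal volume illustrates why the
# calibration factor depends on organ shape.
```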

  16. Sub-second pencil beam dose calculation on GPU for adaptive proton therapy

    NASA Astrophysics Data System (ADS)

    da Silva, Joakim; Ansorge, Richard; Jena, Rajesh

    2015-06-01

    Although proton therapy delivered using scanned pencil beams has the potential to produce better dose conformity than conventional radiotherapy, the created dose distributions are more sensitive to anatomical changes and patient motion. Therefore, the introduction of adaptive treatment techniques where the dose can be monitored as it is being delivered is highly desirable. We present a GPU-based dose calculation engine relying on the widely used pencil beam algorithm, developed for on-line dose calculation. The calculation engine was implemented from scratch, with each step of the algorithm parallelized and adapted to run efficiently on the GPU architecture. To ensure fast calculation, it employs several application-specific modifications and simplifications, and a fast scatter-based implementation of the computationally expensive kernel superposition step. The calculation time for a skull base treatment plan using two beam directions was 0.22 s on an Nvidia Tesla K40 GPU, whereas a test case of a cubic target in water from the literature took 0.14 s to calculate. The accuracy of the patient dose distributions was assessed by calculating the γ-index with respect to a gold standard Monte Carlo simulation. The passing rates were 99.2% and 96.7%, respectively, for the 3%/3 mm and 2%/2 mm criteria, matching those produced by a clinical treatment planning system.
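
    The γ-index used for this benchmark is the standard dose-difference/distance-to-agreement metric; the short Python sketch below shows the idea on a 1D profile with toy Gaussian doses. It is not the authors' GPU implementation, and the criterion and profile values are illustrative only.

        import numpy as np

        def gamma_1d(ref_dose, eval_dose, positions, dose_frac=0.03, dta_mm=3.0):
            """Global gamma index for two 1D dose profiles on a common grid."""
            norm = dose_frac * ref_dose.max()
            gamma = np.zeros_like(ref_dose)
            for i, (r, x) in enumerate(zip(ref_dose, positions)):
                dd = (eval_dose - r) / norm              # dose-difference term
                dx = (positions - x) / dta_mm            # distance-to-agreement term
                gamma[i] = np.sqrt(dd ** 2 + dx ** 2).min()
            return gamma

        # Toy example: two slightly shifted Gaussian dose profiles.
        x = np.linspace(-50, 50, 201)                    # positions in mm
        ref = 2.0 * np.exp(-x ** 2 / (2 * 15 ** 2))      # reference dose (Gy)
        ev = 2.0 * np.exp(-(x - 1.0) ** 2 / (2 * 15 ** 2))
        g = gamma_1d(ref, ev, x)
        print(f"passing rate (gamma <= 1): {100 * np.mean(g <= 1):.1f}%")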

  17. The effect of CT technical factors on quantification of lung fissure integrity

    NASA Astrophysics Data System (ADS)

    Chong, D.; Brown, M. S.; Ochs, R.; Abtin, F.; Brown, M.; Ordookhani, A.; Shaw, G.; Kim, H. J.; Gjertson, D.; Goldin, J. G.

    2009-02-01

    A new emphysema treatment uses endobronchial valves to perform lobar volume reduction. The degree of fissure completeness may predict treatment efficacy. This study investigated the behavior of a semiautomated algorithm for quantifying lung fissure integrity in CT with respect to reconstruction kernel and dose. Raw CT data were obtained for six asymptomatic patients from a high-risk population for lung cancer. The patients were scanned on either a Siemens Sensation 16 or 64, using a low-dose protocol of 120 kVp, 25 mAs. Images were reconstructed using kernels ranging from smooth to sharp (B10f, B30f, B50f, B70f). Research software was used to simulate an even lower-dose acquisition of 15 mAs, and images were generated at the same kernels, resulting in eight series per patient. The left major fissure was manually contoured axially at regular intervals, yielding 37 contours across all patients. These contours were read into an image analysis and pattern classification system which computed a Fissure Integrity Score (FIS) for each kernel and dose. FIS values were analyzed using a mixed-effects model with kernel and dose as fixed effects and patient as a random effect to test for differences due to kernel and dose. Analysis revealed no difference in FIS between the smooth kernels (B10f, B30f) or between the sharp kernels (B50f, B70f), but there was a significant difference between the sharp and smooth groups (p = 0.020). There was no significant difference in FIS between the two low-dose reconstructions (p = 0.882). Using a cutoff of 90%, the number of incomplete fissures increased from 5 to 10 when the imaging protocol changed from B50f to B30f. Reconstruction kernel has a significant effect on quantification of fissure integrity in CT. This has potential implications when selecting patients for endobronchial valve therapy.

  18. Analytical calculation of proton linear energy transfer in voxelized geometries including secondary protons

    NASA Astrophysics Data System (ADS)

    Sanchez-Parcerisa, D.; Cortés-Giraldo, M. A.; Dolney, D.; Kondrla, M.; Fager, M.; Carabe, A.

    2016-02-01

    In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach, including a novel correction for secondary protons, with pencil-beam type LET kernels. These LET kernels were then inserted into the proton convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm⁻¹) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include relative biological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.

  19. Analytical calculation of proton linear energy transfer in voxelized geometries including secondary protons.

    PubMed

    Sanchez-Parcerisa, D; Cortés-Giraldo, M A; Dolney, D; Kondrla, M; Fager, M; Carabe, A

    2016-02-21

    In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach, including a novel correction for secondary protons, with pencil-beam type LET kernels. These LET kernels were then inserted into the proton convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm(-1)) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include relative biological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.

  20. Skin dose from radionuclide contamination on clothing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, D.C.; Hussein, E.M.A.; Yuen, P.S.

    1997-06-01

    Skin dose due to radionuclide contamination on clothing is calculated by Monte Carlo simulation of electron and photon radiation transport. Contamination due to a hot particle on selected clothing geometries of a cotton garment is simulated. The effect of backscattering in the surrounding air is taken into account. For each combination of source-clothing geometry, the dose distribution function in the skin, including the dose at tissue depths of 7 mg cm⁻² and 1000 mg cm⁻², is calculated by simulating monoenergetic photon and electron sources. Skin dose due to contamination by a radionuclide is then determined by proper weighting of the monoenergetic dose distribution functions. The results are compared with the VARSKIN point-kernel code for some radionuclides, indicating that the latter code tends to underestimate the dose for gamma and high energy beta sources while it overestimates skin dose for low energy beta sources. 13 refs., 4 figs., 2 tabs.
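
    The spectrum-weighting step mentioned above (combining monoenergetic dose distribution functions into a radionuclide-specific skin dose) can be sketched as follows; the energies, yields and depth-dose values in this Python fragment are invented for illustration and do not come from the study.

        import numpy as np

        # Hypothetical monoenergetic skin-dose distributions D_E(depth), one per
        # simulated source energy, normalised per emitted particle.
        depths_mg_cm2 = np.array([7.0, 50.0, 100.0, 300.0, 1000.0])
        dose_per_particle = {
            0.1: np.array([4.0e-10, 1.5e-10, 3.0e-11, 0.0, 0.0]),
            0.5: np.array([2.5e-10, 1.8e-10, 1.2e-10, 4.0e-11, 0.0]),
            1.0: np.array([1.8e-10, 1.6e-10, 1.4e-10, 9.0e-11, 1.0e-11]),
        }

        # Hypothetical emission spectrum: particle yield per decay at each energy (MeV).
        yields_per_decay = {0.1: 0.5, 0.5: 0.3, 1.0: 0.2}

        # Skin dose per decay: spectrum-weighted sum of the monoenergetic distributions.
        dose_per_decay = sum(y * dose_per_particle[e] for e, y in yields_per_decay.items())
        for depth, dose in zip(depths_mg_cm2, dose_per_decay):
            print(f"depth {depth:7.1f} mg/cm^2 : {dose:.2e} Gy per decay")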

  1. Evaluation of the new electron-transport algorithm in MCNP6.1 for the simulation of dose point kernel in water

    NASA Astrophysics Data System (ADS)

    Antoni, Rodolphe; Bourgois, Laurent

    2017-12-01

    In this work, the calculation of dose distributions in water is evaluated in MCNP6.1 with the regular condensed-history algorithm (the "detailed electron energy-loss straggling logic") and with the newly proposed electron transport algorithm (the "single-event algorithm"). Dose Point Kernels (DPKs) are calculated with monoenergetic electrons of 50, 100, 500, 1000 and 3000 keV for different scoring-cell dimensions. A comparison between MCNP6 results and well-validated codes for electron dosimetry, i.e., EGSnrc or PENELOPE, is performed. When the detailed electron energy-loss straggling logic is used with the default settings (down to the cut-off energy of 1 keV), we infer that the depth of the dose peak increases with decreasing thickness of the scoring cell, largely due to combined step-size and boundary-crossing artifacts. This finding is less prominent for the 500 keV, 1 MeV and 3 MeV dose profiles. With an appropriate number of sub-steps (ESTEP value in MCNP6), the dose-peak shift is almost completely absent for 50 keV and 100 keV electrons. However, the dose peak is more prominent compared to EGSnrc and the absorbed dose tends to be underestimated at greater depths, meaning that boundary-crossing artifacts still occur while step-size artifacts are greatly reduced. When the single-event mode is used for the whole transport, we observe good agreement between the reference and calculated profiles for 50 and 100 keV electrons. The remaining artifacts vanish completely, showing a possible transport treatment for energies below about a hundred keV, with agreement with the reference for any scoring-cell dimension, even though the single-event method was initially intended to support electron transport at energies below 1 keV. Conversely, results for 500 keV, 1 MeV and 3 MeV show a dramatic discrepancy with the reference curves. These poor results, and hence the current unreliability of the method, are partly due to inappropriate elastic cross-section treatment from the ENDF/B-VI.8 library in those energy ranges. Accordingly, special care has to be taken in the choice of settings when calculating electron dose distributions with MCNP6, in particular with regard to dosimetry or nuclear medicine applications.

  2. Use of a computer code for dose distribution studies in a 60Co industrial irradiator

    NASA Astrophysics Data System (ADS)

    Piña-Villalpando, G.; Sloan, D. P.

    1995-09-01

    This paper presents a benchmark comparison between calculated and experimental absorbed dose values for a typical product in a 60Co industrial irradiator located at ININ, México. The irradiator is a two-level, two-layer system with an overlapping product configuration and an activity of around 300 kCi. Experimental values were obtained from routine dosimetry using red acrylic pellets. The typical product was packages of Petri dishes with an apparent density of 0.13 g/cm3; that product was chosen because of its uniform size, large quantity and low density. The minimum dose was fixed at 15 kGy. Calculated values were obtained from the QAD-CGGP code. This code uses a point kernel technique, build-up factor fitting is done by geometric progression, and combinatorial geometry is used for system description. The main modifications to the code were related to the source simulation, using point sources instead of pencils, and an energy spectrum and anisotropic emission spectrum were included. For the maximum dose, the calculated value (18.2 kGy) was 8% higher than the experimental average value (16.8 kGy); for the minimum dose, the calculated value (13.8 kGy) was 3% lower than the experimental average value (14.3 kGy).
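
    QAD-CGGP itself is not reproduced here, but the point-kernel technique it uses rests on the textbook point-source dose equation with a buildup factor. The Python sketch below shows that equation with illustrative values (the source strength, attenuation coefficients, distance and buildup factor are all assumptions, not the irradiator's data).

        import math

        def point_kernel_dose_rate(s_per_s, e_mev, mu_cm, mu_en_rho, r_cm, buildup):
            """Point-kernel dose rate: attenuated point-source flux times a
            buildup factor, converted to absorbed dose rate in Gy/s.

            s_per_s    : photon emission rate (photons/s)
            e_mev      : photon energy (MeV)
            mu_cm      : linear attenuation coefficient along the path (1/cm)
            mu_en_rho  : mass energy-absorption coefficient (cm^2/g)
            r_cm       : source-to-point distance (cm)
            buildup    : buildup factor B(mu * r)
            """
            flux = s_per_s * math.exp(-mu_cm * r_cm) / (4.0 * math.pi * r_cm ** 2)
            mev_per_g_to_gy = 1.602e-10
            return buildup * flux * e_mev * mu_en_rho * mev_per_g_to_gy

        # Illustrative 60Co-like numbers: ~300 kCi source (two photons per decay),
        # a 1 m water-equivalent path through product, a guessed buildup of 1.5.
        rate = point_kernel_dose_rate(2.2e16, 1.25, 0.063, 0.0296, 100.0, 1.5)
        print(f"dose rate: {rate * 3600:.1f} Gy/h")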

  3. SU-E-T-22: A Deterministic Solver of the Boltzmann-Fokker-Planck Equation for Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Paganetti, H

    2015-06-15

    Purpose: The Boltzmann-Fokker-Planck equation (BFPE) accurately models the migration of photons/charged particles in tissues. While the Monte Carlo (MC) method is popular for solving the BFPE in a statistical manner, we aim to develop a deterministic BFPE solver based on various state-of-the-art numerical acceleration techniques for rapid and accurate dose calculation. Methods: Our BFPE solver is based on a structured grid that is maximally parallelizable, with discretization in energy, angle and space, and its cross section coefficients are derived or directly imported from the Geant4 database. The physical processes taken into account are Compton scattering, photoelectric effect and pair production for photons, and elastic scattering, ionization and bremsstrahlung for charged particles. While the spatial discretization is based on the diamond scheme, the angular discretization synergizes the finite element method (FEM) and spherical harmonics (SH): SH is used to globally expand the scattering kernel and FEM is used to locally discretize the angular sphere. As a result, this hybrid method (FEM-SH) is both accurate in dealing with forward-peaked scattering via FEM, and efficient for multi-energy-group computation via SH. In addition, FEM-SH enables the analytical integration in the energy variable of the delta scattering kernel for elastic scattering, with reduced truncation error compared to the numerical integration of the classic SH-based multi-energy-group method. Results: The accuracy of the proposed BFPE solver was benchmarked against Geant4 for photon dose calculation. In particular, FEM-SH had improved accuracy compared to FEM, while both were within 2% of the results obtained with Geant4. Conclusion: A deterministic solver of the Boltzmann-Fokker-Planck equation is developed for dose calculation, and benchmarked against Geant4. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).

  4. A novel approach to EPID-based 3D volumetric dosimetry for IMRT and VMAT QA

    NASA Astrophysics Data System (ADS)

    Alhazmi, Abdulaziz; Gianoli, Chiara; Neppl, Sebastian; Martins, Juliana; Veloza, Stella; Podesta, Mark; Verhaegen, Frank; Reiner, Michael; Belka, Claus; Parodi, Katia

    2018-06-01

    Intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) are relatively complex treatment delivery techniques and require quality assurance (QA) procedures. Pre-treatment dosimetric verification represents a fundamental QA procedure in daily clinical routine in radiation therapy. The purpose of this study is to develop an EPID-based approach to reconstruct a 3D dose distribution, as imparted to a virtual cylindrical water phantom, to be used for plan-specific pre-treatment dosimetric verification of IMRT and VMAT plans. For each depth, the planar 2D dose distributions acquired in air were back-projected and convolved with depth-specific scatter and attenuation kernels. The kernels were obtained by making use of scatter and attenuation models to iteratively estimate the parameters from a set of reference measurements. The derived parameters served as a look-up table for the reconstruction of arbitrary measurements. The summation of the reconstructed 3D dose distributions resulted in the integrated 3D dose distribution of the treatment delivery. The accuracy of the proposed approach was validated on clinical IMRT and VMAT plans by means of gamma evaluation, comparing the reconstructed 3D dose distributions with Octavius measurements. The comparison was carried out using (3%, 3 mm) criteria, scoring 99% and 96% passing rates for IMRT and VMAT, respectively. An accuracy comparable to that of the commercial device for 3D volumetric dosimetry was demonstrated. In addition, five IMRT and five VMAT plans were validated against the 3D dose calculation performed by the TPS in a water phantom using the same passing rate criteria. The median passing rate across the ten treatment plans was 97.3%, and the lowest was 95%. Moreover, the reconstructed 3D distribution is obtained without predictions relying on forward dose calculation and without an external phantom or dosimetric devices. Thus, the approach provides a fully automated, fast and easy QA procedure for plan-specific pre-treatment dosimetric verification.

  5. Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, L.L.; Hendricks, J.S.

    1983-01-01

    The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation, but leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays.
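
    As a concrete reference point, the conventional exponential transform described above (biasing only the distance-to-collision kernel) can be written in a few lines. The Python sketch below is a generic illustration; the cross section, transform parameter p and direction cosine are arbitrary choices, and the biased-scattering scheme of Dwivedi and of this work is not shown.

        import math
        import random

        def sample_biased_path(sigma_t, p, mu):
            """Sample a flight distance from the exponentially transformed
            distance-to-collision kernel and return the weight correction.

            True pdf : sigma_t * exp(-sigma_t * s)
            Biased   : sigma_star * exp(-sigma_star * s), sigma_star = sigma_t * (1 - p * mu)
            where mu is the cosine of the flight direction relative to the
            preferred (deep-penetration) direction and 0 < p < 1.
            """
            sigma_star = sigma_t * (1.0 - p * mu)
            s = -math.log(random.random()) / sigma_star
            weight = (sigma_t / sigma_star) * math.exp(-(sigma_t - sigma_star) * s)
            return s, weight

        sigma_t, p = 0.2, 0.7    # illustrative total cross section (1/cm) and parameter
        s, w = sample_biased_path(sigma_t, p, mu=0.9)
        print(f"sampled path {s:.2f} cm, statistical weight factor {w:.3f}")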

  6. A TPS kernel for calculating survival vs. depth distributions in a carbon radiotherapy beam, based on Katz's cellular Track Structure Theory.

    PubMed

    Waligórski, M P R; Grzanka, L; Korcyl, M; Olko, P

    2015-09-01

    An algorithm was developed for a treatment planning system (TPS) kernel for carbon radiotherapy in which Katz's Track Structure Theory of cellular survival (TST) is applied as its radiobiology component. The physical beam model is based on available tabulated data, prepared by Monte Carlo simulations of a set of pristine carbon beams of different input energies. An optimisation tool developed for this purpose is used to find the composition of pristine carbon beams, of given input energies and fluences, which delivers a pre-selected depth-dose distribution profile over the spread-out Bragg peak (SOBP) region. Using an extrapolation algorithm, energy-fluence spectra of the primary carbon ions and of all their secondary fragments are obtained at regular steps of beam depth. To obtain survival vs. depth distributions, the TST calculation is applied to the energy-fluence spectra of the mixed field of primary ions and of their secondary products at the given beam depths. Katz's TST offers a unique analytical and quantitative prediction of cell survival in such mixed ion fields. By optimising the pristine beam composition to a published depth-dose profile over the SOBP region of a carbon beam and using TST model parameters representing the survival of CHO (Chinese Hamster Ovary) cells in vitro, it was possible to satisfactorily reproduce a published data set of CHO cell survival vs. depth measurements after carbon ion irradiation. The authors also show by a TST calculation that 'biological dose' is neither linear nor additive. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. Real-time dose computation: GPU-accelerated source modeling and superposition/convolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacques, Robert; Wong, John; Taylor, Russell

    Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material-centric approach, which increased performance by over 80%. Superposition performance was improved by ~24% to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation. Pinnacle³ times were 8.3 and 94 s, respectively, on an AMD (Sunnyvale, CA) Opteron 254 (two cores, 2.8 GHz). Conclusions: The authors have completed a comprehensive, GPU-accelerated dose engine in order to provide a substantial performance gain over CPU based implementations. Real-time dose computation is feasible with the accuracy levels of the superposition/convolution algorithm.
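
    For readers less familiar with the terminology, the TERMA and kernel quantities above enter the standard superposition/convolution dose equation, which in its homogeneous-medium convolution form can be written (this is the generic textbook expression, not the authors' multi-resolution or tilted-kernel variant):

        D(\mathbf{r}) = \int T(\mathbf{r}')\, K(\mathbf{r} - \mathbf{r}')\, d^3 r'

    where T is the total energy released per unit mass and K is the energy deposition kernel; in the superposition form used for heterogeneous media, the kernel argument is evaluated along the density-scaled (radiological) path between r' and r.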

  8. Agreement between gamma passing rates using computed tomography in radiotherapy and secondary cancer risk prediction from more advanced dose calculation models

    PubMed Central

    Balosso, Jacques

    2017-01-01

    Background During the past decades, dose distributions in radiotherapy were calculated using density-correction methods with pencil-beam algorithms (type 'a'). The objective of this study is to assess and evaluate the impact of the resulting shift in dose distribution on the predicted secondary cancer risk (SCR) when using a more advanced dose calculation algorithm, a point-kernel (type 'b') algorithm, which accounts for changes in lateral electron transport. Methods Clinical examples of pediatric cranio-spinal irradiation patients were evaluated. For each case, two radiotherapy treatment plans were generated using the same prescribed dose to the target, resulting in different numbers of monitor units (MUs) per field. The dose distributions were calculated using both algorithm types. A gamma index (γ) analysis was used to compare the dose distributions in the lung. The organ equivalent dose (OED) was calculated with three different models: the linear, the linear-exponential and the plateau dose-response curves. The excess absolute risk (EAR) ratio was also evaluated as EAR = OED type 'b' / OED type 'a'. Results The γ analysis results indicated an acceptable dose distribution agreement of 95% with 3%/3 mm. However, the γ-maps displayed dose displacements >1 mm around the healthy lungs. Compared to type 'a', the OED values from the type 'b' dose distributions were about 8% to 16% higher, leading to an EAR ratio >1, ranging from 1.08 to 1.13 depending on the SCR model. Conclusions The shift of dose calculation in radiotherapy, according to the algorithm, can significantly influence the SCR prediction and the plan optimization, since OEDs are calculated from DVHs for a specific treatment. The agreement between dose distribution and SCR prediction depends on the dose-response models and epidemiological data. In addition, a γ passing rate at 3%/3 mm does not reflect the difference, of up to 15%, in the SCR predictions resulting from the alternative algorithms. Considering that modern algorithms are more accurate, showing the dose distributions more precisely, but that the prediction of absolute SCR is still very imprecise, only the EAR ratio could be used to rank radiotherapy plans. PMID:28811995
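
    The three OED dose-response models named above are commonly written as organ-averaged functions of the DVH bin doses (following Schneider-type formulations). A short Python sketch is given below; the model parameters and the DVHs for the 'a' and 'b' algorithms are hypothetical, chosen only to show how an EAR ratio would be formed.

        import numpy as np

        def oed(dose_gy, frac_volume, model="linear", alpha=0.05, delta=0.5):
            """Organ equivalent dose from a differential DVH (bin doses, fractional volumes)."""
            d = np.asarray(dose_gy, dtype=float)
            v = np.asarray(frac_volume, dtype=float)
            v = v / v.sum()
            if model == "linear":
                red = d
            elif model == "linear-exponential":
                red = d * np.exp(-alpha * d)
            elif model == "plateau":
                red = (1.0 - np.exp(-delta * d)) / delta
            else:
                raise ValueError(model)
            return float(np.sum(v * red))

        # Hypothetical lung DVHs for the type 'a' and type 'b' dose calculations.
        dose_bins = np.array([0.5, 2.0, 5.0, 10.0, 20.0])
        vol_a = np.array([0.40, 0.30, 0.15, 0.10, 0.05])
        vol_b = np.array([0.35, 0.30, 0.17, 0.12, 0.06])
        ear = oed(dose_bins, vol_b, "linear-exponential") / oed(dose_bins, vol_a, "linear-exponential")
        print(f"EAR ratio (type b / type a): {ear:.2f}")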

  9. Variability in CT lung-nodule quantification: Effects of dose reduction and reconstruction methods on density and texture based features.

    PubMed

    Lo, P; Young, S; Kim, H J; Brown, M S; McNitt-Gray, M F

    2016-08-01

    To investigate the effects of dose level and reconstruction method on density and texture based features computed from CT lung nodules. This study had two major components. In the first component, a uniform water phantom was scanned at three dose levels and images were reconstructed using four conventional filtered backprojection (FBP) and four iterative reconstruction (IR) methods, for a total of 24 different combinations of acquisition and reconstruction conditions. In the second component, raw projection (sinogram) data were obtained for 33 lung nodules from patients scanned as part of their clinical practice, where low dose acquisitions were simulated by adding noise to sinograms acquired at clinical dose levels (a total of four dose levels) and reconstructed using one FBP kernel and two IR kernels, for a total of 12 conditions. For the water phantom, spherical regions of interest (ROIs) were created at multiple locations within the water phantom on one reference image obtained at a reference condition. For the lung nodule cases, the ROI of each nodule was contoured semiautomatically (with manual editing) from images obtained at a reference condition. All ROIs were applied to their corresponding images reconstructed at different conditions. For 17 of the nodule cases, repeat contours were performed to assess repeatability. Histogram (eight features) and gray level co-occurrence matrix (GLCM) based texture features (34 features) were computed for all ROIs. For the lung nodule cases, the reference condition was selected to be 100% of clinical dose with FBP reconstruction using the B45f kernel; feature values calculated from other conditions were compared to this reference condition. A measure was introduced, which the authors refer to as Q, to assess the stability of features across different conditions; it is defined as the ratio of reproducibility (across conditions) to repeatability (across repeat contours) of each feature. The water phantom results demonstrated substantial variability among feature values calculated across conditions, with the exception of histogram mean. Features calculated from lung nodules demonstrated similar results, with histogram mean as the most robust feature (Q ≤ 1), having a mean and standard deviation of Q of 0.37 and 0.22, respectively. Surprisingly, histogram standard deviation and variance features were also quite robust. Some GLCM features were also quite robust across conditions, namely, diff. variance, sum variance, sum average, variance, and mean. Except for histogram mean, all features had a Q larger than one in at least one of the 3% dose level conditions. As expected, the histogram mean was the most robust feature in this study. The effects of acquisition and reconstruction conditions on GLCM features varied widely, although features involving sums of products of intensities and probabilities tended to be more robust, with a few exceptions. Overall, variation in density and texture features should be taken into account if a variety of dose and reconstruction conditions are used for the quantification of lung nodules in CT; otherwise, changes in quantification results may be more reflective of changes in acquisition and reconstruction conditions than of changes in the nodule itself.
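
    The abstract defines Q only as the ratio of reproducibility across conditions to repeatability across repeat contours; the exact statistic is not stated, so the Python sketch below simply uses sample standard deviations and invented feature values to show how such a ratio could be formed for a single feature of one nodule.

        import numpy as np

        def q_metric(values_across_conditions, values_across_repeat_contours):
            """Ratio of feature variability across acquisition/reconstruction
            conditions to variability across repeat contours (one possible reading
            of the Q measure; the paper's exact definition may differ)."""
            reproducibility = np.std(values_across_conditions, ddof=1)
            repeatability = np.std(values_across_repeat_contours, ddof=1)
            return reproducibility / repeatability

        # Hypothetical histogram-mean values (HU) for one nodule.
        across_conditions = [12.0, 13.5, 11.8, 14.2, 12.9]   # subset of the 12 conditions
        across_contours = [12.0, 12.6, 11.7]                 # repeat contours, reference condition
        print(f"Q = {q_metric(across_conditions, across_contours):.2f}")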

  10. Dosimetric effects of seed anisotropy and interseed attenuation for ¹⁰³Pd and ¹²⁵I prostate implants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chibani, Omar; Williamson, Jeffrey F.; Todor, Dorin

    2005-08-15

    A Monte Carlo study is carried out to quantify the effects of seed anisotropy and interseed attenuation for ¹⁰³Pd and ¹²⁵I prostate implants. Two idealized and two real prostate implants are considered. Full Monte Carlo simulation (FMCS) of implants (seeds are physically and simultaneously simulated) is compared with isotropic point-source dose-kernel superposition (PSKS) and line-source dose-kernel superposition (LSKS) methods. For clinical pre- and post-procedure implants, the dose to the different structures (prostate, rectum wall, and urethra) is calculated. The discretized volumes of these structures are reconstructed using transrectal ultrasound contours. Local dose differences (PSKS versus FMCS and LSKS versus FMCS) are investigated. The dose contributions from primary versus scattered photons are calculated separately. For ¹⁰³Pd, the average absolute total dose difference between FMCS and PSKS can be as high as 7.4% for the idealized model and 6.1% for the clinical preprocedure implant. The total dose difference is lower for ¹²⁵I: 4.4% for the idealized model and 4.6% for a clinical post-procedure implant. Average absolute dose differences between LSKS and FMCS are less significant for both seed models: 3 to 3.6% for the idealized models and 2.9 to 3.2% for the clinical plans. Dose differences between PSKS and FMCS are due to the absence of both seed anisotropy and interseed attenuation modeling in the PSKS approach. LSKS accounts for seed anisotropy but not for the interseed effect, leading to systematically overestimated dose values in comparison with the more accurate FMCS method. For both idealized and clinical implants, the dose from scattered photons represents less than 1/3 of the total dose. For all studied cases, LSKS prostate DVHs overestimate D90 by 2 to 5% because of the missing interseed attenuation effect. PSKS and LSKS predictions of V150 and V200 are overestimated by up to 9% in comparison with the FMCS results. Finally, effects of seed anisotropy and interseed attenuation must be viewed in the context of other significant sources of dose uncertainty, namely seed orientation, source misplacement, prostate morphological changes and tissue heterogeneity.

  11. GRAYSKY-A new gamma-ray skyshine code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witts, D.J.; Twardowski, T.; Watmough, M.H.

    1993-01-01

    This paper describes a new prototype gamma-ray skyshine code GRAYSKY (Gamma-RAY SKYshine) that has been developed at BNFL, as part of an industrially based master of science course, to overcome the problems encountered with SKYSHINEII and RANKERN. GRAYSKY is a point kernel code based on the use of a skyshine response function. The scattering within source or shield materials is accounted for by the use of buildup factors. This is an approximate method of solution but one that has been shown to produce results that are acceptable for dose rate predictions on operating plants. The novel features of GRAYSKY are as follows: 1. The code is fully integrated with a semianalytical point kernel shielding code, currently under development at BNFL, which offers powerful solid-body modeling capabilities. 2. The geometry modeling also allows the skyshine response function to be used in a manner that accounts for the shielding of air-scattered radiation. 3. Skyshine buildup factors calculated using the skyshine response function have been used as well as dose buildup factors.

  12. Calculation of plasma dielectric response in inhomogeneous magnetic field near electron cyclotron resonance

    NASA Astrophysics Data System (ADS)

    Evstatiev, Evstati; Svidzinski, Vladimir; Spencer, Andy; Galkin, Sergei

    2014-10-01

    Full wave 3-D modeling of RF fields in hot magnetized nonuniform plasma requires calculation of the nonlocal conductivity kernel describing the dielectric response of such plasma to the RF field. In many cases, the conductivity kernel is a localized function near the test point, which significantly simplifies the numerical solution of the full wave 3-D problem. Preliminary results of a feasibility analysis of the numerical calculation of the conductivity kernel in a 3-D hot nonuniform magnetized plasma in the electron cyclotron frequency range will be reported. This case is relevant to modeling of ECRH in ITER. The kernel is calculated by integrating the linearized Vlasov equation along the unperturbed particle orbits. Particle orbits in the nonuniform equilibrium magnetic field are calculated numerically using a Runge-Kutta method. The RF electric field is interpolated on a specified grid on which the conductivity kernel is discretized. The resulting integrals over the particle's initial velocity and time are then calculated numerically. Different optimization approaches to the integration are tested in this feasibility analysis. Work is supported by the U.S. DOE SBIR program.

  13. Low-energy electron dose-point kernel simulations using new physics models implemented in Geant4-DNA

    NASA Astrophysics Data System (ADS)

    Bordes, Julien; Incerti, Sébastien; Lampe, Nathanael; Bardiès, Manuel; Bordage, Marie-Claude

    2017-05-01

    When low-energy electrons, such as Auger electrons, interact with liquid water, they induce highly localized ionizing energy depositions over ranges comparable to cell diameters. Monte Carlo track structure (MCTS) codes are suitable tools for performing dosimetry at this level. One of the main MCTS codes, Geant4-DNA, is equipped with only two sets of cross section models for low-energy electron interactions in liquid water ("option 2" and its improved version, "option 4"). To provide Geant4-DNA users with new alternative physics models, a set of cross sections extracted from the CPA100 MCTS code has been added to Geant4-DNA. This new version is hereafter referred to as "Geant4-DNA-CPA100". In this study, "Geant4-DNA-CPA100" was used to calculate low-energy electron dose-point kernels (DPKs) between 1 keV and 200 keV. Such kernels represent the radial energy deposited by an isotropic point source, a parameter that is useful for dosimetry calculations in nuclear medicine. In order to assess the influence of different physics models on DPK calculations, DPKs were calculated using the existing Geant4-DNA models ("option 2" and "option 4"), the newly integrated CPA100 models, and the PENELOPE Monte Carlo code used in step-by-step mode for monoenergetic electrons. Additionally, a comparison was performed of two sets of DPKs that were simulated with "Geant4-DNA-CPA100" - the first set using Geant4's default settings, and the second using CPA100's original code default settings. A maximum difference of 9.4% was found between the Geant4-DNA-CPA100 and PENELOPE DPKs. Between the two existing Geant4-DNA models, slight differences between 1 keV and 10 keV were observed. The DPKs simulated with the two existing Geant4-DNA models were always broader than those generated with "Geant4-DNA-CPA100". The discrepancies observed between the DPKs generated using Geant4-DNA's existing models and "Geant4-DNA-CPA100" were caused solely by their different cross sections. The different scoring and interpolation methods used in CPA100 and Geant4 to calculate DPKs led to differences close to 3.0% near the source.

  14. Toward a web-based real-time radiation treatment planning system in a cloud computing environment.

    PubMed

    Na, Yong Hum; Suh, Tae-Suk; Kapp, Daniel S; Xing, Lei

    2013-09-21

    To exploit the potential dosimetric advantages of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), an in-depth approach is required to provide efficient computing methods. This needs to incorporate clinically related organ specific constraints, Monte Carlo (MC) dose calculations, and large-scale plan optimization. This paper describes our first steps toward a web-based real-time radiation treatment planning system in a cloud computing environment (CCE). The Amazon Elastic Compute Cloud (EC2) with a master node (named m2.xlarge containing 17.1 GB of memory, two virtual cores with 3.25 EC2 Compute Units each, 420 GB of instance storage, 64-bit platform) is used as the backbone of cloud computing for dose calculation and plan optimization. The master node is able to scale the workers on an 'on-demand' basis. MC dose calculation is employed to generate accurate beamlet dose kernels by parallel tasks. The intensity modulation optimization uses total-variation regularization (TVR) and generates piecewise constant fluence maps for each initial beam direction in a distributed manner over the CCE. The optimized fluence maps are segmented into deliverable apertures. The shape of each aperture is iteratively rectified to be a sequence of arcs using the manufacturer's constraints. The output plan file from the EC2 is sent to the simple storage service. Three de-identified clinical cancer treatment plans have been studied to evaluate the performance of the new planning platform with 6 MV flattening filter free beams (40 × 40 cm(2)) from the Varian TrueBeam(TM) STx linear accelerator. A CCE leads to speed-ups of up to 14-fold for both dose kernel calculations and plan optimizations in the head and neck, lung, and prostate cancer cases considered in this study. The proposed system relies on a CCE that is able to provide an infrastructure for parallel and distributed computing. The resultant plans from the cloud computing are identical to PC-based IMRT and VMAT plans, confirming the reliability of the cloud computing platform. This cloud computing infrastructure has been established for radiation treatment planning. It substantially improves the speed of inverse planning and makes future on-treatment adaptive re-planning possible.

  15. Toward a web-based real-time radiation treatment planning system in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Hum Na, Yong; Suh, Tae-Suk; Kapp, Daniel S.; Xing, Lei

    2013-09-01

    To exploit the potential dosimetric advantages of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), an in-depth approach is required to provide efficient computing methods. This needs to incorporate clinically related organ specific constraints, Monte Carlo (MC) dose calculations, and large-scale plan optimization. This paper describes our first steps toward a web-based real-time radiation treatment planning system in a cloud computing environment (CCE). The Amazon Elastic Compute Cloud (EC2) with a master node (named m2.xlarge containing 17.1 GB of memory, two virtual cores with 3.25 EC2 Compute Units each, 420 GB of instance storage, 64-bit platform) is used as the backbone of cloud computing for dose calculation and plan optimization. The master node is able to scale the workers on an ‘on-demand’ basis. MC dose calculation is employed to generate accurate beamlet dose kernels by parallel tasks. The intensity modulation optimization uses total-variation regularization (TVR) and generates piecewise constant fluence maps for each initial beam direction in a distributed manner over the CCE. The optimized fluence maps are segmented into deliverable apertures. The shape of each aperture is iteratively rectified to be a sequence of arcs using the manufacturer’s constraints. The output plan file from the EC2 is sent to the simple storage service. Three de-identified clinical cancer treatment plans have been studied to evaluate the performance of the new planning platform with 6 MV flattening filter free beams (40 × 40 cm2) from the Varian TrueBeam™ STx linear accelerator. A CCE leads to speed-ups of up to 14-fold for both dose kernel calculations and plan optimizations in the head and neck, lung, and prostate cancer cases considered in this study. The proposed system relies on a CCE that is able to provide an infrastructure for parallel and distributed computing. The resultant plans from the cloud computing are identical to PC-based IMRT and VMAT plans, confirming the reliability of the cloud computing platform. This cloud computing infrastructure has been established for radiation treatment planning. It substantially improves the speed of inverse planning and makes future on-treatment adaptive re-planning possible.

  16. Ultralow dose dentomaxillofacial CT imaging and iterative reconstruction techniques: variability of Hounsfield units and contrast-to-noise ratio

    PubMed Central

    Bischel, Alexander; Stratis, Andreas; Kakar, Apoorv; Bosmans, Hilde; Jacobs, Reinhilde; Gassner, Eva-Maria; Puelacher, Wolfgang; Pauwels, Ruben

    2016-01-01

    Objective: The aim of this study was to evaluate whether application of ultralow dose protocols and iterative reconstruction technology (IRT) influences quantitative Hounsfield units (HUs) and contrast-to-noise ratio (CNR) in dentomaxillofacial CT imaging. Methods: A phantom with inserts of five types of materials was scanned using protocols for (a) a clinical reference for navigated surgery (CT dose index volume 36.58 mGy), (b) low-dose sinus imaging (18.28 mGy) and (c) four ultralow-dose imaging protocols (4.14, 2.63, 0.99 and 0.53 mGy). All images were reconstructed using: (i) filtered back projection (FBP); (ii) IRT: adaptive statistical iterative reconstruction-50 (ASIR-50), ASIR-100 and model-based iterative reconstruction (MBIR); and (iii) standard (std) and bone kernels. Mean HU, CNR and average HU error after recalibration were determined. Each combination of protocols was compared using Friedman analysis of variance, followed by Dunn's multiple comparison test. Results: Pearson's sample correlation coefficients were all >0.99. Ultralow dose protocols using FBP showed errors of up to 273 HU. Std kernels had less HU variability than bone kernels. MBIR reduced the error value for the lowest dose protocol to 138 HU and retained the highest relative CNR. ASIR could not demonstrate significant advantages over FBP. Conclusions: Considering a potential dose reduction to as low as 1.5% of a std protocol, ultralow dose protocols and IRT should be further tested for clinical dentomaxillofacial CT imaging. Advances in knowledge: HU as a surrogate for bone density may vary significantly in CT ultralow dose imaging. However, use of std kernels and MBIR technology reduces HU error values and may retain the highest CNR. PMID:26859336
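
    The contrast-to-noise ratio reported above follows the usual definition of the HU difference between an insert and the background divided by the background noise; a minimal Python sketch with invented ROI statistics is:

        def cnr(mean_insert_hu, mean_background_hu, sd_background_hu):
            """Contrast-to-noise ratio between a material insert and the background."""
            return abs(mean_insert_hu - mean_background_hu) / sd_background_hu

        # Hypothetical values for one insert at a standard and an ultralow dose protocol.
        print(f"CNR, standard dose : {cnr(850.0, 40.0, 12.0):.1f}")
        print(f"CNR, ultralow dose : {cnr(830.0, 45.0, 55.0):.1f}")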

  17. SU-E-T-36: A GPU-Accelerated Monte-Carlo Dose Calculation Platform and Its Application Toward Validating a ViewRay Beam Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Y; Mazur, T; Green, O

    Purpose: To build a fast, accurate and easily-deployable research platform for Monte-Carlo dose calculations. We port the dose calculation engine PENELOPE to C++, and accelerate calculations using GPU acceleration. Simulations of a Co-60 beam model provided by ViewRay demonstrate the capabilities of the platform. Methods: We built software that incorporates a beam model interface, CT-phantom model, GPU-accelerated PENELOPE engine, and GUI front-end. We rewrote the PENELOPE kernel in C++ (from Fortran) and accelerated the code on a GPU. We seamlessly integrated a Co-60 beam model (obtained from ViewRay) into our platform. Simulations of various field sizes and SSDs using a homogeneous water phantom generated PDDs, dose profiles, and output factors that were compared to experimental data. Results: With GPU acceleration using a dated graphics card (Nvidia Tesla C2050), a highly accurate simulation – including a 100*100*100 grid, 3×3×3 mm3 voxels, <1% uncertainty, and 4.2×4.2 cm2 field size – runs 24 times faster (20 minutes versus 8 hours) than when parallelizing on 8 threads across a new CPU (Intel i7-4770). Simulated PDDs, profiles and output ratios for the commercial system agree well with experimental data measured using radiographic film or an ionization chamber. Based on our analysis, this beam model is precise enough for general applications. Conclusions: Using a beam model for a Co-60 system provided by ViewRay, we evaluate a dose calculation platform that we developed. Comparison to measurements demonstrates the promise of our software for use as a research platform for dose calculations, with applications including quality assurance and treatment plan verification.

  18. Dose Calculations for [131I] Meta-Iodobenzylguanidine-Induced Bystander Effects

    PubMed Central

    Gow, M. D.; Seymour, C. B.; Boyd, M.; Mairs, R. J.; Prestiwch, W. V.; Mothersill, C. E.

    2014-01-01

    Targeted radiotherapy is a potentially useful treatment for some cancers and may be potentiated by bystander effects. However, without estimation of absorbed dose, it is difficult to compare the effects with conventional external radiation treatment. Methods: Using the Vynckier-Wambersie dose point kernel, a model for dose rate evaluation was created allowing for calculation of absorbed dose values to two cell lines transfected with the noradrenaline transporter (NAT) gene and treated with [131I]MIBG. Results: The mean doses required to decrease the surviving fractions of UVW/NAT and EJ138/NAT cells, which received medium from [131I]MIBG-treated cells, to 25 – 30% were 1.6 and 1.7 Gy respectively. The maximum mean dose rates achieved during [131I]MIBG treatment were 0.09 – 0.75 Gy/h for UVW/NAT and 0.07 – 0.78 Gy/h for EJ138/NAT. These were significantly lower than the external beam gamma radiation dose rate of 15 Gy/h. In the case of the control lines, which were incapable of [131I]MIBG uptake, the mean absorbed doses following radiopharmaceutical treatment were 0.03 – 0.23 Gy for UVW and 0.03 – 0.32 Gy for EJ138. Conclusion: [131I]MIBG treatment for ICCM production elicited a bystander dose-response profile similar to that generated by external beam gamma irradiation but with significantly greater cell death. PMID:24659931

  19. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. New adaptive statistical iterative reconstruction ASiR-V: Assessment of noise performance in comparison to ASiR.

    PubMed

    De Marco, Paolo; Origgi, Daniela

    2018-03-01

    To assess the noise characteristics of the new adaptive statistical iterative reconstruction (ASiR-V) in comparison to ASiR. A water phantom was acquired with common clinical scanning parameters, at five different levels of CTDIvol. Images were reconstructed with different kernels (STD, SOFT, and BONE), different IR levels (40%, 60%, and 100%) and different slice thicknesses (ST) (0.625 and 2.5 mm), both for ASiR-V and ASiR. Noise properties were investigated and the noise power spectrum (NPS) was evaluated. ASiR-V significantly reduced noise relative to FBP: noise reduction was in the range 23%-60% for the 0.625 mm ST and 12%-64% for the 2.5 mm ST. Above 2 mGy, noise reduction for ASiR-V had no dependence on dose. Noise reduction for ASiR-V depends on ST, being greater for the STD and SOFT kernels at 2.5 mm. For the STD kernel, ASiR-V has greater noise reduction for both STs compared to ASiR. For the SOFT kernel, results vary according to dose and ST, while for the BONE kernel ASiR-V shows less noise reduction. The NPS for the Revolution CT has dose dependent behavior at lower doses. The NPS for ASiR-V and ASiR is similar, showing a shift toward lower frequencies as the IR level increases for the STD and SOFT kernels. The NPS differs between ASiR-V and ASiR with the BONE kernel. The NPS for ASiR-V appears to be ST dependent, having a shift toward lower frequencies for the 2.5 mm ST. ASiR-V showed greater noise reduction than ASiR for the STD and SOFT kernels, while keeping the same NPS. For the BONE kernel, ASiR-V presents a completely different behavior, with less noise reduction and a modified NPS. Noise properties of ASiR-V are dependent on the reconstruction slice thickness. The noise properties of ASiR-V suggest the need for further measurements and efforts to establish new CT protocols to optimize clinical imaging. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
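
    The noise power spectrum referred to above is conventionally estimated from the 2D Fourier transforms of mean-subtracted ROIs taken from the uniform water phantom; the Python sketch below shows that standard estimator on synthetic white noise and is not the authors' exact procedure.

        import numpy as np

        def nps_2d(rois, pixel_size_mm):
            """Ensemble-averaged 2D noise power spectrum (HU^2 mm^2) from ROIs
            of shape (n_rois, ny, nx) taken in a uniform phantom."""
            rois = np.asarray(rois, dtype=float)
            n, ny, nx = rois.shape
            nps = np.zeros((ny, nx))
            for roi in rois:
                noise = roi - roi.mean()                  # remove the mean (DC) component
                nps += np.abs(np.fft.fft2(noise)) ** 2
            return nps / n * pixel_size_mm ** 2 / (ny * nx)

        # Toy check: white-noise ROIs give a flat NPS whose integral over
        # frequency recovers the noise variance.
        rng = np.random.default_rng(0)
        rois = rng.normal(0.0, 10.0, size=(32, 64, 64))
        nps = nps_2d(rois, pixel_size_mm=0.5)
        print(f"NPS integral ~ variance: {nps.sum() / (64 * 0.5 * 64 * 0.5):.1f} HU^2")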

  1. Convergence of high order memory kernels in the Nakajima-Zwanzig generalized master equation and rate constants: Case study of the spin-boson model.

    PubMed

    Xu, Meng; Yan, Yaming; Liu, Yanying; Shi, Qiang

    2018-04-28

    The Nakajima-Zwanzig generalized master equation provides a formally exact framework to simulate quantum dynamics in condensed phases. Yet, the exact memory kernel is hard to obtain and calculations based on perturbative expansions are often employed. By using the spin-boson model as an example, we assess the convergence of high order memory kernels in the Nakajima-Zwanzig generalized master equation. The exact memory kernels are calculated by combining the hierarchical equation of motion approach and the Dyson expansion of the exact memory kernel. High order expansions of the memory kernels are obtained by extending our previous work to calculate perturbative expansions of open system quantum dynamics [M. Xu et al., J. Chem. Phys. 146, 064102 (2017)]. It is found that the high order expansions do not necessarily converge in certain parameter regimes where the exact kernel shows a long memory time, especially in cases of slow bath, weak system-bath coupling, and low temperature. The effectiveness of the Padé and Landau-Zener resummation approaches is tested, and the convergence of higher order rate constants beyond Fermi's golden rule is investigated.

  2. Convergence of high order memory kernels in the Nakajima-Zwanzig generalized master equation and rate constants: Case study of the spin-boson model

    NASA Astrophysics Data System (ADS)

    Xu, Meng; Yan, Yaming; Liu, Yanying; Shi, Qiang

    2018-04-01

    The Nakajima-Zwanzig generalized master equation provides a formally exact framework to simulate quantum dynamics in condensed phases. Yet, the exact memory kernel is hard to obtain and calculations based on perturbative expansions are often employed. By using the spin-boson model as an example, we assess the convergence of high order memory kernels in the Nakajima-Zwanzig generalized master equation. The exact memory kernels are calculated by combining the hierarchical equation of motion approach and the Dyson expansion of the exact memory kernel. High order expansions of the memory kernels are obtained by extending our previous work to calculate perturbative expansions of open system quantum dynamics [M. Xu et al., J. Chem. Phys. 146, 064102 (2017)]. It is found that the high order expansions do not necessarily converge in certain parameter regimes where the exact kernel shows a long memory time, especially in cases of slow bath, weak system-bath coupling, and low temperature. The effectiveness of the Padé and Landau-Zener resummation approaches is tested, and the convergence of higher order rate constants beyond Fermi's golden rule is investigated.

  3. Validation and uncertainty analysis of a pre-treatment 2D dose prediction model

    NASA Astrophysics Data System (ADS)

    Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank

    2018-02-01

    Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on the fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximally deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed good agreement, with on average 90.8% and 90.5% of pixels passing a (2%, 2 mm) global gamma analysis, respectively, with a low dose threshold of 10%. The maximum and overall uncertainties of the model are dependent on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.

  4. A new discriminative kernel from probabilistic models.

    PubMed

    Tsuda, Koji; Kawanabe, Motoaki; Rätsch, Gunnar; Sonnenburg, Sören; Müller, Klaus-Robert

    2002-10-01

    Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel, derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.
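
    For orientation, the Fisher kernel mentioned above builds its feature vector from the gradient of the marginal log-likelihood of a generative model; the Python sketch below does this for a one-dimensional Gaussian with the Fisher information matrix replaced by the identity. It is a toy illustration only, and the TOP kernel itself (built from tangent vectors of the posterior log-odds) is not reproduced.

        import numpy as np

        def fisher_score(x, mu, sigma):
            """Gradient of log N(x | mu, sigma^2) with respect to (mu, sigma)."""
            d_mu = (x - mu) / sigma ** 2
            d_sigma = ((x - mu) ** 2 - sigma ** 2) / sigma ** 3
            return np.array([d_mu, d_sigma])

        def fisher_kernel(x1, x2, mu=0.0, sigma=1.0):
            """Fisher kernel with the identity in place of the Fisher information."""
            return float(fisher_score(x1, mu, sigma) @ fisher_score(x2, mu, sigma))

        print(f"k(0.5, 1.2) = {fisher_kernel(0.5, 1.2):.3f}")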

  5. Surface and top-of-atmosphere radiative feedback kernels for CESM-CAM5

    NASA Astrophysics Data System (ADS)

    Pendergrass, Angeline G.; Conley, Andrew; Vitt, Francis M.

    2018-02-01

    Radiative kernels at the top of the atmosphere are useful for decomposing changes in atmospheric radiative fluxes due to feedbacks from atmosphere and surface temperature, water vapor, and surface albedo. Here we describe and validate radiative kernels calculated with the large-ensemble version of CAM5, CESM1.1.2, at the top of the atmosphere and the surface. Estimates of the radiative forcing from greenhouse gases and aerosols in RCP8.5 in the CESM large-ensemble simulations are also diagnosed. As an application, feedbacks are calculated for the CESM large ensemble. The kernels are freely available at https://doi.org/10.5065/D6F47MT6, and accompanying software can be downloaded from https://github.com/apendergrass/cam5-kernels.

  6. Application of kernel method in fluorescence molecular tomography

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Baikejiang, Reheman; Li, Changqing

    2017-02-01

    Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem. Anatomical guidance in the FMT reconstruction can improve FMT reconstruction efficiently. We have developed a kernel method to introduce the anatomical guidance into FMT robustly and easily. The kernel method is from machine learning for pattern analysis and is an efficient way to represent anatomical features. For the finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. Then the fluorophore concentration at each node is represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we have a new system matrix by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process. We convert the FMT reconstruction problem into the kernel coefficient reconstruction problem. The desired fluorophore concentration at each node can be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance can be obtained directly from the anatomical image and is included in the forward modeling. One of the advantages is that we do not need to segment the anatomical image for the targets and background.
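    The following sketch illustrates the general idea of a kernel-guided reconstruction: a kernel matrix K built from anatomical features modifies the system matrix, the kernel coefficients are reconstructed iteratively, and the concentration is recovered as K·alpha. The k-nearest-neighbour Gaussian kernel and the simple Landweber solver are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np

    def gaussian_knn_kernel(features, k=10, sigma=None):
        """Kernel matrix K from anatomical features of the FEM nodes;
        K[i, j] is nonzero only for the k nearest neighbours of node i."""
        n = features.shape[0]
        d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
        if sigma is None:
            sigma = np.sqrt(np.median(d2))
        K = np.zeros((n, n))
        for i in range(n):
            nbr = np.argsort(d2[i])[:k]
            K[i, nbr] = np.exp(-d2[i, nbr] / (2 * sigma**2))
        return K

    def reconstruct(A, y, K, n_iter=200, step=1e-3):
        """Solve y = (A K) alpha by Landweber iteration, then return x = K alpha.
        The step size must be small relative to the largest singular value of A K."""
        AK = A @ K
        alpha = np.zeros(K.shape[1])
        for _ in range(n_iter):
            alpha += step * AK.T @ (y - AK @ alpha)
        return K @ alpha
    ```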

  7. SU-F-T-667: Development and Validation of Dose Calculation for An Open-Source KV Treatment Planning System for Small Animal Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prajapati, S; Mo, X; Bednarz, B

    Purpose: An open-source, convolution/superposition based kV treatment planning system (TPS) was developed for small animal radiotherapy from a previously existing in-house MV-TPS. It is flexible and applicable to both step-and-shoot and helical tomotherapy treatment delivery. As part of the initial commissioning process, the dose calculation from the kV-TPS was compared with measurements and Monte Carlo (MC) simulations. Methods: High-resolution, low-energy kernels were simulated using the EGSnrc user code EDKnrc and used as input to the kV-TPS together with an MC-simulated x-ray beam spectrum. The Blue Water™ homogeneous phantom (with film inserts) and a heterogeneous phantom (with film and TLD inserts) were fabricated. The phantom was placed at 100 cm SSD and irradiated with a 250 kVp beam for 10 min with a 1.1 cm × 1.1 cm open field (at 100 cm) created by a newly designed binary micro-MLC assembly positioned at 90 cm SSD. Gafchromic™ EBT3 film was calibrated in-phantom following AAPM TG-61 guidelines and used for measurement at 5 different depths in the phantom. Calibrated TLD-100s were obtained from an ADCL. EGS and MCNP5 simulations were used to model the experimental irradiation setup for calculation of dose in the phantom. Results: Using the homogeneous phantom, the dose difference between film and kV-TPS was: mean (x)=0.9%; maximum difference (MD)=3.1%; standard deviation (σ)=1.1%. The dose difference between MCNP5 and kV-TPS was: x=1.5%; MD=4.6%; σ=1.9%. The dose difference between EGS and kV-TPS was: x=0.8%; MD=1.9%; σ=0.8%. Using the heterogeneous phantom, the dose difference between film and kV-TPS was: x=2.6%; MD=3%; σ=1.1%; and the dose difference between TLD and kV-TPS was: x=2.9%; MD=6.4%; σ=2.5%. Conclusion: The in-house, open-source kV-TPS dose calculation was comparable, within 5%, with measurements and MC simulations in both homogeneous and heterogeneous phantoms. The dose calculation system of the kV-TPS is validated as part of the initial commissioning process for small animal radiotherapy. The kV-TPS has the potential for accurate dose calculation for any kV treatment or imaging modality.

  8. Calculation of electron and isotopes dose point kernels with fluka Monte Carlo code for dosimetry in nuclear medicine therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Botta, F; Di Dia, A; Pedroli, G

    The calculation of patient-specific dose distribution can be achieved by Monte Carlo simulations or by analytical methods. In this study, the fluka Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, fluka has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by means of calculation of a representative parameter and comparison with reference data. The dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the one. Methods: fluka DPKs have been calculated in both water and compact bone for monoenergetic electrons (10 keV–3 MeV) and for beta-emitting isotopes commonly used for therapy (89Sr, 90Y, 131I, 153Sm, 177Lu, 186Re, and 188Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and deposited energy has been tallied in concentric shells. fluka outcomes have been compared to penelope v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, comparison with data from the literature (etran, geant4, mcnpx) has been done. Maximum percentage differences within 0.8·RCSDA and 0.9·RCSDA for monoenergetic electrons (RCSDA being the continuous slowing down approximation range) and within 0.8·X90 and 0.9·X90 for isotopes (X90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·RCSDA and 0.9·X90 for electrons and isotopes, respectively. Results: Concerning monoenergetic electrons, within 0.8·RCSDA (where 90%–97% of the particle energy is deposited), fluka and penelope agree mostly within 7%, except for 10 and 20 keV electrons (12% in water, 8.3% in bone). The discrepancies between fluka and the other codes are of the same order of magnitude as those observed when comparing the other codes among themselves, which can be attributed to the different simulation algorithms. When considering the beta spectra, discrepancies notably reduce: within 0.9·X90, fluka and penelope differ by less than 1% in water and less than 2% in bone for any of the isotopes considered here. Complete data of fluka DPKs are given as Supplementary Material as a tool to perform dosimetry by analytical point kernel convolution. Conclusions: fluka provides reliable results when transporting electrons in the low energy range, proving to be an adequate tool for nuclear medicine dosimetry.
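    As an illustration of how a scaled dose point kernel can be tallied from simulated energy-deposition events in concentric shells, the sketch below assumes a flat list of event radii and deposited energies and the common scaled-DPK normalization j(r/R_CSDA) = 4·pi·r^2·rho·D(r)·R_CSDA/E0; it is not the fluka tally setup used in the paper.

    ```python
    import numpy as np

    def dose_point_kernel(event_radii_cm, event_edep_MeV, r_csda_cm, E0_MeV,
                          n_shells=100, rho_g_cm3=1.0):
        """Build a scaled dose point kernel from sampled energy-deposition events.

        event_radii_cm : radius of each deposition event from the point source
        event_edep_MeV : energy deposited in each event (per source particle on average)
        Returns (scaled radius r/R_CSDA, scaled kernel j)."""
        edges = np.linspace(0.0, 1.2 * r_csda_cm, n_shells + 1)
        edep, _ = np.histogram(event_radii_cm, bins=edges, weights=event_edep_MeV)
        shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)   # cm^3
        dose = edep / (shell_vol * rho_g_cm3)        # MeV / g per source particle
        r_mid = 0.5 * (edges[:-1] + edges[1:])
        # fraction of emitted energy deposited per unit scaled radius
        j = 4.0 * np.pi * r_mid**2 * dose * rho_g_cm3 * r_csda_cm / E0_MeV
        return r_mid / r_csda_cm, j
    ```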

  9. Surface and top-of-atmosphere radiative feedback kernels for CESM-CAM5

    DOE PAGES

    Pendergrass, Angeline G.; Conley, Andrew; Vitt, Francis M.

    2018-02-21

    Radiative kernels at the top of the atmosphere are useful for decomposing changes in atmospheric radiative fluxes due to feedbacks from atmosphere and surface temperature, water vapor, and surface albedo. Here we describe and validate radiative kernels calculated with the large-ensemble version of CAM5, CESM1.1.2, at the top of the atmosphere and the surface. Estimates of the radiative forcing from greenhouse gases and aerosols in RCP8.5 in the CESM large-ensemble simulations are also diagnosed. As an application, feedbacks are calculated for the CESM large ensemble. The kernels are freely available at https://doi.org/10.5065/D6F47MT6, and accompanying software can be downloaded from https://github.com/apendergrass/cam5-kernels.

  10. Surface and top-of-atmosphere radiative feedback kernels for CESM-CAM5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pendergrass, Angeline G.; Conley, Andrew; Vitt, Francis M.

    Radiative kernels at the top of the atmosphere are useful for decomposing changes in atmospheric radiative fluxes due to feedbacks from atmosphere and surface temperature, water vapor, and surface albedo. Here we describe and validate radiative kernels calculated with the large-ensemble version of CAM5, CESM1.1.2, at the top of the atmosphere and the surface. Estimates of the radiative forcing from greenhouse gases and aerosols in RCP8.5 in the CESM large-ensemble simulations are also diagnosed. As an application, feedbacks are calculated for the CESM large ensemble. The kernels are freely available at https://doi.org/10.5065/D6F47MT6, and accompanying software can be downloaded from https://github.com/apendergrass/cam5-kernels.

  11. Development of low-frequency kernel-function aerodynamics for comparison with time-dependent finite-difference methods

    NASA Technical Reports Server (NTRS)

    Bland, S. R.

    1982-01-01

    Finite difference methods for unsteady transonic flow frequently use simplified equations in which certain of the time-dependent terms are omitted from the governing equations. Kernel functions are derived for two-dimensional subsonic flow, and provide accurate solutions of the linearized potential equation with the same time-dependent terms omitted. These solutions make possible a direct evaluation of the finite difference codes for the linear problem. Calculations with two of these low-frequency kernel functions verify the accuracy of the LTRAN2 and HYTRAN2 finite difference codes. Comparisons of the low-frequency kernel function results with the Possio kernel function solution of the complete linear equations indicate the adequacy of the HYTRAN approximation for frequencies in the range of interest for flutter calculations.

  12. Benchmarking of MCNP for calculating dose rates at an interim storage facility for nuclear waste.

    PubMed

    Heuel-Fabianek, Burkhard; Hille, Ralf

    2005-01-01

    During the operation of research facilities at Research Centre Jülich, Germany, nuclear waste is stored in drums and other vessels in an interim storage building on-site, which has concrete shielding at the side walls. Owing to the lack of a well-defined source, measured gamma spectra were unfolded to determine the photon flux on the surface of the containers. The dose rate simulation, including the effects of skyshine, using the Monte Carlo transport code MCNP is compared with the measured dosimetric data at some locations in the vicinity of the interim storage building. The MCNP data for direct radiation confirm the data calculated using a point-kernel method. However, a comparison of the modelled dose rates for direct radiation and skyshine with the measured data demonstrates the need for a more precise definition of the source. Both the measured and the modelled dose rates verified that the legal limit (<1 mSv/a) is met in the area outside the perimeter fence of the storage building to which members of the public have access. Using container surface data (gamma spectra) to define the source may be a useful tool for practical calculations and additionally for benchmarking of computer codes if the critical aspects discussed with respect to the source can be addressed adequately.

  13. Impact of deep learning on the normalization of reconstruction kernel effects in imaging biomarker quantification: a pilot study in CT emphysema

    NASA Astrophysics Data System (ADS)

    Jin, Hyeongmin; Heo, Changyong; Kim, Jong Hyo

    2018-02-01

    Differing reconstruction kernels are known to strongly affect the variability of imaging biomarkers and thus remain a barrier to translating computer-aided quantification techniques into clinical practice. This study presents a deep learning application to CT kernel conversion, which converts a CT image reconstructed with a sharp kernel into one reconstructed with a standard kernel, and evaluates its impact on reducing the variability of a pulmonary imaging biomarker, the emphysema index (EI). Forty low-dose chest CT exams obtained with 120 kVp, 40 mAs, 1 mm thickness, and two reconstruction kernels (B30f, B50f) were selected from the low-dose lung cancer screening database of our institution. A fully convolutional network was implemented with the Keras deep learning library. The model consisted of symmetric layers to capture the context and fine-structure characteristics of CT images from the standard and sharp reconstruction kernels. Pairs of the full-resolution CT data set were fed to the input and output nodes to train the convolutional network to learn the appropriate filter kernels for converting CT images of the sharp kernel to the standard kernel, with the mean squared error between the input and target images as the training criterion. EIs (RA950 and Perc15) were measured with a software package (ImagePrism Pulmo, Seoul, South Korea) and compared for the B50f, B30f, and converted B50f data sets. The effect of kernel conversion was evaluated with the mean and standard deviation of pair-wise differences in EI. The population mean of RA950 was 27.65 +/- 7.28% for the B50f data set, 10.82 +/- 6.71% for the B30f data set, and 8.87 +/- 6.20% for the converted B50f data set. The mean of pair-wise absolute differences in RA950 between B30f and B50f was reduced from 16.83% to 1.95% using kernel conversion. Our study demonstrates the feasibility of applying the deep learning technique for CT kernel conversion and reducing the kernel-induced variability of EI quantification. The deep learning model has the potential to improve the reliability of imaging biomarkers, especially in evaluating longitudinal changes of EI even when the patient CT scans were performed with different kernels.
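    A minimal sketch of a fully convolutional kernel-conversion network in Keras, trained with an MSE loss on paired B50f/B30f slices as described above; the residual formulation, layer count and filter widths are assumptions, since the abstract does not give the architecture details.

    ```python
    from tensorflow import keras
    from tensorflow.keras import layers

    def build_kernel_conversion_net(n_filters=64, depth=6):
        """Fully convolutional net mapping a sharp-kernel (B50f) CT slice to a
        standard-kernel (B30f) slice; works on arbitrary in-plane sizes."""
        inp = keras.Input(shape=(None, None, 1))          # single-channel CT slice
        x = inp
        for _ in range(depth):
            x = layers.Conv2D(n_filters, 3, padding="same", activation="relu")(x)
        # residual output: predict the (standard - sharp) difference image
        out = layers.Add()([inp, layers.Conv2D(1, 3, padding="same")(x)])
        model = keras.Model(inp, out)
        model.compile(optimizer="adam", loss="mse")
        return model

    # model = build_kernel_conversion_net()
    # model.fit(x=b50f_slices, y=b30f_slices, batch_size=4, epochs=50)  # paired slices
    ```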

  14. Ionization chamber-based reference dosimetry of intensity modulated radiation beams.

    PubMed

    Bouchard, Hugo; Seuntjens, Jan

    2004-09-01

    The present paper addresses reference dose measurements using thimble ionization chambers for quality assurance in IMRT fields. In these radiation fields, detector fluence perturbation effects invalidate the application of open-field dosimetry protocol data for the derivation of absorbed dose to water from ionization chamber measurements. We define a correction factor, C(Q)IMRT, to correct the absorbed-dose-to-water calibration coefficient N(D,w)Q for fluence perturbation effects in individual segments of an IMRT delivery, and we developed a calculation method to evaluate this factor. The method consists of precalculating, using accurate Monte Carlo techniques, the ionization chamber type-dependent cavity air dose and the in-phantom dose to water at the reference point for zero-width pencil beams, as a function of the position of the pencil beams impinging on the phantom surface. These precalculated kernels are convolved with the IMRT fluence distribution to arrive at the dose-to-water to dose-to-cavity-air ratio [Dw/Da]IMRT for IMRT fields, and with a 10x10 cm2 open-field fluence to arrive at the same ratio [Dw/Da]Q for the 10x10 cm2 reference field. The correction factor C(Q)IMRT is then calculated as the ratio of [Dw/Da]IMRT and [Dw/Da]Q. The calculation method was experimentally validated, and the magnitude of chamber correction factors in reference dose measurements in single static and dynamic IMRT fields was studied. The results show that, for thimble-type ionization chambers, the correction factor in a single, realistic dynamic IMRT field can be of the order of 10% or more. We therefore propose that, for accurate reference dosimetry of complete n-beam IMRT deliveries, ionization chamber fluence perturbation correction factors must explicitly be taken into account.
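    The sketch below illustrates the kernel-convolution step described above: precalculated pencil-beam kernels for cavity-air dose and dose to water are convolved with the IMRT and reference fluence maps, and the correction factor is formed as a ratio of dose ratios. The array-based layout and the use of scipy's FFT convolution are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def chamber_correction_factor(fluence_imrt, fluence_ref,
                                  kernel_dose_air, kernel_dose_water):
        """C(Q)IMRT = (Dw/Da)_IMRT / (Dw/Da)_ref, with Dw and Da obtained by
        convolving precalculated zero-width pencil-beam kernels (dose to water and
        dose to the chamber cavity air at the reference point, per unit fluence at
        each beamlet entry position) with the in-air fluence maps."""
        def dose(fluence, kernel):
            # value of the convolution at the reference (central) pixel
            full = fftconvolve(fluence, kernel, mode="same")
            cy, cx = full.shape[0] // 2, full.shape[1] // 2
            return full[cy, cx]

        dw_imrt = dose(fluence_imrt, kernel_dose_water)
        da_imrt = dose(fluence_imrt, kernel_dose_air)
        dw_ref = dose(fluence_ref, kernel_dose_water)
        da_ref = dose(fluence_ref, kernel_dose_air)
        return (dw_imrt / da_imrt) / (dw_ref / da_ref)
    ```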

  15. Model-based versus specific dosimetry in diagnostic context: Comparison of three dosimetric approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcatili, S., E-mail: sara.marcatili@inserm.fr; Villoing, D.; Mauxion, T.

    Purpose: The dosimetric assessment of novel radiotracers represents a legal requirement in most countries. While the techniques for the computation of internal absorbed dose in a therapeutic context have made huge progress in recent years, in a diagnostic scenario the absorbed dose is usually extracted from model-based lookup tables, most often derived from International Commission on Radiological Protection (ICRP) or Medical Internal Radiation Dose (MIRD) Committee models. The level of approximation introduced by these models may impact the resulting dosimetry. The aim of this work is to establish whether a more refined approach to dosimetry can be implemented in nuclear medicine diagnostics, by analyzing a specific case. Methods: The authors calculated absorbed doses to various organs in six healthy volunteers administered with flutemetamol (18F) injection. Each patient underwent from 8 to 10 whole-body 3D PET/CT scans. This dataset was analyzed using a Monte Carlo (MC) application developed in-house using the toolkit GATE, which is capable of taking into account patient-specific anatomy and radiotracer distribution at the voxel level. The absorbed doses obtained with GATE were compared to those calculated with two commercially available software packages: OLINDA/EXM and STRATOS, the latter implementing a dose voxel kernel convolution approach. Results: Absorbed doses calculated with GATE were higher than those calculated with OLINDA. The average ratio between GATE absorbed doses and OLINDA's was 1.38 ± 0.34 σ (from 0.93 to 2.23). The discrepancy was particularly high for the thyroid, with an average GATE/OLINDA ratio of 1.97 ± 0.83 σ for the six patients. Differences between STRATOS and GATE were found to be higher: the average ratio between GATE and STRATOS absorbed doses was 2.51 ± 1.21 σ (from 1.09 to 6.06). Conclusions: This study demonstrates how the choice of the absorbed dose calculation algorithm may introduce a bias when gamma radiation is of importance, as is the case in nuclear medicine diagnostics.

  16. Spectral Kernel Approach to Study Radiative Response of Climate Variables and Interannual Variability of Reflected Solar Spectrum

    NASA Technical Reports Server (NTRS)

    Jin, Zhonghai; Wielicki, Bruce A.; Loukachine, Constantin; Charlock, Thomas P.; Young, David; Noël, Stefan

    2011-01-01

    The radiative kernel approach provides a simple way to separate the radiative response to different climate parameters and to decompose the feedback into radiative and climate response components. Using CERES/MODIS/Geostationary data, we calculated and analyzed the solar spectral reflectance kernels for various climate parameters on zonal, regional, and global spatial scales. The kernel linearity is tested. Errors in the kernel due to nonlinearity can vary strongly depending on climate parameter, wavelength, surface, and solar elevation; they are large in some absorption bands for some parameters but are negligible in most conditions. The spectral kernels are used to calculate the radiative responses to different climate parameter changes in different latitudes. The results show that the radiative response in high latitudes is sensitive to the coverage of snow and sea ice. The radiative response in low latitudes is contributed mainly by cloud property changes, especially cloud fraction and optical depth. The large cloud height effect is confined to absorption bands, while the cloud particle size effect is found mainly in the near infrared. The kernel approach, which is based on calculations using CERES retrievals, is then tested by direct comparison with spectral measurements from the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) (a different instrument on a different spacecraft). The monthly mean interannual variability of spectral reflectance based on the kernel technique is consistent with satellite observations over the ocean, but not over land, where both model and data have large uncertainty. RMS errors in kernel-derived monthly global mean reflectance over the ocean compared to observations are about 0.001, and the sampling error is likely a major component.

  17. SU-C-BRC-04: Efficient Dose Calculation Algorithm for FFF IMRT with a Simplified Bivariate Gaussian Source Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, F; Park, J; Barraclough, B

    2016-06-15

    Purpose: To develop an efficient and accurate independent dose calculation algorithm with a simplified analytical source model for the quality assurance and safe delivery of flattening filter free (FFF) IMRT on an Elekta Versa HD. Methods: The source model consisted of a point source and a 2D bivariate Gaussian source, respectively modeling the primary photons and the combined effect of head scatter, monitor chamber backscatter and the collimator exchange effect. The in-air fluence was first calculated by back-projecting the edges of the beam-defining devices onto the source plane and integrating the visible source distribution. The effect of the rounded MLC leaf end, tongue-and-groove and interleaf transmission was taken into account in the back-projection. The in-air fluence was then modified with a fourth-degree polynomial modeling the cone-shaped dose distribution of FFF beams. The planar dose distribution was obtained by convolving the in-air fluence with a dose deposition kernel (DDK) consisting of the sum of three 2D Gaussian functions. The parameters of the source model and the DDK were commissioned using measured in-air output factors (Sc) and cross-beam profiles, respectively. A novel method was used to eliminate the volume averaging effect of ion chambers in determining the DDK. Planar dose distributions of five head-and-neck FFF-IMRT plans were calculated and compared against measurements performed with a 2D diode array (MapCHECK™) to validate the accuracy of the algorithm. Results: The proposed source model predicted Sc for both 6 MV and 10 MV with an accuracy better than 0.1%. With a stringent gamma criterion (2%/2mm/local difference), the passing rate of the FFF-IMRT dose calculation was 97.2±2.6%. Conclusion: The removal of the flattening filter represents a simplification of the head structure which allows the use of a simpler source model for very accurate dose calculation. The proposed algorithm offers an effective way to ensure the safe delivery of FFF-IMRT.
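    A sketch of the two ingredients named above, assuming hypothetical commissioning parameters: a dose deposition kernel built as the sum of three 2D Gaussians, and a planar dose obtained by modifying the in-air fluence with the fourth-degree cone polynomial and convolving with that kernel. The parameter values and array layouts are illustrative, not the commissioned ones.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def ddk(x_mm, y_mm, amps, sigmas_mm):
        """Dose deposition kernel: sum of three 2D Gaussians (amplitudes and widths
        would come from commissioning against measured cross-beam profiles)."""
        xx, yy = np.meshgrid(x_mm, y_mm)
        r2 = xx**2 + yy**2
        return sum(a * np.exp(-r2 / (2 * s**2)) for a, s in zip(amps, sigmas_mm))

    def planar_dose(in_air_fluence, cone_poly_coeffs, r_mm, kernel):
        """Apply the 4th-degree polynomial modelling the FFF cone shape to the
        in-air fluence (r_mm is the off-axis radius map), then convolve with the DDK."""
        cone = np.polyval(cone_poly_coeffs, r_mm)
        return fftconvolve(in_air_fluence * cone, kernel, mode="same")
    ```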

  18. Postlumpectomy Focal Brachytherapy for Simultaneous Treatment of Surgical Cavity and Draining Lymph Nodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hrycushko, Brian A.; Li Shihong; Shi Chengyu

    2011-03-01

    Purpose: The primary objective was to investigate a novel focal brachytherapy technique using lipid nanoparticle (liposome)-carried β-emitting radionuclides (rhenium-186 [186Re]/rhenium-188 [188Re]) to simultaneously treat the postlumpectomy surgical cavity and draining lymph nodes. Methods and Materials: Cumulative activity distributions in the lumpectomy cavity and lymph nodes were extrapolated from small animal imaging and human lymphoscintigraphy data. Absorbed dose calculations were performed for lumpectomy cavities with spherical and ellipsoidal shapes and lymph nodes within human subjects by use of the dose point kernel convolution method. Results: Dose calculations showed that therapeutic dose levels within the lumpectomy cavity wall can cover 2- and 5-mm depths for 186Re and 188Re liposomes, respectively. The absorbed doses at 1 cm sharply decreased to only 1.3% to 3.7% of the doses at 2 mm for 186Re liposomes and 5 mm for 188Re liposomes. Concurrently, the draining sentinel lymph nodes would receive a high focal therapeutic absorbed dose, whereas the average dose to 1 cm of surrounding tissue received less than 1% of that within the nodes. Conclusions: Focal brachytherapy by use of 186Re/188Re liposomes was theoretically shown to be capable of simultaneously treating the lumpectomy cavity wall and draining sentinel lymph nodes with high absorbed doses while significantly lowering dose to surrounding healthy tissue. In turn, this allows for dose escalation to regions of higher probability of containing residual tumor cells after lumpectomy while reducing normal tissue complications.

  19. The effects of food irradiation on quality of pine nut kernels

    NASA Astrophysics Data System (ADS)

    Gölge, Evren; Ova, Gülden

    2008-03-01

    Pine nuts (Pinus pinea) underwent a gamma irradiation process at doses of 0.5, 1.0, 3.0, and 5.0 kGy. The changes in chemical, physical and sensory attributes were observed during the following 3 months of storage. The data obtained from the experiments showed that the peroxide values of the pine nut kernels increased proportionally to the dose. On the contrary, the irradiation process had no effect on physical quality attributes such as texture and color, fatty acid composition, or sensory attributes.

  20. Preliminary skyshine calculations for the Poloidal Diverter Tokamak Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nigg, D.W.; Wheeler, F.J.

    1981-01-01

    The Poloidal Diverter Experiment (PDX) facility at Princeton University is the first operating tokamak to require substantial radiation shielding. A calculational model has been developed to estimate the radiation dose in the PDX control room and at the site boundary due to the skyshine effect. An efficient one-dimensional method is used to compute the neutron and capture gamma leakage currents at the top surface of the PDX roof shield. This method employs an S_n calculation in slab geometry and, for the PDX, is superior to spherical models found in the literature. If certain conditions are met, the slab model provides the exact probability of leakage out the top surface of the roof for fusion source neutrons and for capture gamma rays produced in the PDX floor and roof shield. The model also provides the correct neutron and capture gamma leakage current spectra and angular distributions, averaged over the top roof shield surface. For the PDX, this method is nearly as accurate as multidimensional techniques for computing the roof leakage and is much less costly. The actual neutron skyshine dose is computed using a Monte Carlo model with the neutron source at the roof surface obtained from the slab S_n calculation. The capture gamma dose is computed using a simple point-kernel single-scatter method.

  1. Anelastic sensitivity kernels with parsimonious storage for adjoint tomography and full waveform inversion

    NASA Astrophysics Data System (ADS)

    Komatitsch, Dimitri; Xie, Zhinan; Bozdaǧ, Ebru; Sales de Andrade, Elliott; Peter, Daniel; Liu, Qinya; Tromp, Jeroen

    2016-09-01

    We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.

  2. Generalization of the subsonic kernel function in the s-plane, with applications to flutter analysis

    NASA Technical Reports Server (NTRS)

    Cunningham, H. J.; Desmarais, R. N.

    1984-01-01

    A generalized subsonic unsteady aerodynamic kernel function, valid for both growing and decaying oscillatory motions, is developed and applied in a modified flutter analysis computer program to solve the boundaries of constant damping ratio as well as the flutter boundary. Rates of change of damping ratios with respect to dynamic pressure near flutter are substantially lower from the generalized-kernel-function calculations than from the conventional velocity-damping (V-g) calculation. A rational function approximation for aerodynamic forces used in control theory for s-plane analysis gave rather good agreement with kernel-function results, except for strongly damped motion at combinations of high (subsonic) Mach number and reduced frequency.

  3. Flexibly imposing periodicity in kernel independent FMM: A multipole-to-local operator approach

    NASA Astrophysics Data System (ADS)

    Yan, Wen; Shelley, Michael

    2018-02-01

    An important but missing component in the application of the kernel independent fast multipole method (KIFMM) is the capability for flexibly and efficiently imposing singly, doubly, and triply periodic boundary conditions. In most popular packages such periodicities are imposed with the hierarchical repetition of periodic boxes, which may give an incorrect answer due to the conditional convergence of some kernel sums. Here we present an efficient method to properly impose periodic boundary conditions using a near-far splitting scheme. The near-field contribution is directly calculated with the KIFMM method, while the far-field contribution is calculated with a multipole-to-local (M2L) operator which is independent of the source and target point distribution. The M2L operator is constructed with the far-field portion of the kernel function to generate the far-field contribution with the downward equivalent source points in KIFMM. This method guarantees that the sum of the near-field and far-field contributions converges pointwise to results satisfying periodicity and compatibility conditions. The computational cost of the far-field calculation has the same O(N) complexity as the FMM and is designed to be small by reusing the data computed by KIFMM for the near field. The far-field calculations require no additional control parameters and obey the same theoretical error bound as KIFMM. We present accuracy and timing test results for the Laplace kernel in singly periodic domains and the Stokes velocity kernel in doubly and triply periodic domains.

  4. Adiabatic-connection fluctuation-dissipation DFT for the structural properties of solids—The renormalized ALDA and electron gas kernels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patrick, Christopher E., E-mail: chripa@fysik.dtu.dk; Thygesen, Kristian S., E-mail: thygesen@fysik.dtu.dk

    2015-09-14

    We present calculations of the correlation energies of crystalline solids and isolated systems within the adiabatic-connection fluctuation-dissipation formulation of density-functional theory. We perform a quantitative comparison of a set of model exchange-correlation kernels originally derived for the homogeneous electron gas (HEG), including the recently introduced renormalized adiabatic local-density approximation (rALDA) and also kernels which (a) satisfy known exact limits of the HEG, (b) carry a frequency dependence, or (c) display a 1/k^2 divergence for small wavevectors. After generalizing the kernels to inhomogeneous systems through a reciprocal-space averaging procedure, we calculate the lattice constants and bulk moduli of a test set of 10 solids consisting of tetrahedrally bonded semiconductors (C, Si, SiC), ionic compounds (MgO, LiCl, LiF), and metals (Al, Na, Cu, Pd). We also consider the atomization energy of the H2 molecule. We compare the results calculated with different kernels to those obtained from the random-phase approximation (RPA) and to experimental measurements. We demonstrate that the model kernels correct the RPA's tendency to overestimate the magnitude of the correlation energy whilst maintaining a high-accuracy description of structural properties.

  5. 78 FR 66649 - Spirotetramat; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-06

    ... regulation establishes tolerances for residues of spirotetramat in or on corn, sweet, kernel plus cob with... tolerance for residues of the insecticide spirotetramat in or on corn, sweet kernel plus cob with husks..., calculated as the stoichiometric equivalent of spirotetramat, in or on corn, sweet, kernel plus cob with...

  6. Testing of the analytical anisotropic algorithm for photon dose calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esch, Ann van; Tillikainen, Laura; Pyykkonen, Jukka

    2006-11-15

    The analytical anisotropic algorithm (AAA) was implemented in the Eclipse (Varian Medical Systems) treatment planning system to replace the single pencil beam (SPB) algorithm for the calculation of dose distributions for photon beams. AAA was developed to improve the dose calculation accuracy, especially in heterogeneous media. The total dose deposition is calculated as the superposition of the dose deposited by two photon sources (primary and secondary) and by an electron contamination source. The photon dose is calculated as a three-dimensional convolution of Monte-Carlo precalculated scatter kernels, scaled according to the electron density matrix. For the configuration of AAA, an optimization algorithm determines the parameters characterizing the multiple source model by optimizing the agreement between the calculated and measured depth dose curves and profiles for the basic beam data. We have combined the acceptance tests obtained in three different departments for 6, 15, and 18 MV photon beams. The accuracy of AAA was tested for different field sizes (symmetric and asymmetric) for open fields, wedged fields, and static and dynamic multileaf collimation fields. Depth dose behavior at different source-to-phantom distances was investigated. Measurements were performed on homogeneous, water-equivalent phantoms, on simple phantoms containing cork inhomogeneities, and on the thorax of an anthropomorphic phantom. Comparisons were made among measurements, AAA, and SPB calculations. The optimization procedure for the configuration of the algorithm was successful in reproducing the basic beam data with an overall accuracy of 3%, 1 mm in the build-up region, and 1%, 1 mm elsewhere. Testing of the algorithm in more clinical setups showed comparable results for depth dose curves, profiles, and monitor units of symmetric open and wedged beams below d_max. The electron contamination model was found to be suboptimal for modeling the dose around d_max, especially for physical wedges at smaller source-to-phantom distances. For the asymmetric field verification, absolute dose differences of up to 4% were observed for the most extreme asymmetries. Compared to the SPB, the penumbra modeling is considerably improved (1%, 1 mm). At the interface between solid water and cork, profiles show a better agreement with AAA. Depth dose curves in the cork are substantially better with AAA than with SPB. Improvements are more pronounced for 18 MV than for 6 MV. Point dose measurements in the thoracic phantom are mostly within 5%. In general, we can conclude that, compared to SPB, AAA improves the accuracy of dose calculations. Particular progress was made with respect to the penumbra and low dose regions. In heterogeneous materials, improvements are substantial and more pronounced for high (18 MV) than for low (6 MV) energies.

  7. Monte Carlo calculations of energy deposition distributions of electrons below 20 keV in protein.

    PubMed

    Tan, Zhenyu; Liu, Wei

    2014-05-01

    The distributions of energy depositions of electrons in semi-infinite bulk protein and the radial dose distributions of point-isotropic mono-energetic electron sources [i.e., the so-called dose point kernel (DPK)] in protein have been systematically calculated in the energy range below 20 keV, based on Monte Carlo methods. The ranges of electrons have been evaluated by extrapolating two calculated distributions, respectively, and the evaluated ranges of electrons are compared with the electron mean path length in protein which has been calculated by using electron inelastic cross sections described in this work in the continuous-slowing-down approximation. It has been found that for a given energy, the electron mean path length is smaller than the electron range evaluated from DPK, but it is large compared to the electron range obtained from the energy deposition distributions of electrons in semi-infinite bulk protein. The energy dependences of the extrapolated electron ranges based on the two investigated distributions are given, respectively, in a power-law form. In addition, the DPK in protein has also been compared with that in liquid water. An evident difference between the two DPKs is observed. The calculations presented in this work may be useful in studies of radiation effects on proteins.

  8. TH-E-BRE-07: Development of Dose Calculation Error Predictors for a Widely Implemented Clinical Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egan, A; Laub, W

    2014-06-15

    Purpose: Several shortcomings of the current implementation of the analytic anisotropic algorithm (AAA) may lead to dose calculation errors in highly modulated treatments delivered to highly heterogeneous geometries. Here we introduce a set of dosimetric error predictors that can be applied to a clinical treatment plan and patient geometry in order to identify high-risk plans. Once a problematic plan is identified, the treatment can be recalculated with a more accurate algorithm in order to better assess its viability. Methods: Here we focus on three distinct sources of dosimetric error in the AAA algorithm. First, due to a combination of discrepancies in small-field beam modeling as well as volume averaging effects, dose calculated through small MLC apertures can be underestimated, while that behind small MLC blocks can be overestimated. Second, due to the rectilinear scaling of the Monte Carlo generated pencil beam kernel, energy is not properly transported through heterogeneities near, but not impeding, the central axis of the beamlet. And third, AAA overestimates dose in regions of very low density (<0.2 g/cm^3). We have developed an algorithm to detect the location and magnitude of each scenario within the patient geometry, namely the field-size index (FSI), the heterogeneous scatter index (HSI), and the low-density index (LDI), respectively. Results: The error indices successfully identify deviations between AAA and Monte Carlo dose distributions in simple phantom geometries. The algorithms are currently implemented in the MATLAB computing environment and are able to run on a typical RapidArc head and neck geometry in less than an hour. Conclusion: Because these error indices successfully identify each type of error in contrived cases, with sufficient benchmarking this method can be developed into a clinical tool that may be able to help estimate AAA dose calculation errors and indicate when it might be advisable to use Monte Carlo calculations.

  9. Jdpd: an open java simulation kernel for molecular fragment dissipative particle dynamics.

    PubMed

    van den Broek, Karina; Kuhn, Hubert; Zielesny, Achim

    2018-05-21

    Jdpd is an open Java simulation kernel for Molecular Fragment Dissipative Particle Dynamics with parallelizable force calculation, efficient caching options and fast property calculations. It is characterized by an interface and factory-pattern driven design for simple code changes and may help to avoid problems of polyglot programming. Detailed input/output communication, parallelization and process control as well as internal logging capabilities for debugging purposes are supported. The new kernel may be utilized in different simulation environments ranging from flexible scripting solutions up to fully integrated "all-in-one" simulation systems.

  10. Reconstruction of noisy and blurred images using blur kernel

    NASA Astrophysics Data System (ADS)

    Ellappan, Vijayan; Chopra, Vishal

    2017-11-01

    Blur is common in many digital images and can be caused by motion of the camera or of objects in the scene. In this work we propose a new method for deblurring images that uses sparse representation to identify the blur kernel. By analyzing image coordinates at coarse and fine scales, we obtain kernel-based image coordinates and, from that observation, estimate the motion angle of the shaken or blurred image. We then calculate the length of the motion kernel using the Radon transform and Fourier analysis, and apply the Lucy-Richardson algorithm, a non-blind deconvolution (NBID) algorithm, to produce a cleaner, less noisy output image. All of these operations are performed in the MATLAB IDE.
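    The original work is implemented in MATLAB; as a language-consistent sketch of the same non-blind step, the Python code below builds a linear motion-blur PSF from an estimated angle and length and applies Richardson-Lucy deconvolution from scikit-image. The kernel-estimation stage (Radon/Fourier analysis) is not reproduced here, and the parameter values are illustrative.

    ```python
    import numpy as np
    from skimage.restoration import richardson_lucy

    def motion_kernel(length_px, angle_deg, size=None):
        """Linear motion-blur PSF of a given length (pixels) and angle (degrees)."""
        size = size or int(length_px) + 2
        psf = np.zeros((size, size))
        c = size // 2
        t = np.deg2rad(angle_deg)
        for s in np.linspace(-length_px / 2, length_px / 2, 4 * int(length_px) + 1):
            i = int(round(c + s * np.sin(t)))
            j = int(round(c + s * np.cos(t)))
            if 0 <= i < size and 0 <= j < size:
                psf[i, j] = 1.0
        return psf / psf.sum()

    # image values in [0, 1]; angle and length estimated beforehand (e.g. via Radon)
    # psf = motion_kernel(length_px=15, angle_deg=30)
    # restored = richardson_lucy(blurred_image, psf, 30)   # 30 iterations
    ```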

  11. Absorbed dose evaluation of Auger electron-emitting radionuclides: impact of input decay spectra on dose point kernels and S-values

    NASA Astrophysics Data System (ADS)

    Falzone, Nadia; Lee, Boon Q.; Fernández-Varea, José M.; Kartsonaki, Christiana; Stuchbery, Andrew E.; Kibédi, Tibor; Vallis, Katherine A.

    2017-03-01

    The aim of this study was to investigate the impact of decay data provided by the newly developed stochastic atomic relaxation model BrIccEmis on dose point kernels (DPKs - radial dose distribution around a unit point source) and S-values (absorbed dose per unit cumulated activity) of 14 Auger electron (AE) emitting radionuclides, namely 67Ga, 80mBr, 89Zr, 90Nb, 99mTc, 111In, 117mSn, 119Sb, 123I, 124I, 125I, 135La, 195mPt and 201Tl. Radiation spectra were based on the nuclear decay data from the medical internal radiation dose (MIRD) RADTABS program and the BrIccEmis code, assuming both an isolated-atom and condensed-phase approach. DPKs were simulated with the PENELOPE Monte Carlo (MC) code using event-by-event electron and photon transport. S-values for concentric spherical cells of various sizes were derived from these DPKs using appropriate geometric reduction factors. The number of Auger and Coster-Kronig (CK) electrons and x-ray photons released per nuclear decay (yield) from MIRD-RADTABS were consistently higher than those calculated using BrIccEmis. DPKs for the electron spectra from BrIccEmis were considerably different from MIRD-RADTABS in the first few hundred nanometres from a point source where most of the Auger electrons are stopped. S-values were, however, not significantly impacted as the differences in DPKs in the sub-micrometre dimension were quickly diminished in larger dimensions. Overestimation in the total AE energy output by MIRD-RADTABS leads to higher predicted energy deposition by AE emitting radionuclides, especially in the immediate vicinity of the decaying radionuclides. This should be taken into account when MIRD-RADTABS data are used to simulate biological damage at nanoscale dimensions.

  12. Pediatric dosimetry for intrapleural lung injections of 32P chromic phosphate

    NASA Astrophysics Data System (ADS)

    Konijnenberg, Mark W.; Olch, Arthur

    2010-10-01

    Intracavitary injections of 32P chromic phosphate are used in the therapy of pleuropulmonary blastoma and pulmonary sarcomas in children. The lung dose, however, has never been calculated despite the potential risk of lung toxicity from treatment. In this work the dosimetry has been calculated in target tissue and lung for pediatric phantoms. Pleural cavities were modeled in the Monte Carlo code MCNP within the pediatric MIRD phantoms. Both the depth-dose curves in the pleural lining and into the lung as well as 3D dose distributions were calculated for either homogeneous or inhomogeneous 32P activity distributions. Dose-volume histograms for the lung tissue and isodose graphs were generated. The results for the 2D depth-dose curve to the pleural lining and tumor around the pleural cavity correspond well with the point kernel model-based recommendations. With a 2 mm thick pleural lining, one-third of the lung parenchyma volume gets a dose more than 30 Gy (V30) for 340 MBq 32P in a 10 year old. This is close to lung tolerance. Younger children will receive a larger dose to the lung when the lung density remains equal to the adult value; the V30 relative lung volume for a 5 year old is 35% at an activity of 256 MBq and for a 1 year old 165 MBq yields a V30 of 43%. At higher densities of the lung tissue V30 stays below 32%. All activities yield a therapeutic dose of at least 225 Gy in the pleural lining. With a more normal pleural lining thickness (0.5 mm instead of 2 mm) the injected activities will have to be reduced by a factor 5 to obtain tolerable lung doses in pediatric patients. Previous dosimetry recommendations for the adult apply well down to lung surface areas of 400 cm2. Monte Carlo dosimetry quantitates the three-dimensional dose distribution, providing a better insight into the maximum tolerable activity for this therapy.

  13. Modulation of antioxidant potential in liver of mice by kernel oil of cashew nut (Anacardium occidentale) and its lack of tumour promoting ability in DMBA induced skin papillomagenesis.

    PubMed

    Singh, Bimala; Kale, R K; Rao, A R

    2004-04-01

    Cashew nut shell oil has been reported to possess tumour-promoting properties. Therefore, an attempt has been made to study the modulatory effect of cashew nut (Anacardium occidentale) kernel oil on the antioxidant potential in the liver of Swiss albino mice and also to see whether it has tumour-promoting ability like the shell oil. The animals were treated orally with two doses (50 and 100 microl/animal/day) of cashew nut kernel oil for 10 days. The kernel oil was found to enhance the specific activities of SOD, catalase, GST and methylglyoxalase I and the levels of GSH. These results suggested that cashew nut kernel oil has an ability to increase the antioxidant status of animals. The decreased level of lipid peroxidation supported this possibility. The tumour-promoting property of the kernel oil was also examined, and it was found that cashew nut kernel oil did not exhibit any solitary carcinogenic activity.

  14. TMD splitting functions in kT factorization: the real contribution to the gluon-to-gluon splitting.

    PubMed

    Hentschinski, M; Kusina, A; Kutak, K; Serino, M

    2018-01-01

    We calculate the transverse momentum dependent gluon-to-gluon splitting function within kT-factorization, generalizing the framework employed in the calculation of the quark splitting functions in Hautmann et al. (Nucl Phys B 865:54-66, arXiv:1205.1759, 2012), Gituliar et al. (JHEP 01:181, arXiv:1511.08439, 2016), Hentschinski et al. (Phys Rev D 94(11):114013, arXiv:1607.01507, 2016) and demonstrate at the same time the consistency of the extended formalism with previous results. While existing versions of kT-factorized evolution equations already contain a gluon-to-gluon splitting function, i.e. the leading order Balitsky-Fadin-Kuraev-Lipatov (BFKL) kernel or the Ciafaloni-Catani-Fiorani-Marchesini (CCFM) kernel, the obtained splitting function has the important property that it reduces to the leading order BFKL kernel in the high energy limit, to the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) gluon-to-gluon splitting function in the collinear limit, as well as to the CCFM kernel in the soft limit. At the same time we demonstrate that this splitting kernel can be obtained from a direct calculation of the QCD Feynman diagrams, based on a combined implementation of the Curci-Furmanski-Petronzio formalism for the calculation of the collinear splitting functions and the framework of high energy factorization.

  15. Point kernel calculations of skyshine exposure rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roseberry, M.L.; Shultis, J.K.

    1982-02-01

    A simple point kernel model is presented for the calculation of skyshine exposure rates arising from the atmospheric reflection of gamma radiation produced by a vertically collimated or a shielded point source. This model is shown to be in good agreement with benchmark experimental data from a 60Co source for distances out to 700 m.
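    For context, a point-kernel estimate of the direct (unscattered plus buildup) component has the familiar attenuated 1/(4·pi·r^2) form sketched below; the skyshine component itself requires an air-scatter response and is not reproduced here. Parameter names and units are illustrative assumptions, not the model of the paper.

    ```python
    import numpy as np

    def air_kerma_rate_point_kernel(S_gamma_per_s, E_MeV, mu_air_cm1,
                                    mu_en_rho_air_cm2_g, r_m, buildup=1.0):
        """Point-kernel estimate of the air-kerma rate (Gy/s) from an isotropic
        point source at distance r in air, including a buildup factor."""
        r_cm = r_m * 100.0
        # uncollided fluence rate with buildup, photons / (cm^2 s)
        flux = S_gamma_per_s * buildup * np.exp(-mu_air_cm1 * r_cm) / (4.0 * np.pi * r_cm**2)
        # fluence * E * (mu_en/rho) gives MeV/(g s); 1 MeV/g = 1.602e-10 Gy
        return flux * E_MeV * mu_en_rho_air_cm2_g * 1.602e-10
    ```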

  16. Sensitivity Kernels of Seismic Traveltimes and Amplitudes for Quality Factor and Boundary Topography

    NASA Astrophysics Data System (ADS)

    Hsieh, M.; Zhao, L.; Ma, K.

    2010-12-01

    The finite-frequency approach enables seismic tomography to fully utilize the spatial and temporal distributions of the seismic wavefield to improve resolution. In achieving this goal, one of the most important tasks is to compute efficiently and accurately the (Fréchet) sensitivity kernels of finite-frequency seismic observables such as traveltime and amplitude to the perturbations of model parameters. In the scattering-integral approach, the Fréchet kernels are expressed in terms of the strain Green tensors (SGTs), and a pre-established SGT database is necessary to achieve practical efficiency for a three-dimensional reference model in which the SGTs must be calculated numerically. Methods for computing Fréchet kernels for seismic velocities have long been established. In this study, we develop algorithms based on the finite-difference method for calculating Fréchet kernels for the quality factor Qμ and seismic boundary topography. Kernels for the quality factor can be obtained in a way similar to those for seismic velocities with the help of the Hilbert transform. The effects of seismic velocities and quality factor on either traveltime or amplitude are coupled. Kernels for boundary topography involve the spatial gradient of the SGTs, and they also exhibit interesting finite-frequency characteristics. Examples of quality factor and boundary topography kernels will be shown for a realistic model for the Taiwan region with three-dimensional velocity variation as well as surface and Moho discontinuity topography.

  17. Magnetic field influences on the lateral dose response functions of photon-beam detectors: MC study of wall-less water-filled detectors with various densities.

    PubMed

    Looe, Hui Khee; Delfs, Björn; Poppinga, Daniela; Harder, Dietrich; Poppe, Björn

    2017-06-21

    The distortion of detector reading profiles across photon beams in the presence of magnetic fields is a developing subject of clinical photon-beam dosimetry. The underlying modification by the Lorentz force of a detector's lateral dose response function, the convolution kernel transforming the true cross-beam dose profile in water into the detector reading profile, is here studied for the first time. The three basic convolution kernels, the photon fluence response function, the dose deposition kernel, and the lateral dose response function, of wall-less cylindrical detectors filled with water of low, normal and enhanced density are shown by Monte Carlo simulation to be distorted in the prevailing direction of the Lorentz force. The asymmetric shape changes of these convolution kernels in a water medium and in magnetic fields of up to 1.5 T are confined to the lower millimetre range, and they depend on the photon beam quality, the magnetic flux density and the detector's density. The impact of this distortion on detector reading profiles is demonstrated using a narrow photon beam profile. For clinical applications it appears favourable that the magnetic flux density dependent distortion of the lateral dose response function, as far as secondary electron transport is concerned, vanishes in the case of water-equivalent detectors of normal water density. By means of secondary electron history backtracing, the spatial distribution of the photon interactions giving rise either directly to secondary electrons or to scattered photons further downstream producing secondary electrons which contribute to the detector's signal, and their lateral shift due to the Lorentz force, is elucidated. Electron history backtracing also serves to illustrate the correct treatment of the influences of the Lorentz force in the EGSnrc Monte Carlo code applied in this study.

  18. Linked-cluster formulation of electron-hole interaction kernel in real-space representation without using unoccupied states.

    PubMed

    Bayne, Michael G; Scher, Jeremy A; Ellis, Benjamin H; Chakraborty, Arindam

    2018-05-21

    Electron-hole or quasiparticle representation plays a central role in describing electronic excitations in many-electron systems. For charge-neutral excitation, the electron-hole interaction kernel is the quantity of interest for calculating important excitation properties such as optical gap, optical spectra, electron-hole recombination and electron-hole binding energies. The electron-hole interaction kernel can be formally derived from the density-density correlation function using both Green's function and TDDFT formalism. The accurate determination of the electron-hole interaction kernel remains a significant challenge for precise calculations of optical properties in the GW+BSE formalism. From the TDDFT perspective, the electron-hole interaction kernel has been viewed as a path to systematic development of frequency-dependent exchange-correlation functionals. Traditional approaches, such as MBPT formalism, use unoccupied states (which are defined with respect to Fermi vacuum) to construct the electron-hole interaction kernel. However, the inclusion of unoccupied states has long been recognized as the leading computational bottleneck that limits the application of this approach for larger finite systems. In this work, an alternative derivation that avoids using unoccupied states to construct the electron-hole interaction kernel is presented. The central idea of this approach is to use explicitly correlated geminal functions for treating electron-electron correlation for both ground and excited state wave functions. Using this ansatz, it is derived using both diagrammatic and algebraic techniques that the electron-hole interaction kernel can be expressed only in terms of linked closed-loop diagrams. It is proved that the cancellation of unlinked diagrams is a consequence of linked-cluster theorem in real-space representation. The electron-hole interaction kernel derived in this work was used to calculate excitation energies in many-electron systems and results were found to be in good agreement with the EOM-CCSD and GW+BSE methods. The numerical results highlight the effectiveness of the developed method for overcoming the computational barrier of accurately determining the electron-hole interaction kernel to applications of large finite systems such as quantum dots and nanorods.

  19. A mathematical deconvolution formulation for superficial dose distribution measurement by Cerenkov light dosimetry.

    PubMed

    Brost, Eric Edward; Watanabe, Yoichi

    2018-06-01

    Cerenkov photons are created by high-energy radiation beams used for radiation therapy. In this study, we developed a Cerenkov light dosimetry technique to obtain a two-dimensional dose distribution in a superficial region of a medium from the images of Cerenkov photons by using a deconvolution method. An integral equation was derived to represent the Cerenkov photon image acquired by a camera for a given incident high-energy photon beam by using convolution kernels. Subsequently, an equation relating the planar dose at a depth to a Cerenkov photon image using the well-known relationship between the incident beam fluence and the dose distribution in a medium was obtained. The final equation contained a convolution kernel called the Cerenkov dose scatter function (CDSF). The CDSF was obtained by deconvolving the Cerenkov scatter function (CSF) with the dose scatter function (DSF). The GAMOS (Geant4-based Architecture for Medicine-Oriented Simulations) Monte Carlo particle simulation software was used to obtain the CSF and DSF. The dose distribution was calculated from the Cerenkov photon intensity data using an iterative deconvolution method with the CDSF. The theoretical formulation was experimentally evaluated by using an optical phantom irradiated by high-energy photon beams. The intensity of the deconvolved Cerenkov photon image showed linear dependence on the dose rate and the photon beam energy. The relative intensity showed a field size dependence similar to the beam output factor. Deconvolved Cerenkov images showed improvement in dose profiles compared with the raw image data. In particular, the deconvolution significantly improved the agreement in the high dose gradient region, such as in the penumbra. Deconvolution with a single iteration was found to provide the most accurate solution for the dose. Two-dimensional dose distributions of the deconvolved Cerenkov images agreed well with the reference distributions for both square fields and a multileaf collimator (MLC)-defined, irregularly shaped field. The proposed technique improved the accuracy of the Cerenkov photon dosimetry in the penumbra region. The results of this study showed initial validation of the deconvolution method for beam profile measurements in homogeneous media. The new formulation accounted for the physical processes of Cerenkov photon transport in the medium more accurately than previously published methods. © 2018 American Association of Physicists in Medicine.
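    The abstract specifies an iterative deconvolution with the CDSF but not the particular scheme; the sketch below uses a Richardson-Lucy-type iteration as one plausible choice, with array names assumed for illustration (the paper found a single iteration to be most accurate).

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def deconvolve_cerenkov(cerenkov_img, cdsf, n_iter=1):
        """Richardson-Lucy-type iterative deconvolution of a Cerenkov image with the
        Cerenkov dose scatter function (CDSF) to estimate the planar dose."""
        cdsf = cdsf / cdsf.sum()
        cdsf_flip = cdsf[::-1, ::-1]
        estimate = np.clip(cerenkov_img, 1e-12, None)
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, cdsf, mode="same")
            ratio = cerenkov_img / np.clip(blurred, 1e-12, None)
            estimate = estimate * fftconvolve(ratio, cdsf_flip, mode="same")
        return estimate
    ```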

  20. Development of activity pencil beam algorithm using measured distribution data of positron emitter nuclei generated by proton irradiation of targets containing (12)C, (16)O, and (40)Ca nuclei in preparation of clinical application.

    PubMed

    Miyatake, Aya; Nishio, Teiji; Ogino, Takashi

    2011-10-01

    The purpose of this study is to develop a new calculation algorithm that satisfies the requirements for both accuracy and calculation time for a simulation of imaging of the proton-irradiated volume in a patient body in clinical proton therapy. The activity pencil beam algorithm (APB algorithm), a new technique that applies the pencil beam algorithm generally used for proton dose calculations in proton therapy to the calculation of activity distributions, was developed as a calculation algorithm for the activity distributions formed by positron emitter nuclei generated from target nuclear fragment reactions. In the APB algorithm, activity distributions are calculated using an activity pencil beam kernel. The activity pencil beam kernel is constructed using measured activity distributions in the depth direction and calculations in the lateral direction. (12)C, (16)O, and (40)Ca nuclei were determined to be the major target nuclei that constitute the human body and are relevant for the calculation of activity distributions. In this study, "virtual positron emitter nuclei" were defined as the integral yield of the various positron emitter nuclei generated from each target nucleus by target nuclear fragment reactions with the irradiated proton beam. Compounds rich in the target nuclei, namely polyethylene, water (including some gelatin), and calcium oxide, were irradiated using a proton beam. In addition, depth activity distributions of the virtual positron emitter nuclei generated in each compound from target nuclear fragment reactions were measured using a beam ON-LINE PET system mounted on a rotating gantry port (BOLPs-RGp). The measured activity distributions depend on depth or, in other words, energy. The irradiated proton beam energies were 138, 179, and 223 MeV, and the measurement time was about 5 h until the measured activity reached the background level. Furthermore, the activity pencil beam data were generated using the activity pencil beam kernel, composed of the measured depth data and the lateral data including multiple Coulomb scattering approximated by a Gaussian function, and were used for calculating activity distributions. The measured depth activity distributions for every target nucleus and proton beam energy were obtained using BOLPs-RGp. The form of the depth activity distribution was verified, and the data were prepared taking into account the time-dependent change of that form. The time dependence of an activity distribution could be represented by two half-lives. The Gaussian form of the lateral distribution of the activity pencil beam kernel was determined by the effect of multiple Coulomb scattering. Thus, activity pencil beam data incorporating time dependence were obtained in this study. Simulation of imaging of the proton-irradiated volume in a patient body using target nuclear fragment reactions was feasible with the developed APB algorithm, which takes time dependence into account. With the use of the APB algorithm, it was suggested that a simulation system for activity distributions with levels of accuracy and calculation time appropriate for clinical use can be constructed.
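
    A minimal sketch of how an activity pencil beam kernel of the kind described above could be assembled, pairing a measured depth-activity profile with a Gaussian lateral spread for multiple Coulomb scattering; the depth profile and sigma values below are placeholders, not the measured BOLPs-RGp data.

    ```python
    import numpy as np

    def activity_pencil_beam_kernel(depth_activity, sigma_mcs, lateral_grid):
        """Combine a 1D depth-activity profile with a depth-dependent Gaussian
        lateral spread into a 2D kernel A(depth, off-axis)."""
        r2 = lateral_grid[None, :] ** 2
        sig2 = sigma_mcs[:, None] ** 2
        lateral = np.exp(-r2 / (2.0 * sig2)) / np.sqrt(2.0 * np.pi * sig2)
        return depth_activity[:, None] * lateral

    depths = np.linspace(0.0, 20.0, 201)                    # depth in water (cm)
    depth_activity = np.exp(-((depths - 15.0) / 3.0) ** 2)  # placeholder depth profile
    sigma_mcs = 0.1 + 0.03 * depths                         # assumed lateral sigma (cm)
    lateral = np.linspace(-3.0, 3.0, 121)                   # off-axis positions (cm)
    kernel = activity_pencil_beam_kernel(depth_activity, sigma_mcs, lateral)
    ```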

  1. Study on Energy Productivity Ratio (EPR) at palm kernel oil processing factory: case study on PT-X at Sumatera Utara Plantation

    NASA Astrophysics Data System (ADS)

    Haryanto, B.; Bukit, R. Br; Situmeang, E. M.; Christina, E. P.; Pandiangan, F.

    2018-02-01

    The purpose of this study was to determine the performance, productivity, and feasibility of operating a palm kernel processing plant based on the Energy Productivity Ratio (EPR). The EPR is expressed as the ratio of the output energy plus by-products to the input energy. A palm kernel plant processes palm kernels into palm kernel oil. The procedure started from collecting the data needed to quantify the energy input, such as palm kernel prices, energy demand, and depreciation of the factory. The energy output and its by-products comprise the whole production value, such as the palm kernel oil price and the prices of the remaining products such as shells and pulp. The energy equivalent of palm kernel oil was calculated in order to analyze the value of the Energy Productivity Ratio (EPR) based on the processing capacity per year. The investigation was carried out at the kernel oil processing plant PT-X at the Sumatera Utara plantation. The value of EPR was 1.54 (EPR > 1), which indicated that processing palm kernels into palm kernel oil is feasible to operate in terms of energy productivity.
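
    The EPR arithmetic itself is simple; a sketch with made-up energy-equivalent values (not the PT-X plant data) is shown below.

    ```python
    def energy_productivity_ratio(output_value, byproduct_value, input_value):
        """EPR = (output + by-products) / input, all in the same energy-equivalent units."""
        return (output_value + byproduct_value) / input_value

    # placeholder numbers for illustration only
    epr = energy_productivity_ratio(output_value=120.0, byproduct_value=34.0, input_value=100.0)
    print(f"EPR = {epr:.2f}, feasible: {epr > 1}")
    ```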

  2. Praseodymium-142 glass seeds for the brachytherapy of prostate cancer

    NASA Astrophysics Data System (ADS)

    Jung, Jae Won

    A beta-emitting glass seed was proposed for the brachytherapy treatment of prostate cancer. Criteria for seed design were derived and several beta-emitting nuclides were examined for suitability. 142Pr was selected as the isotope of choice. Seeds 0.08 cm in diameter and 0.9 cm long were manufactured for testing. The seeds were activated in the Texas A&M University research reactor. The activity produced was as expected when considering the metastable state and epithermal neutron flux. The MCNP5 Monte Carlo code was used to calculate the quantitative dosimetric parameters suggested in the American Association of Physicists in Medicine (AAPM) TG-43/60. The Monte Carlo calculation results were compared with those from a dose point kernel code, and the dose profiles agree well with each other. The gamma dose of 142Pr was evaluated; it is 0.3 Gy at 1.0 cm for an initial activity of 5.95 mCi and is insignificant to other organs. Measurements were performed to assess the two-dimensional axial dose distributions using Gafchromic radiochromic film. The radiochromic film was calibrated using an X-ray machine that was itself calibrated against a National Institute of Standards and Technology (NIST) traceable ion chamber. A calibration curve was derived using a least squares fit of a second-order polynomial. The measured dose distribution agrees well with results from the Monte Carlo simulation. The dose was 130.8 Gy at 6 mm from the seed center for an initial activity of 5.95 mCi. AAPM TG-43/60 parameters were determined. The reference dose rates at 2 mm and 6 mm were 0.67 and 0.02 cGy/s/mCi, respectively. The geometry function, radial dose function, and anisotropy function were generated.
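
    The film-calibration step mentioned above, a least-squares fit of a second-order polynomial, can be sketched as follows; the optical-density and dose pairs are invented for illustration.

    ```python
    import numpy as np

    net_od = np.array([0.05, 0.12, 0.21, 0.33, 0.47, 0.60])   # hypothetical film readings
    dose_gy = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])       # delivered doses (Gy)

    coeffs = np.polyfit(net_od, dose_gy, deg=2)   # dose = a*OD^2 + b*OD + c
    calibration = np.poly1d(coeffs)
    print("Dose at net OD 0.40:", round(float(calibration(0.40)), 2), "Gy")
    ```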

  3. KINETIC-J: A computational kernel for solving the linearized Vlasov equation applied to calculations of the kinetic, configuration space plasma current for time harmonic wave electric fields

    NASA Astrophysics Data System (ADS)

    Green, David L.; Berry, Lee A.; Simpson, Adam B.; Younkin, Timothy R.

    2018-04-01

    We present the KINETIC-J code, a computational kernel for evaluating the linearized Vlasov equation with application to calculating the kinetic plasma response (current) to an applied time harmonic wave electric field. This code addresses the need for a configuration space evaluation of the plasma current to enable kinetic full-wave solvers for waves in hot plasmas to move beyond the limitations of the traditional Fourier spectral methods. We benchmark the kernel via comparison with the standard k-space forms of the hot plasma conductivity tensor.

  4. Treatment planning for internal emitter therapy: Methods, applications and clinical implications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sgouros, G.

    1999-01-01

    Treatment planning involves three basic steps: (1) a procedure must be devised that will provide the most relevant information, (2) the procedure must be applied and (3) the resulting information must be translated into a definition of the optimum implementation. There are varying degrees of treatment planning that may be implemented in internal emitter therapy. As in chemotherapy, the information from a Phase 1 study may be used to treat patients based upon body surface area. If treatment planning is included on a patient-specific basis, a pretherapy, trace-labeled, administration of the radiopharmaceutical is generally required. The data collected following the tracer dose may range from time-activity curves of blood and whole-body for use in blood, marrow or total body absorbed dose estimation to patient imaging for three-dimensional internal emitter dosimetry. The most ambitious approach requires a three-dimensional set of images representing radionuclide distribution (SPECT or PET) and a corresponding set of images representing anatomy (CT or MRI). The absorbed dose (or dose-rate) distribution may be obtained by convolution of a point kernel with the radioactivity distribution or by direct Monte Carlo calculation. A critical requirement for both techniques is the development of an overall structure that makes it possible, in a routine manner, to input the images, to identify the structures of interest and to display the results of the dose calculations in a clinically relevant manner. 52 refs., 4 figs., 1 tab.
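
    A hedged sketch of the point-kernel convolution route mentioned above: an absorbed dose-rate map obtained by convolving a 3D activity distribution with a dose-point kernel. Both arrays are synthetic placeholders, and the 1/r² kernel is schematic rather than a real radionuclide dose-point kernel.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    activity = np.zeros((64, 64, 64))
    activity[30:34, 30:34, 30:34] = 1.0          # toy uptake region (activity per voxel)

    r = np.indices((21, 21, 21)) - 10            # voxel offsets of the kernel grid
    dist = np.sqrt((r ** 2).sum(axis=0)) + 0.5   # distances, offset to avoid r = 0
    kernel = 1.0 / dist ** 2                     # schematic fall-off, not a real DPK

    dose_rate = fftconvolve(activity, kernel, mode="same")   # dose-rate map
    ```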

  5. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell integration method, or σ ≤ 0.22 using the cell center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different than theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10⁻¹¹ and invasion time error to <5%.
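
    The two discretization schemes compared above can be sketched for a 1D Gaussian kernel as follows; the grid size and sigma are arbitrary, and the cell-integration variant uses the Gaussian CDF evaluated at the cell edges.

    ```python
    import numpy as np
    from math import erf, sqrt

    def kernel_cell_center(sigma, half_width):
        """Sample the Gaussian at cell centers, then renormalize."""
        x = np.arange(-half_width, half_width + 1, dtype=float)
        k = np.exp(-x ** 2 / (2 * sigma ** 2))
        return k / k.sum()

    def kernel_cell_integrated(sigma, half_width):
        """Integrate the Gaussian over each cell via its CDF at the cell edges."""
        edges = np.arange(-half_width - 0.5, half_width + 1.5)
        cdf = np.array([0.5 * (1 + erf(e / (sigma * sqrt(2)))) for e in edges])
        k = np.diff(cdf)
        return k / k.sum()

    print(kernel_cell_center(0.2, 3))        # badly distorted for small sigma
    print(kernel_cell_integrated(0.2, 3))    # much closer to the true cell masses
    ```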

  6. SU-D-201-02: Prediction of Delivered Dose Based On a Joint Histogram of CT and FDG PET Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, M; Choi, Y; Cho, A

    2015-06-15

    Purpose: To investigate whether pre-treatment images can be used in predicting microsphere distribution in tumors. When intra-arterial radioembolization using Y90 microspheres was performed, the microspheres were often delivered non-uniformly within the tumor, which could lead to an inefficient therapy. Therefore, it is important to estimate the distribution of microspheres. Methods: Early arterial phase CT and FDG PET images were acquired for patients with primary liver cancer prior to radioembolization (RE) using Y90 microspheres. Tumor volume was delineated on CT images and fused with FDG PET images. From each voxel (3.9×3.9×3.3 mm3) in the tumor, the Hounsfield unit (HU) from the CT and SUV values from the FDG PET were harvested. We binned both HU and SUV into 11 bins and then calculated a normalized joint histogram in an 11×11 array. Patients also underwent post-treatment Y90 PET imaging. Radiation dose for the tumor was estimated using convolution of the Y90 distribution with a dose-point kernel. We also calculated the fraction of the tumor volume that received a radiation dose greater than 100 Gy. Results: Averaged over 40 patients, 55% of tumor volume received a dose greater than 100 Gy (range: 1.1–100%). The width of the joint histogram was narrower for patients with a high dose. For patients with a low dose, the width was wider and a larger fraction of tumor volume had low HU. Conclusion: We have shown that the pattern of the joint histogram of HU and SUV depends on the delivered dose. The patterns can predict the efficacy of uniform intra-arterial delivery of Y90 microspheres.
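
    A minimal sketch of the normalized 11×11 joint histogram of HU and SUV described above, using random stand-ins for the tumor voxel values.

    ```python
    import numpy as np

    hu = np.random.uniform(-100, 200, size=5000)    # Hounsfield units of tumor voxels
    suv = np.random.uniform(0, 12, size=5000)       # FDG SUV of the same voxels

    joint, hu_edges, suv_edges = np.histogram2d(hu, suv, bins=11)
    joint = joint / joint.sum()                     # normalized 11x11 joint histogram
    ```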

  7. MPACT Subgroup Self-Shielding Efficiency Improvements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stimpson, Shane; Liu, Yuxuan; Collins, Benjamin S.

    Recent developments to improve the efficiency of the MOC solvers in MPACT have yielded effective kernels that loop over several energy groups at once, rather than looping over one group at a time. These kernels have produced roughly a 2x speedup on the MOC sweeping time during eigenvalue calculation. However, the self-shielding subgroup calculation, which typically requires substantial solve time, had not been reevaluated to take advantage of these new kernels. The improvements covered in this report start by integrating the multigroup kernel concepts into the subgroup calculation, which are then used as the basis for further extensions. The next improvement that is covered is what is currently being termed “Lumped Parameter MOC”. Because the subgroup calculation is a purely fixed source problem and multiple sweeps are performed only to update the boundary angular fluxes, the sweep procedure can be condensed to allow for the instantaneous propagation of the flux across a spatial domain, without the need to sweep along all segments in a ray. Once the boundary angular fluxes are considered to be converged, an additional sweep that will tally the scalar flux is completed. The last improvement that is investigated is the possible reduction of the number of azimuthal angles per octant in the shielding sweep. Typically 16 azimuthal angles per octant are used for self-shielding and eigenvalue calculations, but it is possible that the self-shielding sweeps are less sensitive to the number of angles than the full eigenvalue calculation.

  8. Patient-specific Monte Carlo-based dose-kernel approach for inverse planning in afterloading brachytherapy.

    PubMed

    D'Amours, Michel; Pouliot, Jean; Dagnault, Anne; Verhaegen, Frank; Beaulieu, Luc

    2011-12-01

    Brachytherapy planning software relies on the Task Group report 43 dosimetry formalism. This formalism, based on a water approximation, neglects various heterogeneous materials present during treatment. Various studies have suggested that these heterogeneities should be taken into account to improve the treatment quality. The present study sought to demonstrate the feasibility of incorporating Monte Carlo (MC) dosimetry within an inverse planning algorithm to improve the dose conformity and increase the treatment quality. The method was based on dose kernels precalculated in full patient geometries, each representing the dose distribution of a brachytherapy source at a single dwell position obtained with MC simulations and the Geant4 toolkit. These dose kernels are used by the inverse planning by simulated annealing tool to produce a fast MC-based plan. A test was performed for an interstitial brachytherapy breast treatment using two different high-dose-rate brachytherapy sources: the microSelectron iridium-192 source and the electronic brachytherapy source Axxent operating at 50 kVp. A research version of the inverse planning by simulated annealing algorithm was combined with MC to provide a method that fully accounts for the heterogeneities in dose optimization. The effect of the water approximation was found to depend on photon energy, with greater dose attenuation for the lower energies of the Axxent source compared with iridium-192. For the latter, an underdosage of 5.1% for the dose received by 90% of the clinical target volume was found. A new method to optimize afterloading brachytherapy plans that uses MC dosimetric information was developed. Including computed tomography-based information in MC dosimetry in the inverse planning process was shown to take into account the full range of scatter and heterogeneity conditions. This led to significant dose differences compared with the Task Group report 43 approach for the Axxent source. Copyright © 2011 Elsevier Inc. All rights reserved.
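
    The precalculated dose-kernel idea can be illustrated schematically: the total dose is a dwell-time-weighted sum of per-dwell-position Monte Carlo kernels, with the dwell times as the optimization variables (left here to a generic optimizer rather than the simulated annealing algorithm of the study). Array shapes and values are invented.

    ```python
    import numpy as np

    n_dwell, n_vox = 20, 10000
    dose_kernels = np.random.rand(n_dwell, n_vox)   # Gy per second of dwell, from MC
    dwell_times = np.random.rand(n_dwell)           # seconds; the optimization variables

    total_dose = dwell_times @ dose_kernels         # dose at every voxel for this plan
    ```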

  9. Tumour control probability derived from dose distribution in homogeneous and heterogeneous models: assuming similar pharmacokinetics, 125Sn-177Lu is superior to 90Y-177Lu in peptide receptor radiotherapy

    NASA Astrophysics Data System (ADS)

    Walrand, Stephan; Hanin, François-Xavier; Pauwels, Stanislas; Jamar, François

    2012-07-01

    Clinical trials on 177Lu-90Y therapy used empirical activity ratios. Radionuclides (RN) with larger beta maximal range could favourably replace 90Y. Our aim is to provide RN dose-deposition kernels and to compare the tumour control probability (TCP) of RN combinations. Dose kernels were derived by integration of the mono-energetic beta-ray dose distributions (computed using Monte Carlo) weighted by their respective beta spectrum. Nine homogeneous spherical tumours (1-25 mm in diameter) and four spherical tumours including a lattice of cold, but alive, spheres (1, 3, 5, 7 mm in diameter) were modelled. The TCP for 93Y, 90Y and 125Sn in combination with 177Lu in variable proportions (that kept constant the renal cortex biological effective dose) were derived by 3D dose kernel convolution. For a mean tumour-absorbed dose of 180 Gy, 2 mm homogeneous tumours and tumours including 3 mm diameter cold alive spheres were both well controlled (TCP > 0.9) using a 75-25% combination of 177Lu and 90Y activity. However, 125Sn-177Lu achieved a significantly better result by controlling 1 mm-homogeneous tumour simultaneously with tumours including 5 mm diameter cold alive spheres. Clinical trials using RN combinations should use RN proportions tuned to the patient dosimetry. 125Sn production and its coupling to somatostatin analogue appear feasible. Assuming similar pharmacokinetics 125Sn is the best RN for combination with 177Lu in peptide receptor radiotherapy justifying pharmacokinetics studies in rodent of 125Sn-labelled somatostatin analogues.
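
    The spectrum-weighting step described above, assembling a radionuclide dose kernel from mono-energetic electron kernels weighted by the beta spectrum, can be sketched as below; the spectrum and the mono-energetic kernels are toy placeholders, not Monte Carlo results.

    ```python
    import numpy as np

    r = np.linspace(0.05, 12.0, 240)                      # radial distance (mm)
    energies = np.linspace(0.1, 2.3, 23)                  # electron energies (MeV)
    spectrum = np.exp(-((energies - 0.9) ** 2) / 0.3)     # placeholder beta spectrum
    spectrum /= spectrum.sum()

    # toy mono-energetic kernels D_E(r): range grows with energy (not MC data)
    mono_kernels = np.exp(-r[None, :] / (1.5 * energies[:, None])) / r[None, :] ** 2

    nuclide_kernel = (spectrum[:, None] * mono_kernels).sum(axis=0)   # weighted sum
    ```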

  10. SU-E-J-135: Feasibility of Using Quantitative Cone Beam CT for Proton Adaptive Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jingqian, W; Wang, Q; Zhang, X

    2015-06-15

    Purpose: To investigate the feasibility of using scatter corrected cone beam CT (CBCT) for proton adaptive planning. Methods: A phantom study was used to evaluate the CT number difference between the planning CT (pCT), quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units using the adaptive scatter kernel superposition (ASKS) technique, and raw CBCT (rCBCT). After confirming the CT number accuracy, prostate patients, each with a pCT and several sets of weekly CBCT, were investigated for this study. Spot scanning proton treatment plans were independently generated on pCT, qCBCT and rCBCT. The treatment plans were then recalculated on all images. Dose-volume-histogram (DVH) parameters and gamma analysis were used to compare between dose distributions. Results: The phantom study suggested that the Hounsfield unit accuracy for different materials is within 20 HU for qCBCT and over 250 HU for rCBCT. For prostate patients, proton dose could be calculated accurately on qCBCT but not on rCBCT. When the original plan was recalculated on qCBCT, tumor coverage was maintained when the anatomy was consistent with the pCT. However, large dose variance was observed when the patient anatomy changed. An adaptive plan using qCBCT was able to recover tumor coverage and reduce dose to normal tissue. Conclusion: It is feasible to use quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units for proton dose calculation and adaptive planning in proton therapy. Partly supported by Varian Medical Systems.

  11. Travel-time sensitivity kernels in long-range propagation.

    PubMed

    Skarsoulis, E K; Cornuelle, B D; Dzieciuch, M A

    2009-11-01

    Wave-theoretic travel-time sensitivity kernels (TSKs) are calculated in two-dimensional (2D) and three-dimensional (3D) environments and their behavior with increasing propagation range is studied and compared to that of ray-theoretic TSKs and corresponding Fresnel-volumes. The differences between the 2D and 3D TSKs average out when horizontal or cross-range marginals are considered, which indicates that they are not important in the case of range-independent sound-speed perturbations or perturbations of large scale compared to the lateral TSK extent. With increasing range, the wave-theoretic TSKs expand in the horizontal cross-range direction, their cross-range extent being comparable to that of the corresponding free-space Fresnel zone, whereas they remain bounded in the vertical. Vertical travel-time sensitivity kernels (VTSKs)-one-dimensional kernels describing the effect of horizontally uniform sound-speed changes on travel-times-are calculated analytically using a perturbation approach, and also numerically, as horizontal marginals of the corresponding TSKs. Good agreement between analytical and numerical VTSKs, as well as between 2D and 3D VTSKs, is found. As an alternative method to obtain wave-theoretic sensitivity kernels, the parabolic approximation is used; the resulting TSKs and VTSKs are in good agreement with normal-mode results. With increasing range, the wave-theoretic VTSKs approach the corresponding ray-theoretic sensitivity kernels.

  12. Integrating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Wilton, Donald R.

    2008-01-01

    A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4].When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different than those of the wire kernel, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integration of the gradient of the wire kernel needs to be calculated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form

  13. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Dou, Yong

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms. PMID:28293256

  14. Ranking Support Vector Machine with Kernel Approximation.

    PubMed

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
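
    One of the two approximations named above, random Fourier features for an RBF kernel, can be sketched as follows; this is a generic illustration, not the authors' RankSVM code.

    ```python
    import numpy as np

    def random_fourier_features(X, n_features=200, gamma=1.0, seed=0):
        """Map X so that Z @ Z.T approximates the RBF kernel exp(-gamma * ||x - y||^2)."""
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
        b = rng.uniform(0, 2 * np.pi, size=n_features)
        return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    X = np.random.rand(500, 10)
    Z = random_fourier_features(X)     # use Z with a linear ranking model instead of the full kernel matrix
    ```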

  15. Quantifying the sensitivity of post-glacial sea level change to laterally varying viscosity

    NASA Astrophysics Data System (ADS)

    Crawford, Ophelia; Al-Attar, David; Tromp, Jeroen; Mitrovica, Jerry X.; Austermann, Jacqueline; Lau, Harriet C. P.

    2018-05-01

    We present a method for calculating the derivatives of measurements of glacial isostatic adjustment (GIA) with respect to the viscosity structure of the Earth and the ice sheet history. These derivatives, or kernels, quantify the linearised sensitivity of measurements to the underlying model parameters. The adjoint method is used to enable efficient calculation of theoretically exact sensitivity kernels within laterally heterogeneous earth models that can have a range of linear or non-linear viscoelastic rheologies. We first present a new approach to calculate GIA in the time domain, which, in contrast to the more usual formulation in the Laplace domain, is well suited to continuously varying earth models and to the use of the adjoint method. Benchmarking results show excellent agreement between our formulation and previous methods. We illustrate the potential applications of the kernels calculated in this way through a range of numerical calculations relative to a spherically symmetric background model. The complex spatial patterns of the sensitivities are not intuitive, and this is the first time that such effects are quantified in an efficient and accurate manner.

  16. Metabolites Identified during Varied Doses of Aspergillus Species in Zea mays Grains, and Their Correlation with Aflatoxin Levels

    PubMed Central

    Chrysanthopoulos, Panagiotis K.; Hodson, Mark P.; Darnell, Ross; Korie, Sam

    2018-01-01

    Aflatoxin contamination is associated with the development of aflatoxigenic fungi such as Aspergillus flavus and A. parasiticus on food grains. This study was aimed at investigating metabolites produced during fungal development on maize and their correlation with aflatoxin levels. Maize cobs were harvested at R3 (milk), R4 (dough), and R5 (dent) stages of maturity. Individual kernels were inoculated in petri dishes with four doses of fungal spores. Fungal colonisation, metabolite profile, and aflatoxin levels were examined. Grain colonisation decreased with kernel maturity: milk-, dough-, and dent-stage kernels were colonised at approximately 100%, 60%, and 30%, respectively. Aflatoxin levels increased with dose at the dough and dent stages. Polar metabolites including alanine, proline, serine, valine, inositol, iso-leucine, sucrose, fructose, trehalose, turanose, mannitol, glycerol, arabitol, inositol, myo-inositol, and some intermediates of the tricarboxylic acid cycle (TCA—also known as citric acid or Krebs cycle) were important for dose classification. Important non-polar metabolites included arachidic, palmitic, stearic, 3,4-xylylic, and margaric acids. Aflatoxin levels correlated with levels of several polar metabolites. The strongest positive and negative correlations were with arabitol (R = 0.48) and turanose (R = −0.53), respectively. Several metabolites were interconnected with the TCA cycle; these interconnections varied depending upon the grain maturity. PMID:29735944

  17. Metabolites Identified during Varied Doses of Aspergillus Species in Zea mays Grains, and Their Correlation with Aflatoxin Levels.

    PubMed

    Falade, Titilayo D O; Chrysanthopoulos, Panagiotis K; Hodson, Mark P; Sultanbawa, Yasmina; Fletcher, Mary; Darnell, Ross; Korie, Sam; Fox, Glen

    2018-05-07

    Aflatoxin contamination is associated with the development of aflatoxigenic fungi such as Aspergillus flavus and A. parasiticus on food grains. This study was aimed at investigating metabolites produced during fungal development on maize and their correlation with aflatoxin levels. Maize cobs were harvested at R3 (milk), R4 (dough), and R5 (dent) stages of maturity. Individual kernels were inoculated in petri dishes with four doses of fungal spores. Fungal colonisation, metabolite profile, and aflatoxin levels were examined. Grain colonisation decreased with kernel maturity: milk-, dough-, and dent-stage kernels were colonised at approximately 100%, 60%, and 30%, respectively. Aflatoxin levels increased with dose at the dough and dent stages. Polar metabolites including alanine, proline, serine, valine, inositol, iso-leucine, sucrose, fructose, trehalose, turanose, mannitol, glycerol, arabitol, inositol, myo-inositol, and some intermediates of the tricarboxylic acid cycle (TCA—also known as citric acid or Krebs cycle) were important for dose classification. Important non-polar metabolites included arachidic, palmitic, stearic, 3,4-xylylic, and margaric acids. Aflatoxin levels correlated with levels of several polar metabolites. The strongest positive and negative correlations were with arabitol (R = 0.48) and turanose (R = −0.53), respectively. Several metabolites were interconnected with the TCA cycle; these interconnections varied depending upon the grain maturity.

  18. Validity of linear measurements of the jaws using ultralow-dose MDCT and the iterative techniques of ASIR and MBIR.

    PubMed

    Al-Ekrish, Asma'a A; Al-Shawaf, Reema; Schullian, Peter; Al-Sadhan, Ra'ed; Hörmann, Romed; Widmann, Gerlig

    2016-10-01

    To assess the comparability of linear measurements of dental implant sites recorded from multidetector computed tomography (MDCT) images obtained using standard-dose filtered backprojection (FBP) technique with those from various ultralow doses combined with FBP, adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques. The results of the study may contribute to MDCT dose optimization for dental implant site imaging. MDCT scans of two cadavers were acquired using a standard reference protocol and four ultralow-dose test protocols (TP). The volume CT dose index of the different dose protocols ranged from a maximum of 30.48-36.71 mGy to a minimum of 0.44-0.53 mGy. All scans were reconstructed using FBP, ASIR-50, ASIR-100, and MBIR, and either a bone or standard reconstruction kernel. Linear measurements were recorded from standardized images of the jaws by two examiners. Intra- and inter-examiner reliability of the measurements were analyzed using Cronbach's alpha and inter-item correlation. Agreement between the measurements obtained with the reference-dose/FBP protocol and each of the test protocols was determined with Bland-Altman plots and linear regression. Statistical significance was set at a P-value of 0.05. No systematic variation was found between the linear measurements obtained with the reference protocol and the other imaging protocols. The only exceptions were TP3/ASIR-50 (bone kernel) and TP4/ASIR-100 (bone and standard kernels). The mean measurement differences between these three protocols and the reference protocol were within ±0.1 mm, with the 95 % confidence interval limits being within the range of ±1.15 mm. A nearly 97.5 % reduction in dose did not significantly affect the height and width measurements of edentulous jaws regardless of the reconstruction algorithm used.

  19. In vitro evaluation of a new iterative reconstruction algorithm for dose reduction in coronary artery calcium scoring

    PubMed Central

    Allmendinger, Thomas; Kunz, Andreas S; Veyhl-Wichmann, Maike; Ergün, Süleyman; Bley, Thorsten A; Petritsch, Bernhard

    2017-01-01

    Background Coronary artery calcium (CAC) scoring is a widespread tool for cardiac risk assessment in asymptomatic patients and accompanying possible adverse effects, i.e. radiation exposure, should be as low as reasonably achievable. Purpose To evaluate a new iterative reconstruction (IR) algorithm for dose reduction of in vitro coronary artery calcium scoring at different tube currents. Material and Methods An anthropomorphic calcium scoring phantom was scanned in different configurations simulating slim, average-sized, and large patients. A standard calcium scoring protocol was performed on a third-generation dual-source CT at 120 kVp tube voltage. Reference tube current was 80 mAs as standard and stepwise reduced to 60, 40, 20, and 10 mAs. Images were reconstructed with weighted filtered back projection (wFBP) and a new version of an established IR kernel at different strength levels. Calcifications were quantified calculating Agatston and volume scores. Subjective image quality was visualized with scans of an ex vivo human heart. Results In general, Agatston and volume scores remained relatively stable between 80 and 40 mAs and increased at lower tube currents, particularly in the medium and large phantom. IR reduced this effect, as both Agatston and volume scores decreased with increasing levels of IR compared to wFBP (P < 0.001). Depending on selected parameters, radiation dose could be lowered by up to 86% in the large size phantom when selecting a reference tube current of 10 mAs with resulting Agatston levels close to the reference settings. Conclusion New iterative reconstruction kernels may allow for reduction in tube current for established Agatston scoring protocols and consequently for substantial reduction in radiation exposure. PMID:28607763

  20. Improvements to the kernel function method of steady, subsonic lifting surface theory

    NASA Technical Reports Server (NTRS)

    Medan, R. T.

    1974-01-01

    The application of a kernel function lifting surface method to three dimensional, thin wing theory is discussed. A technique for determining the influence functions is presented. The technique is shown to require fewer quadrature points, while still calculating the influence functions accurately enough to guarantee convergence with an increasing number of spanwise quadrature points. The method also treats control points on the wing leading and trailing edges. The report introduces and employs an aspect of the kernel function method which apparently has never been used before and which significantly enhances the efficiency of the kernel function approach.

  1. Direct Measurement of Wave Kernels in Time-Distance Helioseismology

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.

    2006-01-01

    Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel contains similar features to a theoretical damping kernel but not for a source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.

  2. Initial Simulations of RF Waves in Hot Plasmas Using the FullWave Code

    NASA Astrophysics Data System (ADS)

    Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo

    2017-10-01

    FullWave is a simulation tool that models RF fields in hot inhomogeneous magnetized plasmas. The wave equations with a linearized hot plasma dielectric response are solved in configuration space on an adaptive cloud of computational points. The nonlocal hot plasma dielectric response is formulated by calculating the plasma conductivity kernel based on the solution of the linearized Vlasov equation in an inhomogeneous magnetic field. In an rf field, the hot plasma dielectric response is limited to a distance of a few particle Larmor radii near the magnetic field line passing through the test point. The localization of the hot plasma dielectric response results in a sparse matrix for the problem, which significantly reduces the size of the problem and makes the simulations faster. We will present the initial results of modeling of rf waves using the FullWave code, including calculation of the nonlocal conductivity kernel in 2D tokamak geometry; the interpolation of the conductivity kernel from test points to the adaptive cloud of computational points; and the results of self-consistent simulations of 2D rf fields using the calculated hot plasma conductivity kernel in a tokamak plasma with reduced parameters. Work supported by the US DOE SBIR program.

  3. Regional teleseismic body-wave tomography with component-differential finite-frequency sensitivity kernels

    NASA Astrophysics Data System (ADS)

    Yu, Y.; Shen, Y.; Chen, Y. J.

    2015-12-01

    By using ray theory in conjunction with the Born approximation, Dahlen et al. [2000] computed 3-D sensitivity kernels for finite-frequency seismic traveltimes. A series of studies have been conducted based on this theory to model the mantle velocity structure [e.g., Hung et al., 2004; Montelli et al., 2004; Ren and Shen, 2008; Yang et al., 2009; Liang et al., 2011; Tang et al., 2014]. One of the simplifications in the calculation of the kernels is the paraxial assumption, which may not be strictly valid near the receiver, the region of interest in regional teleseismic tomography. In this study, we improve the accuracy of traveltime sensitivity kernels of the first P arrival by eliminating the paraxial approximation. For calculation efficiency, the traveltime table built by the Fast Marching Method (FMM) is used to calculate both the wave vector and the geometrical spreading at every grid in the whole volume. The improved kernels maintain the sign, but with different amplitudes at different locations. We also find that when the directivity of the scattered wave is being taken into consideration, the differential sensitivity kernel of traveltimes measured at the vertical and radial component of the same receiver concentrates beneath the receiver, which can be used to invert for the structure inside the Earth. Compared with conventional teleseismic tomography, which uses the differential traveltimes between two stations in an array, this method is not affected by instrument response and timing errors, and reduces the uncertainty caused by the finite dimension of the model in regional tomography. In addition, the cross-dependence of P traveltimes to S-wave velocity anomaly is significant and sensitive to the structure beneath the receiver. So with the component-differential finite-frequency sensitivity kernel, the anomaly of both P-wave and S-wave velocity and Vp/Vs ratio can be achieved at the same time.

  4. Different kernel functions due to rainfall response from borehole strainmeter in Taiwan

    NASA Astrophysics Data System (ADS)

    Yen Chen, Chih; Hu, Jyr Ching; LIu, Chi Ching

    2014-05-01

    In order to understand the processes that induce earthquakes, a project to monitor fault activity using three-component Gladwin Tensor Strainmeters (GTSM) has been underway since 2003 in Taiwan, one of the most active seismic regions in the world. The observed strain contains several different effects, including barometric, tidal, groundwater, precipitation, tectonic, seismic, and other irregular noise. After removing the response of tides and air pressure from the strain, anomalies closely related to rainfall over short times, on the order of days, remain. The strain response induced by rainfall can be separated into two parts, as is observed in groundwater: a slow response and a quick response. The quick response reflects the strain responding to the load of the fallen water on the ground surface. A kernel function describes the continuous response induced by a unit of precipitation in the time domain. We split the quick response from the data after removing the tidal and barometric responses and then calculate the kernel function by a deconvolution method. In addition, an average kernel function was calculated to reduce the noise level. Five of the sites installed by CGS Taiwan were selected, and kernel functions were calculated for each individual site. The results show that the rainfall response may differ among environmental settings. For stations sited on gentle terrain, the kernel function for each site shows a similar trend: it rises quickly to a maximum in 1 to 2 h and then decays gently toward zero over a period of 2-3 days. For sites located beside rivers, however, a second peak of the function appears when water collected in the catchment flows past the site, in accordance with the hydrograph of the creeks. Moreover, at sites in landslide hazard areas, such as DARB in ChiaYi, landslides occur when more rainfall is stored, and the curve of the kernel function is then controlled by landslides and debris flows.
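
    A possible sketch of the deconvolution step, assuming the strain residual is modelled as rainfall convolved with an hourly kernel and recovered by regularized least squares on the convolution (Toeplitz) matrix; the series lengths, kernel length, and regularization weight are arbitrary choices, not those of the study.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    def estimate_kernel(rain, strain, kernel_len=72, lam=1e-2):
        """Solve strain ~= rain convolved with h for an hourly kernel h by
        ridge-regularized least squares on the convolution (Toeplitz) matrix."""
        first_col = np.r_[rain, np.zeros(kernel_len - 1)]
        A = toeplitz(first_col, np.zeros(kernel_len))[: len(strain)]
        return np.linalg.solve(A.T @ A + lam * np.eye(kernel_len), A.T @ strain)

    rain = np.random.rand(500)                                        # hourly rainfall, placeholder
    strain = np.convolve(rain, np.exp(-np.arange(72) / 24.0))[:500]   # synthetic strain residual
    kernel = estimate_kernel(rain, strain)                            # recovered hourly kernel
    ```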

  5. Automatically-generated rectal dose constraints in intensity-modulated radiation therapy for prostate cancer

    NASA Astrophysics Data System (ADS)

    Hwang, Taejin; Kim, Yong Nam; Kim, Soo Kon; Kang, Sei-Kwon; Cheong, Kwang-Ho; Park, Soah; Yoon, Jai-Woong; Han, Taejin; Kim, Haeyoung; Lee, Meyeon; Kim, Kyoung-Joo; Bae, Hoonsik; Suh, Tae-Suk

    2015-06-01

    The dose constraint during prostate intensity-modulated radiation therapy (IMRT) optimization should be patient-specific for better rectum sparing. The aims of this study are to suggest a novel method for automatically generating a patient-specific dose constraint by using an experience-based dose volume histogram (DVH) of the rectum and to evaluate the potential of such a dose constraint qualitatively. The normal tissue complication probabilities (NTCPs) of the rectum with respect to V%ratio in our study were divided into three groups, where V%ratio was defined as the percent ratio of the rectal volume overlapping the planning target volume (PTV) to the rectal volume: (1) the rectal NTCPs in the previous study (clinical data), (2) those statistically generated by using the standard normal distribution (calculated data), and (3) those generated by combining the calculated data and the clinical data (mixed data). In the calculated data, a random number whose mean value was on the fitted curve described in the clinical data and whose standard deviation was 1% was generated by using the `randn' function in the MATLAB program and was used. For each group, we validated whether the probability density function (PDF) of the rectal NTCP could be automatically generated with the density estimation method by using a Gaussian kernel. The results revealed that the rectal NTCP probability increased in proportion to V%ratio, that the predictive rectal NTCP was patient-specific, and that the starting point of IMRT optimization for the given patient might be different. The PDF of the rectal NTCP was obtained automatically for each group except that the smoothness of the probability distribution increased with increasing number of data and with increasing window width. We showed that during the prostate IMRT optimization, the patient-specific dose constraints could be automatically generated and that our method could reduce the IMRT optimization time as well as maintain the IMRT plan quality.
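
    A minimal sketch of the density-estimation step mentioned above, a Gaussian-kernel PDF of rectal NTCP values; the NTCP samples below are synthetic, not clinical data.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    ntcp_samples = np.clip(np.random.normal(0.12, 0.03, size=200), 0.0, 1.0)   # synthetic NTCPs
    pdf = gaussian_kde(ntcp_samples)            # Gaussian kernel, bandwidth by Scott's rule

    grid = np.linspace(0.0, 0.3, 300)
    density = pdf(grid)                         # estimated probability density function
    ```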

  6. Integrating semantic information into multiple kernels for protein-protein interaction extraction from biomedical literatures.

    PubMed

    Li, Lishuang; Zhang, Panpan; Zheng, Tianfu; Zhang, Hongying; Jiang, Zhenchao; Huang, Degen

    2014-01-01

    Protein-Protein Interaction (PPI) extraction is an important task in the biomedical information extraction. Presently, many machine learning methods for PPI extraction have achieved promising results. However, the performance is still not satisfactory. One reason is that the semantic resources were basically ignored. In this paper, we propose a multiple-kernel learning-based approach to extract PPIs, combining the feature-based kernel, tree kernel and semantic kernel. Particularly, we extend the shortest path-enclosed tree kernel (SPT) by a dynamic extended strategy to retrieve the richer syntactic information. Our semantic kernel calculates the protein-protein pair similarity and the context similarity based on two semantic resources: WordNet and Medical Subject Heading (MeSH). We evaluate our method with Support Vector Machine (SVM) and achieve an F-score of 69.40% and an AUC of 92.00%, which show that our method outperforms most of the state-of-the-art systems by integrating semantic information.
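
    The multiple-kernel combination can be illustrated schematically as a weighted sum of precomputed kernel matrices passed to an SVM with a precomputed kernel; the matrices, weights, and labels below are placeholders, and the real feature, tree, and semantic kernels would come from the parsers and resources described above.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    n = 300
    rng = np.random.default_rng(0)
    mats = [rng.random((n, n)) for _ in range(3)]
    K_feature, K_tree, K_semantic = [m @ m.T for m in mats]   # make each matrix PSD

    K = 0.5 * K_feature + 0.3 * K_tree + 0.2 * K_semantic     # weighted kernel combination
    y = rng.integers(0, 2, size=n)                            # placeholder PPI labels

    clf = SVC(kernel="precomputed").fit(K, y)
    decision = clf.decision_function(K)                       # scores on the training kernel
    ```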

  7. Novel characterization method of impedance cardiography signals using time-frequency distributions.

    PubMed

    Escrivá Muñoz, Jesús; Pan, Y; Ge, S; Jensen, E W; Vallverdú, M

    2018-03-16

    The purpose of this document is to describe a methodology to select the most adequate time-frequency distribution (TFD) kernel for the characterization of impedance cardiography signals (ICG). The predominant ICG beat was extracted from a patient and was synthetized using time-frequency variant Fourier approximations. These synthetized signals were used to optimize several TFD kernels according to a performance maximization. The optimized kernels were tested for noise resistance on a clinical database. The resulting optimized TFD kernels are presented with their performance calculated using newly proposed methods. The procedure explained in this work showcases a new method to select an appropriate kernel for ICG signals and compares the performance of different time-frequency kernels found in the literature for the case of ICG signals. We conclude that, for ICG signals, the performance (P) of the spectrogram with either Hanning or Hamming windows (P = 0.780) and the extended modified beta distribution (P = 0.765) provided similar results, higher than the rest of analyzed kernels. Graphical abstract Flowchart for the optimization of time-frequency distribution kernels for impedance cardiography signals.

  8. Developments of Finite-Frequency Seismic Theory and Applications to Regional Tomographic Imaging

    DTIC Science & Technology

    2009-01-31

    In this project, we use “banana-doughnut” sensitivity kernels of teleseismic body waves to image the crust and mantle beneath eastern Eurasia. We have collected and processed ... and replaced body-wave ray paths with “banana-doughnut” sensitivity kernels calculated in 1D (Dahlen et al., 2000; Hung et al., 2000; Zhao et al., 2000).

  9. Design and dosimetry of a few leaf electron collimator for energy modulated electron therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Yahya, Khalid; Verhaegen, Frank; Seuntjens, Jan

    2007-12-15

    Despite the capability of energy modulated electron therapy (EMET) to achieve highly conformal dose distributions in superficial targets, it has not been widely implemented due to problems inherent in electron beam radiotherapy, such as planning, dosimetry accuracy, and verification, as well as a lack of systems for automated delivery. In previous work we proposed a novel technique to deliver EMET using an automated 'few leaf electron collimator' (FLEC) that consists of four motor-driven leaves fit in a standard clinical electron beam applicator. Integrated with a Monte Carlo based optimization algorithm that utilizes patient-specific dose kernels, treatment delivery was incorporated within the linear accelerator operation. The FLEC was envisioned to work as an accessory tool added to the clinical accelerator. In this article the design and construction of the FLEC prototype that matches our compact design goals are presented. It is controlled using an in-house developed EMET controller. The structure of the software and the hardware characteristics of the EMET controller are demonstrated. Using a parallel plate ionization chamber, output measurements were obtained to validate the Monte Carlo calculations for a range of fields with different energies and sizes. Further verifications were also performed by comparing 1-D and 2-D dose distributions using energy independent radiochromic films. Comparisons between Monte Carlo calculations and measurements of complex intensity map deliveries show an overall agreement to within ±3%. This work confirms our design objectives for the FLEC, which allow for automated delivery of EMET. Furthermore, the Monte Carlo dose calculation engine required for EMET planning was validated. The result supports the potential of the prototype FLEC for the planning and delivery of EMET.

  10. Bessel function expansion to reduce the calculation time and memory usage for cylindrical computer-generated holograms.

    PubMed

    Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko

    2017-07-10

    This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transformation of the kernel function involving this convolution integral is analytically performed using a Bessel function expansion. The analytical solution can drastically reduce the calculation time and the memory usage without any cost, compared with the numerical method using fast Fourier transform to Fourier transform the kernel function. In this study, we present the analytical derivation, the efficient calculation of Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.

  11. Lung nodule detection by microdose CT versus chest radiography (standard and dual-energy subtracted).

    PubMed

    Ebner, Lukas; Bütikofer, Yanik; Ott, Daniel; Huber, Adrian; Landau, Julia; Roos, Justus E; Heverhagen, Johannes T; Christe, Andreas

    2015-04-01

    The purpose of this study was to investigate the feasibility of microdose CT, using a dose comparable to that of conventional chest radiographs in two planes including dual-energy subtraction, for lung nodule assessment. We investigated 65 chest phantoms with 141 lung nodules, using an anthropomorphic chest phantom with artificial lung nodules. Microdose CT parameters were 80 kV and 6 mAs, with a pitch of 2.2. Iterative reconstruction algorithms and an integrated circuit detector system (Stellar, Siemens Healthcare) were applied for maximum dose reduction. Maximum intensity projections (MIPs) were reconstructed. Chest radiographs were acquired in two projections with bone suppression. Four blinded radiologists interpreted the images in random order. A soft-tissue CT kernel (I30f) delivered better sensitivities in a pilot study than a hard kernel (I70f), with respective mean (SD) sensitivities of 91.1%±2.2% versus 85.6%±5.6% (p=0.041). Nodule size was measured accurately for all kernels. Mean clustered nodule sensitivity with chest radiography was 45.7%±8.1% (with bone suppression, 46.1%±8%; p=0.94); for microdose CT, nodule sensitivity was 83.6%±9% without MIP (with additional MIP, 92.5%±6%; p<10⁻³). Individual sensitivities of microdose CT for readers 1, 2, 3, and 4 were 84.3%, 90.7%, 68.6%, and 45.0%, respectively. Sensitivities with chest radiography for readers 1, 2, 3, and 4 were 42.9%, 58.6%, 36.4%, and 90.7%, respectively. In the per-phantom analysis, the respective sensitivities of microdose CT versus chest radiography were 96.2% and 75% (p<10⁻⁶). The effective dose for chest radiography including dual-energy subtraction was 0.242 mSv; for microdose CT, the applied dose was 0.1323 mSv. Microdose CT is better than the combination of chest radiography and dual-energy subtraction for the detection of solid nodules between 5 and 12 mm at a lower dose level of 0.13 mSv. Soft-tissue kernels allow better sensitivities. These preliminary results indicate that microdose CT has the potential to replace conventional chest radiography for lung nodule detection.

  12. Heavy and Heavy-Light Mesons in the Covariant Spectator Theory

    NASA Astrophysics Data System (ADS)

    Stadler, Alfred; Leitão, Sofia; Peña, M. T.; Biernat, Elmar P.

    2018-05-01

    The masses and vertex functions of heavy and heavy-light mesons, described as quark-antiquark bound states, are calculated with the Covariant Spectator Theory (CST). We use a kernel with an adjustable mixture of Lorentz scalar, pseudoscalar, and vector linear confining interaction, together with a one-gluon-exchange kernel. A series of fits to the heavy and heavy-light meson spectrum were calculated, and we discuss what conclusions can be drawn from it, especially about the Lorentz structure of the kernel. We also apply the Brodsky-Huang-Lepage prescription to express the CST wave functions for heavy quarkonia in terms of light-front variables. They agree remarkably well with light-front wave functions obtained in the Hamiltonian basis light-front quantization approach, even in excited states.

  13. Characterization of a maximum-likelihood nonparametric density estimator of kernel type

    NASA Technical Reports Server (NTRS)

    Geman, S.; Mcclure, D. E.

    1982-01-01

    Kernel-type density estimators calculated by the method of sieves are considered. Proofs are presented for the characterization theorem: Let x(1), x(2), ..., x(n) be a random sample from a population with density f(0). Let sigma 0 and consider estimators f of f(0) defined by (1).

  14. Effect of Acrocomia aculeata Kernel Oil on Adiposity in Type 2 Diabetic Rats.

    PubMed

    Nunes, Ângela A; Buccini, Danieli F; Jaques, Jeandre A S; Portugal, Luciane C; Guimarães, Rita C A; Favaro, Simone P; Caldas, Ruy A; Carvalho, Cristiano M E

    2018-03-01

    The macauba palm (Acrocomia aculeata) is native to tropical America and is found mostly in the Cerrado and Pantanal biomes. The fruits provide an oily pulp, rich in long chain fatty acids, and a kernel that comprises more than 50% lipids rich in medium chain fatty acids (MCFA). Based on biochemical and nutritional evidence, MCFA are readily catabolized and can reduce body fat accumulation. In this study, an animal model was employed to evaluate the effect of Acrocomia aculeata kernel oil (AKO) on the blood glucose level and the fatty acid deposit in the epididymal adipose tissue. The A. aculeata kernel oil obtained by cold pressing presented suitable quality as an edible oil. Its fatty acid profile indicates a high concentration of MCFA, mainly lauric, capric, and caprylic acids. Type 2 diabetic rats fed that kernel oil showed a reduction of blood glucose level in comparison with the diabetic control group. Acrocomia aculeata kernel oil showed a hypoglycemic effect. A small fraction of the total dietary medium chain fatty acids accumulated in the epididymal adipose tissue of rats fed AKO at both low and high doses, and caprylic acid did not deposit at all.

  15. Modeling RF Fields in Hot Plasmas with Parallel Full Wave Code

    NASA Astrophysics Data System (ADS)

    Spencer, Andrew; Svidzinski, Vladimir; Zhao, Liangji; Galkin, Sergei; Kim, Jin-Soo

    2016-10-01

    FAR-TECH, Inc. is developing a suite of full wave RF plasma codes. The suite is based on a meshless formulation in configuration space with an adaptive cloud of computational points (CCP) and uses the hot plasma conductivity kernel to model the nonlocal plasma dielectric response. The conductivity kernel is calculated by numerically integrating the linearized Vlasov equation along unperturbed particle trajectories. Work has been done on the following calculations: 1) the conductivity kernel in hot plasmas, 2) a monitor function based on analytic solutions of the cold-plasma dispersion relation, 3) an adaptive CCP based on the monitor function, 4) stencils to approximate the wave equations on the CCP, 5) the solution to the full wave equations in the cold-plasma model in tokamak geometry for the ECRH and ICRH ranges of frequencies, and 6) the solution to the wave equations using the calculated hot plasma conductivity kernel. We will present results on using a meshless formulation on an adaptive CCP to solve the wave equations and on implementing the nonlocal hot plasma dielectric response in the wave equations. The presentation will include numerical results of wave propagation and absorption in the cold and hot tokamak plasma RF models, using DIII-D geometry and plasma parameters. Work is supported by the U.S. DOE SBIR program.

  16. Analysis of the power flow in nonlinear oscillators driven by random excitation using the first Wiener kernel

    NASA Astrophysics Data System (ADS)

    Hawes, D. H.; Langley, R. S.

    2018-01-01

    Random excitation of mechanical systems occurs in a wide variety of structures and, in some applications, calculation of the power dissipated by such a system will be of interest. In this paper, using the Wiener series, a general methodology is developed for calculating the power dissipated by a general nonlinear multi-degree-of-freedom oscillatory system excited by random Gaussian base motion of any spectrum. The Wiener series method is most commonly applied to systems with white noise inputs, but can be extended to encompass a general non-white input. From the extended series a simple expression for the power dissipated can be derived in terms of the first term, or kernel, of the series and the spectrum of the input. Calculation of the first kernel can be performed either via numerical simulations or from experimental data, and a useful property of the kernel, namely that the integral over its frequency domain representation is proportional to the oscillating mass, is derived. The resulting equations offer a simple conceptual analysis of the power flow in nonlinear randomly excited systems and hence assist the design of any system where power dissipation is a consideration. The results are validated both numerically and experimentally using a base-excited cantilever beam with a nonlinear restoring force produced by magnets.
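    As a concrete, much simplified illustration of the kernel-identification step, the sketch below simulates a linear base-excited oscillator, estimates the first Wiener kernel in the frequency domain as the ratio of the input-output cross-spectrum to the input spectrum, and computes the damper power directly in the time domain for comparison. All parameter values are assumed, and this linear toy does not reproduce the paper's nonlinear, non-white analysis.

      import numpy as np
      from scipy import signal

      # SDOF base-excited oscillator in the relative coordinate z = x - y:
      #   m*z'' + c*z' + k*z = -m*a_base(t)
      m, c, k = 1.0, 0.8, 400.0          # mass, damping, stiffness (assumed values)
      fs, T = 1000.0, 200.0              # sample rate [Hz] and duration [s]
      t = np.arange(0.0, T, 1.0 / fs)
      a_base = np.random.randn(t.size)   # broadband base acceleration (toy input)

      # integrate with a simple semi-implicit Euler scheme
      z = np.zeros_like(t)
      v = np.zeros_like(t)
      for i in range(t.size - 1):
          acc = -(c * v[i] + k * z[i]) / m - a_base[i]
          v[i + 1] = v[i] + acc / fs
          z[i + 1] = z[i] + v[i + 1] / fs

      # First Wiener kernel (frequency domain) for Gaussian input:
      # ratio of input-output cross-spectrum to input auto-spectrum
      f, Sxy = signal.csd(a_base, z, fs=fs, nperseg=4096)
      f, Sxx = signal.welch(a_base, fs=fs, nperseg=4096)
      K1 = Sxy / Sxx

      # Power dissipated by the damper, computed directly for reference
      P_direct = np.mean(c * v**2)
      print("direct estimate of dissipated power:", P_direct)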

  17. A framework for optimal kernel-based manifold embedding of medical image data.

    PubMed

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.
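    As an illustration of choosing a kernel from a pool of candidates, the sketch below embeds a synthetic manifold with several candidate kernels and scores each embedding; the generic trustworthiness score is used here only as a stand-in for the selection measures proposed in the paper, and all kernels and parameters are assumed.

      import numpy as np
      from sklearn.datasets import make_swiss_roll
      from sklearn.decomposition import KernelPCA
      from sklearn.manifold import trustworthiness

      X, _ = make_swiss_roll(n_samples=800, random_state=0)

      # pool of candidate kernel functions and parameters (illustrative choices)
      candidates = {
          "rbf, gamma=0.01": dict(kernel="rbf", gamma=0.01),
          "rbf, gamma=0.1":  dict(kernel="rbf", gamma=0.1),
          "poly, degree=3":  dict(kernel="poly", degree=3),
          "cosine":          dict(kernel="cosine"),
      }

      scores = {}
      for name, params in candidates.items():
          emb = KernelPCA(n_components=2, **params).fit_transform(X)
          scores[name] = trustworthiness(X, emb, n_neighbors=10)

      best = max(scores, key=scores.get)
      print("embedding quality per kernel:", scores)
      print("selected kernel:", best)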

  18. Accuracy and variability of texture-based radiomics features of lung lesions across CT imaging conditions

    NASA Astrophysics Data System (ADS)

    Zheng, Yuese; Solomon, Justin; Choudhury, Kingshuk; Marin, Daniele; Samei, Ehsan

    2017-03-01

    Texture analysis for lung lesions is sensitive to changing imaging conditions, but these effects are not well understood, in part due to a lack of ground-truth phantoms with realistic textures. The purpose of this study was to explore the accuracy and variability of texture features across imaging conditions by comparing imaged texture features to voxel-based 3D printed textured lesions for which the true values are known. The seven features of interest were based on the Grey Level Co-Occurrence Matrix (GLCM). The lesion phantoms were designed with three shapes (spherical, lobulated, and spiculated), two textures (homogeneous and heterogeneous), and two sizes (diameter < 1.5 cm and 1.5 cm < diameter < 3 cm), resulting in 24 lesions (with a second replica of each). The lesions were inserted into an anthropomorphic thorax phantom (Multipurpose Chest Phantom N1, Kyoto Kagaku) and imaged using a commercial CT system (GE Revolution) at three CTDI levels (0.67, 1.42, and 5.80 mGy), three reconstruction algorithms (FBP, IR-2, IR-4), reconstruction kernel types (standard, soft, and edge), and two slice thicknesses (0.6 mm and 5 mm). A repeat scan was also performed. Texture features from these images were extracted and compared to the ground-truth feature values by percent relative error. The variability across imaging conditions was calculated as the standard deviation across each imaging condition for all heterogeneous lesions. The results indicated that the acquisition method has a significant influence on the accuracy and variability of extracted features and, as such, feature quantities are highly susceptible to imaging parameter choices. The most influential parameters were slice thickness and reconstruction kernel. Thin slice thickness and the edge reconstruction kernel overall produced more accurate and more repeatable results. Some features (e.g., Contrast) were more accurately quantified under conditions that render higher spatial frequencies (e.g., thinner slice thickness and sharp kernels), while others (e.g., Homogeneity) showed more accurate quantification under conditions that render smoother images (e.g., higher dose and smoother kernels). Care should be exercised in relating texture features between cases with varied acquisition protocols, with the need for cross-calibration depending on the feature of interest.
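    The sketch below shows the kind of comparison described here for one GLCM feature: a contrast value is computed from a quantized image using a simple co-occurrence matrix and compared with the ground-truth value as a percent relative error. The synthetic arrays stand in for the printed lesion texture and its CT image, and the 8-level quantization and single pixel offset are assumed choices.

      import numpy as np

      def glcm_features(image, levels=8):
          # quantize to 'levels' grey levels
          edges = np.linspace(image.min(), image.max(), levels + 1)
          q = np.clip(np.digitize(image, edges[1:-1]), 0, levels - 1)
          # symmetric, normalized co-occurrence counts for horizontally adjacent pixels
          P = np.zeros((levels, levels))
          a, b = q[:, :-1].ravel(), q[:, 1:].ravel()
          np.add.at(P, (a, b), 1)
          P = P + P.T
          P /= P.sum()
          i, j = np.indices(P.shape)
          contrast = np.sum(P * (i - j) ** 2)
          homogeneity = np.sum(P / (1.0 + (i - j) ** 2))
          return contrast, homogeneity

      def percent_relative_error(measured, truth):
          return 100.0 * (measured - truth) / truth

      rng = np.random.default_rng(0)
      lesion_truth = rng.normal(0.0, 1.0, (64, 64))                   # stand-in for the printed texture
      lesion_imaged = lesion_truth + rng.normal(0.0, 0.3, (64, 64))   # stand-in for CT noise/blur
      c_true, _ = glcm_features(lesion_truth)
      c_meas, _ = glcm_features(lesion_imaged)
      print("Contrast percent relative error:", percent_relative_error(c_meas, c_true))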

  19. A stochastic convolution/superposition method with isocenter sampling to evaluate intrafraction motion effects in IMRT.

    PubMed

    Naqvi, Shahid A; D'Souza, Warren D

    2005-04-01

    Current methods to calculate dose distributions with organ motion can be broadly classified as "dose convolution" and "fluence convolution" methods. In the former, a static dose distribution is convolved with the probability distribution function (PDF) that characterizes the motion. However, artifacts are produced near the surface and around inhomogeneities because the method assumes shift invariance. Fluence convolution avoids these artifacts by convolving the PDF with the incident fluence instead of the patient dose. In this paper we present an alternative method that improves the accuracy, generality, as well as the speed of dose calculation with organ motion. The algorithm starts by sampling an isocenter point from a parametrically defined space curve corresponding to the patient-specific motion trajectory. Then a photon is sampled in the linac head and propagated through the three-dimensional (3-D) collimator structure corresponding to a particular MLC segment chosen randomly from the planned IMRT leaf sequence. The photon is then made to interact at a point in the CT-based simulation phantom. Randomly sampled monoenergetic kernel rays issued from this point are then made to deposit energy in the voxels. Our method explicitly accounts for MLC-specific effects (spectral hardening, tongue-and-groove, head scatter) as well as changes in SSD with isocentric displacement, assuming that the body moves rigidly with the isocenter. Since the positions are randomly sampled from a continuum, there is no motion discretization, and the computation takes no more time than a static calculation. To validate our method, we obtained ten separate film measurements of an IMRT plan delivered on a phantom moving sinusoidally, with each fraction starting with a random phase. For 2 cm motion amplitude, we found that a ten-fraction average of the film measurements gave an agreement with the calculated infinite fraction average to within 2 mm in the isodose curves. The results also corroborate the existing notion that the interfraction dose variability due to the interplay between the MLC motion and breathing motion averages out over typical multifraction treatments. Simulations with motion waveforms more representative of real breathing indicate that the motion can produce penumbral spreading asymmetric about the static dose distributions. Such calculations can help a clinician decide to use, for example, a larger margin in the superior direction than in the inferior direction. In the paper we demonstrate that a 15 min run on a single CPU can readily illustrate the effect of a patient-specific breathing waveform, and can guide the physician in making informed decisions about margin expansion and dose escalation.
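    A one-dimensional sketch of the "dose convolution" approach criticized above: a static dose profile is convolved with the motion PDF of a sinusoidal trajectory. It reproduces penumbral blurring but, as the abstract notes, assumes shift invariance; the field width, motion amplitude, and grid are illustrative values.

      import numpy as np
      from scipy.ndimage import convolve1d

      x = np.arange(-60.0, 60.0, 0.5)                     # position along motion axis [mm]
      static_dose = np.where(np.abs(x) < 25.0, 1.0, 0.0)  # idealized 50 mm field profile

      # sinusoidal motion, 20 mm peak-to-peak -> arcsine-like PDF, built by sampling the trajectory
      phase = np.random.uniform(0.0, 2 * np.pi, 200000)
      shift = 10.0 * np.sin(phase)
      pdf, _ = np.histogram(shift, bins=np.arange(-12.0, 12.5, 0.5), density=True)
      pdf /= pdf.sum()

      blurred_dose = convolve1d(static_dose, pdf, mode="nearest")
      width_static = 0.5 * np.sum(static_dose > 0.5)      # 50% width in mm (0.5 mm bins)
      width_blurred = 0.5 * np.sum(blurred_dose > 0.5)
      print("50% width [mm]: static", width_static, "blurred", width_blurred)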

  20. gkmSVM: an R package for gapped-kmer SVM

    PubMed Central

    Ghandi, Mahmoud; Mohammad-Noori, Morteza; Ghareghani, Narges; Lee, Dongwon; Garraway, Levi; Beer, Michael A.

    2016-01-01

    Summary: We present a new R package for training gapped-kmer SVM classifiers for DNA and protein sequences. We describe an improved algorithm for kernel matrix calculation that speeds run time by about 2 to 5-fold over our original gkmSVM algorithm. This package supports several sequence kernels, including: gkmSVM, kmer-SVM, mismatch kernel and wildcard kernel. Availability and Implementation: The gkmSVM package is freely available through the Comprehensive R Archive Network (CRAN), for Linux, Mac OS and Windows platforms. The C++ implementation is available at www.beerlab.org/gkmsvm. Contact: mghandi@gmail.com or mbeer@jhu.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153639
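    For orientation, the sketch below spells out the gapped k-mer feature map in brute-force form: every l-mer contributes one count per choice of k informative positions, and the kernel is the inner product of the resulting count vectors. This is only a toy illustration with small l and k; the gkmSVM package itself uses a far more efficient algorithm, and the function names here are made up.

      from itertools import combinations
      from collections import Counter

      def gapped_kmer_counts(seq, l=4, k=3):
          # every l-mer contributes one feature per choice of k of its l positions;
          # the remaining positions act as wildcards (gaps)
          counts = Counter()
          for i in range(len(seq) - l + 1):
              lmer = seq[i:i + l]
              for pos in combinations(range(l), k):
                  counts[(pos, tuple(lmer[p] for p in pos))] += 1
          return counts

      def gkm_kernel(seq_a, seq_b, l=4, k=3):
          # un-normalized gapped k-mer kernel: inner product of the count vectors
          ca = gapped_kmer_counts(seq_a, l, k)
          cb = gapped_kmer_counts(seq_b, l, k)
          return sum(ca[key] * cb[key] for key in ca.keys() & cb.keys())

      print(gkm_kernel("ACGTACGTAA", "ACGTTTGTAA"))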

  1. Quantum kernel applications in medicinal chemistry.

    PubMed

    Huang, Lulu; Massa, Lou

    2012-07-01

    Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design.
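    A minimal numeric sketch of the kernel summation idea, written here for the double-kernel approximation usually quoted for the KEM: the total energy is the sum of all double-kernel energies E_ij minus (n-2) times the sum of the single-kernel energies E_i. The energies below are placeholder numbers, not values from the paper.

      from itertools import combinations

      # placeholder single- and double-kernel energies (e.g. in hartree, made up)
      single = {1: -100.0, 2: -150.0, 3: -120.0}
      double = {(1, 2): -250.4, (1, 3): -220.2, (2, 3): -270.3}

      n = len(single)
      # E_KEM = sum_{i<j} E_ij - (n - 2) * sum_i E_i
      e_kem = sum(double[p] for p in combinations(sorted(single), 2)) \
              - (n - 2) * sum(single.values())
      print("KEM total energy estimate:", e_kem)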

  2. Generalized and efficient algorithm for computing multipole energies and gradients based on Cartesian tensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Dejun, E-mail: dejun.lin@gmail.com

    2015-09-21

    Accurate representation of intermolecular forces has been the central task of classical atomic simulations, known as molecular mechanics. Recent advancements in molecular mechanics models have put forward the explicit representation of permanent and/or induced electric multipole (EMP) moments. The formulas developed so far to calculate EMP interactions tend to have complicated expressions, especially in Cartesian coordinates, which can only be applied to a specific kernel potential function. For example, one needs to develop a new formula each time a new kernel function is encountered. The complication of these formalisms arises from an intriguing and yet obscured mathematical relation between the kernel functions and the gradient operators. Here, I uncover this relation via rigorous derivation and find that the formula to calculate EMP interactions is basically invariant to the potential kernel functions as long as they are of the form f(r), i.e., any Green’s function that depends on inter-particle distance. I provide an algorithm for efficient evaluation of EMP interaction energies, forces, and torques for any kernel f(r) up to any arbitrary rank of EMP moments in Cartesian coordinates. The working equations of this algorithm are essentially the same for any kernel f(r). Recently, a few recursive algorithms were proposed to calculate EMP interactions. Depending on the kernel functions, the algorithm here is about 4–16 times faster than these algorithms in terms of the required number of floating point operations and is much more memory efficient. I show that it is even faster than a theoretically ideal recursion scheme, i.e., one that requires 1 floating point multiplication and 1 addition per recursion step. This algorithm has a compact vector-based expression that is optimal for computer programming. The Cartesian nature of this algorithm makes it fit easily into modern molecular simulation packages as compared with spherical coordinate-based algorithms. A software library based on this algorithm has been implemented in C++11 and has been released.
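    The kernel-invariance point can be illustrated at low multipole rank: the charge-charge, charge-dipole, and dipole-dipole energies below are written once in terms of a generic f(r) and its radial derivatives (evaluated numerically), so swapping the Coulomb kernel for a screened one requires no new formula. This is an illustrative sketch, not the paper's recursive Cartesian-tensor algorithm, and all charges, dipoles, and positions are made up.

      import numpy as np

      def d1(f, r, h=1e-4):
          return (f(r + h) - f(r - h)) / (2 * h)

      def d2(f, r, h=1e-4):
          return (f(r + h) - 2 * f(r) + f(r - h)) / h**2

      def multipole_energy(q1, p1, q2, p2, R, f):
          # R points from particle 1 to particle 2
          r = np.linalg.norm(R)
          u = R / r
          fp, fpp = d1(f, r), d2(f, r)
          e_qq = q1 * q2 * f(r)
          e_qd = q1 * np.dot(p2, u) * fp - q2 * np.dot(p1, u) * fp
          e_dd = -((np.dot(p1, p2) - np.dot(p1, u) * np.dot(p2, u)) * fp / r
                   + np.dot(p1, u) * np.dot(p2, u) * fpp)
          return e_qq + e_qd + e_dd

      coulomb = lambda r: 1.0 / r
      screened = lambda r: np.exp(-0.5 * r) / r     # e.g. a Yukawa-like kernel

      q1, q2 = 1.0, -1.0
      p1, p2 = np.array([0.0, 0.0, 0.2]), np.array([0.1, 0.0, 0.0])
      R = np.array([0.0, 0.0, 3.0])
      print("Coulomb kernel :", multipole_energy(q1, p1, q2, p2, R, coulomb))
      print("Screened kernel:", multipole_energy(q1, p1, q2, p2, R, screened))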

  3. New KF-PP-SVM classification method for EEG in brain-computer interfaces.

    PubMed

    Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian

    2014-01-01

    Classification methods are a crucial direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel Fisher, posterior probability, and support vector machine) classification method is developed. Its detailed process entails the use of common spatial patterns to obtain features, based on which the within-class scatter is calculated. The scatter is then added into the radial basis function to construct a new kernel function. This new kernel is integrated into the SVM to obtain a new classification model. Finally, the output of the SVM is calculated based on posterior probability and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected in the laboratory are processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results showed that the overall average improvements arising from the use of the KF-PP-SVM scheme as opposed to the KF-SVM, PP-SVM and SVM schemes are 2.49%, 5.83% and 6.49%, respectively.

  4. SU-F-T-147: An Alternative Parameterization of Scatter Behavior Allows Significant Reduction of Beam Characterization for Pencil Beam Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van den Heuvel, F; Fiorini, F; George, B

    2016-06-15

    Purpose: 1) To describe the characteristics of pencil beam proton dose deposition kernels in a homogeneous medium using a novel parameterization. 2) To propose a method utilizing this novel parameterization to reduce the measurements and pre-computation required in commissioning a pencil beam proton therapy system. Methods: Using beam data from a clinical pencil beam proton therapy center, Monte Carlo simulations were performed to characterize the dose depositions at a range of energies from 100.32 to 226.08 MeV in 3.6 MeV steps. At each energy, the beam is defined at the surface of the phantom by a two-dimensional Normal distribution. Using FLUKA, the in-medium dose distribution is calculated in a 200×200×350 mm volume with 1 mm³ tally volumes. The calculated dose distribution in each 200×200 slice perpendicular to the beam axis is then characterized using a symmetric alpha-stable distribution centered on the beam axis. This results in two parameters, α and γ, that completely describe the shape of the distribution. In addition, the total dose deposited on each slice is calculated. The alpha-stable parameters are plotted as a function of the depth in-medium, providing a representation of dose deposition along the pencil beam. We observed that these curves are isometric: a scaling of both abscissa and ordinate maps one curve onto another. Results: Using interpolation of the scaling factors of two source curves representative of different beam energies, we predicted the parameters of a third curve at an intermediate energy. The errors, quantified by the maximal difference, show a better fit than previous methods. The maximal energy difference between source curves generating identical curves was 21.14 MeV. Conclusion: We have introduced a novel method to parameterize the in-phantom properties of pencil beam proton dose depositions. For the case of the Knoxville IBA system, no more than nine pencil beams have to be fully characterized.
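    A small sketch of the slice-wise parameterization step, assuming scipy's levy_stable distribution as the symmetric alpha-stable model: a synthetic lateral profile is generated and the two shape parameters (alpha and gamma) are recovered by least-squares fitting. The profile and all numbers are toy data, not the FLUKA results of the abstract, and the fit may take a little while because the stable PDF is evaluated numerically.

      import numpy as np
      from scipy.stats import levy_stable
      from scipy.optimize import curve_fit

      def profile(x, alpha, gamma, amplitude):
          # symmetric (beta = 0) alpha-stable lateral dose model centered on the beam axis
          return amplitude * levy_stable.pdf(x, alpha, 0.0, loc=0.0, scale=gamma)

      x = np.linspace(-30.0, 30.0, 61)               # off-axis position [mm]
      y = profile(x, 1.8, 4.0, 1.0)                  # "true" toy profile
      y_noisy = y + np.random.normal(0.0, 0.002, x.size)

      popt, _ = curve_fit(profile, x, y_noisy, p0=[1.5, 3.0, 1.0],
                          bounds=([0.5, 0.1, 0.1], [2.0, 20.0, 10.0]))
      print("fitted alpha, gamma, amplitude:", popt)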

  5. A deterministic partial differential equation model for dose calculation in electron radiotherapy.

    PubMed

    Duclous, R; Dubroca, B; Frank, M

    2010-07-07

    High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung, Compton scattering and the production of delta electrons are added to our model, the computation time will only slightly increase. Its margin of error, on the other hand, will decrease and should be within a few per cent of the actual dose. Therefore, the new model has the potential to become useful for dose calculations in clinical practice.

  6. A deterministic partial differential equation model for dose calculation in electron radiotherapy

    NASA Astrophysics Data System (ADS)

    Duclous, R.; Dubroca, B.; Frank, M.

    2010-07-01

    High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. If more detailed effects such as coupled electron-photon transport, bremsstrahlung, Compton scattering and the production of δ electrons are added to our model, the computation time will only slightly increase. Its margin of error, on the other hand, will decrease and should be within a few per cent of the actual dose. Therefore, the new model has the potential to become useful for dose calculations in clinical practice.

  7. Effect of mixing scanner types and reconstruction kernels on the characterization of lung parenchymal pathologies: emphysema, interstitial pulmonary fibrosis and normal non-smokers

    NASA Astrophysics Data System (ADS)

    Xu, Ye; van Beek, Edwin J.; McLennan, Geoffrey; Guo, Junfeng; Sonka, Milan; Hoffman, Eric

    2006-03-01

    In this study we utilize our texture characterization software (3-D AMFM) to characterize interstitial lung diseases (including emphysema) based on MDCT-generated volumetric data using three-dimensional texture features. We sought to test whether the scanner and reconstruction filter (kernel) type affect the classification of lung diseases using the 3-D AMFM. We collected MDCT images in three subject groups: emphysema (n=9), interstitial pulmonary fibrosis (IPF) (n=10), and normal non-smokers (n=9). In each group, images were scanned either on a Siemens Sensation 16 or 64-slice scanner (B50f or B30 reconstruction kernel) or a Philips 4-slice scanner (B reconstruction kernel). A total of 1516 volumes of interest (VOIs; 21×21 pixels in plane) were marked by two chest imaging experts using the Iowa Pulmonary Analysis Software Suite (PASS). We calculated 24 volumetric features. Bayesian methods were used for classification. Images from different scanners/kernels were combined in all possible combinations to test how robust the tissue classification was relative to the differences in image characteristics. We used 10-fold cross-validation for testing the results. Sensitivity, specificity, and accuracy were calculated. One-way analysis of variance (ANOVA) was used to compare the classification results between the various combinations of scanner and reconstruction kernel types. This study yielded sensitivities of 94%, 91%, 97%, and 93% for emphysema, ground-glass, honeycombing, and normal non-smoker patterns, respectively, using a mixture of all three subject groups. The specificities for these characterizations were 97%, 99%, 99%, and 98%, respectively. The F test of the ANOVA showed no significant difference (p < 0.05) between different combinations of data with respect to scanner and convolution kernel type. Since different MDCT scanner and reconstruction kernel types did not show significant differences in the classification results, this study suggests that the 3-D AMFM generalizes across scanner and reconstruction kernel types.

  8. Development, survival and fitness performance of Helicoverpa zea (Lepidoptera: Noctuidae) in MON810 Bt field corn.

    PubMed

    Horner, T A; Dively, G P; Herbert, D A

    2003-06-01

    Helicoverpa zea (Boddie) development, survival, and feeding injury in MON810 transgenic ears of field corn (Zea mays L.) expressing Bacillus thuringiensis variety kurstaki (Bt) Cry1Ab endotoxins were compared with non-Bt ears at four geographic locations over two growing seasons. Expression of Cry1Ab endotoxin resulted in overall reductions in the percentage of damaged ears by 33% and in the amount of kernels consumed by 60%. Bt-induced effects varied significantly among locations, partly because of the overall level and timing of H. zea infestations, condition of silk tissue at the time of egg hatch, and the possible effects of plant stress. Larvae feeding on Bt ears produced scattered, discontinuous patches of partially consumed kernels, which were arranged more linearly than the compact feeding patterns in non-Bt ears. The feeding patterns suggest that larvae in Bt ears are moving about sampling kernels more frequently than larvae in non-Bt ears. Because not all kernels express the same level of endotoxin, the spatial heterogeneity of toxin distribution within Bt ears may provide an opportunity for development of behavioral responses in H. zea to avoid toxin. MON810 corn suppressed the establishment and development of H. zea to late instars by at least 75%. This level of control is considered a moderate dose, which may increase the risk of resistance development in areas where MON810 corn is widely adopted and H. zea overwinters successfully. Sublethal effects of MON810 corn resulted in prolonged larval and prepupal development, smaller pupae, and reduced fecundity of H. zea. The moderate dose effects and the spatial heterogeneity of toxin distribution among kernels could increase the additive genetic variance for both physiological and behavioral resistance in H. zea populations. Implications of localized population suppression are discussed.

  9. Development of FullWave : Hot Plasma RF Simulation Tool

    NASA Astrophysics Data System (ADS)

    Svidzinski, Vladimir; Kim, Jin-Soo; Spencer, J. Andrew; Zhao, Liangji; Galkin, Sergei

    2017-10-01

    A full wave simulation tool modeling RF fields in hot inhomogeneous magnetized plasma is being developed. The wave equations with a linearized hot plasma dielectric response are solved in configuration space on an adaptive cloud of computational points. The nonlocal hot plasma dielectric response is formulated in configuration space without limiting approximations by calculating the plasma conductivity kernel based on the solution of the linearized Vlasov equation in an inhomogeneous magnetic field. This approach allows for better resolution of plasma resonances, antenna structures, and complex boundaries. The formulation of FullWave and preliminary results will be presented: construction of the finite differences for approximation of derivatives on an adaptive cloud of computational points; a model and results of nonlocal conductivity kernel calculation in tokamak geometry; results of 2-D full wave simulations in the cold plasma model in tokamak geometry using the formulated approach; results of self-consistent calculations of hot plasma dielectric response and RF fields in a 1-D mirror magnetic field; preliminary results of self-consistent simulations of 2-D RF fields in a tokamak using the calculated hot plasma conductivity kernel; and development of an iterative solver for the wave equations. Work is supported by the U.S. DOE SBIR program.

  10. SU-G-206-15: Effects of Dose Reduction On Emphysema Score

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lo, P; Wahi-Anwar, M; Kim, H

    Purpose: The purpose of this study was to investigate the effects of reducing radiation dose levels on emphysema scores from lung cancer screening CT exams. Methods: 52 cases were selected from the National Lung Screening Trial (NLST) patients for which we had both the image series and the raw CT data. All scans were acquired with fixed effective mAs (25 for standard-sized patients, 40 for large patients) on a 64-slice scanner (Sensation 64, Siemens Healthcare) using 120 kV, 64×0.6 mm collimation, and pitch 1.0. All images were reconstructed with 1 mm slice thickness and the B50 kernel. Based on a previously published technique, we added noise to the raw data to simulate reduced-dose versions at 50% and 25% of the original dose (approximately 1.0- and 0.5-mGy CTDIvol). Lung segmentations were obtained via region growing from a manual seed point at a threshold of 600 HU, followed by manual removal of the trachea and major airways. Lung segmentations were only performed on the original-dose scans and mapped to the simulated reduced-dose scans. Emphysema scores based on the relative area of lung with attenuation values lower than −950 HU (RA950) were computed for all cases. Results: The average RA950 of all 50 cases was 31.6 (±5.5), 32.5 (±4.9), and 32.8 (±4.6) for the 100%, 50%, and 25% dose levels, respectively. The average absolute difference in RA950 between simulated and original-dose scans was 1.0 (±0.7) and 1.4 (±1.1) for the 50% and 25% dose levels, respectively. Conclusion: RA950 is relatively robust to dose level, with a difference of no more than 5 from the original-dose scans. The average RA950 of this population was high for two reasons: this was a high-risk population of patients with substantial smoking history, and the B50 kernel was used, which may be biased towards high emphysema scores. Further exploration with smoother kernels will be conducted in the future. Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics; NIH grant support from U01 CA181156.
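    The emphysema score used here is simple to state in code; the sketch below computes RA950 as the percentage of segmented lung voxels below −950 HU, using synthetic arrays in place of the CT image and lung mask.

      import numpy as np

      # RA950: fraction of segmented lung voxels with attenuation below -950 HU, as a percentage.
      rng = np.random.default_rng(0)
      ct_hu = rng.normal(-870.0, 60.0, size=(40, 128, 128))   # toy lung attenuation values
      lung_mask = np.ones_like(ct_hu, dtype=bool)             # stand-in for the lung segmentation

      lung_voxels = ct_hu[lung_mask]
      ra950 = 100.0 * np.mean(lung_voxels < -950.0)
      print(f"RA950 = {ra950:.1f}%")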

  11. Final Aperture Superposition Technique applied to fast calculation of electron output factors and depth dose curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faddegon, B.A.; Villarreal-Barajas, J.E.; Mt. Diablo Regional Cancer Center, 2450 East Street, Concord, California

    2005-11-15

    The Final Aperture Superposition Technique (FAST) is described and applied to accurate, near instantaneous calculation of the relative output factor (ROF) and central axis percentage depth dose curve (PDD) for clinical electron beams used in radiotherapy. FAST is based on precalculation of dose at select points for the two extreme situations of a fully open final aperture and a final aperture with no opening (fully shielded). This technique is different from conventional superposition of dose deposition kernels: the precalculated dose is differential in position of the electron or photon at the downstream surface of the insert. The calculation for a particular aperture (x-ray jaws or MLC, insert in electron applicator) is done with superposition of the precalculated dose data, using the open-field data over the open part of the aperture and the fully shielded data over the remainder. The calculation takes explicit account of all interactions in the shielded region of the aperture except the collimator effect: particles that pass from the open part into the shielded part, or vice versa. For the clinical demonstration, FAST was compared to full Monte Carlo simulation of 10×10, 2.5×2.5, and 2×8 cm² inserts. Dose was calculated to 0.5% precision in 0.4×0.4×0.2 cm³ voxels, spaced at 0.2 cm depth intervals along the central axis, using detailed Monte Carlo simulation of the treatment head of a commercial linear accelerator for six different electron beams with energies of 6-21 MeV. Each simulation took several hours on a personal computer with a 1.7 GHz processor. The calculation for the individual inserts, done with superposition, was completed in under a second on the same PC. Since simulations for the precalculation are only performed once, higher precision and resolution can be obtained without increasing the calculation time for individual inserts. Fully shielded contributions were largest for small fields and high beam energy, at the surface, reaching a maximum of 5.6% at 21 MeV. Contributions from the collimator effect were largest for the large field size, high beam energy, and shallow depths, reaching a maximum of 4.7% at 21 MeV. Both shielding contributions and the collimator effect need to be taken into account to achieve an accuracy of 2%. FAST takes explicit account of the shielding contributions. With the collimator effect set to that of the largest field in the FAST calculation, the difference in dose on the central axis (product of ROF and PDD) between FAST and full simulation was generally under 2%. The maximum difference of 2.5% exceeded the statistical precision of the calculation by four standard deviations. This occurred at 18 MeV for the 2.5×2.5 cm² field. The differences are due to the method used to account for the collimator effect.

  12. A self-calibrated angularly continuous 2D GRAPPA kernel for propeller trajectories

    PubMed Central

    Skare, Stefan; Newbould, Rexford D; Nordell, Anders; Holdsworth, Samantha J; Bammer, Roland

    2008-01-01

    The k-space readout of propeller-type sequences may be accelerated by the use of parallel imaging (PI). For PROPELLER, the main benefits are reduced blurring due to T2 decay and SAR reduction, while for EPI-based propeller acquisitions such as Turbo-PROP and SAP-EPI, the faster k-space traversal alleviates geometric distortions. In this work, the feasibility of calculating a 2D GRAPPA kernel on only the undersampled propeller blades themselves is explored, using the matching orthogonal undersampled blade. It is shown that the GRAPPA kernel varies slowly across blades; therefore, an angularly continuous 2D GRAPPA kernel is proposed, in which the angular variation of the weights is parameterized. This new angularly continuous kernel formulation greatly increases the numerical stability of the GRAPPA weight estimation, allowing the generation of fully sampled diagnostic-quality images using only the undersampled propeller data. PMID:19025911

  13. Selecting good regions to deblur via relative total variation

    NASA Astrophysics Data System (ADS)

    Li, Lerenhan; Yan, Hao; Fan, Zhihua; Zheng, Hanqing; Gao, Changxin; Sang, Nong

    2018-03-01

    Image deblurring is to estimate the blur kernel and to restore the latent image. It is usually divided into two stages: kernel estimation and image restoration. In kernel estimation, selecting a good region that contains structure information helps the accuracy of the estimated kernel. A good region to deblur is usually chosen by an expert or found by trial and error. In this paper, we apply a metric named relative total variation (RTV) to discriminate structure regions from smooth and textured ones. Given a blurry image, we first calculate the RTV of each pixel to determine whether it is a pixel in a structure region, after which we sample the image in an overlapping way. At last, the sampled region that contains the most structure pixels is the best region to deblur. Both qualitative and quantitative experiments show that our proposed method can help to estimate the kernel accurately.
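    A rough sketch of the region-selection idea, assuming one particular reading of the relative total variation measure (windowed total variation over windowed inherent variation): pixels with strong, directionally consistent gradients are marked as structure, and the overlapping patch containing the most such pixels is returned. Window size, thresholds, and the placeholder image are illustrative choices, not the paper's settings.

      import numpy as np
      from scipy.ndimage import gaussian_filter, sobel

      def rtv_map(img, sigma=3.0, eps=1e-3):
          gx, gy = sobel(img, axis=1), sobel(img, axis=0)
          D = gaussian_filter(np.abs(gx), sigma) + gaussian_filter(np.abs(gy), sigma)   # windowed total variation
          L = np.abs(gaussian_filter(gx, sigma)) + np.abs(gaussian_filter(gy, sigma))   # windowed inherent variation
          return D, D / (L + eps)

      def best_patch(img, patch=64, stride=16):
          D, rtv = rtv_map(img)
          # structure pixels: strong gradients whose directions are consistent in the window
          structure = (D > np.percentile(D, 75)) & (rtv < 2.0)
          best, best_count = (0, 0), -1
          for y in range(0, img.shape[0] - patch + 1, stride):
              for x in range(0, img.shape[1] - patch + 1, stride):
                  count = structure[y:y + patch, x:x + patch].sum()
                  if count > best_count:
                      best, best_count = (y, x), count
          return best

      img = np.random.rand(256, 256)          # placeholder for a blurry input image
      print("top-left corner of selected region:", best_patch(img))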

  14. Semisupervised kernel marginal Fisher analysis for face recognition.

    PubMed

    Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun

    2013-01-01

    Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labeled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.

  15. gkmSVM: an R package for gapped-kmer SVM.

    PubMed

    Ghandi, Mahmoud; Mohammad-Noori, Morteza; Ghareghani, Narges; Lee, Dongwon; Garraway, Levi; Beer, Michael A

    2016-07-15

    We present a new R package for training gapped-kmer SVM classifiers for DNA and protein sequences. We describe an improved algorithm for kernel matrix calculation that speeds run time by about 2 to 5-fold over our original gkmSVM algorithm. This package supports several sequence kernels, including: gkmSVM, kmer-SVM, mismatch kernel and wildcard kernel. The gkmSVM package is freely available through the Comprehensive R Archive Network (CRAN), for Linux, Mac OS and Windows platforms. The C++ implementation is available at www.beerlab.org/gkmsvm. Contact: mghandi@gmail.com or mbeer@jhu.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. The Influence of Reconstruction Kernel on Bone Mineral and Strength Estimates Using Quantitative Computed Tomography and Finite Element Analysis.

    PubMed

    Michalski, Andrew S; Edwards, W Brent; Boyd, Steven K

    2017-10-17

    Quantitative computed tomography has been proposed as an alternative imaging modality to investigate osteoporosis. We examined the influence of computed tomography convolution back-projection reconstruction kernels on the analysis of bone quantity and estimated mechanical properties in the proximal femur. Eighteen computed tomography scans of the proximal femur were reconstructed using both a standard smoothing reconstruction kernel and a bone-sharpening reconstruction kernel. Following phantom-based density calibration, we calculated typical bone quantity outcomes of integral volumetric bone mineral density, bone volume, and bone mineral content. Additionally, we performed finite element analysis in a standard sideways-fall-on-the-hip loading configuration. Significant differences for all outcome measures, except integral bone volume, were observed between the 2 reconstruction kernels. Volumetric bone mineral density measured using images reconstructed by the standard kernel was significantly lower (6.7%, p < 0.001) when compared with images reconstructed using the bone-sharpening kernel. Furthermore, the whole-bone stiffness and the failure load measured in images reconstructed by the standard kernel were significantly lower (16.5%, p < 0.001, and 18.2%, p < 0.001, respectively) when compared with images reconstructed by the bone-sharpening kernel. These data suggest that for future quantitative computed tomography studies, a standardized reconstruction kernel will maximize reproducibility, independent of the use of a quantitative calibration phantom. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.

  17. Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.

    PubMed

    Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe

    2018-02-19

    Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Therefore, several computational methods have been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well to predict complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is, however, an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, namely the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel. Then, we combine the Min kernel or its normalized form with one of the pairwise kernels by plugging the former into the latter. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to predicting heterodimers. Then, we evaluate our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improved the performance of our previous work, which had been the best existing method so far. We propose new methods to predict heterodimers, using a machine learning-based approach. We train a support vector machine (SVM) to discriminate interacting vs non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles, and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state-of-the-art.
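    The two pairwise kernels named above have compact closed forms, sketched below on top of a Min kernel over per-protein feature vectors; the feature vectors are made-up toy data. A kernel matrix built this way over protein pairs can then be passed to C-SVC as a precomputed kernel.

      import numpy as np

      def min_kernel(x, y):
          # Min kernel on non-negative feature vectors (e.g. domain or profile features)
          return np.minimum(x, y).sum()

      def tppk(k, a, b, c, d):
          # Tensor Product Pairwise Kernel: K((a,b),(c,d)) = k(a,c)k(b,d) + k(a,d)k(b,c)
          return k(a, c) * k(b, d) + k(a, d) * k(b, c)

      def mlpk(k, a, b, c, d):
          # Metric Learning Pairwise Kernel: K((a,b),(c,d)) = (k(a,c) - k(a,d) - k(b,c) + k(b,d))^2
          return (k(a, c) - k(a, d) - k(b, c) + k(b, d)) ** 2

      rng = np.random.default_rng(1)
      prot = {name: rng.random(10) for name in "ABCD"}   # toy per-protein feature vectors
      args = (prot["A"], prot["B"], prot["C"], prot["D"])
      print("TPPK:", tppk(min_kernel, *args))
      print("MLPK:", mlpk(min_kernel, *args))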

  18. Monte Carlo treatment planning with modulated electron radiotherapy: framework development and application

    NASA Astrophysics Data System (ADS)

    Alexander, Andrew William

    Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP), provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. A developing treatment modality called energy and intensity modulated electron radiotherapy (MERT) is a promising modality, which has the fundamental capabilities to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP. The structure of the MERT planning toolkit software and optimization algorithms are demonstrated. We investigated the clinical significance of MERT on spinal irradiation, breast boost irradiation, and a head and neck sarcoma cancer site using several parameters to analyze the treatment plans. Finally, we investigated the idea of mixed beam photon and electron treatment planning. Photon optimization treatment planning tools were included within the MERT planning toolkit for the purpose of mixed beam optimization. In conclusion, this thesis work has resulted in the development of an advanced framework for photon and electron Monte Carlo treatment planning studies and the development of an inverse planning system for photon, electron or mixed beam radiotherapy (MBRT). The justification and validation of this work is found within the results of the planning studies, which have demonstrated dosimetric advantages to using MERT or MBRT in comparison to clinical treatment alternatives.

  19. Carbothermic synthesis of 820 μm uranium nitride kernels: Literature review, thermodynamics, analysis, and related experiments

    NASA Astrophysics Data System (ADS)

    Lindemer, T. B.; Voit, S. L.; Silva, C. M.; Besmann, T. M.; Hunt, R. D.

    2014-05-01

    The US Department of Energy is developing a new nuclear fuel that would be less susceptible to ruptures during a loss-of-coolant accident. The fuel would consist of tristructural isotropic coated particles with uranium nitride (UN) kernels with diameters near 825 μm. This effort explores factors involved in the conversion of uranium oxide-carbon microspheres into UN kernels. An analysis of previous studies with sufficient experimental details is provided. Thermodynamic calculations were made to predict pressures of carbon monoxide and other relevant gases for several reactions that can be involved in the conversion of uranium oxides and carbides into UN. Uranium oxide-carbon microspheres were heated in a microbalance with an attached mass spectrometer to determine details of calcining and carbothermic conversion in argon, nitrogen, and vacuum. A model was derived from experiments on the vacuum conversion to uranium oxide-carbide kernels. UN-containing kernels were fabricated using this vacuum conversion as part of the overall process. Carbonitride kernels of ∼89% of theoretical density were produced along with several observations concerning the different stages of the process.

  20. Weighted Feature Gaussian Kernel SVM for Emotion Recognition

    PubMed Central

    Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted features based on facial expression is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method utilizing the subregion recognition rate to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight of each. Then, we obtain a weighted-feature Gaussian kernel function and construct a classifier based on a Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted-feature Gaussian kernel function achieves a good correct-classification rate in emotion recognition. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods. PMID:27807443

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Bin; Li, Yongbao; Liu, Bo

    Purpose: The CyberKnife system was initially equipped with fixed circular cones for stereotactic radiosurgery. Two dose calculation algorithms, Ray-Tracing and Monte Carlo, are available in the supplied treatment planning system. A multileaf collimator system was recently introduced in the latest generation of the system, capable of delivering arbitrarily shaped treatment fields. The purpose of this study is to develop a model-based dose calculation algorithm to better handle the lateral scatter in an irregularly shaped small field for the CyberKnife system. Methods: A pencil beam dose calculation algorithm widely used in linac-based treatment planning systems was modified. The kernel parameters and intensity profile were systematically determined by fitting to the commissioning data. The model was tuned using only a subset of measured data (4 out of 12 cones) and applied to all fixed circular cones for evaluation. The root mean square (RMS) of the difference between the measured and calculated tissue-phantom ratios (TPRs) and off-center ratios (OCRs) was compared. Three cone size correction techniques were developed to better fit the OCRs at the penumbra region, which were further evaluated using the output factors (OFs). The pencil beam model was further validated against measurement data on the variable dodecagon-shaped Iris collimators and a half-beam-blocked field. Comparison with the Ray-Tracing and Monte Carlo methods was also performed on a lung SBRT case. Results: The RMS between the measured and calculated TPRs is 0.7% averaged over all cones, with the descending region at 0.5%. The RMSs of the OCR at the infield and outfield regions are both at 0.5%. The distance to agreement (DTA) at the OCR penumbra region is 0.2 mm. All three cone size correction models achieve the same improvement in OCR agreement, with the effective source shift model (SSM) preferred due to its ability to predict more accurately the OF variations with the source-to-axis distance (SAD). In the noncircular field validation, the pencil beam calculated results agreed well with the film measurements of both the Iris collimators and the half-beam-blocked field, and fared much better than the Ray-Tracing calculation. Conclusions: The authors have developed a pencil beam dose calculation model for the CyberKnife system. The dose calculation accuracy is better than that of the standard linac-based system because the model parameters were specifically tuned to the CyberKnife system and geometry correction factors. The model handles the lateral scatter better and has the potential to be used for irregularly shaped fields. Comprehensive validations on the MLC-equipped system are necessary for its clinical implementation. It is reasonably fast enough to be used during plan optimization.
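    As a generic illustration of the finite-size pencil beam idea (not the authors' commissioned model), the sketch below convolves a circular-cone aperture with a two-Gaussian lateral kernel and scales by a depth factor standing in for the TPR; kernel widths, the depth factor, and the grid are assumed numbers.

      import numpy as np
      from scipy.signal import fftconvolve

      def lateral_kernel(grid_mm, sigma_mm=(2.0, 8.0), weights=(0.9, 0.1)):
          # normalized two-Gaussian lateral pencil beam kernel on a 1 mm grid
          x = np.arange(-grid_mm, grid_mm + 1.0)
          X, Y = np.meshgrid(x, x)
          r2 = X**2 + Y**2
          k = sum(w * np.exp(-r2 / (2 * s**2)) / (2 * np.pi * s**2)
                  for w, s in zip(weights, sigma_mm))
          return k / k.sum()

      def plane_dose(aperture, depth_factor=0.85):
          # dose in one plane: aperture fluence convolved with the lateral kernel,
          # scaled by a depth factor standing in for the measured TPR
          return depth_factor * fftconvolve(aperture, lateral_kernel(30.0), mode="same")

      # 60 mm diameter circular cone on a 1 mm grid
      x = np.arange(-80.0, 81.0)
      X, Y = np.meshgrid(x, x)
      aperture = (X**2 + Y**2 <= 30.0**2).astype(float)
      dose = plane_dose(aperture)
      c = dose.shape[0] // 2
      print("central-axis dose:", dose[c, c])
      print("off-center ratio at 20 mm:", dose[c, c + 20] / dose[c, c])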

  2. Improved response functions for gamma-ray skyshine analyses

    NASA Astrophysics Data System (ADS)

    Shultis, J. K.; Faw, R. E.; Deng, X.

    1992-09-01

    A computationally simple method, based on line-beam response functions, is refined for estimating gamma skyshine dose rates. Critical to this method is the availability of an accurate approximation for the line-beam response function (LBRF). In this study, the LBRF is evaluated accurately with the point-kernel technique using recent photon interaction data. Various approximations to the LBRF are considered, and a three parameter formula is selected as the most practical approximation. By fitting the approximating formula to point-kernel results, a set of parameters is obtained that allows the LBRF to be quickly and accurately evaluated for energies between 0.01 and 15 MeV, for source-to-detector distances from 1 to 3000 m, and for beam angles from 0 to 180 degrees. This re-evaluation of the approximate LBRF gives better accuracy, especially at low energies, over a greater source-to-detector range than do previous LBRF approximations. A conical beam response function is also introduced for application to skyshine sources that are azimuthally symmetric about a vertical axis. The new response functions are then applied to three simple skyshine geometries (an open silo geometry, an infinite wall, and a rectangular four-wall building) and the results are compared to previous calculations and benchmark data.

  3. Improved response functions for gamma-ray skyshine analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.; Deng, X.

    1992-09-01

    A computationally simple method, based on line-beam response functions, is refined for estimating gamma skyshine dose rates. Critical to this method is the availability of an accurate approximation for the line-beam response function (LBRF). In this study the LBRF is evaluated accurately with the point-kernel technique using recent photon interaction data. Various approximations to the LBRF are considered, and a three parameter formula is selected as the most practical approximation. By fitting the approximating formula to point-kernel results, a set of parameters is obtained that allows the LBRF to be quickly and accurately evaluated for energies between 0.01 and 15 MeV, for source-to-detector distances from 1 to 3000 m, and for beam angles from 0 to 180 degrees. This reevaluation of the approximate LBRF gives better accuracy, especially at low energies, over a greater source-to-detector range than do previous LBRF approximations. A conical beam response function is also introduced for application to skyshine sources that are azimuthally symmetric about a vertical axis. The new response functions are then applied to three simple skyshine geometries (an open silo geometry, an infinite wall, and a rectangular four-wall building) and the results compared to previous calculations and benchmark data.

  4. Acute and Subchronic Toxicity of Self Nanoemulsifying Drug Delivery Systems (SNEDDS) from Chloroform Bay Leaf Extract (Eugenia Polyantha W.) with Palm Kernel Oil as A Carrier

    NASA Astrophysics Data System (ADS)

    Prihapsara, F.; Mufidah; Artanti, A. N.; Harini, M.

    2018-03-01

    The present study aimed to assess the acute and subchronic toxicity of a Self Nanoemulsifying Drug Delivery System (SNEDDS) from chloroform bay leaf extract with Palm Kernel Oil as the carrier. In the acute toxicity test, five groups of rats (n=5/group) were orally treated with the SNEDDS from chloroform bay leaf extract at doses of 48, 240, 1200, and 6000 mg/kg/day, respectively, and the median lethal dose (LD50), adverse effects, and mortality were recorded over 14 days. In the subchronic toxicity study, four groups of rats (n=6/group) received oral treatment with the SNEDDS from chloroform bay leaf extract at doses of 91.75, 183.5, and 367 mg/kg/day, respectively, for 28 days, and biochemical, hematological, and histopathological changes in tissues such as liver, kidney, and pancreas were determined. The results show that the LD50 is 1045.44 mg/kg. Although histopathological examination of most of the organs exhibited no structural changes, some moderate damage was observed in the high-dose group animals (367 mg/kg/day). The high dose of the SNEDDS extract showed mild signs of toxicity in organ function tests.

  5. A fast and objective multidimensional kernel density estimation method: fastKDE

    DOE PAGES

    O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.; ...

    2016-03-07

    Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10^5 samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
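    For readers unfamiliar with the estimation task, the sketch below shows a conventional bivariate Gaussian KDE with scipy (rule-of-thumb bandwidth) and a conditional PDF obtained by normalizing the joint estimate. fastKDE replaces this with an objectively chosen kernel shape and bandwidth and runs far faster; the code deliberately does not use the fastKDE package itself, and the sample data are synthetic.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(0)
      x = rng.normal(0.0, 1.0, 5000)
      y = 0.5 * x + rng.normal(0.0, 0.5, 5000)          # correlated toy sample

      # joint PDF estimate on a regular grid
      kde = gaussian_kde(np.vstack([x, y]))
      grid_x, grid_y = np.meshgrid(np.linspace(-3, 3, 50), np.linspace(-3, 3, 50))
      pdf = kde(np.vstack([grid_x.ravel(), grid_y.ravel()])).reshape(grid_x.shape)

      # conditional PDF p(y | x) by normalizing each fixed-x column of the joint estimate
      p_y_given_x = pdf / pdf.sum(axis=0, keepdims=True)
      print("joint PDF integrates to ~1:", pdf.sum() * (6 / 49) ** 2)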

  6. A shortest-path graph kernel for estimating gene product semantic similarity.

    PubMed

    Alvarez, Marco A; Qi, Xiaojun; Yan, Changhui

    2011-07-29

    Existing methods for calculating semantic similarity between gene products using the Gene Ontology (GO) often rely on external resources that are not part of the ontology. Consequently, changes in these external resources, such as biased term distributions caused by shifts in hot research topics, will affect the calculation of semantic similarity. One way to avoid this problem is to use semantic methods that are "intrinsic" to the ontology, i.e., independent of external knowledge. We present a shortest-path graph kernel (spgk) method that relies exclusively on the GO and its structure. In spgk, a gene product is represented by an induced subgraph of the GO, which consists of all the GO terms annotating it. A shortest-path graph kernel is then used to compute the similarity between two graphs. In a comprehensive evaluation using a benchmark dataset, spgk compares favorably with other methods that depend on external resources. Compared with simUI, a method that is also intrinsic to GO, spgk achieves slightly better results on the benchmark dataset. Statistical tests show that the improvement is significant when the resolution and EC similarity correlation coefficient are used to measure performance, but insignificant when the Pfam similarity correlation coefficient is used. Spgk uses a graph kernel method in polynomial time to exploit the structure of the GO to calculate semantic similarity between gene products. It provides an alternative to both methods that use external resources and "intrinsic" methods, with comparable performance.
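    The sketch below illustrates the general idea of a shortest-path graph kernel on two toy graphs, counting matching shortest-path lengths with a delta kernel. The toy graphs, the delta comparison, and the omitted normalization are simplifying assumptions; the spgk method operates on GO-term-induced subgraphs and may differ in detail.

    ```python
    # Hedged sketch of a basic shortest-path graph kernel between two small
    # unlabeled graphs, using a delta kernel on path lengths. The toy graphs stand
    # in for GO-term-induced subgraphs; normalization is omitted.
    from collections import Counter
    import networkx as nx

    def sp_kernel(g1, g2):
        """Count matching shortest-path lengths over all node pairs of g1 and g2."""
        def length_histogram(g):
            pair_lengths = nx.all_pairs_shortest_path_length(g)
            return Counter(d for _, dists in pair_lengths for d in dists.values() if d > 0)
        h1, h2 = length_histogram(g1), length_histogram(g2)
        return sum(h1[length] * h2[length] for length in h1)

    g_a = nx.path_graph(4)      # toy stand-in for one gene product's induced subgraph
    g_b = nx.cycle_graph(5)     # toy stand-in for another
    print("K(g_a, g_b) =", sp_kernel(g_a, g_b))
    ```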

  7. MO-E-18C-04: Advanced Computer Simulation and Visualization Tools for Enhanced Understanding of Core Medical Physics Concepts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naqvi, S

    2014-06-15

    Purpose: Most medical physics programs emphasize proficiency in routine clinical calculations and QA. The formulaic aspect of these calculations and the prescriptive nature of measurement protocols obviate the need to frequently apply basic physical principles, which, therefore, gradually decay away from memory. For example, few students appreciate the role of electron transport in photon dose, making it difficult to understand key concepts such as dose buildup, electronic disequilibrium effects, and Bragg-Gray theory. These conceptual deficiencies manifest when the physicist encounters a new system, requiring knowledge beyond routine activities. Methods: Two interactive computer simulation tools are developed to facilitate deeper learning of physical principles. One is a Monte Carlo code written with a strong educational aspect. The code can “label” regions and interactions to highlight specific aspects of the physics, e.g., certain regions can be designated as “starters” or “crossers,” and any interaction type can be turned on and off. Full 3D tracks with specific portions highlighted further enhance the visualization of radiation transport problems. The second code calculates and displays trajectories of a collection of electrons under an arbitrary space/time dependent Lorentz force using relativistic kinematics. Results: Using the Monte Carlo code, the student can interactively study photon and electron transport through visualization of dose components, particle tracks, and interaction types. The code can, for instance, be used to study the kerma-dose relationship, explore electronic disequilibrium near interfaces, or visualize kernels by using interaction forcing. The electromagnetic simulator enables the student to explore accelerating mechanisms and particle optics in devices such as cyclotrons and linacs. Conclusion: The proposed tools are designed to enhance understanding of abstract concepts by highlighting various aspects of the physics. The simulations serve as virtual experiments that give deeper and long-lasting understanding of core principles. The student can then make sound judgements in novel situations encountered beyond routine clinical activities.

  8. X-ray photoelectron spectroscopic analysis of rice kernels and flours: Measurement of surface chemical composition.

    PubMed

    Nawaz, Malik A; Gaiani, Claire; Fukai, Shu; Bhandari, Bhesh

    2016-12-01

    The objectives of this study were to evaluate the ability of X-ray photoelectron spectroscopy (XPS) to differentiate rice macromolecules and to calculate the surface composition of rice kernels and flours. The surface composition of uncooked kernels and flours of the two selected rice varieties, Thadokkham-11 (TDK11) and Doongara (DG), demonstrated an over-expression of lipids and proteins and an under-expression of starch compared to the bulk composition. The results of the study showed that XPS was able to differentiate rice polysaccharides (mainly starch), proteins and lipids in uncooked rice kernels and flours. Nevertheless, it was unable to distinguish components in cooked rice samples, possibly due to complex interactions between gelatinized starch, denatured proteins and lipids. High resolution imaging methods (Scanning Electron Microscopy and Confocal Laser Scanning Microscopy) were employed to obtain complementary information about the properties and location of starch, proteins and lipids in rice kernels and flours. Copyright © 2016. Published by Elsevier Ltd.

  9. On the interpretation of kernels - Computer simulation of responses to impulse pairs

    NASA Technical Reports Server (NTRS)

    Hung, G.; Stark, L.; Eykhoff, P.

    1983-01-01

    A method is presented for the use of a unit impulse response and responses to impulse pairs of variable separation in the calculation of the second-degree kernels of a quadratic system. A quadratic system may be built from simple linear terms of known dynamics and a multiplier. Computer simulation results on quadratic systems with building elements of various time constants indicate that the larger time-constant term before multiplication dominates the envelope of the off-diagonal kernel curves as these move perpendicular to and away from the main diagonal. The smaller time-constant term before multiplication combines with the effect of the time constant after multiplication to dominate the kernel curves in the direction of the second-degree impulse response, i.e., parallel to the main diagonal. Such insights may be helpful in recognizing essential aspects of (second-degree) kernels; they may be used in simplifying the model structure and, perhaps, add to the physical/physiological understanding of the underlying processes.

  10. A new discrete dipole kernel for quantitative susceptibility mapping.

    PubMed

    Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian

    2018-09-01

    Most approaches for quantitative susceptibility mapping (QSM) are based on a forward model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such a formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of such an approach on forward model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation both with synthetic phantoms and in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed less over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high field MRI, a topic for future investigations. The proposed dipole kernel can be incorporated into existing QSM routines in a straightforward manner. Copyright © 2018 Elsevier Inc. All rights reserved.
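    The following sketch contrasts the standard continuous k-space dipole kernel with a discrete variant built from finite-difference (2 - 2·cos) frequency responses. The discrete form shown here only illustrates the idea of discretizing the operators; it is not claimed to be the exact kernel definition of the cited study.

    ```python
    # Hedged sketch: continuous k-space dipole kernel vs. a discrete variant built
    # from finite-difference (2 - 2*cos) frequency responses. The discrete form
    # illustrates the idea of discretizing the operators; it is not claimed to be
    # the exact kernel definition of the cited study.
    import numpy as np

    def dipole_kernels(shape=(64, 64, 64)):
        k = [np.fft.fftfreq(n) for n in shape]
        kx, ky, kz = np.meshgrid(*k, indexing="ij")

        # Continuous formulation: D(k) = 1/3 - kz^2 / |k|^2 (B0 along z).
        k2 = kx**2 + ky**2 + kz**2
        d_cont = 1.0 / 3.0 - np.divide(kz**2, k2, out=np.zeros_like(k2), where=k2 > 0)

        # Discrete analogue: replace k_i^2 by the frequency response of a second difference.
        ex, ey, ez = (2.0 - 2.0 * np.cos(2.0 * np.pi * ki) for ki in (kx, ky, kz))
        e2 = ex + ey + ez
        d_disc = 1.0 / 3.0 - np.divide(ez, e2, out=np.zeros_like(e2), where=e2 > 0)
        return d_cont, d_disc

    d_cont, d_disc = dipole_kernels()
    print("max |continuous - discrete|:", np.abs(d_cont - d_disc).max())
    ```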

  11. TU-H-BRC-05: Stereotactic Radiosurgery Optimized with Orthovoltage Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fagerstrom, J; Culberson, W; Bender, E

    2016-06-15

    Purpose: To achieve improved stereotactic radiosurgery (SRS) dose distributions using orthovoltage energy fluence modulation with inverse planning optimization techniques. Methods: A pencil beam model was used to calculate dose distributions from the institution’s orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods as well as measurements with radiochromic film. The orthovoltage photon spectra, modulated by varying thicknesses of attenuating material, were approximated using open-source software. A genetic algorithm search heuristic routine was used to optimize added tungsten filtration thicknesses to approach rectangular function dose distributions at depth. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 8, 10, and 12 mm. Results: Circularly-symmetric tungsten filters were designed based on the results of the optimization, to modulate the orthovoltage beam across the aperture of an SRS cone collimator. For each depth and cone size combination examined, the beam flatness and 80–20% and 90–10% penumbrae were calculated for both standard, open cone-collimated beams as well as for the optimized, filtered beams. For all configurations tested, the modulated beams were able to achieve improved penumbra widths and flatness statistics at depth, with flatness improving between 33 and 52%, and penumbrae improving between 18 and 25% for the modulated beams compared to the unmodulated beams. Conclusion: A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions at depth with improved flatness and penumbrae compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system.

  12. Three-Dimensional Sensitivity Kernels of Z/H Amplitude Ratios of Surface and Body Waves

    NASA Astrophysics Data System (ADS)

    Bao, X.; Shen, Y.

    2017-12-01

    The ellipticity of Rayleigh wave particle motion, or Z/H amplitude ratio, has received increasing attention in inversion for shallow Earth structures. Previous studies of the Z/H ratio assumed one-dimensional (1D) velocity structures beneath the receiver, ignoring the effects of three-dimensional (3D) heterogeneities on wave amplitudes. This simplification may introduce bias in the resulting models. Here we present 3D sensitivity kernels of the Z/H ratio to Vs, Vp, and density perturbations, based on finite-difference modeling of wave propagation in 3D structures and the scattering-integral method. Our full-wave approach overcomes two main issues in previous studies of Rayleigh wave ellipticity: (1) the finite-frequency effects of wave propagation in 3D Earth structures, and (2) isolation of the fundamental mode Rayleigh waves from Rayleigh wave overtones and converted Love waves. In contrast to the 1D depth sensitivity kernels in previous studies, our 3D sensitivity kernels exhibit patterns that vary with azimuths and distances to the receiver. The laterally-summed 3D sensitivity kernels and 1D depth sensitivity kernels, based on the same homogeneous reference model, are nearly identical with small differences that are attributable to the single period of the 1D kernels and a finite period range of the 3D kernels. We further verify the 3D sensitivity kernels by comparing the predictions from the kernels with the measurements from numerical simulations of wave propagation for models with various small-scale perturbations. We also calculate and verify the amplitude kernels for P waves. This study shows that both Rayleigh and body wave Z/H ratios provide vertical and lateral constraints on the structure near the receiver. With seismic arrays, the 3D kernels afford a powerful tool to use the Z/H ratios to obtain accurate and high-resolution Earth models.

  13. Optimal focal-plane restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1989-01-01

    Image restoration can be implemented efficiently by calculating the convolution of the digital image and a small kernel during image acquisition. Processing the image in the focal-plane in this way requires less computation than traditional Fourier-transform-based techniques such as the Wiener filter and constrained least-squares filter. Here, the values of the convolution kernel that yield the restoration with minimum expected mean-square error are determined using a frequency analysis of the end-to-end imaging system. This development accounts for constraints on the size and shape of the spatial kernel and all the components of the imaging system. Simulation results indicate the technique is effective and efficient.
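    As a loose illustration of designing a small restoration kernel, the sketch below crops the inverse FFT of a Wiener-type frequency response to a 5×5 support and applies it by convolution. The assumed Gaussian MTF, the noise-to-signal ratio, and the cropping shortcut are stand-ins for the constrained minimum mean-square-error derivation described in the abstract.

    ```python
    # Loose sketch: build a small (5x5) restoration kernel by cropping the inverse
    # FFT of a Wiener-type frequency response, then apply it by convolution. The
    # MTF, noise level, and cropping step are assumptions; the paper instead
    # derives the kernel from a constrained minimum-MSE frequency analysis.
    import numpy as np
    from scipy.signal import convolve2d

    n = 64
    fx, fy = np.fft.fftfreq(n), np.fft.fftfreq(n)
    FX, FY = np.meshgrid(fx, fy, indexing="ij")

    mtf = np.exp(-(FX**2 + FY**2) / (2 * 0.15**2))       # assumed system MTF
    nsr = 0.01                                            # assumed noise-to-signal ratio
    wiener = mtf / (mtf**2 + nsr)                         # Wiener frequency response

    full_kernel = np.fft.fftshift(np.real(np.fft.ifft2(wiener)))
    c = n // 2
    small_kernel = full_kernel[c - 2:c + 3, c - 2:c + 3]  # constrain support to 5x5
    small_kernel /= small_kernel.sum()

    image = np.random.default_rng(1).random((64, 64))     # placeholder acquired image
    restored = convolve2d(image, small_kernel, mode="same", boundary="symm")
    print(small_kernel.shape, restored.shape)
    ```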

  14. A fast GPU-based Monte Carlo simulation of proton transport with detailed modeling of nonelastic interactions.

    PubMed

    Wan Chan Tseung, H; Ma, J; Beltran, C

    2015-06-01

    Very fast Monte Carlo (MC) simulations of proton transport have been implemented recently on graphics processing units (GPUs). However, these MCs usually use simplified models for nonelastic proton-nucleus interactions. Our primary goal is to build a GPU-based proton transport MC with detailed modeling of elastic and nonelastic proton-nucleus collisions. Using the CUDA framework, the authors implemented GPU kernels for the following tasks: (1) simulation of beam spots from our possible scanning nozzle configurations, (2) proton propagation through CT geometry, taking into account nuclear elastic scattering, multiple scattering, and energy loss straggling, (3) modeling of the intranuclear cascade stage of nonelastic interactions when they occur, (4) simulation of nuclear evaporation, and (5) statistical error estimates on the dose. To validate our MC, the authors performed (1) secondary particle yield calculations in proton collisions with therapeutically relevant nuclei, (2) dose calculations in homogeneous phantoms, (3) recalculations of complex head and neck treatment plans from a commercially available treatment planning system, and compared with Geant4.9.6p2/TOPAS. Yields, energy, and angular distributions of secondaries from nonelastic collisions on various nuclei are in good agreement with the Geant4.9.6p2 Bertini and Binary cascade models. The 3D-gamma pass rate at 2%-2 mm for treatment plan simulations is typically 98%. The net computational time on a NVIDIA GTX680 card, including all CPU-GPU data transfers, is ∼ 20 s for 1 × 10⁷ proton histories. Our GPU-based MC is the first of its kind to include a detailed nuclear model to handle nonelastic interactions of protons with any nucleus. Dosimetric calculations are in very good agreement with Geant4.9.6p2/TOPAS. Our MC is being integrated into a framework to perform fast routine clinical QA of pencil-beam based treatment plans, and is being used as the dose calculation engine in a clinically applicable MC-based IMPT treatment planning system. The detailed nuclear modeling will allow us to perform very fast linear energy transfer and neutron dose estimates on the GPU.

  15. Measurement of the rates of oxindole-3-acetic acid turnover, and indole-3-acetic acid oxidation in Zea mays seedlings

    NASA Technical Reports Server (NTRS)

    Nonhebel, H. M.; Bandurski, R. S. (Principal Investigator)

    1986-01-01

    Oxindole-3-acetic acid is the principal catabolite of indole-3-acetic acid in Zea mays seedlings. In this paper, measurements of the turnover of oxindole-3-acetic acid are presented and used to calculate the rate of indole-3-acetic acid oxidation. [3H]Oxindole-3-acetic acid was applied to the endosperm of Zea mays seedlings and allowed to equilibrate for 24 h before the start of the experiment. The subsequent decrease in its specific activity was used to calculate the turnover rate. The average half-life of oxindole-3-acetic acid was found to be 30 h in the shoots and 35 h in the kernels. Using previously published values of the pool sizes of oxindole-3-acetic acid in shoots and kernels from seedlings of the same age and variety, grown under the same conditions, the rate of indole-3-acetic acid oxidation was calculated to be 1.1 pmol plant-1 h-1 in the shoots and 7.1 pmol plant-1 h-1 in the kernels.
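    A small hedged arithmetic check of the relation used above: with first-order turnover, oxidation rate = pool size × ln(2) / half-life. The pool sizes printed below are inferred from the reported half-lives and oxidation rates; they are not the previously published pool-size values themselves.

    ```python
    # Hedged arithmetic check: with first-order turnover,
    #   oxidation rate = pool size * ln(2) / t_half.
    # The pool sizes below are inferred from the reported half-lives and rates;
    # they are not the previously published pool-size values themselves.
    import math

    for tissue, t_half_h, rate_pmol_per_h in [("shoot", 30.0, 1.1), ("kernel", 35.0, 7.1)]:
        k = math.log(2) / t_half_h                 # first-order turnover constant (1/h)
        implied_pool = rate_pmol_per_h / k         # pmol per plant
        print(f"{tissue}: k = {k:.4f} 1/h, implied OxIAA pool ~ {implied_pool:.0f} pmol")
    ```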

  16. 2D convolution kernels of ionization chambers used for photon-beam dosimetry in magnetic fields: the advantage of small over large chamber dimensions

    NASA Astrophysics Data System (ADS)

    Khee Looe, Hui; Delfs, Björn; Poppinga, Daniela; Harder, Dietrich; Poppe, Björn

    2018-04-01

    This study aims at developing an optimization strategy for photon-beam dosimetry in magnetic fields using ionization chambers. Similar to the familiar case in the absence of a magnetic field, detectors should be selected under the criterion that their measured 2D signal profiles M(x,y) approximate the absorbed dose to water profiles D(x,y) as closely as possible. Since the conversion of D(x,y) into M(x,y) is known as the convolution with the ‘lateral dose response function’ K(x-ξ, y-η) of the detector, the ideal detector would be characterized by a vanishing magnetic field dependence of this convolution kernel (Looe et al 2017b Phys. Med. Biol. 62 5131–48). The idea of the present study is to find out, by Monte Carlo simulation of two commercial ionization chambers of different size, whether the smaller chamber dimensions would be instrumental to approach this aim. As typical examples, the lateral dose response functions in the presence and absence of a magnetic field have been Monte-Carlo modeled for the new commercial ionization chambers PTW 31021 (‘Semiflex 3D’, internal radius 2.4 mm) and PTW 31022 (‘PinPoint 3D’, internal radius 1.45 mm), which are both available with calibration factors. The Monte-Carlo model of the ionization chambers has been adjusted to account for the presence of the non-collecting part of the air volume near the guard ring. The Monte-Carlo results allow a comparison between the widths of the magnetic field dependent photon fluence response function K_M(x-ξ, y-η) and of the lateral dose response function K(x-ξ, y-η) of the two chambers with the width of the dose deposition kernel K_D(x-ξ, y-η). The simulated dose and chamber signal profiles show that in small photon fields and in the presence of a 1.5 T field the distortion of the chamber signal profile compared with the true dose profile is weakest for the smaller chamber. The dose responses of both chambers at large field size are shown to be altered by not more than 2% in magnetic fields up to 1.5 T for all three investigated chamber orientations.
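    A minimal one-dimensional sketch of the convolution relation M = K * D between the true dose profile D and the measured chamber signal M is given below. The Gaussian width used for the lateral dose response function is an illustrative assumption, not a Monte-Carlo-derived kernel for the chambers discussed above.

    ```python
    # Minimal 1D sketch of the convolution relation M = K * D between the true dose
    # profile D(x) and the chamber signal M(x). The Gaussian kernel width is an
    # illustrative assumption, not a Monte-Carlo-derived response function.
    import numpy as np

    x = np.linspace(-30.0, 30.0, 601)                  # lateral position (mm)
    dx = x[1] - x[0]

    dose = ((x > -10.0) & (x < 10.0)).astype(float)    # idealized 20 mm wide dose profile

    sigma = 2.0                                        # assumed kernel width (mm)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum() * dx                        # normalize to unit area

    signal = np.convolve(dose, kernel, mode="same") * dx

    w20 = np.ptp(x[signal >= 0.2 * signal.max()])      # width at 20% of maximum
    w80 = np.ptp(x[signal >= 0.8 * signal.max()])      # width at 80% of maximum
    print("per-side 80-20%% penumbra of the simulated signal: %.2f mm" % ((w20 - w80) / 2))
    ```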

  17. Determination of aflatoxin risk components for in-shell Brazil nuts.

    PubMed

    Vargas, E A; dos Santos, E A; Whitaker, T B; Slate, A B

    2011-09-01

    A study was conducted on the risk from aflatoxins associated with the kernels and shells of Brazil nuts. Samples were collected from processing plants in Amazonia, Brazil. A total of 54 test samples (40 kg) were taken from 13 in-shell Brazil nut lots ready for market. Each in-shell sample was shelled, and the kernels and shells were sorted into five fractions: good kernels, rotten kernels, good shells with kernel residue, good shells without kernel residue, and rotten shells, and analysed for aflatoxins. The kernel:shell mass ratio (w/w) was 50.2/49.8%. The Brazil nut shell was found to be contaminated with aflatoxin. Rotten nuts were found to be a high-risk fraction for aflatoxin in in-shell Brazil nut lots. Rotten nuts contributed only 4.2% of the sample mass (kg), but contributed 76.6% of the total aflatoxin mass (µg) in the in-shell test sample. The highest correlations were found between the aflatoxin concentration in in-shell Brazil nut samples and the aflatoxin concentration in all defective fractions (R² = 0.97). The aflatoxin mass of all defective fractions (R² = 0.90) as well as that of the rotten nuts (R² = 0.88) was also strongly correlated with the aflatoxin concentration of the in-shell test samples. Process factors of 0.17, 0.16, and 0.24 were calculated to estimate the aflatoxin concentration in the good kernels (edible) and good nuts from the aflatoxin concentration measured in the in-shell test sample and in all kernels, respectively. © 2011 Taylor & Francis
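    The sketch below shows how a process factor of this kind would be applied in practice: the estimated concentration in the edible fraction is the process factor times the concentration measured in the in-shell test sample. The measured value used here is hypothetical.

    ```python
    # Hedged arithmetic sketch of applying a process factor: estimated concentration
    # in good (edible) kernels = process factor * concentration measured in the
    # in-shell test sample. The measured value below is hypothetical.
    in_shell_aflatoxin_ug_per_kg = 40.0      # hypothetical in-shell test result (ug/kg)
    process_factor_good_kernels = 0.17       # factor reported for the edible fraction

    estimated_edible = process_factor_good_kernels * in_shell_aflatoxin_ug_per_kg
    print(f"estimated aflatoxin in good kernels: {estimated_edible:.1f} ug/kg")
    ```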

  18. Fred: a GPU-accelerated fast-Monte Carlo code for rapid treatment plan recalculation in ion beam therapy

    NASA Astrophysics Data System (ADS)

    Schiavi, A.; Senzacqua, M.; Pioli, S.; Mairani, A.; Magro, G.; Molinelli, S.; Ciocca, M.; Battistoni, G.; Patera, V.

    2017-09-01

    Ion beam therapy is a rapidly growing technique for tumor radiation therapy. Ions allow for a high dose deposition in the tumor region, while sparing the surrounding healthy tissue. For this reason, the highest possible accuracy in the calculation of dose and its spatial distribution is required in treatment planning. On one hand, commonly used treatment planning software solutions adopt a simplified beam-body interaction model by remapping pre-calculated dose distributions into a 3D water-equivalent representation of the patient morphology. On the other hand, Monte Carlo (MC) simulations, which explicitly take into account all the details in the interaction of particles with human tissues, are considered to be the most reliable tool to address the complexity of mixed field irradiation in a heterogeneous environment. However, full MC calculations are not routinely used in clinical practice because they typically demand substantial computational resources. Therefore MC simulations are usually only used to check treatment plans for a restricted number of difficult cases. The advent of general-purpose programmable GPU cards prompted the development of trimmed-down MC-based dose engines which can significantly reduce the time needed to recalculate a treatment plan with respect to standard MC codes on CPU hardware. In this work, we report on the development of fred, a new MC simulation platform for treatment planning in ion beam therapy. The code can transport particles through a 3D voxel grid using a class II MC algorithm. Both primary and secondary particles are tracked and their energy deposition is scored along the trajectory. Effective models for particle-medium interaction have been implemented, balancing accuracy in dose deposition with computational cost. Currently, the most refined module is the transport of proton beams in water: single pencil beam dose-depth distributions obtained with fred agree with those produced by standard MC codes within 1-2% of the Bragg peak in the therapeutic energy range. A comparison with measurements taken at the CNAO treatment center shows that the lateral dose tails are reproduced within 2% in the field size factor test up to 20 cm. The tracing kernel can run on GPU hardware, achieving 10 million primaries per second on a single card. This performance allows one to recalculate a proton treatment plan at 1% of the total particles in just a few minutes.

  19. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images*

    PubMed Central

    Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G

    2014-01-01

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed to the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10⁸ primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3–4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image-based dosimetry in nuclear medicine. PMID:24200697
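    For orientation, the following sketch shows the kind of voxel-kernel convolution the direct Monte Carlo results were compared against: a dose map obtained by convolving a cumulated-activity map with a voxel S-value kernel. The 3×3×3 kernel values and the activity map are placeholders, not published voxel S-values.

    ```python
    # Hedged sketch of the voxel-kernel convolution used as a comparison point:
    # dose map = (cumulated activity map) convolved with a voxel S-value kernel.
    # The kernel values and activity map are placeholders, not published S-values.
    import numpy as np
    from scipy.signal import fftconvolve

    activity = np.zeros((32, 32, 32))
    activity[16, 16, 16] = 1.0e6                 # hypothetical cumulated activity (Bq s)

    s_kernel = np.full((3, 3, 3), 1.0e-11)       # placeholder voxel S-values (Gy per Bq s)
    s_kernel[1, 1, 1] = 1.0e-9                   # self-dose voxel dominates

    dose = fftconvolve(activity, s_kernel, mode="same")
    print("dose at the source voxel: %.3e Gy" % dose[16, 16, 16])
    ```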

  20. The effect of high concentrations of glufosinate ammonium on the yield components of transgenic spring wheat (Triticum aestivum L.) constitutively expressing the bar gene.

    PubMed

    Áy, Zoltán; Mihály, Róbert; Cserháti, Mátyás; Kótai, Éva; Pauk, János

    2012-01-01

    We present an experiment done on a bar(+) wheat line treated with 14 different concentrations of glufosinate ammonium, the active component of nonselective herbicides, during seed germination in a closed experimental system. Yield components such as the number of spikes per plant, number of grains per spike, thousand kernel weight, and yield per plant were thoroughly analysed and statistically evaluated after harvesting. We found that a concentration of glufosinate ammonium 5000 times the lethal dose was not enough to inhibit the germination of transgenic plants expressing the bar gene. Extremely high concentrations of glufosinate ammonium caused a bushy phenotype and significantly lower numbers of grains per spike and thousand kernel weights. Concerning productivity, we observed that concentrations of glufosinate ammonium 64 times the lethal dose did not lead to yield depression. Our results draw attention to the possibilities implied in the transgenic approaches.

  1. The nonuniformity of antibody distribution in the kidney and its influence on dosimetry.

    PubMed

    Flynn, Aiden A; Pedley, R Barbara; Green, Alan J; Dearling, Jason L; El-Emir, Ethaar; Boxer, Geoffrey M; Boden, Robert; Begent, Richard H J

    2003-02-01

    The therapeutic efficacy of radiolabeled antibody fragments can be limited by nephrotoxicity, particularly when the kidney is the major route of extraction from the circulation. Conventional dose estimates in kidney assume uniform dose deposition, but we have shown increased antibody localization in the cortex after glomerular filtration. The purpose of this study was to measure the radioactivity in cortex relative to medulla for a range of antibodies and to assess the validity of the assumption of uniformity of dose deposition in the whole kidney and in the cortex for these antibodies with a range of radionuclides. Storage phosphor plate technology (radioluminography) was used to acquire images of the distributions of a range of antibodies of various sizes, labeled with 125I, in kidney sections. This allowed the calculation of the antibody concentration in the cortex relative to the medulla. Beta-particle point dose kernels were then used to generate the dose-rate distributions from 14C, 131I, 186Re, 32P and 90Y. The correlation between the actual dose-rate distribution and the corresponding distribution calculated assuming uniform antibody distribution throughout the kidney was used to test the validity of estimating dose by assuming uniformity in the kidney and in the cortex. There was a strong inverse relationship between the ratio of the radioactivity in the cortex relative to that in the medulla and the antibody size. The nonuniformity of dose deposition was greatest with the smallest antibody fragments but became more uniform as the range of the emissions from the radionuclide increased. Furthermore, there was a strong correlation between the actual dose-rate distribution and the distribution when assuming a uniform source in the kidney for intact antibodies along with medium- to long-range radionuclides, but there was no correlation for small antibody fragments with any radioisotope or for short-range radionuclides with any antibody. However, when the cortex was separated from the whole kidney, the correlation between the actual dose-rate distribution and the assumed dose-rate distribution, if the source was uniform, increased significantly. During radioimmunotherapy, the extent of nonuniformity of dose deposition in the kidney depends on the properties of the antibody and radionuclide. For dosimetry estimates, the cortex should be taken as a separate source region when the radiopharmaceutical is small enough to be filtered by the glomerulus.

  2. SU-E-T-104: An Examination of Dose in the Buildup and Build-Down Regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tome, W; Kuo, H; Phillips, J

    2015-06-15

    Purpose: To examine dose in the buildup and build-down regions and compare measurements made with various models and dosimeters. Methods: Dose was examined in a 30×30 cm² phantom of water-equivalent plastic with 10 cm of backscatter for various field sizes. Examination was performed with radiochromic film and optically-stimulated-luminescent-dosimeter (OSLD) chips, and compared against a plane-parallel chamber with a correction factor applied to approximate the response of an extrapolation chamber. For the build-down region, a correction factor to account for table absorption and chamber orientation in the posterior-anterior direction was applied. The measurement depths used for the film were halfway through their sensitive volumes, and a polynomial best-fit curve was used to determine the dose to their surfaces. This chamber was also compared with the dose expected in a clinical kernel-based computer model, and a clinical Boltzmann-transport-equation-based (BTE) computer model. The two models were also compared against each other for cases with air gaps in the buildup region. Results: Within 3 mm, all dosimeters and models agreed with the chamber within 10% for all field sizes. At the entrance surface, film differed in comparison with the chamber from +90% to +15%, the BTE-model by +140% to +3%, and the kernel-based model by +20% to −25%, decreasing with increasing field size. At the exit surface, film differed in comparison with the chamber from −10% to −15%, the BTE-model by −53% to −50%, and the kernel-based model by −55% to −57%, mostly independent of field size. Conclusion: The largest differences compared with the chamber were found at the surface for all field sizes. Differences decreased with increasing field size and increasing depth in phantom. Air gaps in the buildup region cause dose buildup to occur again post-gap, but the effect decreases with increasing phantom thickness prior to the gap.

  3. What Would a Graph Look Like in this Layout? A Machine Learning Approach to Large Graph Visualization.

    PubMed

    Kwon, Oh-Hyun; Crnovrsanin, Tarik; Ma, Kwan-Liu

    2018-01-01

    Using different methods for laying out a graph can lead to very different visual appearances, with which the viewer perceives different information. Selecting a "good" layout method is thus important for visualizing a graph. The selection can be highly subjective and dependent on the given task. A common approach to selecting a good layout is to use aesthetic criteria and visual inspection. However, fully calculating various layouts and their associated aesthetic metrics is computationally expensive. In this paper, we present a machine learning approach to large graph visualization based on computing the topological similarity of graphs using graph kernels. For a given graph, our approach can show what the graph would look like in different layouts and estimate their corresponding aesthetic metrics. An important contribution of our work is the development of a new framework to design graph kernels. Our experimental study shows that our estimation calculation is considerably faster than computing the actual layouts and their aesthetic metrics. Also, our graph kernels outperform the state-of-the-art ones in both time and accuracy. In addition, we conducted a user study to demonstrate that the topological similarity computed with our graph kernel matches perceptual similarity assessed by human users.

  4. Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment

    NASA Astrophysics Data System (ADS)

    Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty

    2017-12-01

    Support vector machine (SVM) is a popular classification method known to have strong generalization capabilities. SVM can solve classification and regression problems with either linear or nonlinear kernels. However, SVM also has a weakness: it is difficult to determine the optimal parameter values. SVM calculates the best linear separator on the input feature space according to the training data. To classify data which are non-linearly separable, SVM uses kernel tricks to transform the data into a linearly separable representation in a higher-dimensional feature space. The kernel trick uses various kinds of kernel functions, such as the linear, polynomial, radial basis function (RBF), and sigmoid kernels. Each function has parameters which affect the accuracy of SVM classification. To solve this problem, genetic algorithms are proposed as the search algorithm for optimal parameter values, thus increasing the best classification accuracy of the SVM. Data were taken from the UCI repository of machine learning databases: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms have been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of randomly selected kernel parameters. The best accuracy was improved over the baseline kernels (linear: 85.12%, polynomial: 81.76%, RBF: 77.22%, sigmoid: 78.70%). However, for bigger data sizes, this method is not practical because it takes a lot of time.
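    The sketch below tunes the RBF-kernel SVM parameters (C, gamma) on a synthetic placeholder dataset. A plain grid search is used as a simple stand-in for the genetic-algorithm search described in the abstract, and the data are not the Australian Credit Approval set.

    ```python
    # Hedged sketch: tuning RBF-kernel SVM parameters (C, gamma). A plain grid
    # search stands in for the genetic-algorithm search described above, and the
    # synthetic dataset stands in for the Australian Credit Approval data.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=400, n_features=14, random_state=0)

    param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
    search.fit(X, y)
    print("best parameters:", search.best_params_,
          "cross-validated accuracy:", round(search.best_score_, 3))
    ```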

  5. Development of radiation indicators to distinguish between irradiated and non-irradiated herbal medicines using HPLC and GC-MS.

    PubMed

    Kim, Min Jung; Ki, Hyeon A; Kim, Won Young; Pal, Sukdeb; Kim, Byeong Keun; Kang, Woo Suk; Song, Joon Myong

    2010-09-01

    The effects of high dose γ-irradiation on six herbal medicines were investigated using gas chromatography-mass spectrometry (GC/MS) and high-performance liquid chromatography (HPLC). Herbal medicines were irradiated at 0-50 kGy with a ⁶⁰Co irradiator. HPLC was used to quantify changes of major components including glycyrrhizin, cinnamic acid, poncirin, hesperidin, berberine, and amygdalin in licorice, cinnamon bark, poncirin immature fruit, citrus unshiu peel, coptis rhizome, and apricot kernel. No significant differences were found between gamma-irradiated and non-irradiated samples with regard to the amounts of glycyrrhizin, berberine, and amygdalin. However, the contents of cinnamic acid, poncirin, and hesperidin were increased after irradiation. Volatile compounds were analyzed by GC/MS. The relative proportion of ketone in licorice was diminished after irradiation. The relative amount of hydrocarbons in irradiated cinnamon bark and apricot kernel was higher than that in non-irradiated samples. Therefore, ketone in licorice and hydrocarbons in cinnamon bark and apricot kernel can be considered radiolytic markers. Three unsaturated hydrocarbons, i.e., 1,7,10-hexadecatriene, 6,9-heptadecadiene, and 8-heptadecene, were detected only in apricot kernels irradiated at 25 and 50 kGy. These three hydrocarbons could be used as radiolytic markers to distinguish between irradiated (>25 kGy) and non-irradiated apricot kernels.

  6. Calculation of electron and isotopes dose point kernels with fluka Monte Carlo code for dosimetry in nuclear medicine therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Botta, F.; Mairani, A.; Battistoni, G.

    Purpose: The calculation of patient-specific dose distribution can be achieved by Monte Carlo simulations or by analytical methods. In this study, the fluka Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, fluka has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by means of calculation of a representative parameter and comparison with reference data. The dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often the one. Methods: fluka DPKs have been calculated in both water and compact bone for monoenergetic electrons (from 10 keV to 3 MeV) and for beta emitting isotopes commonly used for therapy (⁸⁹Sr, ⁹⁰Y, ¹³¹I, ¹⁵³Sm, ¹⁷⁷Lu, ¹⁸⁶Re, and ¹⁸⁸Re). Point isotropic sources have been simulated at the center of a water (bone) sphere, and the deposited energy has been tallied in concentric shells. fluka outcomes have been compared to penelope v.2008 results, calculated in this study as well. Moreover, in the case of monoenergetic electrons in water, comparison with the data from the literature (etran, geant4, mcnpx) has been done. Maximum percentage differences within 0.8·R_CSDA and 0.9·R_CSDA for monoenergetic electrons (R_CSDA being the continuous slowing down approximation range) and within 0.8·X_90 and 0.9·X_90 for isotopes (X_90 being the radius of the sphere in which 90% of the emitted energy is absorbed) have been computed, together with the average percentage difference within 0.9·R_CSDA and 0.9·X_90 for electrons and isotopes, respectively. Results: Concerning monoenergetic electrons, within 0.8·R_CSDA (where 90%-97% of the particle energy is deposited), fluka and penelope agree mostly within 7%, except for 10 and 20 keV electrons (12% in water, 8.3% in bone). The discrepancies between fluka and the other codes are of the same order of magnitude as those observed when comparing the other codes among themselves, which can be attributed to the different simulation algorithms. When considering the beta spectra, the discrepancies are notably reduced: within 0.9·X_90, fluka and penelope differ by less than 1% in water and less than 2% in bone for any of the isotopes considered here. Complete data of fluka DPKs are given as Supplementary Material as a tool to perform dosimetry by analytical point kernel convolution. Conclusions: fluka provides reliable results when transporting electrons in the low energy range, proving to be an adequate tool for nuclear medicine dosimetry.

  7. Carbothermic Synthesis of 820 μm UN Kernels: Literature Review, Thermodynamics, Analysis, and Related Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindemer, Terrence; Voit, Stewart L; Silva, Chinthaka M

    2014-01-01

    The U.S. Department of Energy is considering a new nuclear fuel that would be less susceptible to ruptures during a loss-of-coolant accident. The fuel would consist of tristructural isotropic coated particles with large, dense uranium nitride (UN) kernels. This effort explores many factors involved in using gel-derived uranium oxide-carbon microspheres to make large UN kernels. Analysis of recent studies with sufficient experimental details is provided. Extensive thermodynamic calculations are used to predict carbon monoxide and other pressures for several different reactions that may be involved in conversion of uranium oxides and carbides to UN. Experimentally, the method for making the gel-derived microspheres is described. These were used in a microbalance with an attached mass spectrometer to determine details of carbothermic conversion in argon, nitrogen, or vacuum. A quantitative model is derived from experiments for vacuum conversion to a uranium oxide-carbide kernel.

  8. Nonparametric entropy estimation using kernel densities.

    PubMed

    Lake, Douglas E

    2009-01-01

    The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
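    As a hedged illustration of the quadratic (Rényi order-2) entropy mentioned above, the sketch below estimates it for a one-dimensional sample using the closed-form Gaussian-kernel "information potential". The Silverman-style bandwidth is a simple assumption, not the optimal small-sample choice discussed in the paper.

    ```python
    # Hedged sketch: quadratic (Renyi order-2) entropy of a 1D sample from a
    # Gaussian KDE, via the closed-form "information potential"
    #   (1/N^2) * sum_ij N(x_i - x_j; 0, 2 h^2).
    # The rule-of-thumb bandwidth is an assumption, not the optimal choice
    # discussed in the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)                        # placeholder sample (e.g., RR intervals)
    n = x.size
    h = 1.06 * x.std() * n ** (-1 / 5)              # rule-of-thumb bandwidth

    diff = x[:, None] - x[None, :]
    info_potential = np.mean(np.exp(-diff**2 / (4 * h**2)) / np.sqrt(4 * np.pi * h**2))
    print("quadratic entropy estimate:", round(-np.log(info_potential), 3))
    ```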

  9. A graphical approach to optimizing variable-kernel smoothing parameters for improved deformable registration of CT and cone beam CT images

    NASA Astrophysics Data System (ADS)

    Hart, Vern; Burrow, Damon; Li, X. Allen

    2017-08-01

    A systematic method is presented for determining optimal parameters in variable-kernel deformable image registration of cone beam CT and CT images, in order to improve accuracy and convergence for potential use in online adaptive radiotherapy. Assessed conditions included the noise constant (symmetric force demons), the kernel reduction rate, the kernel reduction percentage, and the kernel adjustment criteria. Four such parameters were tested in conjunction with reductions of 5, 10, 15, 20, 30, and 40%. Noise constants ranged from 1.0 to 1.9 for pelvic images in ten prostate cancer patients. A total of 516 tests were performed and assessed using the structural similarity index. Registration accuracy was plotted as a function of iteration number and a least-squares regression line was calculated, which implied an average improvement of 0.0236% per iteration. This baseline was used to determine if a given set of parameters under- or over-performed. The most accurate parameters within this range were applied to contoured images. The mean Dice similarity coefficient was calculated for bladder, prostate, and rectum with mean values of 98.26%, 97.58%, and 96.73%, respectively; corresponding to improvements of 2.3%, 9.8%, and 1.2% over previously reported values for the same organ contours. This graphical approach to registration analysis could aid in determining optimal parameters for Demons-based algorithms. It also establishes expectation values for convergence rates and could serve as an indicator of non-physical warping, which often occurred in cases  >0.6% from the regression line.

  10. Wilson Dslash Kernel From Lattice QCD Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joo, Balint; Smelyanskiy, Mikhail; Kalamkar, Dhiraj D.

    2015-07-01

    Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we will detail our work in optimizing the Wilson-Dslash kernels for Intel Xeon Phi, however, as we will show the technique gives excellent performance on regular Xeon Architecture as well.

  11. The Dispersal and Persistence of Invasive Marine Species

    NASA Astrophysics Data System (ADS)

    Glick, E. R.; Pringle, J.

    2007-12-01

    The spread of invasive marine species is a continuing problem throughout the world, though not entirely understood. Why do some species invade more easily than the rest? How are the range limits of these species set? Recent research (Byers & Pringle 2006, Pringle & Wares 2007) has produced retention criteria that determine whether a coastal species with a benthic adult stage and planktonic larvae can be retained within its range and invade in the direction opposite that of the mean current experienced by the larvae (i.e. upstream). These results, however, are only accurate for Gaussian dispersal kernels. For kernels whose kurtosis differs from a Gaussian's, the retention criterion becomes increasingly inaccurate as the mean current increases. Using recent results of Lutscher (2006), we find an improved retention criterion which is much more accurate for non-Gaussian dispersal kernels. The importance of considering non-Gaussian kernels is illustrated for a number of commonly used dispersal kernels, and the relevance of these calculations is illustrated by considering the northward limit of invasion of Hemigrapsus sanguineus, an important invader in the Gulf of Maine.

  12. Geographically weighted regression model on poverty indicator

    NASA Astrophysics Data System (ADS)

    Slamet, I.; Nugroho, N. F. T. A.; Muslich

    2017-12-01

    In this research, we applied geographically weighted regression (GWR) to analyze poverty in Central Java. We consider the Gaussian kernel as the weighting function. The GWR uses the diagonal matrix resulting from evaluating the Gaussian kernel function as the weight matrix in the regression model. The kernel weights are used to handle spatial effects in the data so that a model can be obtained for each location. The purpose of this paper is to model the poverty percentage data of Central Java province using GWR with a Gaussian kernel weighting function and to determine the influencing factors in each regency/city of Central Java province. Based on the research, we obtained a geographically weighted regression model with a Gaussian kernel weighting function for the poverty percentage data of Central Java province. We found that the percentage of the population working as farmers, the population growth rate, the percentage of households with regular sanitation, and the number of BPJS beneficiaries are the variables that affect the percentage of poverty in Central Java province. The coefficient of determination R² is 68.64%. There are two categories of regency/city, which are influenced by different sets of significant factors.
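    A minimal sketch of one local GWR fit is shown below: Gaussian kernel weights computed from distances to a target location feed a weighted least-squares solve. The coordinates, covariates, and bandwidth are synthetic placeholders, not the Central Java poverty data.

    ```python
    # Minimal sketch of one local GWR fit: Gaussian kernel weights from distances
    # to a target location feed a weighted least-squares solve. Coordinates,
    # covariates, and bandwidth are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 100, size=(35, 2))                    # location coordinates (km)
    X = np.column_stack([np.ones(35), rng.normal(size=(35, 3))])  # intercept + 3 covariates
    y = X @ np.array([5.0, 1.5, -2.0, 0.7]) + rng.normal(scale=0.5, size=35)

    def gwr_coefficients(target_xy, bandwidth=30.0):
        d = np.linalg.norm(coords - target_xy, axis=1)
        w = np.exp(-0.5 * (d / bandwidth) ** 2)           # Gaussian kernel weights
        W = np.diag(w)
        return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # local weighted least squares

    print("local coefficients at the first location:", gwr_coefficients(coords[0]).round(3))
    ```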

  13. Regulation of maize kernel weight and carbohydrate metabolism by abscisic acid applied at the early and middle post-pollination stages in vitro.

    PubMed

    Zhang, Li; Li, Xu-Hui; Gao, Zhen; Shen, Si; Liang, Xiao-Gui; Zhao, Xue; Lin, Shan; Zhou, Shun-Li

    2017-09-01

    Abscisic acid (ABA) accumulates in plants under drought stress, but views on the role of ABA in kernel formation and abortion are not unified. The response of the developing maize kernel to exogenous ABA was investigated by excising kernels from cob sections at four days after pollination and culturing them in vitro with different concentrations of ABA (0, 5, 10, 100 μM). When ABA was applied at the early post-pollination stage (EPPS), significant weight loss was observed at the high ABA concentration (100 μM), which could be attributed to jointly affected sink capacity and activity. Endosperm cells and starch granules were decreased significantly with the high concentration, and ABA inhibited the activities of soluble acid invertase and acid cell wall invertase, together with earlier attainment of peak values. When ABA was applied at the middle post-pollination stage (MPPS), kernel weight was observably reduced with the high concentration and mildly increased with the low concentration, which was regulated through sink activity. The inhibitory effect of the high concentration and the mild stimulatory effect of the low concentration on sucrose synthase and starch synthase activities were noted, but a peak level of ADP-glucose pyrophosphorylase (AGPase) was stimulated in all ABA treatments. Interestingly, AGPase peak values were advanced by the low concentration and postponed by the high concentration. In addition, compared with the control, the weights in the low-concentration ABA treatments were not significantly different at either stage, whereas the weight loss from the high concentration applied at the EPPS was considerably greater than that at the MPPS, but neither led to kernel abortion. The temporal- and dose-dependent impacts of ABA reveal a complex process of maize kernel growth and development. Copyright © 2017 Elsevier GmbH. All rights reserved.

  14. Multiple kernel learning in protein-protein interaction extraction from biomedical literature.

    PubMed

    Yang, Zhihao; Tang, Nan; Zhang, Xiao; Lin, Hongfei; Li, Yanpeng; Yang, Zhiwei

    2011-03-01

    Knowledge about protein-protein interactions (PPIs) unveils the molecular mechanisms of biological processes. The volume and content of published biomedical literature on protein interactions is expanding rapidly, making it increasingly difficult for interaction database administrators, responsible for content input and maintenance, to detect and manually update protein interaction information. The objective of this work is to develop an effective approach to automatic extraction of PPI information from biomedical literature. We present a weighted multiple kernel learning-based approach for automatic PPI extraction from biomedical literature. The approach combines the following kernels: feature-based, tree, graph, and part-of-speech (POS) path. In particular, we extend the shortest path-enclosed tree (SPT) and dependency path tree to capture richer contextual information. Our experimental results show that the combination of SPT and dependency path tree extensions contributes to an improvement in performance of almost 0.7 percentage units in F-score and 2 percentage units in area under the receiver operating characteristics curve (AUC). Combining two or more appropriately weighted individual kernels further improves the performance. In both individual-corpus and cross-corpus evaluations, our combined kernel achieves state-of-the-art performance with respect to comparable evaluations, with 64.41% F-score and 88.46% AUC on the AImed corpus. Because different kernels calculate the similarity between two sentences from different aspects, our combined kernel can reduce the risk of missing important features. More specifically, we use a weighted linear combination of individual kernels instead of assigning the same weight to each individual kernel, thus allowing each kernel to contribute incrementally to the performance improvement. In addition, the SPT and dependency path tree extensions can improve the performance by including richer context information. Copyright © 2010 Elsevier B.V. All rights reserved.
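    The core idea of the weighted combination can be sketched as follows: a combined kernel matrix is a weighted linear sum of individual kernel matrices. The toy feature vectors and the fixed weights below are illustrative; the paper learns the weights and uses richer tree, graph, and POS-path kernels.

    ```python
    # Hedged sketch of the core idea: a combined kernel matrix as a weighted linear
    # sum of individual kernel matrices. Feature vectors and weights are toy
    # placeholders; the paper learns the weights and uses richer kernels.
    import numpy as np

    rng = np.random.default_rng(0)
    X_feat = rng.normal(size=(6, 10))                 # toy sentence feature vectors

    k_linear = X_feat @ X_feat.T
    sq_dists = np.sum((X_feat[:, None, :] - X_feat[None, :, :]) ** 2, axis=-1)
    k_rbf = np.exp(-0.5 * sq_dists)

    weights = {"linear": 0.4, "rbf": 0.6}             # assumed, not learned, weights
    k_combined = weights["linear"] * k_linear + weights["rbf"] * k_rbf
    print("combined kernel matrix shape:", k_combined.shape)
    ```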

  15. SU-E-T-378: Evaluation of An Analytical Model for the Inter-Seed Attenuation Effect in 103-Pd Multi-Seed Implant Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Safigholi, H; Soliman, A; Song, W

    Purpose: Brachytherapy treatment planning systems based on the TG-43 protocol calculate the dose in water and neglect the heterogeneity effect of seeds in multi-seed implant brachytherapy. In this research, the accuracy of a novel analytical model that we propose for the inter-seed attenuation (ISA) effect for the 103-Pd seed model is evaluated. Methods: In the analytical model, the dose perturbation due to the ISA effect for each seed in an LDR multi-seed implant of 103-Pd is calculated by assuming that the seed of interest is active and the other surrounding seeds are inactive. The cumulative dosimetric effect of all seeds is then summed using the superposition principle. The model is based on pre-computed Monte Carlo (MC) 3D kernels of the dose perturbations caused by the ISA effect. The cumulative ISA effect due to multiple surrounding seeds is obtained by a simple multiplication of the individual ISA effect of each seed, which is determined by the distance from the seed of interest. This novel algorithm is then compared with full MC water-based simulations (FMCW). Results: The results show that the dose perturbation model we propose is in excellent agreement with the FMCW values for a case with three seeds separated by 1 cm. The average difference between the model and the FMCW simulations was less than 8%±2%. Conclusion: Using the proposed novel analytical ISA effect model, one could expedite the corrections for the ISA dose perturbation effects during permanent-seed 103-Pd brachytherapy planning with minimal increase in time, since the model is based on multiplications and superposition. This model can be applied, in principle, to any other brachytherapy seeds. Further work is necessary to validate this model on more complicated geometries.
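    The sketch below illustrates the multiplicative superposition idea described above: the cumulative inter-seed attenuation at a point is approximated as the product of per-seed perturbation factors looked up by distance, applied to the TG-43 dose. The distance-to-factor table is a made-up placeholder, not the Monte-Carlo-derived 3D kernel.

    ```python
    # Hedged sketch of the multiplicative superposition idea: the cumulative
    # inter-seed attenuation at a point is the product of per-seed perturbation
    # factors looked up by distance. The lookup table is a made-up placeholder,
    # not the Monte-Carlo-derived 3D kernel.
    import numpy as np

    def perturbation_factor(distance_cm):
        """Placeholder lookup: dose perturbation by one inactive seed vs. distance."""
        return np.interp(distance_cm, [0.5, 1.0, 2.0, 5.0], [0.93, 0.96, 0.98, 1.00])

    tg43_dose = 1.0                                   # relative TG-43 dose at the point
    distances_to_other_seeds = [1.0, 1.4, 2.2]        # cm, hypothetical implant geometry

    isa_factor = np.prod([perturbation_factor(d) for d in distances_to_other_seeds])
    print("dose corrected for inter-seed attenuation:", round(tg43_dose * isa_factor, 4))
    ```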

  16. Full dose reduction potential of statistical iterative reconstruction for head CT protocols in a predominantly pediatric population

    PubMed Central

    Mirro, Amy E.; Brady, Samuel L.; Kaufman, Robert A.

    2016-01-01

    Purpose To implement the maximum level of statistical iterative reconstruction that can be used to establish dose-reduced head CT protocols in a primarily pediatric population. Methods Select head examinations (brain, orbits, sinus, maxilla and temporal bones) were investigated. Dose-reduced head protocols using an adaptive statistical iterative reconstruction (ASiR) were compared for image quality with the original filtered back projection (FBP) reconstructed protocols in phantom using the following metrics: image noise frequency (change in perceived appearance of noise texture), image noise magnitude, contrast-to-noise ratio (CNR), and spatial resolution. Dose reduction estimates were based on computed tomography dose index (CTDIvol) values. Patient CTDIvol and image noise magnitude were assessed in 737 examinations acquired before and after dose reduction. Results Image noise texture was acceptable up to 60% ASiR for the Soft reconstruction kernel (at both 100 and 120 kVp), and up to 40% ASiR for the Standard reconstruction kernel. Implementation of 40% and 60% ASiR led to an average reduction in CTDIvol of 43% for brain, 41% for orbits, 30% for maxilla, 43% for sinus, and 42% for temporal bone protocols for patients between 1 month and 26 years, while maintaining an average noise magnitude difference of 0.1% (range: −3% to 5%), improving CNR of low contrast soft tissue targets, and improving spatial resolution of high contrast bony anatomy, as compared to FBP. Conclusion This study demonstrates a methodology for maximizing patient dose reduction while maintaining image quality using statistical iterative reconstruction for a primarily pediatric population undergoing head CT examination. PMID:27056425

  17. Radiobiological characterization of post-lumpectomy focal brachytherapy with lipid nanoparticle-carried radionuclides

    NASA Astrophysics Data System (ADS)

    Hrycushko, Brian A.; Gutierrez, Alonso N.; Goins, Beth; Yan, Weiqiang; Phillips, William T.; Otto, Pamela M.; Bao, Ande

    2011-02-01

    Post-operative radiotherapy has commonly been used for early stage breast cancer to treat residual disease. The primary objective of this work was to characterize, through dosimetric and radiobiological modeling, a novel focal brachytherapy technique which uses direct intracavitary infusion of β-emitting radionuclides (186Re/188Re) carried by lipid nanoparticles (liposomes). Absorbed dose calculations were performed for a spherical lumpectomy cavity with a uniformly injected activity distribution using a dose point kernel convolution technique. Radiobiological indices were used to relate predicted therapy outcome and normal tissue complication of this technique with equivalent external beam radiotherapy treatment regimens. Modeled stromal damage was used as a measure of the inhibition of the stimulatory effect on tumor growth driven by the wound healing response. A sample treatment plan delivering 50 Gy at a therapeutic range of 2.0 mm for 186Re-liposomes and 5.0 mm for 188Re-liposomes takes advantage of the dose delivery characteristics of the β-emissions, providing significant EUD (58.2 Gy and 72.5 Gy for 186Re and 188Re, respectively) with a minimal NTCP (0.046%) of the healthy ipsilateral breast. Modeling of kidney BED and ipsilateral breast NTCP showed that large injected activity concentrations of both radionuclides could be safely administered without significant complications.
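    The dose-point-kernel convolution used here can be illustrated with a brute-force sketch (the radial kernel, grid, and activity values are placeholders, not the 186Re/188Re kernels used in the study):

```python
import numpy as np

def dose_from_uniform_cavity(activity_conc, cavity_radius_cm, kernel, voxel_centers, voxel_volume):
    """Convolve a uniform activity distribution with a radial dose point kernel.

    activity_conc    : cumulated activity per unit volume inside the cavity
    cavity_radius_cm : radius of the spherical lumpectomy cavity, centered at the origin
    kernel           : callable k(r) giving absorbed dose per unit cumulated
                       activity at distance r from a point source
    voxel_centers    : (N, 3) array of voxel-center coordinates (cm)
    voxel_volume     : volume of one voxel (cm^3)
    """
    radii = np.linalg.norm(voxel_centers, axis=1)
    source_voxels = voxel_centers[radii <= cavity_radius_cm]
    dose = np.zeros(len(voxel_centers))
    # Brute-force superposition of point-source contributions (O(N^2));
    # a production implementation would use an FFT-based convolution.
    for src in source_voxels:
        r = np.linalg.norm(voxel_centers - src, axis=1)
        dose += activity_conc * voxel_volume * kernel(np.maximum(r, 1e-4))
    return dose
```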

  18. Assessing opportunities for physical activity in the built environment of children: interrelation between kernel density and neighborhood scale.

    PubMed

    Buck, Christoph; Kneib, Thomas; Tkaczick, Tobias; Konstabel, Kenn; Pigeot, Iris

    2015-12-22

    Built environment studies provide broad evidence that urban characteristics influence physical activity (PA). However, findings are still difficult to compare, due to inconsistent measures assessing urban point characteristics and varying definitions of spatial scale. Both were found to influence the strength of the association between the built environment and PA. We simultaneously evaluated the effect of kernel approaches and network-distances to investigate the association between urban characteristics and physical activity depending on spatial scale and intensity measure. We assessed urban measures of point characteristics such as intersections, public transit stations, and public open spaces in ego-centered network-dependent neighborhoods based on geographical data of one German study region of the IDEFICS study. We calculated point intensities using the simple intensity measure and kernel approaches based on fixed bandwidths, cross-validated bandwidths with isotropic and anisotropic kernel functions, and adaptive bandwidths that adjust for residential density. We distinguished six network-distances from 500 m up to 2 km to calculate each intensity measure. A log-gamma regression model was used to investigate the effect of each urban measure on moderate-to-vigorous physical activity (MVPA) of 400 children aged 2 to 9.9 years who participated in the IDEFICS study. Models were stratified by sex and age groups, i.e. pre-school children (2 to <6 years) and school children (6-9.9 years), and were adjusted for age, body mass index (BMI), education and safety concerns of parents, season and valid wear time of accelerometers. The association between intensity measures and MVPA differed strongly by network-distance, with stronger effects found for larger network-distances. Simple intensity revealed smaller effect estimates and smaller goodness-of-fit compared to kernel approaches. The smallest variation in effect estimates over network-distances was found for kernel intensity measures based on isotropic and anisotropic cross-validated bandwidth selection. We found a strong variation in the association between the built environment and PA of children based on the choice of intensity measure and network-distance. Kernel intensity measures provided stable results over various scales and improved the assessment compared to the simple intensity measure. Considering different spatial scales and kernel intensity methods might reduce methodological limitations in assessing opportunities for PA in the built environment.
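    For illustration, a fixed-bandwidth isotropic Gaussian kernel intensity of point features around a single residence might be computed as sketched below (straight-line distance is used here for simplicity, whereas the study used network distances and also cross-validated and adaptive bandwidths):

```python
import numpy as np

def kernel_intensity(resident_xy, feature_xy, bandwidth, neighborhood_dist):
    """Gaussian-kernel intensity of urban point features around one residence.

    resident_xy       : (2,) coordinates of the ego-centered residence
    feature_xy        : (M, 2) coordinates of features (intersections, transit stops, ...)
    bandwidth         : fixed kernel bandwidth h (same units as the coordinates)
    neighborhood_dist : spatial scale of the neighborhood (e.g. 500 m up to 2 km)
    """
    d = np.linalg.norm(feature_xy - resident_xy, axis=1)
    d = d[d <= neighborhood_dist]
    # The simple intensity would just be len(d) divided by the neighborhood area;
    # the kernel intensity instead down-weights features far from the residence.
    weights = np.exp(-0.5 * (d / bandwidth) ** 2) / (2.0 * np.pi * bandwidth ** 2)
    return weights.sum()
```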

  19. Optimized Orthovoltage Stereotactic Radiosurgery

    NASA Astrophysics Data System (ADS)

    Fagerstrom, Jessica M.

    Because of its ability to treat intracranial targets effectively and noninvasively, stereotactic radiosurgery (SRS) is a prevalent treatment modality in modern radiation therapy. This work focused on SRS delivering rectangular function dose distributions, which are desirable for some targets such as those with functional tissue included within the target volume. In order to achieve such distributions, this work used fluence modulation and energies lower than those utilized in conventional SRS. In this work, the relationship between prescription isodose and dose gradients was examined for standard, unmodulated orthovoltage SRS dose distributions. Monte Carlo-generated energy deposition kernels were used to calculate 4π isocentric dose distributions for a polyenergetic orthovoltage spectrum, as well as monoenergetic orthovoltage beams. The relationship between dose gradients and prescription isodose was found to be field size and energy dependent, and values were found for prescription isodose that optimize dose gradients. Next, a pencil-beam model was used with a Genetic Algorithm search heuristic to optimize the spatial distribution of added tungsten filtration within apertures of cone collimators in a moderately filtered 250 kVp beam. Four cone sizes at three depths were examined with a Monte Carlo model to determine the effects of the optimized modulation compared to open cones, and the simulations found that the optimized cones were able to achieve both improved penumbra and flatness statistics at depth compared to the open cones. Prototypes of the filter designs calculated using mathematical optimization techniques and Monte Carlo simulations were then manufactured and inserted into custom-built orthovoltage SRS cone collimators. A positioning system built in-house was used to place the collimator and filter assemblies temporarily in the 250 kVp beam line. Measurements were performed in water using radiochromic film scanned with both a standard white light flatbed scanner as well as a prototype laser densitometry system. Measured beam profiles showed that the modulated beams could more closely approach rectangular function dose profiles compared to the open cones. A methodology has been described and implemented to achieve optimized SRS delivery, including the development of working prototypes. Future work may include the construction of a full treatment platform.

  20. Depth-time interpolation of feature trends extracted from mobile microelectrode data with kernel functions.

    PubMed

    Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F

    2012-01-01

    Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
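    A minimal sketch of the kernel interpolation idea, assuming a Gaussian kernel applied to (depth, feature activity) samples, is given below; the names and smoothing form are illustrative rather than the authors' exact implementation:

```python
import numpy as np

def kernel_interpolate(sample_depths, feature_activity, query_depths, kernel_width):
    """Gaussian-kernel interpolation of feature activity onto a depth grid."""
    profile = np.empty(len(query_depths))
    for i, depth in enumerate(query_depths):
        # Wider kernels give smoother profiles at the cost of depth resolution,
        # reflecting the smoothing/resolution trade-off noted in the abstract.
        w = np.exp(-0.5 * ((sample_depths - depth) / kernel_width) ** 2)
        profile[i] = np.sum(w * feature_activity) / np.sum(w)
    return profile
```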

  1. Pearson correlation estimation for irregularly sampled time series

    NASA Astrophysics Data System (ADS)

    Rehfeld, K.; Marwan, N.; Heitzig, J.; Kurths, J.

    2012-04-01

    Many applications in the geosciences call for the joint and objective analysis of irregular time series. For automated processing, robust measures of linear and nonlinear association are needed. Up to now, the standard approach would have been to reconstruct the time series on a regular grid, using linear or spline interpolation. Interpolation, however, comes with systematic side-effects, as it increases the auto-correlation in the time series. We have searched for the best method to estimate Pearson correlation for irregular time series, i.e. the one with the lowest estimation bias and variance. We adapted a kernel-based approach, using Gaussian weights. Pearson correlation is calculated, in principle, as a mean over products of previously centralized observations. In the regularly sampled case, observations in both time series were observed at the same time and thus the allocation of measurement values into pairs of products is straightforward. In the irregularly sampled case, however, measurements were not necessarily observed at the same time. Now, the key idea of the kernel-based method is to calculate weighted means of products, with the weight depending on the time separation between the observations. If the lagged correlation function is desired, the weights depend on the absolute difference between observation time separation and the estimation lag. To assess the applicability of the approach we used extensive simulations to determine the extent of interpolation side-effects with increasing irregularity of time series. We compared different approaches, based on (linear) interpolation, the Lomb-Scargle Fourier Transform, the sinc kernel and the Gaussian kernel. We investigated the role of kernel bandwidth and signal-to-noise ratio in the simulations. We found that the Gaussian kernel approach offers significant advantages and low Root-Mean Square Errors for regular, slightly irregular and very irregular time series. We therefore conclude that it is a good (linear) similarity measure that is appropriate for irregular time series with skewed inter-sampling time distributions.
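    A compact sketch of the kernel-weighted Pearson estimator described here follows; the standardization and the choice of a single fixed bandwidth are simplifying assumptions:

```python
import numpy as np

def gaussian_kernel_correlation(tx, x, ty, y, bandwidth, lag=0.0):
    """Kernel-weighted Pearson correlation for irregularly sampled time series.

    tx, x     : observation times and values of the first series
    ty, y     : observation times and values of the second series
    bandwidth : Gaussian kernel width (time units)
    lag       : estimation lag (0 for the instantaneous correlation)
    """
    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()
    # Weight every pair of observations by how close its time separation
    # is to the requested lag, then take the weighted mean of the products.
    dt = tx[:, None] - ty[None, :]
    w = np.exp(-0.5 * ((dt - lag) / bandwidth) ** 2)
    return np.sum(w * np.outer(xs, ys)) / np.sum(w)
```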

  2. Finite-frequency structural sensitivities of short-period compressional body waves

    NASA Astrophysics Data System (ADS)

    Fuji, Nobuaki; Chevrot, Sébastien; Zhao, Li; Geller, Robert J.; Kawai, Kenji

    2012-07-01

    We present an extension of the method recently introduced by Zhao & Chevrot for calculating Fréchet kernels from a precomputed database of strain Green's tensors by normal mode summation. The extension involves two aspects: (1) we compute the strain Green's tensors using the Direct Solution Method, which allows us to go up to frequencies as high as 1 Hz; and (2) we develop a spatial interpolation scheme so that the Green's tensors can be computed with a relatively coarse grid, thus improving the efficiency in the computation of the sensitivity kernels. The only requirement is that the Green's tensors be computed with a fine enough spatial sampling rate to avoid spatial aliasing. The Green's tensors can then be interpolated to any location inside the Earth, avoiding the need to store and retrieve strain Green's tensors for a fine sampling grid. The interpolation scheme not only significantly reduces the CPU time required to calculate the Green's tensor database and the disk space to store it, but also enhances the efficiency in computing the kernels by reducing the number of I/O operations needed to retrieve the Green's tensors. Our new implementation allows us to calculate sensitivity kernels for high-frequency teleseismic body waves with very modest computational resources such as a laptop. We illustrate the potential of our approach for seismic tomography by computing traveltime and amplitude sensitivity kernels for high frequency P, PKP and Pdiff phases. A comparison of our PKP kernels with those computed by asymptotic ray theory clearly shows the limits of the latter. With ray theory, it is not possible to model waves diffracted by internal discontinuities such as the core-mantle boundary, and it is also difficult to compute amplitudes for paths close to the B-caustic of the PKP phase. We also compute waveform partial derivatives for different parts of the seismic wavefield, a key ingredient for high resolution imaging by waveform inversion. Our computations of partial derivatives in the time window where PcP precursors are commonly observed show that the distribution of sensitivity is complex and counter-intuitive, with a large contribution from the mid-mantle region. This clearly emphasizes the need to use accurate and complete partial derivatives in waveform inversion.

  3. Insecticidal effect and impact of fitness of three diatomaceous earths on different maize hybrids for the eco-friendly control of the invasive stored-product pest Prostephanus truncatus (Horn).

    PubMed

    Kavallieratos, Nickolas G; Athanassiou, Christos G; Peteinatos, Gerassimos G; Boukouvala, Maria C; Benelli, Giovanni

    2018-04-01

    Diatomaceous earths (DEs) are able to successfully protect grain commodities from noxious stored-product insect and mite infestations; however, their effectiveness may be moderated by the grain hybrid or variety they are applied to. There is a lack of information comparing the efficacy of different DEs when they are applied to different maize hybrids against Prostephanus truncatus (Horn). Therefore, here we tested three commercially available DEs (DEA-P at 75 and 150 ppm, Protect-It at 500 ppm, and PyriSec at 500 ppm) on five different maize hybrids (Calaria, Doxa, Rio Grande, Sisco, and Studio) for the control of P. truncatus adults in terms of mortality (at 7 and 14 days), progeny production, properties of the infested maize hybrids (number and weight of kernels with or without holes, number of holes per kernel) and the adherence level of the tested DEs to the kernels. DEA-P was very effective at 75 ppm, while a considerable proportion of the exposed P. truncatus adults was still alive after 14 days of exposure on all maize hybrids treated with 500 ppm of Protect-It or PyriSec, even though this dose was 3.3 times higher than the maximum tested dose of DEA-P. Apart from parental mortality, DEA-P was able to reduce P. truncatus progeny production in all hybrids, in contrast to Protect-It or PyriSec. The adherence ratios to all maize hybrids were always higher for DEA-P than for Protect-It or PyriSec. The highest numbers of kernels (or weight of kernels) without holes were noticed after their treatment with DEA-P. Doxa and Sisco performed better than Calaria, Rio Grande, or Studio based on the differences found concerning the numbers of kernels without holes for treatments with DEA-P and Protect-It. Overall, the findings of our study indicate the high potential of DEA-P as a protectant of different maize hybrids against P. truncatus infestations at low doses, a fact that could help the eco-friendly management of this noxious species in the stored-product environment.

  4. The effect of relatedness and pack size on territory overlap in African wild dogs.

    PubMed

    Jackson, Craig R; Groom, Rosemary J; Jordan, Neil R; McNutt, J Weldon

    2017-01-01

    Spacing patterns mediate competitive interactions between conspecifics, ultimately increasing fitness. The degree of territorial overlap between neighbouring African wild dog ( Lycaon pictus ) packs varies greatly, yet the role of factors potentially affecting the degree of overlap, such as relatedness and pack size, remain unclear. We used movement data from 21 wild dog packs to calculate the extent of territory overlap (20 dyads). On average, unrelated neighbouring packs had low levels of overlap restricted to the peripheral regions of their 95% utilisation kernels. Related neighbours had significantly greater levels of peripheral overlap. Only one unrelated dyad included overlap between 75%-75% kernels, but no 50%-50% kernels overlapped. However, eight of 12 related dyads overlapped between their respective 75% kernels and six between the frequented 50% kernels. Overlap between these more frequented kernels confers a heightened likelihood of encounter, as the mean utilisation intensity per unit area within the 50% kernels was 4.93 times greater than in the 95% kernels, and 2.34 times greater than in the 75% kernels. Related packs spent significantly more time in their 95% kernel overlap zones than did unrelated packs. Pack size appeared to have little effect on overlap between related dyads, yet among unrelated neighbours larger packs tended to overlap more onto smaller packs' territories. However, the true effect is unclear given that the model's confidence intervals overlapped zero. Evidence suggests that costly intraspecific aggression is greatly reduced between related packs. Consequently, the tendency for dispersing individuals to establish territories alongside relatives, where intensively utilised portions of ranges regularly overlap, may extend kin selection and inclusive fitness benefits from the intra-pack to inter-pack level. This natural spacing system can affect survival parameters and the carrying capacity of protected areas, having important management implications for intensively managed populations of this endangered species.

  5. Finite-frequency sensitivity kernels for global seismic wave propagation based upon adjoint methods

    NASA Astrophysics Data System (ADS)

    Liu, Qinya; Tromp, Jeroen

    2008-07-01

    We determine adjoint equations and Fréchet kernels for global seismic wave propagation based upon a Lagrange multiplier method. We start from the equations of motion for a rotating, self-gravitating earth model initially in hydrostatic equilibrium, and derive the corresponding adjoint equations that involve motions on an earth model that rotates in the opposite direction. Variations in the misfit function χ may then be expressed as δχ = ∫V Km δlnm d³x + ∫Σ Kd δlnd d²x + ∫ΣFS K∇d ⋅ ∇Σδlnd d²x, where δlnm = δm/m denotes relative model perturbations in the volume V, δlnd denotes relative topographic variations on solid-solid or fluid-solid boundaries Σ, and ∇Σδlnd denotes surface gradients in relative topographic variations on fluid-solid boundaries ΣFS. The 3-D Fréchet kernel Km determines the sensitivity to model perturbations δlnm, and the 2-D kernels Kd and K∇d determine the sensitivity to topographic variations δlnd. We also demonstrate how anelasticity may be incorporated within the framework of adjoint methods. Finite-frequency sensitivity kernels are calculated by simultaneously computing the adjoint wavefield forward in time and reconstructing the regular wavefield backward in time. Both the forward and adjoint simulations are based upon a spectral-element method. We apply the adjoint technique to generate finite-frequency traveltime kernels for global seismic phases (P, Pdiff, PKP, S, SKS, depth phases, surface-reflected phases, surface waves, etc.) in both 1-D and 3-D earth models. For 1-D models these adjoint-generated kernels generally agree well with results obtained from ray-based methods. However, adjoint methods do not have the same theoretical limitations as ray-based methods, and can produce sensitivity kernels for any given phase in any 3-D earth model. The Fréchet kernels presented in this paper illustrate the sensitivity of seismic observations to structural parameters and topography on internal discontinuities. These kernels form the basis of future 3-D tomographic inversions.

  6. Seismic Imaging of VTI, HTI and TTI based on Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Rusmanugroho, H.; Tromp, J.

    2014-12-01

    Recent studies show that isotropic seismic imaging based on the adjoint method reduces the low-frequency artifacts caused by diving waves, which commonly occur in two-way wave-equation migration, such as Reverse Time Migration (RTM). Here, we derive new expressions for sensitivity kernels for Vertical Transverse Isotropy (VTI) using the Thomsen parameters (ɛ, δ, γ) plus the P- and S-wave speeds (α, β) as well as via the Chen & Tromp (GJI 2005) parameters (A, C, N, L, F). For Horizontal Transverse Isotropy (HTI), these parameters depend on an azimuthal angle φ, where the tilt angle θ equals 90°, and for Tilted Transverse Isotropy (TTI), these parameters depend on both the azimuth and tilt angles. We calculate sensitivity kernels for each of these two approaches. Individual kernels ("images") are numerically constructed based on the interaction between the regular and adjoint wavefields in smoothed models, which are in practice estimated through Full-Waveform Inversion (FWI). The final image is obtained as a result of summing all shots, which are well distributed to sample the target model properly. The impedance kernel, which is a sum of sensitivity kernels of density and the Thomsen or Chen & Tromp parameters, looks crisp and promising for seismic imaging. The other kernels suffer from low-frequency artifacts, similar to traditional seismic imaging conditions. However, all sensitivity kernels are important for estimating the gradient of the misfit function, which, in combination with a standard gradient-based inversion algorithm, is used to minimize the objective function in FWI.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, Travis A.; Kashinath, Karthik; Cavanaugh, Nicholas R.

    Numerous facets of scientific research implicitly or explicitly call for the estimation of probability densities. Histograms and kernel density estimates (KDEs) are two commonly used techniques for estimating such information, with the KDE generally providing a higher fidelity representation of the probability density function (PDF). Both methods require specification of either a bin width or a kernel bandwidth. While techniques exist for choosing the kernel bandwidth optimally and objectively, they are computationally intensive, since they require repeated calculation of the KDE. A solution for objectively and optimally choosing both the kernel shape and width has recently been developed by Bernacchia and Pigolotti (2011). While this solution theoretically applies to multidimensional KDEs, it has not been clear how to practically do so. A method for practically extending the Bernacchia-Pigolotti KDE to multidimensions is introduced. This multidimensional extension is combined with a recently-developed computational improvement to their method that makes it computationally efficient: a 2D KDE on 10⁵ samples only takes 1 s on a modern workstation. This fast and objective KDE method, called the fastKDE method, retains the excellent statistical convergence properties that have been demonstrated for univariate samples. The fastKDE method exhibits statistical accuracy that is comparable to state-of-the-science KDE methods publicly available in R, and it produces kernel density estimates several orders of magnitude faster. The fastKDE method does an excellent job of encoding covariance information for bivariate samples. This property allows for direct calculation of conditional PDFs with fastKDE. It is demonstrated how this capability might be leveraged for detecting non-trivial relationships between quantities in physical systems, such as transitional behavior.
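    The fastKDE algorithm itself chooses the kernel shape and bandwidth self-consistently; as a simple point of comparison, a fixed-bandwidth bivariate Gaussian KDE and a conditional PDF can be obtained as sketched below (the sample is synthetic, not data from the paper):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic correlated bivariate sample (illustrative only)
rng = np.random.default_rng(0)
sample = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=10_000)

# Fixed-bandwidth Gaussian KDE (Scott's rule); fastKDE instead selects the
# kernel objectively and is reported to run orders of magnitude faster.
kde = gaussian_kde(sample.T)

# Evaluate the joint PDF on a grid and form the conditional PDF p(y | x ~ 0)
xs = np.linspace(-3, 3, 121)
ys = np.linspace(-3, 3, 121)
X, Y = np.meshgrid(xs, ys)
joint_pdf = kde(np.vstack([X.ravel(), Y.ravel()])).reshape(X.shape)
slice_at_x0 = joint_pdf[:, np.argmin(np.abs(xs))]
p_y_given_x0 = slice_at_x0 / np.trapz(slice_at_x0, ys)
```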

  8. Increased acetylcholine esterase activity produced by the administration of an aqueous extract of the seed kernel of Thevetia peruviana and its role on acute and subchronic intoxication in mice

    PubMed Central

    Marroquín-Segura, Rubén; Calvillo-Esparza, Ricardo; Mora-Guevara, José Luis Alfredo; Tovalín-Ahumada, José Horacio; Aguilar-Contreras, Abigail; Hernández-Abad, Vicente Jesús

    2014-01-01

    Background: The real mechanism for Thevetia peruviana poisoning remains unclear. Cholinergic activity is important for cardiac function regulation; however, the effect of T. peruviana on cholinergic activity is not well-known. Objective: To study the effect of the acute administration of an aqueous extract of the seed kernel of T. peruviana on the acetylcholine esterase (AChE) activity in CD1 mice as well as its implications in the sub-chronic toxicity of the extract. Materials and Methods: A dose of 100 mg/kg of the extract was administered to CD1 mice and after 7 days, serum was obtained for ceruloplasmin (CP) quantitation and liver function tests. Another group of mice received a 50 mg/kg dose of the extract 3 times within a 1 h interval and AChE activity was determined for those animals. Heart tissue histological preparations were obtained from a group of mice that received a daily 50 mg/kg dose of the extract over a 30-day period. Results: CP levels for the treated group were higher than those for the control group (Student's t-test, P ≤ 0.001). AChE activity in the treated group was significantly higher than that of the control group (Tukey test, control vs. T. peruviana, P ≤ 0.001). Heart tissue histological preparations showed leukocyte infiltrates and necrotic areas, consistent with infarcts. Conclusion: The increased levels of AChE and the heart tissue infiltrative lesions induced by the aqueous seed kernel extract of T. peruviana explain in part the poisoning caused by this plant, which can be related to an inflammatory process. PMID:24914300

  9. Increased acetylcholine esterase activity produced by the administration of an aqueous extract of the seed kernel of Thevetia peruviana and its role on acute and subchronic intoxication in mice.

    PubMed

    Marroquín-Segura, Rubén; Calvillo-Esparza, Ricardo; Mora-Guevara, José Luis Alfredo; Tovalín-Ahumada, José Horacio; Aguilar-Contreras, Abigail; Hernández-Abad, Vicente Jesús

    2014-01-01

    The real mechanism for Thevetia peruviana poisoning remains unclear. Cholinergic activity is important for cardiac function regulation; however, the effect of T. peruviana on cholinergic activity is not well-known. To study the effect of the acute administration of an aqueous extract of the seed kernel of T. peruviana on the acetylcholine esterase (AChE) activity in CD1 mice as well as its implications in the sub-chronic toxicity of the extract. A dose of 100 mg/kg of the extract was administered to CD1 mice and after 7 days, serum was obtained for ceruloplasmin (CP) quantitation and liver function tests. Another group of mice received a 50 mg/kg dose of the extract 3 times within a 1 h interval and AChE activity was determined for those animals. Heart tissue histological preparations were obtained from a group of mice that received a daily 50 mg/kg dose of the extract over a 30-day period. CP levels for the treated group were higher than those for the control group (Student's t-test, P ≤ 0.001). AChE activity in the treated group was significantly higher than that of the control group (Tukey test, control vs. T. peruviana, P ≤ 0.001). Heart tissue histological preparations showed leukocyte infiltrates and necrotic areas, consistent with infarcts. The increased levels of AChE and the heart tissue infiltrative lesions induced by the aqueous seed kernel extract of T. peruviana explain in part the poisoning caused by this plant, which can be related to an inflammatory process.

  10. Maize kernel antioxidants and their potential involvement in Fusarium ear rot resistance.

    PubMed

    Picot, Adeline; Atanasova-Pénichon, Vessela; Pons, Sebastien; Marchegay, Gisèle; Barreau, Christian; Pinson-Gadais, Laëtitia; Roucolle, Joël; Daveau, Florie; Caron, Daniel; Richard-Forget, Florence

    2013-04-10

    The potential involvement of antioxidants (α-tocopherol, lutein, zeaxanthin, β-carotene, and ferulic acid) in the resistance of maize varieties to Fusarium ear rot was the focus of this study. These antioxidants were present in all maize kernel stages, indicating that the fumonisin-producing fungi (mainly Fusarium verticillioides and Fusarium proliferatum) are likely to face them during ear colonization. The effect of these compounds on fumonisin biosynthesis was studied in F. verticillioides liquid cultures. In carotenoid-treated cultures, no inhibitory effect on fumonisin accumulation was observed, while a potent inhibitory activity was obtained for sublethal doses of α-tocopherol (0.1 mM) and ferulic acid (1 mM). In a set of genotypes with moderate to high susceptibility to Fusarium ear rot, ferulic acid was significantly lower in immature kernels of the very susceptible group. Such a relation was nonexistent for tocopherols and carotenoids. Also, ferulic acid in immature kernels ranged from 3 to 8.5 mg/g, i.e., at levels consistent with the in vitro inhibitory concentration. Overall, our data support the idea that ferulic acid may contribute to resistance to Fusarium ear rot and/or fumonisin accumulation.

  11. Efficient approach to obtain free energy gradient using QM/MM MD simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asada, Toshio; Koseki, Shiro; The Research Institute for Molecular Electronic Devices

    2015-12-31

    An efficient computational approach, denoted the charge and atom dipole response kernel (CDRK) model, is described that accounts for polarization effects of the quantum mechanical (QM) region using the charge response and atom dipole response kernels in free energy gradient (FEG) calculations within the quantum mechanical/molecular mechanical (QM/MM) method. The CDRK model can reasonably reproduce the energies, and also the energy gradients of QM and MM atoms, obtained from expensive QM/MM calculations in a drastically reduced computational time. The model is applied to the acylation reaction in a hydrated trypsin-BPTI complex to optimize the reaction path on the free energy surface by means of FEG and the nudged elastic band (NEB) method.

  12. A simple method for computing the relativistic Compton scattering kernel for radiative transfer

    NASA Technical Reports Server (NTRS)

    Prasad, M. K.; Kershaw, D. S.; Beason, J. D.

    1986-01-01

    Correct computation of the Compton scattering kernel (CSK), defined to be the Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution, is reported. The CSK is analytically reduced to a single integral, which can then be rapidly evaluated using a power series expansion, asymptotic series, and rational approximation for sigma(s). The CSK calculation has application to production codes that aim at understanding certain astrophysical, laser fusion, and nuclear weapons effects phenomena.

  13. Data consistency criterion for selecting parameters for k-space-based reconstruction in parallel imaging.

    PubMed

    Nana, Roger; Hu, Xiaoping

    2010-01-01

    k-space-based reconstruction in parallel imaging depends on the reconstruction kernel setting, including its support. An optimal choice of the kernel depends on the calibration data, coil geometry and signal-to-noise ratio, as well as the criterion used. In this work, data consistency, imposed by the shift invariance requirement of the kernel, is introduced as a goodness measure of k-space-based reconstruction in parallel imaging and demonstrated. Data consistency error (DCE) is calculated as the sum of squared difference between the acquired signals and their estimates obtained based on the interpolation of the estimated missing data. A resemblance between DCE and the mean square error in the reconstructed image was found, demonstrating DCE's potential as a metric for comparing or choosing reconstructions. When used for selecting the kernel support for generalized autocalibrating partially parallel acquisition (GRAPPA) reconstruction and the set of frames for calibration as well as the kernel support in temporal GRAPPA reconstruction, DCE led to improved images over existing methods. Data consistency error is efficient to evaluate, robust for selecting reconstruction parameters and suitable for characterizing and optimizing k-space-based reconstruction in parallel imaging.
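    The data consistency error can be sketched as below, assuming the re-estimation of the acquired samples from the interpolated missing data has already been carried out by the reconstruction kernel (the function signature is illustrative, not the authors' implementation):

```python
import numpy as np

def data_consistency_error(acquired_signals, reestimated_signals):
    """Data consistency error (DCE) for k-space-based parallel imaging.

    acquired_signals    : complex k-space samples that were actually measured
    reestimated_signals : estimates of those same samples obtained by applying
                          the reconstruction kernel to the interpolated
                          (previously missing) k-space data

    Returns the sum of squared differences; a lower DCE indicates a kernel
    setting (support, calibration frames) that better satisfies the
    shift-invariance requirement of the kernel.
    """
    diff = np.asarray(reestimated_signals) - np.asarray(acquired_signals)
    return float(np.sum(np.abs(diff) ** 2))
```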

  14. Accelerating the Original Profile Kernel.

    PubMed

    Hamp, Tobias; Goldberg, Tatyana; Rost, Burkhard

    2013-01-01

    One of the most accurate multi-class protein classification systems continues to be the profile-based SVM kernel introduced by the Leslie group. Unfortunately, its CPU requirements render it too slow for practical application to large-scale classification tasks. Here, we introduce several software improvements that enable significant acceleration. Using various non-redundant data sets, we demonstrate that our new implementation reaches a maximal speed-up as high as 14-fold for calculating the same kernel matrix. Some predictions are over 200 times faster, making the kernel possibly the top contender when trading off speed against performance. Additionally, we explain how to parallelize various computations and provide an integrative program that reduces creating a production-quality classifier to a single program call. The new implementation is available as a Debian package under a free academic license and does not depend on commercial software. For non-Debian based distributions, the source package ships with a traditional Makefile-based installer. Download and installation instructions can be found at https://rostlab.org/owiki/index.php/Fast_Profile_Kernel. Bugs and other issues may be reported at https://rostlab.org/bugzilla3/enter_bug.cgi?product=fastprofkernel.

  15. Removal of Malachite Green Dye by Mangifera indica Seed Kernel Powder

    NASA Astrophysics Data System (ADS)

    Singh, Dilbagh; Sowmya, V.; Abinandan, S.; Shanthakumar, S.

    2017-11-01

    In this study, batch experiments were carried out to study the adsorption of Malachite green dye from aqueous solution by Mangifera indica (mango) seed kernel powder. The mango seed kernel powder was characterized by Fourier transform infrared spectroscopy and scanning electron microscopy. The effect of various parameters, including pH, contact time, adsorbent dosage, initial dye concentration and temperature, on the adsorption capacity of the adsorbent was observed, and the optimized conditions for maximum dye removal were identified. A maximum removal of 96% was achieved, with an adsorption capacity of 22.8 mg/g, at pH 6 with an initial concentration of 100 mg/l. The equilibrium data were fitted to the Langmuir and Freundlich isotherm models. Thermodynamic parameters for the adsorption process were also calculated.
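    Fitting the Langmuir and Freundlich isotherms to equilibrium data is routinely done by nonlinear least squares; a sketch with made-up numbers (not the study's measurements) follows:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, q_max, k_l):
    # q_e = q_max * K_L * C_e / (1 + K_L * C_e)
    return q_max * k_l * ce / (1.0 + k_l * ce)

def freundlich(ce, k_f, n):
    # q_e = K_F * C_e^(1/n)
    return k_f * ce ** (1.0 / n)

# Hypothetical equilibrium data: Ce in mg/L, qe in mg/g
ce = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 80.0])
qe = np.array([4.1, 7.6, 12.9, 18.4, 20.9, 22.3])

langmuir_params, _ = curve_fit(langmuir, ce, qe, p0=[25.0, 0.05])
freundlich_params, _ = curve_fit(freundlich, ce, qe, p0=[2.0, 2.0])
```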

  16. SU-G-BRC-02: A Novel Multi-Criteria Optimization Approach to Generate Deliverable Intensity-Modulated Radiation Therapy (IMRT) Treatment Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirlik, G; D’Souza, W; Zhang, H

    2016-06-15

    Purpose: To present a novel multi-criteria optimization (MCO) solution approach that generates treatment plans with deliverable apertures using column generation. Methods: We demonstrate our method with 10 locally advanced head-and-neck cancer cases retrospectively. In our MCO formulation, we defined an objective function for each structure in the treatment volume. This resulted in 9 objective functions, including 3 distinct objectives for the primary, high-risk, and low-risk target volumes, 5 objectives for the organs-at-risk (OARs) (two parotid glands, spinal cord, brain stem and oral cavity), and one for the non-target, non-OAR normal tissue. Conditional value-at-risk (CVaR) constraints were utilized to ensure that at least a certain fraction of the target volumes received the prescription doses. To directly generate deliverable plans, the column generation algorithm was embedded within our MCO approach for aperture shape generation. Final dose distributions for all plans were generated using a Monte Carlo kernel-superposition dose calculation. We compared the MCO plans with the clinical plans, which were created by clinicians. Results: At least 95% target coverage was achieved by both MCO plans and clinical plans. However, the average conformity indices of clinical plans and the MCO plans were 1.95 and 1.35, respectively (31% reduction, p<0.01). Compared to the conventional clinical plan, the proposed MCO method achieved average reductions in left parotid mean dose of 5% (p=0.06), right parotid mean dose of 18% (p<0.01), oral cavity mean dose of 21% (p=0.03), spinal cord maximum dose of 20% (p<0.01), brain stem maximum dose of 61% (p<0.01), and normal tissue maximum dose of 5% (p<0.01), respectively. Conclusion: We demonstrated that the proposed MCO method was able to obtain deliverable IMRT treatment plans while achieving significant improvements in dosimetric plan quality.

  17. Directional interstitial brachytherapy from simulation to application

    NASA Astrophysics Data System (ADS)

    Lin, Liyong

    Organs at risk (OARs) are sometimes adjacent to, embedded in, or overlapping with the clinical target volume (CTV) to be treated. The purpose of this PhD study is to develop directional, low-energy gamma-emitting interstitial brachytherapy sources. These sources can be applied between OARs to selectively reduce hot spots in the OARs and normal tissues. The reduction of dose over undesired regions can expand patient eligibility or reduce toxicities relative to treatment with conventional interstitial brachytherapy. This study covers the development of a directional source from design optimization to construction of the first prototype source. The Monte Carlo code MCNP was used to simulate the radiation transport for the designs of directional sources. We have made a special construction kit to assemble radioactive and gold-shield components precisely into the D-shaped titanium containers of the first directional source. Directional sources have a dose distribution similar to that of conventional sources on the treated side but a greatly reduced dose on the shielded side, with a sharp dose gradient between them. A three-dimensional dose deposition kernel for the 125I directional source has been calculated. Treatment plans can use both directional and conventional 125I sources at the same source strength for low-dose-rate (LDR) implants to optimize the dose distributions. For prostate tumors, directional 125I LDR brachytherapy can potentially reduce genitourinary and gastrointestinal toxicities and improve potency preservation for low-risk patients. The combination of the better dose distribution of directional implants and the better therapeutic ratio between tumor response and late reactions enables a novel temporary LDR treatment, as opposed to permanent or high-dose-rate (HDR) brachytherapy, for intermediate-risk T2b and high-risk T2c tumors. Supplemental external-beam treatments can be shortened with a better brachytherapy boost for T3 tumors. In conclusion, we have successfully finished the design optimization and construction of the first prototype directional source. Potential clinical applications and potential benefits of directional sources have been shown for prostate and breast tumors.

  18. Kernel machines for epilepsy diagnosis via EEG signal classification: a comparative study.

    PubMed

    Lima, Clodoaldo A M; Coelho, André L V

    2011-10-01

    We carry out a systematic assessment on a suite of kernel-based learning machines while coping with the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of the criteria of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy (i.e., cross-validation error) values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations whereby one can visually inspect their levels of sensitiveness to the type of feature and to the kernel function/parameter value. Overall, the results evidence that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value as well as the choice of the feature extractor are critical decisions to be taken, albeit the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile has emerged among all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). Copyright © 2011 Elsevier B.V. All rights reserved.
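    Of the kernel machines compared, the standard SVM with a Gaussian (RBF) kernel is straightforward to sketch; the sweep over kernel radii below mirrors the sensitivity analysis described in the abstract, and it assumes the features have already been extracted from the EEG segments:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def rbf_svm_accuracy(features, labels, radii, folds=10):
    """Cross-validated accuracy of a standard SVM over a grid of RBF kernel radii.

    features : (n_segments, n_features) array, e.g. wavelet statistics or
               Lyapunov exponents extracted from EEG segments
    labels   : (n_segments,) array, e.g. 0 = normal, 1 = epileptic
    radii    : iterable of kernel radii sigma, with gamma = 1 / (2 * sigma**2)
    """
    accuracies = {}
    for sigma in radii:
        model = make_pipeline(StandardScaler(),
                              SVC(kernel="rbf", gamma=1.0 / (2.0 * sigma ** 2), C=1.0))
        accuracies[sigma] = cross_val_score(model, features, labels, cv=folds).mean()
    return accuracies
```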

  19. SU-D-207B-07: Development of a CT-Radiomics Based Early Response Prediction Model During Delivery of Chemoradiation Therapy for Pancreatic Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klawikowski, S; Christian, J; Schott, D

    Purpose: Pilot study developing a CT-texture based model for early assessment of treatment response during the delivery of chemoradiation therapy (CRT) for pancreatic cancer. Methods: Daily CT data acquired for 24 pancreatic head cancer patients using CT-on-rails, during the routine CT-guided CRT delivery with a radiation dose of 50.4 Gy in 28 fractions, were analyzed. The pancreas head was contoured on each daily CT. Texture analysis was performed within the pancreas head contour using a research tool (IBEX). Over 1300 texture metrics, including grey level co-occurrence, run-length, histogram, neighborhood intensity difference, and geometrical shape features, were calculated for each daily CT. Metric-trend information was established by finding the best fit of either a linear, quadratic, or exponential function for each metric value versus accumulated dose. Thus all the daily CT texture information was consolidated into a best-fit trend type for a given patient and texture metric. Linear correlation was performed between the patient histological response vector (good, medium, poor) and all combinations of 23 patient subgroups (statistical jackknife), determining which metrics were most correlated to response and repeatedly reliable across most patients. Control correlations against CT scanner, reconstruction kernel, and gated/nongated CT images were also calculated. A Euclidean distance measure was used to group/sort patient vectors based on the data of these trend-response metrics. Results: We found four specific trend-metrics (Gray Level Coocurence Matrix311-1InverseDiffMomentNorm, Gray Level Coocurence Matrix311-1InverseDiffNorm, Gray Level Coocurence Matrix311-1 Homogeneity2, and Intensity Direct Local StdMean) that were highly correlated with patient response and repeatedly reliable. Our four trend-metric model successfully ordered our pilot response dataset (p=0.00070). We found no significant correlation to our control parameters: gating (p=0.7717), scanner (p=0.9741), and kernel (p=0.8586). Conclusion: We have successfully created a CT-texture based early treatment response prediction model using the CTs acquired during the delivery of chemoradiation therapy for pancreatic cancer. Future testing is required to validate the model with more patient data.

  20. TU-G-204-09: The Effects of Reduced- Dose Lung Cancer Screening CT On Lung Nodule Detection Using a CAD Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, S; Lo, P; Kim, G

    2015-06-15

    Purpose: While lung cancer screening CT is already performed at low doses, the purpose of this study was to investigate the effects of further reducing dose on the performance of a CAD nodule-detection algorithm. Methods: We selected 50 cases from our local database of National Lung Screening Trial (NLST) patients for which we had both the image series and the raw CT data from the original scans. All scans were acquired with fixed mAs (25 for standard-sized patients, 40 for large patients) on a 64-slice scanner (Sensation 64, Siemens Healthcare). All images were reconstructed with 1-mm slice thickness, B50 kernel. Ten of the cases had at least one nodule reported on the NLST reader forms. Based on a previously published technique, we added noise to the raw data to simulate reduced-dose versions of each case at 50% and 25% of the original NLST dose (i.e. approximately 1.0 and 0.5 mGy CTDIvol). For each case at each dose level, the CAD detection algorithm was run and nodules greater than 4 mm in diameter were reported. These CAD results were compared to “truth”, defined as the approximate nodule centroids from the NLST reports. Subject-level mean sensitivities and false-positive rates were calculated for each dose level. Results: The mean sensitivities of the CAD algorithm were 35% at the original dose, 20% at 50% dose, and 42.5% at 25% dose. The false-positive rates, in decreasing-dose order, were 3.7, 2.9, and 10 per case. In certain cases, particularly in larger patients, there were severe photon-starvation artifacts, especially in the apical region due to the high-attenuating shoulders. Conclusion: The detection task was challenging for the CAD algorithm at all dose levels, including the original NLST dose. However, the false-positive rate at 25% dose approximately tripled, suggesting a loss of CAD robustness somewhere between 0.5 and 1.0 mGy. NCI grant U01 CA181156 (Quantitative Imaging Network); Tobacco Related Disease Research Project grant 22RT-0131.

  1. Advanced Fuels for LWRs: Fully-Ceramic Microencapsulated and Related Concepts FY 2012 Interim Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. Sonat Sen; Brian Boer; John D. Bess

    2012-03-01

    This report summarizes the progress in the Deep Burn project at Idaho National Laboratory during the first half of fiscal year 2012 (FY2012). The current focus of this work is on Fully-Ceramic Microencapsulated (FCM) fuel containing low-enriched uranium (LEU) uranium nitride (UN) fuel kernels. UO2 fuel kernels have not been ruled out, and will be examined as later work in FY2012. Reactor physics calculations confirmed that the FCM fuel containing 500 μm diameter kernels of UN fuel has positive MTC with a conventional fuel pellet radius of 4.1 mm. The methodology was put into place and validated against MCNP to perform whole-core calculations using DONJON, which can interpolate cross sections from a library generated using DRAGON. Comparisons to MCNP were performed on the whole core to confirm the accuracy of the DRAGON/DONJON schemes. A thermal fluid coupling scheme was also developed and implemented with DONJON. This is currently able to iterate between diffusion calculations and thermal fluid calculations in order to update fuel temperatures and cross sections in whole-core calculations. Now that the DRAGON/DONJON calculation capability is in place and has been validated against MCNP results, and a thermal-hydraulic capability has been implemented in the DONJON methodology, the work will proceed to more realistic reactor calculations. MTC calculations at the lattice level without the correct burnable poison are inadequate to guarantee zero or negative values in a realistic mode of operation. Using the DONJON calculation methodology described in this report, a startup core with enrichment zoning and burnable poisons will be designed. Larger fuel pins will be evaluated for their ability to (1) alleviate the problem of positive MTC and (2) increase reactivity-limited burnup. Once the critical boron concentration of the startup core is determined, MTC will be calculated to verify a non-positive value. If the value is positive, the design will be changed to require less soluble boron by, for example, increasing the reactivity hold-down by burnable poisons. Then, the whole core analysis will be repeated until an acceptable design is found. Calculations of departure from nucleate boiling ratio (DNBR) will be included in the safety evaluation as well. Once a startup core is shown to be viable, subsequent reloads will be simulated by shuffling fuel and introducing fresh fuel. The PASTA code has been updated with material properties of UN fuel from the literature and a model for the diffusion and release of volatile fission products from the SiC matrix material. Preliminary simulations have been performed for both normal conditions and elevated temperatures. These results indicated that the fuel performs well and that the SiC matrix has a good retention of the fission products. The path forward for fuel performance work includes improvement of metallic fission product release from the kernel. Results should be considered preliminary and further validation is required.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, J; Lindsay, P; University of Toronto, Toronto

    Purpose: Recent progress in small animal radiotherapy systems has provided the foundation for delivering the heterogeneous, millimeter scale dose distributions demanded by preclinical radiobiology investigations. Despite advances in preclinical dose planning, delivery of highly heterogeneous dose distributions is constrained by the fixed collimation systems and large x-ray focal spot common in small animal radiotherapy systems. This work proposes a dual focal spot dose optimization and delivery method with a large x-ray focal spot used to deliver homogeneous dose regions and a small focal spot to paint spatially heterogeneous dose regions. Methods: Two-dimensional dose kernels were measured for a 1 mm circular collimator with radiochromic film at 10 mm depth in a solid water phantom for the small and large x-ray focal spots on a recently developed small animal microirradiator. These kernels were used in an optimization framework which segmented a desired dose distribution into low- and high-spatial frequency regions for delivery by the large and small focal spot, respectively. For each region, the method determined an optimal set of stage positions and beam-on times. The method was demonstrated by optimizing a bullseye pattern consisting of a 0.75 mm radius circular target and 0.5 and 1.0 mm wide rings alternating between 0 and 2 Gy. Results: Compared to a large focal spot technique, the dual focal spot technique improved the optimized dose distribution: 69.2% of the optimized dose was within 0.5 Gy of the intended dose for the large focal spot, compared to 80.6% for the dual focal spot method. The dual focal spot design required 14.0 minutes of optimization, and will require 178.3 minutes for automated delivery. Conclusion: The dual focal spot optimization and delivery framework is a novel option for delivering conformal and heterogeneous dose distributions at the preclinical level and provides a new experimental option for unique radiobiological investigations. Funding Support: this work is supported by funding from the Natural Sciences and Engineering Research Council of Canada and a Mitacs-Accelerate fellowship. Conflict of Interest: Dr. Lindsay and Dr. Jaffray are listed as inventors of the small animal microirradiator described herein. This system has been licensed for commercial development.
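    The segmentation of a desired dose map into low- and high-spatial-frequency components might be approximated with a simple Gaussian low-pass split, as sketched below; the published method additionally optimizes stage positions and beam-on times for each component, which is not shown here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_dose_by_spatial_frequency(target_dose, cutoff_mm, pixel_mm):
    """Split a 2D target dose map into low- and high-spatial-frequency parts.

    The low-frequency part would be delivered with the large focal spot, and
    the residual high-frequency part painted with the small focal spot.
    """
    low_freq = gaussian_filter(target_dose, sigma=cutoff_mm / pixel_mm)
    high_freq = target_dose - low_freq
    return low_freq, high_freq
```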

  3. Toward an alternative hardness kernel matrix structure in the Electronegativity Equalization Method (EEM).

    PubMed

    Chaves, J; Barroso, J M; Bultinck, P; Carbó-Dorca, R

    2006-01-01

    This study presents an alternative formulation of the Electronegativity Equalization Method (EEM), in which the usual Coulomb kernel is replaced by a smooth function. The new framework, like the classical EEM, permits fast calculation of atomic charges in a given molecule at small computational cost. The original EEM procedure requires prior calibration of the atomic hardnesses and electronegativities involved, using a chosen set of molecules. In the new EEM algorithm only half the number of parameters needs to be calibrated, since a relationship between electronegativities and hardnesses has been found.
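    In the classical EEM, the atomic charges follow from a single linear system; a sketch is given below with the plain 1/R Coulomb kernel on the off-diagonal (the smooth-kernel variant proposed here would replace that entry, and the calibrated χ and η values are assumed as inputs):

```python
import numpy as np

def eem_charges(chi, eta, coords, total_charge=0.0):
    """Solve the classical EEM equations for atomic charges.

    chi, eta     : calibrated atomic electronegativities and hardnesses
    coords       : (n, 3) atomic coordinates in atomic units
    total_charge : net molecular charge (constraint on the sum of charges)
    """
    chi = np.asarray(chi, dtype=float)
    eta = np.asarray(eta, dtype=float)
    n = len(chi)
    r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    coulomb = 1.0 / np.where(r > 0.0, r, np.inf)  # 1/R_ij kernel, zero on the diagonal

    # Unknowns: n atomic charges plus the equalized electronegativity chi_eq.
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[:n, :n] = coulomb
    A[np.arange(n), np.arange(n)] = 2.0 * eta   # hardness terms on the diagonal
    A[:n, n] = -1.0                             # ... - chi_eq = -chi_i for each atom
    A[n, :n] = 1.0                              # sum of charges = total_charge
    b[:n] = -chi
    b[n] = total_charge

    solution = np.linalg.solve(A, b)
    return solution[:n]                         # charges; solution[n] is chi_eq
```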

  4. A 3D Ginibre Point Field

    NASA Astrophysics Data System (ADS)

    Kargin, Vladislav

    2018-06-01

    We introduce a family of three-dimensional random point fields using the concept of the quaternion determinant. The kernel of each field is an n-dimensional orthogonal projection on a linear space of quaternionic polynomials. We find explicit formulas for the basis of the orthogonal quaternion polynomials and for the kernel of the projection. For number of particles n → ∞, we calculate the scaling limits of the point field in the bulk and at the center of coordinates. We compare our construction with the previously introduced Fermi-sphere point field process.

  5. Propagation and Directional Scattering of Ocean Waves in the Marginal Ice Zone and Neighboring Seas

    DTIC Science & Technology

    2015-09-30

    expected to be the average of the kernel for 10 s and 12 s. This means that we should be able to calculate empirical formulas for the scattering kernel...floe packing. Thus, establish a way to incorporate what has been done by Squire and co-workers into the wave model paradigm (in which the phase of the...cases observed by Kohout et al. (2014) in Antarctica. vii. Validation: We are planning validation tests for the wave-ice scattering/attenuation model by

  6. Using Adjoint Methods to Improve 3-D Velocity Models of Southern California

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Tape, C.; Maggi, A.; Tromp, J.

    2006-12-01

    We use adjoint methods popular in climate and ocean dynamics to calculate Fréchet derivatives for tomographic inversions in southern California. The Fréchet derivative of an objective function χ(m), where m denotes the Earth model, may be written in the generic form δχ = ∫ Km(x) δln m(x) d³x, where δln m = δm/m denotes the relative model perturbation. For illustrative purposes, we construct the 3-D finite-frequency banana-doughnut kernel Km, corresponding to the misfit of a single traveltime measurement, by simultaneously computing the 'adjoint' wave field s† forward in time and reconstructing the regular wave field s backward in time. The adjoint wave field is produced by using the time-reversed velocity at the receiver as a fictitious source, while the regular wave field is reconstructed on the fly by propagating the last frame of the wave field saved by a previous forward simulation backward in time. The approach is based upon the spectral-element method, and only two simulations are needed to produce density, shear-wave, and compressional-wave sensitivity kernels. This method is applied to the SCEC southern California velocity model. Various density, shear-wave, and compressional-wave sensitivity kernels are presented for different phases in the seismograms. We also generate 'event' kernels for Pnl, S and surface waves, which are the Fréchet kernels of misfit functions that measure the P, S or surface wave traveltime residuals at all the receivers simultaneously for one particular event. Effectively, an event kernel is a sum of weighted Fréchet kernels, with weights determined by the associated traveltime anomalies. By the nature of the 3-D simulation, every event kernel is also computed based upon just two simulations, i.e., its construction costs the same amount of computation time as an individual banana-doughnut kernel. One can think of the sum of the event kernels for all available earthquakes, called the 'misfit' kernel, as a graphical representation of the gradient of the misfit function. With the capability of computing both the value of the misfit function and its gradient, which assimilates the traveltime anomalies, we are ready to use a non-linear conjugate gradient algorithm to iteratively improve velocity models of southern California.

  7. Bose–Einstein condensation temperature of finite systems

    NASA Astrophysics Data System (ADS)

    Xie, Mi

    2018-05-01

    In studies of the Bose–Einstein condensation of ideal gases in finite systems, the divergence problem usually arises in the equation of state. In this paper, we present a technique based on the heat kernel expansion and zeta function regularization to solve the divergence problem, and obtain the analytical expression of the Bose–Einstein condensation temperature for general finite systems. The result is represented by the heat kernel coefficients, where the asymptotic energy spectrum of the system is used. Besides the general case, for systems with exact spectra, e.g. ideal gases in an infinite slab or in a three-sphere, the sums of the spectra can be obtained exactly and the calculation of corrections to the critical temperatures is more direct. For a system confined in a bounded potential, the form of the heat kernel is different from the usual heat kernel expansion. We show that as long as the asymptotic form of the global heat kernel can be found, our method works. For Bose gases confined in three- and two-dimensional isotropic harmonic potentials, we obtain the higher-order corrections to the usual results of the critical temperatures. Our method can also be applied to the problem of generalized condensation, and we give the correction of the boundary on the second critical temperature in a highly anisotropic slab.
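
    For orientation only, a minimal sketch of the textbook first-order finite-size correction for an ideal Bose gas in an isotropic 3D harmonic trap is given below; it is not the heat-kernel/zeta-regularized result of the paper, and the function and variable names are assumptions.

        from scipy.constants import hbar, k as k_B
        from scipy.special import zeta

        def harmonic_trap_tc(n_atoms, omega):
            # Leading-order critical temperature for an isotropic 3D harmonic trap ...
            tc0 = (hbar * omega / k_B) * (n_atoms / zeta(3)) ** (1.0 / 3.0)
            # ... and the standard first-order finite-N correction factor.
            shift = zeta(2) / (2.0 * zeta(3) ** (2.0 / 3.0)) * n_atoms ** (-1.0 / 3.0)
            return tc0, tc0 * (1.0 - shift)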

  8. Lévy processes on a generalized fractal comb

    NASA Astrophysics Data System (ADS)

    Sandev, Trifce; Iomin, Alexander; Méndez, Vicenç

    2016-09-01

    Comb geometry, constituted of a backbone and fingers, is one of the simplest paradigms of a two-dimensional structure in which anomalous diffusion can be realized in the framework of Markov processes. However, the intrinsic properties of the structure can destroy this Markovian transport. These effects can be described by memory and spatial kernels. In particular, the fractal structure of the fingers, which is controlled by the spatial kernel in both the real and the Fourier spaces, leads to Lévy processes (Lévy flights) and superdiffusion. This generalization of fractional diffusion is described by the Riesz space fractional derivative. In the framework of this generalized fractal comb model, Lévy processes are considered, and exact solutions for the probability distribution functions are obtained in terms of the Fox H-function for a variety of memory kernels, and the rate of superdiffusive spreading is studied by calculating the fractional moments. For a special form of the memory kernels, we also observe a competition between long rests and long jumps. Finally, we consider the fractal structure of the fingers controlled by a Weierstrass function, which leads to a power-law kernel in Fourier space. This is a special case in which the second moment exists for superdiffusion in the competition between long rests and long jumps.

  9. Design of a modulated orthovoltage stereotactic radiosurgery system.

    PubMed

    Fagerstrom, Jessica M; Bender, Edward T; Lawless, Michael J; Culberson, Wesley S

    2017-07-01

    To achieve stereotactic radiosurgery (SRS) dose distributions with sharp gradients using orthovoltage energy fluence modulation with inverse planning optimization techniques. A pencil beam model was used to calculate dose distributions from an orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods. A Genetic Algorithm search heuristic was used to optimize the spatial distribution of added tungsten filtration to achieve dose distributions with sharp dose gradients. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 5, 6, 8, and 10 mm. In addition to the beam profiles, 4π isocentric irradiation geometries were modeled to examine dose at 0.07 mm depth, a representative skin depth, for the low energy beams. Profiles from 4π irradiations of a constant target volume, assuming maximally conformal coverage, were compared. Finally, dose deposition in bone compared to tissue in this energy range was examined. Based on the results of the optimization, circularly symmetric tungsten filters were designed to modulate the orthovoltage beam across the apertures of SRS cone collimators. For each depth and cone size combination examined, the beam flatness and 80-20% and 90-10% penumbrae were calculated for both standard, open cone-collimated beams as well as for optimized, filtered beams. For all configurations tested, the modulated beam profiles had decreased penumbra widths and flatness statistics at depth. Profiles for the optimized, filtered orthovoltage beams also offered decreases in these metrics compared to measured linear accelerator cone-based SRS profiles. The dose at 0.07 mm depth in the 4π isocentric irradiation geometries was higher for the modulated beams compared to unmodulated beams; however, the modulated dose at 0.07 mm depth remained <0.025% of the central, maximum dose. The 4π profiles irradiating a constant target volume showed improved statistics for the modulated, filtered distribution compared to the standard, open cone-collimated distribution. Simulations of tissue and bone confirmed previously published results that a higher energy beam (≥ 200 keV) would be preferable, but the 250 kVp beam was chosen for this work because it is available for future measurements. A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions with decreased flatness and penumbra statistics compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system. © 2017 American Association of Physicists in Medicine.
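
    As a small illustration of the penumbra metric quoted above, the sketch below estimates the 80-20% (or 90-10%) width from a sampled 1D dose profile by interpolating on the falling field edge; the profile arrays and the monotonic-edge assumption are hypothetical, not data from the study.

        import numpy as np

        def penumbra_width(x, dose, low=0.2, high=0.8):
            # Width between the `high` and `low` relative dose levels on the +x field edge,
            # assuming the profile falls off monotonically beyond its maximum.
            d = dose / dose.max()
            i_max = int(np.argmax(d))
            x_fall, d_fall = x[i_max:], d[i_max:]
            # np.interp needs increasing sample points, so traverse the edge in reverse.
            x_high = np.interp(high, d_fall[::-1], x_fall[::-1])
            x_low = np.interp(low, d_fall[::-1], x_fall[::-1])
            return abs(x_low - x_high)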

  10. High resolution digital autoradiographic and dosimetric analysis of heterogeneous radioactivity distribution in xenografted prostate tumors.

    PubMed

    Timmermand, Oskar V; Nilsson, Jenny; Strand, Sven-Erik; Elgqvist, Jörgen

    2016-12-01

    The first main aim of this study was to illustrate the absorbed dose rate distribution from (177)Lu in sections of xenografted prostate cancer (PCa) tumors using high resolution digital autoradiography (DAR) and compare it with hypothetical identical radioactivity distributions of (90)Y or 7 MeV alpha-particles. Three dosimetry models based on either dose point kernels or Monte Carlo simulations were used and evaluated. The second and overlapping aim was to perform DAR imaging and dosimetric analysis of the distribution of radioactivity, and hence the absorbed dose rate, in tumor sections at an early time point after injection during radioimmunotherapy using (177)Lu-h11B6, directed against the human kallikrein 2 antigen. Male immunodeficient BALB/c nude mice, aged 6-8 w, were inoculated by subcutaneous injection of ∼10^7 LNCaP cells in a 200 μl suspension of a 1:1 mixture of medium and Matrigel. The antibody h11B6 was conjugated with the chelator CHX-A″-DTPA, after which conjugated h11B6 was mixed with (177)LuCl3. The incubation was performed at room temperature for 2 h, after which the labeling was terminated and the solution was purified on a NAP-5 column. About 20 MBq (177)Lu-h11B6 was injected intravenously in the tail vein. At approximately 10 h postinjection (hpi), the mice were sacrificed and one tumor was collected from each of the five animals and cryosectioned into 10 μm thick slices. The tumor slices were measured and imaged using the DAR MicroImager system and the M3Vision software. Then the absorbed dose rate was calculated using a dose point kernel generated with the Monte Carlo code GATE v7.0. The DAR system produced high resolution images of the radioactivity distribution, close to the resolution of single PCa cells. The DAR images revealed a pronounced heterogeneous radioactivity distribution, i.e., count rate per area, in the tumors, indicated by the normalized intensity variations along cross sections as mean ± SD: 0.15 ± 0.15, 0.20 ± 0.18, 0.12 ± 0.17, 0.15 ± 0.16, and 0.23 ± 0.22, for each tumor section, respectively. The absorbed dose rate distribution for (177)Lu at the time of dissection 10 hpi showed a maximum value of 2.9 ± 0.4 Gy/h (mean ± SD), compared to 6.0 ± 0.9 and 159 ± 25 Gy/h for the hypothetical (90)Y and 7 MeV alpha-particle cases assuming the same count rate densities. Mean absorbed dose rate values were 0.13, 0.53, and 6.43 Gy/h for (177)Lu, (90)Y, and alpha-particles, respectively. The initial uptake of (177)Lu-h11B6 produces a high absorbed dose rate, which is important for a successful therapeutic outcome. The hypothetical (90)Y case indicates a less heterogeneous absorbed dose rate distribution and a higher mean absorbed dose rate compared to (177)Lu, although with a potentially increased irradiation of surrounding healthy tissue. The hypothetical alpha-particle case indicates the possibility of a higher maximum absorbed dose rate, although with a more heterogeneous absorbed dose rate distribution.
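
    To make the dose-point-kernel step concrete, here is a minimal sketch of convolving a quantified 2D activity map with a radially symmetric dose point kernel (such as one generated with GATE) to obtain a dose-rate image; the array contents, units and calibration are placeholder assumptions.

        import numpy as np
        from scipy.signal import fftconvolve

        def dose_rate_map(activity_bq_per_pixel, dpk_gy_per_decay):
            # Activity map [Bq/pixel, i.e. decays/s] convolved with a dose point kernel
            # [Gy per decay, sampled on the same pixel grid] gives a dose-rate map [Gy/s].
            return fftconvolve(activity_bq_per_pixel, dpk_gy_per_decay, mode="same")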

  11. SU-E-T-214: Intensity Modulated Proton Therapy (IMPT) Based On Passively Scattered Protons and Multi-Leaf Collimation: Prototype TPS and Dosimetry Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez-Parcerisa, D; Carabe-Fernandez, A

    2014-06-01

    Purpose. Intensity-modulated proton therapy is usually implemented with multi-field optimization of pencil-beam scanning (PBS) proton fields. However, in view of the experience with photon IMRT, proton facilities equipped with double-scattering (DS) delivery and multi-leaf collimation (MLC) could produce highly conformal dose distributions (and possibly eliminate the need for patient-specific compensators) with a clever use of their MLC field shaping, provided that an optimal inverse TPS is developed. Methods. A prototype TPS was developed in MATLAB. The dose calculation process was based on a fluence-dose algorithm on an adaptive divergent grid. A database of dose kernels was precalculated in order to allow for fast variations of the field range and modulation during optimization. The inverse planning process was based on the adaptive simulated annealing approach, with direct aperture optimization of the MLC leaves. A dosimetry study was performed on a phantom formed by three concentric semicylinders separated by 5 mm, of which the innermost and outermost were regarded as organs at risk (OARs) and the middle one as the PTV. We chose a concave target (which is not treatable with conventional DS fields) to show the potential of our technique. The optimizer was configured to minimize the mean dose to the OARs while keeping good coverage of the target. Results. The plan produced by the prototype TPS achieved a conformity index of 1.34, with the mean doses to the OARs below 78% of the prescribed dose. This result is hardly achievable with the traditional conformal DS technique with compensators, and it compares to what can be obtained with PBS. Conclusion. It is certainly feasible to produce IMPT fields with MLC passive scattering fields. With a fully developed treatment planning system, the produced plans can be superior to traditional DS plans in terms of plan conformity and dose to organs at risk.

  12. WE-AB-207B-07: Dose Cloud: Generating “Big Data” for Radiation Therapy Treatment Plan Optimization Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Folkerts, MM; University of California San Diego, La Jolla, California; Long, T

    Purpose: To provide a tool to generate large sets of realistic virtual patient geometries and beamlet doses for treatment optimization research. This tool enables countless studies exploring the fundamental interplay between patient geometry, objective functions, weight selections, and achievable dose distributions for various algorithms and modalities. Methods: Generating realistic virtual patient geometries requires a small set of real patient data. We developed a normalized patient shape model (PSM) which captures organ and target contours in a correspondence-preserving manner. Using PSM-processed data, we perform principal component analysis (PCA) to extract major modes of variation from the population. These PCA modes can be shared without exposing patient information. The modes are re-combined with different weights to produce sets of realistic virtual patient contours. Because virtual patients lack imaging information, we developed a shape-based dose calculation (SBD) relying on the assumption that the region inside the body contour is water. SBD utilizes a 2D fluence-convolved scatter kernel, derived from Monte Carlo simulations, and can compute both the full dose for a given set of fluence maps and a dose matrix (dose per fluence pixel) for many modalities. Combining the shape model with SBD provides the data needed for treatment plan optimization research. Results: We used the PSM to capture organ and target contours for 96 prostate cases, extracted the first 20 PCA modes, and generated 2048 virtual patient shapes by randomly sampling mode scores. Nearly half of the shapes were discarded for failing anatomical checks; the remaining 1124 were used in computing dose matrices via SBD and a standard 7-beam protocol. As a proof of concept, and to generate data for later study, we performed fluence map optimization emphasizing PTV coverage. Conclusions: We successfully developed and tested a tool for creating customizable sets of virtual patients suitable for large-scale radiation therapy optimization research.
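
    A rough sketch of the virtual-patient sampling idea follows: PCA on correspondence-preserving contour vectors, then random mode scores drawn at the population variance to synthesize new shapes. The matrix layout is an assumption, and the anatomical-validity checks mentioned in the abstract are not reproduced.

        import numpy as np
        from sklearn.decomposition import PCA

        def sample_virtual_shapes(shape_matrix, n_modes=20, n_samples=2048, seed=None):
            # shape_matrix: (n_patients, n_features) flattened, corresponded contour coordinates.
            rng = np.random.default_rng(seed)
            pca = PCA(n_components=n_modes).fit(shape_matrix)
            scores = rng.standard_normal((n_samples, n_modes)) * np.sqrt(pca.explained_variance_)
            return pca.inverse_transform(scores)  # synthetic contour vectors, one per row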

  13. [Study on application of SVM in prediction of coronary heart disease].

    PubMed

    Zhu, Yue; Wu, Jianghua; Fang, Ying

    2013-12-01

    Based on physical-examination data of blood pressure, plasma lipids, Glu and UA, a support vector machine (SVM) was applied to distinguish coronary heart disease (CHD) patients from non-CHD individuals in a south China population, to guide further prevention and treatment of the disease. Firstly, the SVM classifier was built using a radial basis function (RBF) kernel, a linear kernel and a polynomial kernel, respectively. Secondly, the SVM penalty factor C and kernel parameter sigma were optimized by particle swarm optimization (PSO) and then employed to diagnose and predict CHD. Compared with an artificial neural network with back propagation (BP), linear discriminant analysis, logistic regression and non-optimized SVM, the overall results demonstrated that the classification performance of the optimized RBF-SVM model was superior to the other classifiers, with accuracy, sensitivity and specificity of 94.51%, 92.31% and 96.67%, respectively. It is concluded that SVM can be used as a valid method for assisting in the diagnosis of CHD.
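
    A minimal sketch of the classifier setup is given below, with the caveat that the hyperparameters are tuned by a plain cross-validated grid search rather than the particle swarm optimization used in the paper; the feature and label arrays are assumed inputs.

        from sklearn.svm import SVC
        from sklearn.model_selection import GridSearchCV

        def fit_rbf_svm(features, labels):
            # C is the penalty factor; gamma plays the role of the RBF width parameter sigma.
            grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
            search = GridSearchCV(SVC(kernel="rbf"), grid, scoring="accuracy", cv=5)
            return search.fit(features, labels).best_estimator_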

  14. Reducing disk storage of full-3D seismic waveform tomography (F3DT) through lossy online compression

    NASA Astrophysics Data System (ADS)

    Lindstrom, Peter; Chen, Po; Lee, En-Jui

    2016-08-01

    Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback in the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error, and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and computes the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may open up the possibility of wide adoption of F3DT-SI in routine seismic tomography practice in the near future.
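
    The workflow can be illustrated with a deliberately simple stand-in for the real codec: uniform quantization gives an absolute-error-bounded "compressor" for strain-field snapshots before they are written to disk. This is only a conceptual sketch, not the dedicated floating-point compressor used in the study; in practice the quantized integers would additionally be entropy coded.

        import numpy as np

        def compress(strain_field, tolerance):
            # Round to the nearest multiple of 2*tolerance, so |field - decompressed| <= tolerance.
            return np.round(strain_field / (2.0 * tolerance)).astype(np.int32)

        def decompress(quantized, tolerance):
            return quantized.astype(np.float64) * (2.0 * tolerance)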

  15. Reducing Disk Storage of Full-3D Seismic Waveform Tomography (F3DT) Through Lossy Online Compression

    DOE PAGES

    Lindstrom, Peter; Chen, Po; Lee, En-Jui

    2016-05-05

    Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback in the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error, and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and computes the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may open up the possibility of wide adoption of F3DT-SI in routine seismic tomography practice in the near future.

  16. On randomized algorithms for numerical solution of applied Fredholm integral equations of the second kind

    NASA Astrophysics Data System (ADS)

    Voytishek, Anton V.; Shipilov, Nikolay M.

    2017-11-01

    In this paper, numerical (computer-implemented) randomized functional algorithms for approximating the solution of a Fredholm integral equation of the second kind are systematized. Three types of such algorithms are distinguished: the projection, the mesh and the projection-mesh methods. The possibilities of using these algorithms for the solution of practically important problems are investigated in detail. The disadvantages of the mesh algorithms, related to the necessity of calculating values of the kernels of the integral equations at fixed points, are identified. In practice, these kernels have integrable singularities, and calculation of their values is impossible. Thus, for applied problems related to solving Fredholm integral equations of the second kind, it is expedient to use not the mesh algorithms but the projection and projection-mesh randomized algorithms.
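
    For contrast with the mesh methods criticized above, the sketch below gives one classical randomized estimator (a Neumann-series random walk) for the solution of φ(x) = f(x) + λ∫k(x,y)φ(y)dy at a single point on [0, 1]; it evaluates the kernel only at sampled points. All names and the uniform proposal are illustrative assumptions, not the projection-mesh algorithms of the paper.

        import numpy as np

        def fredholm_point_estimate(f, k, lam, x0, n_walks=100000, p_continue=0.8, seed=None):
            # Monte Carlo collision estimator of the Neumann series sum_n (lam*K)^n f at x0,
            # valid when the series converges.
            rng = np.random.default_rng(seed)
            total = 0.0
            for _ in range(n_walks):
                x, w, score = x0, 1.0, f(x0)
                while rng.random() < p_continue:
                    y = rng.random()                    # uniform proposal on [0, 1]
                    w *= lam * k(x, y) / p_continue     # importance weight update
                    score += w * f(y)
                    x = y
                total += score
            return total / n_walks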

  17. Efficient exact-exchange time-dependent density-functional theory methods and their relation to time-dependent Hartree-Fock.

    PubMed

    Hesselmann, Andreas; Görling, Andreas

    2011-01-21

    A recently introduced time-dependent exact-exchange (TDEXX) method, i.e., a response method based on time-dependent density-functional theory that treats the frequency-dependent exchange kernel exactly, is reformulated. In the reformulated version of the TDEXX method electronic excitation energies can be calculated by solving a linear generalized eigenvalue problem while in the original version of the TDEXX method a laborious frequency iteration is required in the calculation of each excitation energy. The lowest eigenvalues of the new TDEXX eigenvalue equation corresponding to the lowest excitation energies can be efficiently obtained by, e.g., a version of the Davidson algorithm appropriate for generalized eigenvalue problems. Alternatively, with the help of a series expansion of the new TDEXX eigenvalue equation, standard eigensolvers for large regular eigenvalue problems, e.g., the standard Davidson algorithm, can be used to efficiently calculate the lowest excitation energies. With the help of the series expansion as well, the relation between the TDEXX method and time-dependent Hartree-Fock is analyzed. Several ways to take into account correlation in addition to the exact treatment of exchange in the TDEXX method are discussed, e.g., a scaling of the Kohn-Sham eigenvalues, the inclusion of (semi)local approximate correlation potentials, or hybrids of the exact-exchange kernel with kernels within the adiabatic local density approximation. The lowest lying excitations of the molecules ethylene, acetaldehyde, and pyridine are considered as examples.
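
    The linear-algebra step alone can be sketched as follows: the lowest excitation energies are the smallest eigenvalues of a generalized eigenvalue problem A c = ω B c, with A and B standing in for the (here unspecified) TDEXX response matrices; A is assumed symmetric and B symmetric positive definite.

        from scipy.linalg import eigh

        def lowest_excitations(A, B, n_states=5):
            # Generalized symmetric-definite eigenproblem; eigenvalues are returned in
            # ascending order, so the first n_states are the lowest excitation energies.
            omega, C = eigh(A, B)
            return omega[:n_states], C[:, :n_states]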

  18. Xyloglucans from flaxseed kernel cell wall: Structural and conformational characterisation.

    PubMed

    Ding, Huihuang H; Cui, Steve W; Goff, H Douglas; Chen, Jie; Guo, Qingbin; Wang, Qi

    2016-10-20

    The structure of the ethanol-precipitated fraction from 1 M KOH-extracted flaxseed kernel polysaccharides (KPI-EPF) was studied to better understand the molecular structure of flaxseed kernel cell wall polysaccharides. Based on methylation/GC-MS, NMR spectroscopy, and MALDI-TOF-MS analysis, the dominant sugar residues of the KPI-EPF fraction comprised (1,4,6)-linked-β-d-glucopyranose (24.1 mol%), terminal α-d-xylopyranose (16.2 mol%), (1,2)-linked-α-d-xylopyranose (10.7 mol%), (1,4)-linked-β-d-glucopyranose (10.7 mol%), and terminal β-d-galactopyranose (8.5 mol%). KPI-EPF was proposed to consist of xyloglucans: the substitution rate of the backbone is 69.3%; R1 could be T-α-d-Xylp-(1→, or none; R2 could be T-α-d-Xylp-(1→, T-β-d-Galp-(1→2)-α-d-Xylp-(1→, or T-α-l-Araf-(1→2)-α-d-Xylp-(1→; R3 could be T-α-d-Xylp-(1→, T-β-d-Galp-(1→2)-α-d-Xylp-(1→, T-α-l-Fucp-(1→2)-β-d-Galp-(1→2)-α-d-Xylp-(1→, or none. The Mw of KPI-EPF was calculated to be 1506 kDa by static light scattering (SLS). The structure-sensitive parameter (ρ) of KPI-EPF was calculated as 1.44, which confirmed the highly branched structure of the extracted xyloglucans. These new findings on flaxseed kernel xyloglucans will be helpful for understanding their fermentation properties and potential applications. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  19. Dosimetry for nonuniform activity distributions: a method for the calculation of 3D absorbed-dose distribution without the use of voxel S-values, point kernels, or Monte Carlo simulations.

    PubMed

    Traino, A C; Marcatili, S; Avigo, C; Sollini, M; Erba, P A; Mariani, G

    2013-04-01

    Nonuniform activity within the target lesions and the critical organs constitutes an important limitation for dosimetric estimates in patients treated with tumor-seeking radiopharmaceuticals. The tumor control probability and the normal tissue complication probability are affected by the distribution of the radionuclide in the treated organ/tissue. In this paper, a straightforward method for calculating the absorbed dose at the voxel level is described. This new method takes into account a nonuniform activity distribution in the target/organ. The new method is based on the macroscopic S-values (i.e., the S-values calculated for the various organs, as defined in the MIRD approach), on the definition of the number of voxels, and on the raw-count 3D array, corrected for attenuation, scatter, and collimator resolution, in the lesion/organ considered. Starting from these parameters, the only mathematical operation required is to multiply the 3D array by a scalar value, thus avoiding all the complex operations involving the 3D arrays. A comparison with the MIRD approach, fully described in the MIRD Pamphlet No. 17, using S-values at the voxel level, showed a good agreement between the two methods for (131)I and for (90)Y. Voxel dosimetry is becoming more and more important when performing therapy with tumor-seeking radiopharmaceuticals. The method presented here does not require calculating the S-values at the voxel level, and thus bypasses the mathematical problems linked to the convolution of 3D arrays and to the voxel size. In the paper, the results obtained with this new simplified method as well as the possibility of using it for other radionuclides commonly employed in therapy are discussed. The possibility of using the correct density value of the tissue/organs involved is also discussed.
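
    One possible reading of the scalar operation, stated here as an assumption rather than the authors' exact prescription, is that the organ-level dose rate obtained from the MIRD S-value is redistributed over the voxels in proportion to the corrected counts:

        import numpy as np

        def voxel_dose_rate(counts3d, s_organ, total_activity):
            # counts3d: attenuation-, scatter- and resolution-corrected counts per voxel.
            # s_organ * total_activity is the organ-average dose rate; scaling by the number
            # of voxels over the total counts turns the count array into per-voxel dose rates.
            organ_dose_rate = s_organ * total_activity
            return organ_dose_rate * counts3d.size * counts3d / counts3d.sum()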

  20. Effects of study area size on home range estimates of common bottlenose dolphins Tursiops truncatus

    PubMed Central

    Nekolny, Samantha R; Denny, Matthew; Biedenbach, George; Howells, Elisabeth M; Mazzoil, Marilyn; Durden, Wendy N; Moreland, Lydia; David Lambert, J

    2017-01-01

    Abstract Knowledge of an animal’s home range is a crucial component in making informed management decisions. However, many home range studies are limited by study area size, and therefore may underestimate the size of the home range. In many cases, individuals have been shown to travel outside of the study area and utilize a larger area than estimated by the study design. In this study, data collected by multiple research groups studying bottlenose dolphins on the east coast of Florida were combined to determine how home range estimates increased with increasing study area size. Home range analyses utilized photo-identification data collected from 6 study areas throughout the St Johns River (SJR; Jacksonville, FL, USA) and adjacent waterways, extending a total of 253 km to the southern end of Mosquito Lagoon in the Indian River Lagoon Estuarine System. Univariate kernel density estimates (KDEs) were computed for individuals with 10 or more sightings (n = 20). Kernels were calculated for the primary study area (SJR) first, then additional kernels were calculated by combining the SJR and the next adjacent waterway; this continued in an additive fashion until all study areas were included. The 95% and 50% KDEs calculated for the SJR alone ranged from 21 to 35 km and 4 to 19 km, respectively. The 95% and 50% KDEs calculated for all combined study areas ranged from 116 to 217 km and 9 to 70 km, respectively. This study illustrates the degree to which home range may be underestimated by the use of limited study areas and demonstrates the benefits of conducting collaborative science. PMID:29492031

  1. Effects of study area size on home range estimates of common bottlenose dolphins Tursiops truncatus.

    PubMed

    Nekolny, Samantha R; Denny, Matthew; Biedenbach, George; Howells, Elisabeth M; Mazzoil, Marilyn; Durden, Wendy N; Moreland, Lydia; David Lambert, J; Gibson, Quincy A

    2017-12-01

    Knowledge of an animal's home range is a crucial component in making informed management decisions. However, many home range studies are limited by study area size, and therefore may underestimate the size of the home range. In many cases, individuals have been shown to travel outside of the study area and utilize a larger area than estimated by the study design. In this study, data collected by multiple research groups studying bottlenose dolphins on the east coast of Florida were combined to determine how home range estimates increased with increasing study area size. Home range analyses utilized photo-identification data collected from 6 study areas throughout the St Johns River (SJR; Jacksonville, FL, USA) and adjacent waterways, extending a total of 253 km to the southern end of Mosquito Lagoon in the Indian River Lagoon Estuarine System. Univariate kernel density estimates (KDEs) were computed for individuals with 10 or more sightings (n = 20). Kernels were calculated for the primary study area (SJR) first, then additional kernels were calculated by combining the SJR and the next adjacent waterway; this continued in an additive fashion until all study areas were included. The 95% and 50% KDEs calculated for the SJR alone ranged from 21 to 35 km and 4 to 19 km, respectively. The 95% and 50% KDEs calculated for all combined study areas ranged from 116 to 217 km and 9 to 70 km, respectively. This study illustrates the degree to which home range may be underestimated by the use of limited study areas and demonstrates the benefits of conducting collaborative science.

  2. SU-E-T-236: Deconvolution of the Total Nuclear Cross-Sections of Therapeutic Protons and the Characterization of the Reaction Channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulmer, W.

    2015-06-15

    Purpose: The knowledge of the total nuclear cross-section Qtot(E) of therapeutic protons provides important information in advanced radiotherapy with protons, such as the decrease of fluence of primary protons, the release of secondary particles (neutrons, protons, deuterons, etc.), and the production of nuclear fragments (heavy recoils), which usually undergo β+/− decay by emission of γ-quanta. Therefore the determination of Qtot(E) is an important tool for sophisticated calculation algorithms of dose distributions. This cross-section can be determined by a linear combination of shifted Gaussian kernels and an error function. The resonances resulting from deconvolutions in the energy space can be associated with typical nuclear reactions. Methods: The described method for the determination of Qtot(E) results from an extension of the Breit-Wigner formula and a rather extended version of the nuclear shell theory to include nuclear correlation effects, clusters and highly excited/virtually excited nuclear states. The elastic energy transfer of protons to nucleons (the quantum numbers of the target nucleus remain constant) can be removed by the mentioned deconvolution. Results: The deconvolution of the error-function term of the type c_erf·erf((E − E_Th)/σ_erf) is the main contribution to obtaining various nuclear reactions as resonances, since the elastic part of the energy transfer is removed. The nuclear products of various elements of therapeutic interest, such as oxygen and calcium, are classified and calculated. Conclusions: The release of neutrons is considerably underestimated, in particular for low-energy protons. The transport of secondary particles, e.g., cluster formation by deuterium, tritium and α-particles, makes an essential contribution to the secondary particles, and the heavy recoils, which create γ-quanta by decay reactions, lead to broadening of the scatter profiles. These contributions cannot be accounted for by a single Gaussian kernel for the description of lateral scatter.
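
    A hedged sketch of the fitting step is shown below: the total cross-section is represented as an error-function term plus a few shifted Gaussians and the coefficients are obtained by least squares; the number of Gaussians, parameter names and starting values are placeholders, not the paper's values.

        import numpy as np
        from scipy.special import erf
        from scipy.optimize import curve_fit

        def q_tot(E, c_erf, E_th, s_erf, a1, E1, s1, a2, E2, s2):
            # Error-function threshold term plus two shifted Gaussian resonance terms.
            gaussians = (a1 * np.exp(-0.5 * ((E - E1) / s1) ** 2)
                         + a2 * np.exp(-0.5 * ((E - E2) / s2) ** 2))
            return c_erf * erf((E - E_th) / s_erf) + gaussians

        # Hypothetical usage with measured data E_data, Q_data and an initial guess p0:
        # popt, pcov = curve_fit(q_tot, E_data, Q_data, p0=p0)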

  3. Unified connected theory of few-body reaction mechanisms in N-body scattering theory

    NASA Technical Reports Server (NTRS)

    Polyzou, W. N.; Redish, E. F.

    1978-01-01

    A unified treatment of different reaction mechanisms in nonrelativistic N-body scattering is presented. The theory is based on connected kernel integral equations that are expected to become compact for reasonable constraints on the potentials. The operators T_{±}^{ab}(A) are approximate transition operators that describe the scattering proceeding through an arbitrary reaction mechanism A. These operators are uniquely determined by a connected kernel equation and satisfy an optical theorem consistent with the choice of reaction mechanism. Connected kernel equations relating T_{±}^{ab}(A) to the full T_{±}^{ab} allow correction of the approximate solutions for any ignored process to any order. This theory gives a unified treatment of all few-body reaction mechanisms with the same dynamic simplicity of a model calculation, but can include complicated reaction mechanisms involving overlapping configurations where it is difficult to formulate models.

  4. Milne problem for non-absorbing medium with extremely anisotropic scattering kernel in the case of specular and diffuse reflecting boundaries

    NASA Astrophysics Data System (ADS)

    Güleçyüz, M. Ç.; Şenyiğit, M.; Ersoy, A.

    2018-01-01

    The Milne problem is studied in one-speed neutron transport theory using the linearly anisotropic scattering kernel which combines forward and backward scattering (extremely anisotropic scattering) for a non-absorbing medium with specular and diffuse reflection boundary conditions. In order to calculate the extrapolated endpoint for the Milne problem, the Legendre polynomial approximation (PN method) is applied and numerical results are tabulated for selected cases as a function of different degrees of anisotropic scattering. Finally, some results are discussed and compared with the existing results in the literature.

  5. Calculation of the time resolution of the J-PET tomograph using kernel density estimation

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Wiślicki, W.; Krzemień, W.; Kowalski, P.; Alfs, D.; Bednarski, T.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Rundel, O.; Sharma, N. G.; Silarski, M.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.

    2017-06-01

    In this paper we estimate the time resolution of the J-PET scanner built from plastic scintillators. We incorporate the method of signal processing using the Tikhonov regularization framework and the kernel density estimation method. We obtain simple, closed-form analytical formulae for time resolution. The proposed method is validated using signals registered by means of the single detection unit of the J-PET tomograph built from a 30 cm long plastic scintillator strip. It is shown that the experimental and theoretical results obtained for the J-PET scanner equipped with vacuum tube photomultipliers are consistent.

  6. Sensitivity kernels for viscoelastic loading based on adjoint methods

    NASA Astrophysics Data System (ADS)

    Al-Attar, David; Tromp, Jeroen

    2014-01-01

    Observations of glacial isostatic adjustment (GIA) allow for inferences to be made about mantle viscosity, ice sheet history and other related parameters. Typically, this inverse problem can be formulated as minimizing the misfit between the given observations and a corresponding set of synthetic data. When the number of parameters is large, solution of such optimization problems can be computationally challenging. A practical, albeit non-ideal, solution is to use gradient-based optimization. Although the gradient of the misfit required in such methods could be calculated approximately using finite differences, the necessary computation time grows linearly with the number of model parameters, and so this is often infeasible. A far better approach is to apply the `adjoint method', which allows the exact gradient to be calculated from a single solution of the forward problem, along with one solution of the associated adjoint problem. As a first step towards applying the adjoint method to the GIA inverse problem, we consider its application to a simpler viscoelastic loading problem in which gravitationally self-consistent ocean loading is neglected. The earth model considered is non-rotating, self-gravitating, compressible, hydrostatically pre-stressed, laterally heterogeneous and possesses a Maxwell solid rheology. We determine adjoint equations and Fréchet kernels for this problem based on a Lagrange multiplier method. Given an objective functional J defined in terms of the surface deformation fields, we show that its first-order perturbation can be written δJ = ∫_{M_S} K_η δln η dV + ∫_{t_0}^{t_1} ∫_{∂M} K_σ̇ δσ̇ dS dt, where δln η = δη/η denotes relative viscosity variations in solid regions M_S, dV is the volume element, δσ̇ is the perturbation to the time derivative of the surface load, which is defined on the earth model's surface ∂M and for times [t_0, t_1], and dS is the surface element on ∂M. The 'viscosity kernel' K_η determines the linearized sensitivity of J to viscosity perturbations defined with respect to a laterally heterogeneous reference earth model, while the 'rate-of-loading kernel' K_σ̇ determines the sensitivity to variations in the time derivative of the surface load. By restricting attention to spherically symmetric viscosity perturbations, we also obtain a 'radial viscosity kernel' K̄_η such that the associated contribution to δJ can be written ∫_{I_S} K̄_η δln η dr, where I_S denotes the subset of radii lying in solid regions. In order to illustrate this theory, we describe its numerical implementation in the case of a spherically symmetric earth model using a 1-D spectral element method, and calculate sensitivity kernels for a range of realistic observables.

  7. The spatial sensitivity of Sp converted waves—scattered-wave kernels and their applications to receiver-function migration and inversion

    NASA Astrophysics Data System (ADS)

    Mancinelli, N. J.; Fischer, K. M.

    2018-03-01

    We characterize the spatial sensitivity of Sp converted waves to improve constraints on lateral variations in uppermost-mantle velocity gradients, such as the lithosphere-asthenosphere boundary (LAB) and the mid-lithospheric discontinuities. We use SPECFEM2D to generate 2-D scattering kernels that relate perturbations from an elastic half-space to Sp waveforms. We then show that these kernels can be well approximated using ray theory, and develop an approach to calculating kernels for layered background models. As proof of concept, we show that lateral variations in uppermost-mantle discontinuity structure are retrieved by implementing these scattering kernels in the first iteration of a conjugate-directions inversion algorithm. We evaluate the performance of this technique on synthetic seismograms computed for 2-D models with undulations on the LAB of varying amplitude, wavelength and depth. The technique reliably images the position of discontinuities with dips <35° and horizontal wavelengths >100-200 km. In cases of mild topography on a shallow LAB, the relative brightness of the LAB and Moho converters approximately agrees with the ratio of velocity contrasts across the discontinuities. Amplitude retrieval degrades at deeper depths. For dominant periods of 4 s, the minimum station spacing required to produce unaliased results is 5 km, but the application of a Gaussian filter can improve discontinuity imaging where station spacing is greater.

  8. Selection and properties of alternative forming fluids for TRISO fuel kernel production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, M. P.; King, J. C.; Gorman, B. P.

    2013-01-01

    Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to yield ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension for each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory fuel kernels.
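
    The column-height estimate can be sketched in back-of-envelope form: a Stokes-law settling velocity from the measured fluid density and viscosity, multiplied by the required gelation residence time, gives a rough column height. The Stokes-regime assumption and all input values are placeholders, not the study's measured data.

        def stokes_settling_velocity(droplet_diameter, rho_droplet, rho_fluid, mu_fluid, g=9.81):
            # Terminal velocity [m/s] of a small sphere settling in a viscous fluid (creeping flow).
            return g * droplet_diameter ** 2 * (rho_droplet - rho_fluid) / (18.0 * mu_fluid)

        def column_height(settling_velocity, residence_time):
            # Height [m] needed for the droplet to remain in the hot fluid long enough to gel.
            return settling_velocity * residence_time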

  9. Application of stochastic weighted algorithms to a multidimensional silica particle model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menz, William J.; Patterson, Robert I.A.; Wagner, Wolfgang

    2013-09-01

    Highlights: •Stochastic weighted algorithms (SWAs) are developed for a detailed silica model. •An implementation of SWAs with the transition kernel is presented. •The SWAs’ solutions converge to the direct simulation algorithm’s (DSA) solution. •The efficiency of SWAs is evaluated for this multidimensional particle model. •It is shown that SWAs can be used for coagulation problems in industrial systems. -- Abstract: This paper presents a detailed study of the numerical behaviour of stochastic weighted algorithms (SWAs) using the transition regime coagulation kernel and a multidimensional silica particle model. The implementation in the SWAs of the transition regime coagulation kernel and associated majorant rates is described. The silica particle model of Shekar et al. [S. Shekar, A.J. Smith, W.J. Menz, M. Sander, M. Kraft, A multidimensional population balance model to describe the aerosol synthesis of silica nanoparticles, Journal of Aerosol Science 44 (2012) 83–98] was used in conjunction with this coagulation kernel to study the convergence properties of SWAs with a multidimensional particle model. High precision solutions were calculated with two SWAs and also with the established direct simulation algorithm. These solutions, which were generated using a large number of computational particles, showed close agreement. It was thus demonstrated that SWAs can be successfully used with complex coagulation kernels and high dimensional particle models to simulate real-world systems.

  10. Selection and properties of alternative forming fluids for TRISO fuel kernel production

    NASA Astrophysics Data System (ADS)

    Baker, M. P.; King, J. C.; Gorman, B. P.; Marshall, D. W.

    2013-01-01

    Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ˜10,800 chemicals to yield ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension for each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory fuel kernels.

  11. RBF kernel based support vector regression to estimate the blood volume and heart rate responses during hemodialysis.

    PubMed

    Javed, Faizan; Chan, Gregory S H; Savkin, Andrey V; Middleton, Paul M; Malouf, Philip; Steel, Elizabeth; Mackie, James; Lovell, Nigel H

    2009-01-01

    This paper uses non-linear support vector regression (SVR) to model the blood volume and heart rate (HR) responses in 9 hemodynamically stable kidney failure patients during hemodialysis. Using radial basis function (RBF) kernels, non-parametric models of the relative blood volume (RBV) change with time as well as the percentage change in HR with respect to RBV were obtained. The ε-insensitive loss function was used for SVR modeling. Selection of the design parameters, which include the capacity (C), the insensitivity region (ε) and the RBF kernel parameter (sigma), was made based on a grid search approach, and the selected models were cross-validated using the average mean square error (AMSE) calculated from testing data based on a k-fold cross-validation technique. Linear regression was also applied to fit the curves and the AMSE was calculated for comparison with SVR. For the model based on RBV with time, SVR gave a lower AMSE for both training (AMSE=1.5) as well as testing data (AMSE=1.4) compared to linear regression (AMSE=1.8 and 1.5). SVR also provided a better fit for HR with RBV for both training as well as testing data (AMSE=15.8 and 16.4) compared to linear regression (AMSE=25.2 and 20.1).
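
    A minimal sketch of the modeling setup is given below: ε-insensitive SVR with an RBF kernel and C, ε and the kernel width chosen by a cross-validated grid search; the input arrays and parameter grids are assumptions, not the study's values.

        from sklearn.svm import SVR
        from sklearn.model_selection import GridSearchCV

        def fit_rbv_model(time_min, rbv_percent):
            # epsilon-insensitive RBF-kernel SVR; gamma acts as the kernel width parameter.
            grid = {"C": [1, 10, 100], "epsilon": [0.01, 0.1, 0.5], "gamma": [0.01, 0.1, 1]}
            search = GridSearchCV(SVR(kernel="rbf"), grid,
                                  scoring="neg_mean_squared_error", cv=5)
            return search.fit(time_min.reshape(-1, 1), rbv_percent).best_estimator_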

  12. The effect of carbon crystal structure on treat reactor physics calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swanson, R.W.; Harrison, L.J.

    1988-01-01

    The Transient Reactor Test Facility (TREAT) at Argonne National Laboratory-West (ANL-W) is fueled with urania in a graphite and carbon mixture. This fuel was fabricated from a mixture of graphite flour, thermax (a thermatomic carbon produced by "cracking" natural gas), coal-tar resin and U3O8. During the fabrication process, the fuel was baked to dissociate the resin, but the high temperature necessary to graphitize the carbon in the thermax and in the resin was avoided. Therefore, the carbon crystal structure is a complex mixture of graphite particles in a nongraphitized elemental carbon matrix. Results of calculations using macroscopic carbon cross sections obtained by mixing bound-kernel graphite cross sections for the graphitized carbon and free-gas carbon cross sections for the remainder of the carbon, and calculations using only bound-kernel graphite cross sections, are compared to experimental data. It is shown that the use of the hybridized cross sections, which reflect the allotropic mixture of the carbon in the TREAT fuel, results in a significant improvement in the accuracy of calculated neutronics parameters for the TREAT reactor. 6 refs., 2 figs., 3 tabs.

  13. Production of Low Enriched Uranium Nitride Kernels for TRISO Particle Irradiation Testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McMurray, J. W.; Silva, C. M.; Helmreich, G. W.

    2016-06-01

    A large batch of UN microspheres to be used as kernels for TRISO particle fuel was produced using carbothermic reduction and nitriding of a sol-gel feedstock bearing tailored amounts of low-enriched uranium (LEU) oxide and carbon. The process parameters, established in a previous study, produced phase-pure NaCl-structure UN with dissolved C on the N sublattice. The composition, calculated by refinement of the lattice parameter from X-ray diffraction, was determined to be UC0.27N0.73. The final accepted product weighed 197.4 g. The microspheres had an average diameter of 797±1.35 μm and a composite mean theoretical density of 89.9±0.5% for a solid solution of UC and UN with the same atomic ratio; both values are reported with their corresponding calculated standard error.

  14. Learning molecular energies using localized graph kernels.

    PubMed

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-21

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
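
    To illustrate the kind of kernel involved, the sketch below evaluates a generic geometric random-walk kernel between two local environments given as adjacency matrices; the decay parameter must be small enough for the series to converge, and this generic form is not necessarily the exact GRAPE formulation.

        import numpy as np

        def random_walk_kernel(A1, A2, lam=0.05):
            # Adjacency matrix of the direct-product graph (Kronecker product).
            Ax = np.kron(A1, A2)
            n = Ax.shape[0]
            # Geometric series sum_{ij} [(I - lam*Ax)^(-1)]_{ij} via a single linear solve.
            x = np.linalg.solve(np.eye(n) - lam * Ax, np.ones(n))
            return x.sum()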

  15. Learning molecular energies using localized graph kernels

    NASA Astrophysics Data System (ADS)

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-01

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.

  16. Progress in understanding fission-product behaviour in coated uranium-dioxide fuel particles

    NASA Astrophysics Data System (ADS)

    Barrachin, M.; Dubourg, R.; Kissane, M. P.; Ozrin, V.

    2009-03-01

    Supported by results of calculations performed with two analytical tools (MFPR, which takes account of physical and chemical mechanisms in calculating the chemical forms and physical locations of fission products in UO2, and MEPHISTA, a thermodynamic database), this paper presents an investigation of some important aspects of the fuel microstructure and chemical evolutions of irradiated TRISO particles. The following main conclusions can be identified with respect to irradiated TRISO fuel: first, the relatively low oxygen potential within the fuel particles with respect to PWR fuel leads to chemical speciation that is not typical of PWR fuels, e.g., the relatively volatile behaviour of barium; secondly, the safety-critical fission-product caesium is released from the urania kernel but the buffer and pyrolytic-carbon coatings could form an important chemical barrier to further migration (i.e., formation of carbides). Finally, significant releases of fission gases from the urania kernel are expected even in nominal conditions.

  17. TH-AB-207A-12: CT Lung Cancer Screening and the Effects of Further Dose Reduction On CAD Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, S; Lo, P; Hoffman, J

    Purpose: CT lung screening is already performed at low doses. In this study, we investigated the effects of further dose reduction on a lung-nodule CAD detection algorithm. Methods: The original raw CT data and images from 348 patients were obtained from our local database of National Lung Screening Trial (NLST) cases. 61 patients (17.5%) had at least one nodule reported on the NLST reader forms. All scans were acquired with fixed mAs (25 for standard-sized patients, 40 for large patients) on a 64-slice scanner (Sensation 64, Siemens Healthcare). All images were reconstructed with 1-mm slice thickness, B50 kernel. Based on a previously-published technique, we added noise to the raw data to simulate reduced-dose versions of each case at 50% and 25% of the original NLST dose (i.e. approximately 1.0 and 0.5 mGy CTDIvol). For each case at each dose level, a CAD detection algorithm was run and nodules greater than 4 mm in diameter were reported. These CAD results were compared to “truth”, defined as the approximate nodule centroids from the NLST forms. Sensitivities and false-positive rates (FPR) were calculated for each dose level, with a sub-analysis by nodule LungRADS category. Results: For larger category 4 nodules, median sensitivities were 100% at all three dose levels, and mean sensitivity decreased with dose. For the more challenging category 2 and 3 nodules, the dose dependence was less obvious. Overall, mean subject-level sensitivity varied from 38.5% at 100% dose to 40.4% at 50% dose, a difference of only 1.9%. However, median FPR quadrupled from 1 per case at 100% dose to 4 per case at 25% dose. Conclusions: Dose reduction affected nodule detectability differently depending on the LungRADS category, and FPR was very sensitive at sub-screening levels. Care should be taken to adapt CAD for the very challenging noise characteristics of screening. Funding support: NIH U01 CA181156; Disclosures (McNitt-Gray): Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.

  18. Generalized time-dependent Schrödinger equation in two dimensions under constraints

    NASA Astrophysics Data System (ADS)

    Sandev, Trifce; Petreska, Irina; Lenzi, Ervin K.

    2018-01-01

    We investigate a generalized two-dimensional time-dependent Schrödinger equation on a comb with a memory kernel. A Dirac delta term is introduced in the Schrödinger equation so that the quantum motion along the x-direction is constrained at y = 0. The wave function is analyzed by using Green's function approach for several forms of the memory kernel, which are of particular interest. Closed form solutions for the cases of Dirac delta and power-law memory kernels in terms of Fox H-function, as well as for a distributed order memory kernel, are obtained. Further, a nonlocal term is also introduced and investigated analytically. It is shown that the solution for such a case can be represented in terms of infinite series in Fox H-functions. Green's functions for each of the considered cases are analyzed and plotted for the most representative ones. Anomalous diffusion signatures are evident from the presence of the power-law tails. The normalized Green's functions obtained in this work are of broader interest, as they are an important ingredient for further calculations and analyses of some interesting effects in the transport properties in low-dimensional heterogeneous media.

  19. Evaluating and interpreting the chemical relevance of the linear response kernel for atoms II: open shell.

    PubMed

    Boisdenghien, Zino; Fias, Stijn; Van Alsenoy, Christian; De Proft, Frank; Geerlings, Paul

    2014-07-28

    Most of the work done on the linear response kernel χ(r,r') has focussed on its atom-atom condensed form χAB. Our previous work [Boisdenghien et al., J. Chem. Theory Comput., 2013, 9, 1007] was the first effort to truly focus on the non-condensed form of this function for closed (sub)shell atoms in a systematic fashion. In this work, we extend our method to the open shell case. To simplify plotting, we average the results to a symmetrical quantity χ(r,r'). This allows us to plot the linear response kernel for all elements up to and including argon and to investigate the periodicity throughout the first three rows of the periodic table and in the different representations of χ(r,r'). Within the context of Spin Polarized Conceptual Density Functional Theory, the first two-dimensional plots of spin polarized linear response functions are presented and commented on for some selected cases on the basis of the atomic ground state electronic configurations. Using the relation between the linear response kernel and the polarizability, we compare the values of the polarizability tensor calculated using our method to high-level values.

  20. Fission Product Release and Survivability of UN-Kernel LWR TRISO Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Besmann, Theodore M; Ferber, Mattison K; Lin, Hua-Tay

    2014-01-01

    A thermomechanical assessment of the LWR application of TRISO fuel with UN kernels was performed. Fission product release under operational and transient temperature conditions was determined by extrapolation from range calculations and limited data from irradiated UN pellets. Both fission recoil and diffusive release were considered and internal particle pressures computed for both 650 and 800 μm diameter kernels as a function of buffer layer thickness. These pressures were used in conjunction with a finite element program to compute the radial and tangential stresses generated within a TRISO particle as a function of fluence. Creep and swelling of the inner and outer pyrolytic carbon layers were included in the analyses. A measure of reliability of the TRISO particle was obtained by computing the probability of survival of the SiC barrier layer and the maximum tensile stress generated in the pyrolytic carbon layers as a function of fluence. These reliability estimates were obtained as functions of the kernel diameter, buffer layer thickness, and pyrolytic carbon layer thickness. The value of the probability of survival at the end of irradiation was inversely proportional to the maximum pressure.

  1. Fission product release and survivability of UN-kernel LWR TRISO fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    T. M. Besmann; M. K. Ferber; H.-T. Lin

    2014-05-01

    A thermomechanical assessment of the LWR application of TRISO fuel with UN kernels was performed. Fission product release under operational and transient temperature conditions was determined by extrapolation from fission product recoil calculations and limited data from irradiated UN pellets. Both fission recoil and diffusive release were considered and internal particle pressures computed for both 650 and 800 μm diameter kernels as a function of buffer layer thickness. These pressures were used in conjunction with a finite element program to compute the radial and tangential stresses generated within a TRISO particle undergoing burnup. Creep and swelling of the inner and outer pyrolytic carbon layers were included in the analyses. A measure of reliability of the TRISO particle was obtained by computing the probability of survival of the SiC barrier layer and the maximum tensile stress generated in the pyrolytic carbon layers from internal pressure and thermomechanics of the layers. These reliability estimates were obtained as functions of the kernel diameter, buffer layer thickness, and pyrolytic carbon layer thickness. The value of the probability of survival at the end of irradiation was inversely proportional to the maximum pressure.

  2. Modeling electro-magneto-hydrodynamic thermo-fluidic transport of biofluids with new trend of fractional derivative without singular kernel

    NASA Astrophysics Data System (ADS)

    Abdulhameed, M.; Vieru, D.; Roslan, R.

    2017-10-01

    This paper investigates the electro-magneto-hydrodynamic flow of non-Newtonian biofluids, with heat transfer, through a cylindrical microchannel. The fluid is driven by an arbitrary time-dependent pressure gradient, an external electric field and an external magnetic field. The governing equations are formulated as fractional partial differential equations based on the Caputo-Fabrizio time-fractional derivative, which has a non-singular kernel. The usefulness of fractional calculus for studying fluid flows and heat and mass transfer phenomena is well established, and several experimental studies have concluded that, in such problems, models described by fractional differential equations are more suitable. The most common time-fractional derivative used in continuum mechanics is the Caputo derivative. However, it has two disadvantages: its kernel is a singular function, and the analytical solutions of the resulting problems are expressed in terms of generalized functions (Mittag-Leffler, Lorenzo-Hartley, Robotnov, etc.) which, in general, are not well suited to numerical calculation. The newer Caputo-Fabrizio time-fractional derivative, which has no singular kernel, is more suitable for solving many theoretical and practical problems involving fractional differential equations: calculations are simpler and the resulting solutions are expressed in terms of elementary functions. Analytical solutions for the biofluid velocity and thermal transport are obtained by means of the Laplace and finite Hankel transforms. The influence of the fractional parameter, the Eckert number and the Joule heating parameter on the biofluid velocity and thermal transport is analyzed numerically and presented graphically. These results are relevant to biochip technology, where such an analysis can be used to control nanovolume bioliquid samples in microfluidic devices for biological analysis and medical diagnosis.
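
    For reference, the sketch below gives a commonly quoted textbook form of the Caputo-Fabrizio derivative of order 0 < α < 1 (not taken from this abstract); M(α) denotes a normalization function with M(0) = M(1) = 1, and the exponential kernel replaces the singular power-law kernel of the classical Caputo derivative:

        \[
        {}^{\mathrm{CF}}D_t^{\alpha} f(t) \;=\; \frac{M(\alpha)}{1-\alpha}
        \int_0^t f'(\tau)\, \exp\!\left(-\frac{\alpha\,(t-\tau)}{1-\alpha}\right) \mathrm{d}\tau .
        \]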

  3. The modulation transfer function and signal-to-noise ratio of different digital filters: a technical approach.

    PubMed

    Brüllmann, D D; d'Hoedt, B

    2011-05-01

    The aim of this study was to illustrate the influence of digital filters on the signal-to-noise ratio (SNR) and modulation transfer function (MTF) of digital images. The article addresses image pre-processing that may be beneficial for the production of clinically useful digital radiographs at lower radiation dose. Three filters, an arithmetic mean filter, a median filter and a Gaussian filter (standard deviation (SD) = 0.4), with kernel sizes of 3 × 3 pixels and 5 × 5 pixels were tested. Synthetic images with precisely controlled, increasing amounts of Gaussian noise were created to obtain linear regressions of SNR before and after application of the digital filters. Artificial stripe patterns with defined numbers of line pairs per millimetre were used to calculate the MTF before and after application of the digital filters. The Gaussian filter with a 5 × 5 kernel size caused the highest noise suppression (SNR increased from 2.22, measured in the synthetic image, to 11.31 in the filtered image). The smallest noise reduction was found with the 3 × 3 median filter. The application of the median filters resulted in no changes in MTF at the different resolutions but did result in the deletion of smaller structures. The 5 × 5 Gaussian filter and the 5 × 5 arithmetic mean filter showed the strongest changes in MTF. The application of digital filters can improve the SNR of a digital sensor; however, the MTF can be adversely affected. As such, imaging systems should not be judged solely on their quoted spatial resolutions because pre-processing may influence image quality.
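
    As a rough illustration of this kind of comparison (a minimal sketch using NumPy/SciPy with arbitrary image size and noise level, not the authors' software or data), the following code applies 5 × 5 mean and median filters and a Gaussian filter to a synthetic noisy image and reports a simple SNR before and after filtering:

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(0)

        # Synthetic test image: constant signal plus Gaussian noise (hypothetical values).
        signal = np.full((256, 256), 100.0)
        noisy = signal + rng.normal(scale=45.0, size=signal.shape)

        def snr(img):
            # Simple SNR definition: mean divided by standard deviation.
            return img.mean() / img.std()

        filtered = {
            "mean 5x5": ndimage.uniform_filter(noisy, size=5),
            "median 5x5": ndimage.median_filter(noisy, size=5),
            "gaussian (SD = 0.4)": ndimage.gaussian_filter(noisy, sigma=0.4),
        }

        print(f"unfiltered SNR: {snr(noisy):.2f}")
        for name, img in filtered.items():
            print(f"{name} SNR: {snr(img):.2f}")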

  4. Evaluation of automatic image quality assessment in chest CT - A human cadaver study.

    PubMed

    Franck, Caro; De Crop, An; De Roo, Bieke; Smeets, Peter; Vergauwen, Merel; Dewaele, Tom; Van Borsel, Mathias; Achten, Eric; Van Hoof, Tom; Bacher, Klaus

    2017-04-01

    The evaluation of clinical image quality (IQ) is important to optimize CT protocols and to keep patient doses as low as reasonably achievable. Considering the significant amount of effort needed for human observer studies, automatic IQ tools are a promising alternative. The purpose of this study was to evaluate automatic IQ assessment in chest CT using Thiel-embalmed cadavers. Chest CTs of Thiel-embalmed cadavers were acquired at different exposures. Clinical IQ was determined by performing a visual grading analysis. Physical-technical IQ (noise, contrast-to-noise and contrast-detail) was assessed in a Catphan phantom. Soft and sharp reconstructions were made with filtered back projection and two strengths of iterative reconstruction. In addition to the classical IQ metrics, an automatic algorithm was used to calculate image quality scores (IQs). To be able to compare datasets reconstructed with different kernels, the IQs values were normalized. Good correlations were found between IQs and the measured physical-technical image quality: noise (ρ=-1.00), contrast-to-noise (ρ=1.00) and contrast-detail (ρ=0.96). The correlation coefficients between IQs and the observed clinical image quality of soft and sharp reconstructions were 0.88 and 0.93, respectively. The automatic scoring algorithm is a promising tool for the evaluation of thoracic CT scans in daily clinical practice. It allows monitoring of the image quality of a chest protocol over time, without human intervention. Different reconstruction kernels can be compared after normalization of the IQs. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  5. Development of full wave code for modeling RF fields in hot non-uniform plasmas

    NASA Astrophysics Data System (ADS)

    Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo

    2016-10-01

    FAR-TECH, Inc. is developing a full-wave code to model RF fields in fusion devices and in general plasma applications. As an important component of the code, an adaptive meshless technique is introduced to solve the wave equations, which allows resolving plasma resonances efficiently and adapting to the complexity of the antenna geometry and device boundary. The computational points are generated using either a point elimination method or a force balancing method based on the monitor function, which is calculated by solving the cold plasma dispersion equation locally. Another part of the code is the conductivity kernel calculation, used for modeling the nonlocal hot plasma dielectric response. The conductivity kernel is calculated on a coarse grid of test points and then interpolated linearly onto the computational points. All the components of the code are parallelized using MPI and OpenMP libraries to optimize execution speed and memory use. The algorithm and the results of our numerical approach to solving 2-D wave equations in a tokamak geometry will be presented. Work is supported by the U.S. DOE SBIR program.

  6. Full Wave Parallel Code for Modeling RF Fields in Hot Plasmas

    NASA Astrophysics Data System (ADS)

    Spencer, Joseph; Svidzinski, Vladimir; Evstatiev, Evstati; Galkin, Sergei; Kim, Jin-Soo

    2015-11-01

    FAR-TECH, Inc. is developing a suite of full-wave codes for modeling RF fields in hot plasmas. It is based on a formulation in configuration space with grid adaptation capability. The conductivity kernel (which includes a nonlocal dielectric response) is calculated by integrating the linearized Vlasov equation along unperturbed test particle orbits. For tokamak applications, a 2-D version of the code is being developed. Progress of this work will be reported. This suite of codes has the following advantages over existing spectral codes: 1) It utilizes the localized nature of the plasma dielectric response to the RF field and calculates this response numerically without approximations. 2) It uses an adaptive grid to better resolve resonances in the plasma and antenna structures. 3) It uses an efficient sparse matrix solver to solve the formulated linear equations. The linear wave equation is formulated using two approaches: for cold plasmas the local cold plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel is calculated. Work is supported by the U.S. DOE SBIR program.

  7. C6 Coefficients and Dipole Polarizabilities for All Atoms and Many Ions in Rows 1-6 of the Periodic Table.

    PubMed

    Gould, Tim; Bučko, Tomáš

    2016-08-09

    Using time-dependent density functional theory (TDDFT) with exchange kernels, we calculate and test imaginary frequency-dependent dipole polarizabilities for all atoms and many ions in rows 1-6 of the periodic table. These are then integrated over frequency to produce C6 coefficients. Results are presented under different models: straight TDDFT calculations using two different kernels; "benchmark" TDDFT calculations corrected by more accurate quantum chemical and experimental data; and "benchmark" TDDFT with frozen orbital anions. Parametrizations are presented for 411+ atoms and ions, allowing results to be easily used by other researchers. A curious relationship, C6,XY ∝ [αX(0)αY(0)]^0.73, is found between C6 coefficients and static polarizabilities α(0). The relationship C6,XY = 2C6,XC6,Y/[(αX/αY)C6,Y + (αY/αX)C6,X] is tested and found to work well (<5% errors) in ∼80% of the cases, but can break down badly (>30% errors) in a small fraction of cases.
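
    The quoted combination rule is easy to evaluate numerically; the sketch below simply codes the formula from the abstract, with illustrative polarizabilities and homonuclear C6 values (approximate literature-style numbers in atomic units, not taken from this paper):

        # C6_XY = 2*C6_X*C6_Y / ((alpha_X/alpha_Y)*C6_Y + (alpha_Y/alpha_X)*C6_X)
        def c6_combine(c6_x, c6_y, alpha_x, alpha_y):
            return 2.0 * c6_x * c6_y / ((alpha_x / alpha_y) * c6_y + (alpha_y / alpha_x) * c6_x)

        # Illustrative static polarizabilities and homonuclear C6 coefficients (a.u.).
        alpha = {"He": 1.38, "Ne": 2.67}
        c6_homo = {"He": 1.46, "Ne": 6.38}

        print(c6_combine(c6_homo["He"], c6_homo["Ne"], alpha["He"], alpha["Ne"]))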

  8. Construction of phylogenetic trees by kernel-based comparative analysis of metabolic networks.

    PubMed

    Oh, S June; Joung, Je-Gun; Chang, Jeong-Ho; Zhang, Byoung-Tak

    2006-06-06

    To infer the tree of life requires knowledge of the common characteristics of each species descended from a common ancestor as the measuring criteria and a method to calculate the distance between the resulting values of each measure. Conventional phylogenetic analysis based on genomic sequences provides information about the genetic relationships between different organisms. In contrast, comparative analysis of metabolic pathways in different organisms can yield insights into their functional relationships under different physiological conditions. However, evaluating the similarities or differences between metabolic networks is a computationally challenging problem, and systematic methods of doing this are desirable. Here we introduce a graph-kernel method for computing the similarity between metabolic networks in polynomial time, and use it to profile metabolic pathways and to construct phylogenetic trees. To compare the structures of metabolic networks across organisms, we adopted the exponential graph kernel, a kernel method for labeled graphs represented by a label matrix and an adjacency matrix. To construct the phylogenetic trees, we used the unweighted pair-group method with arithmetic mean, i.e., a hierarchical clustering algorithm. We applied the kernel-based network profiling method in a comparative analysis of nine carbohydrate metabolic networks from 81 biological species encompassing Archaea, Eukaryota, and Eubacteria. The resulting phylogenetic hierarchies generally support the tripartite scheme of three domains rather than the two domains of prokaryotes and eukaryotes. By combining kernel machines with metabolic information, the method infers the context of biosphere development, covering physiological events required for adaptation by genetic reconstruction. The results show that one may obtain a global view of the tree of life by comparing metabolic pathway structures using meta-level information rather than sequence information. This method may yield further information about biological evolution, such as the history of horizontal transfer of each gene, by studying the detailed structure of the phylogenetic tree constructed by the kernel-based method.
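
    A loose sketch of an exponential kernel on a labeled graph is given below (one simple variant built from the matrix exponential of the adjacency matrix and a one-hot node-label matrix; the function names, the similarity normalization, and the parameter beta are illustrative choices, not the authors' exact formulation):

        import numpy as np
        from scipy.linalg import expm

        def exponential_graph_feature(adjacency, labels, beta=0.5):
            # Diffusion-style transform of the adjacency matrix, projected onto the
            # node-label indicator vectors so that graphs with different node sets
            # become comparable through their shared labels.
            K = expm(beta * adjacency)          # (n_nodes, n_nodes)
            return labels.T @ K @ labels        # (n_labels, n_labels)

        def graph_similarity(feat_a, feat_b):
            # Normalized Frobenius inner product between two label-to-label features.
            return np.sum(feat_a * feat_b) / (np.linalg.norm(feat_a) * np.linalg.norm(feat_b))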

  9. Finite-frequency sensitivity kernels for head waves

    NASA Astrophysics Data System (ADS)

    Zhang, Zhigang; Shen, Yang; Zhao, Li

    2007-11-01

    Head waves are extremely important in determining the structure of the predominantly layered Earth. While several recent studies have shown the diffractive nature and the 3-D Fréchet kernels of finite-frequency turning waves, analogues of head waves in a continuous velocity structure, the finite-frequency effects and sensitivity kernels of head waves are yet to be carefully examined. We present the results of a numerical study focusing on the finite-frequency effects of head waves. Our model has a low-velocity layer over a high-velocity half-space and a cylindrical-shaped velocity perturbation placed beneath the interface at different locations. A 3-D finite-difference method is used to calculate synthetic waveforms. Traveltime and amplitude anomalies are measured by the cross-correlation of synthetic seismograms from models with and without the velocity perturbation and are compared to the 3-D sensitivity kernels constructed from full waveform simulations. The results show that the head wave arrival-time and amplitude are influenced by the velocity structure surrounding the ray path in a pattern that is consistent with the Fresnel zones. Unlike the 'banana-doughnut' traveltime sensitivity kernels of turning waves, the traveltime sensitivity of the head wave along the ray path below the interface is weak, but non-zero. Below the ray path, the traveltime sensitivity reaches the maximum (absolute value) at a depth that depends on the wavelength and propagation distance. The sensitivity kernels vary with the vertical velocity gradient in the lower layer, but the variation is relatively small at short propagation distances when the vertical velocity gradient is within the range of the commonly accepted values. Finally, the depression or shoaling of the interface results in increased or decreased sensitivities, respectively, beneath the interface topography.

  10. Static and dynamic behaviour of nonlocal elastic bar using integral strain-based and peridynamic models

    NASA Astrophysics Data System (ADS)

    Challamel, Noël

    2018-04-01

    The static and dynamic behaviour of a nonlocal bar of finite length is studied in this paper. The nonlocal integral models considered in this paper are strain-based and relative displacement-based nonlocal models; the latter one is also labelled as a peridynamic model. For infinite media, and for sufficiently smooth displacement fields, both integral nonlocal models can be equivalent, assuming some kernel correspondence rules. For infinite media (or finite media with extended reflection rules), it is also shown that Eringen's differential model can be reformulated into a consistent strain-based integral nonlocal model with exponential kernel, or into a relative displacement-based integral nonlocal model with a modified exponential kernel. A finite bar in uniform tension is considered as a paradigmatic static case. The strain-based nonlocal behaviour of this bar in tension is analyzed for different kernels available in the literature. It is shown that the kernel has to fulfil some normalization and end compatibility conditions in order to preserve the uniform strain field associated with this homogeneous stress state. Such a kernel can be built by combining a local and a nonlocal strain measure with compatible boundary conditions, or by extending the domain outside its finite size while preserving some kinematic compatibility conditions. The same results are shown for the nonlocal peridynamic bar where a homogeneous strain field is also analytically obtained in the elastic bar for consistent compatible kinematic boundary conditions at the vicinity of the end conditions. The results are extended to the vibration of a fixed-fixed finite bar where the natural frequencies are calculated for both the strain-based and the peridynamic models.
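
    For orientation, a standard one-dimensional exponential kernel often associated with Eringen's model (with l_c the nonlocal length scale) defines the strain-based nonlocal strain as follows; the normalization and end-compatibility corrections discussed for the finite bar are not included in this plain form:

        \[
        \varepsilon_{nl}(x) \;=\; \int_{0}^{L} \frac{1}{2\,l_c}\, e^{-|x-x'|/l_c}\, \varepsilon(x')\, \mathrm{d}x' .
        \]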

  11. Computational investigation of intense short-wavelength laser interaction with rare gas clusters

    NASA Astrophysics Data System (ADS)

    Bigaouette, Nicolas

    Current Very High Temperature Reactor designs incorporate TRi-structural ISOtropic (TRISO) particle fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel by dropping a cold precursor solution into a column of hot trichloroethylene (TCE). The temperature difference drives the liquid precursor solution to precipitate the metal solution into gel spheres before reaching the bottom of a production column. Over time, gelation byproducts inhibit complete gelation and the TCE must be purified or discarded. The resulting mixed-waste stream is expensive to dispose of or recycle, and changing the forming fluid to a non-hazardous alternative could greatly improve the economics of kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to yield ten potential replacements. The physical properties of the alternatives were measured as a function of temperature between 25 °C and 80 °C. Calculated terminal velocities and heat transfer rates provided an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane were selected for further testing, and surrogate yttria-stabilized zirconia (YSZ) kernels were produced using these selected fluids. The kernels were characterized for density, geometry, composition, and crystallinity and compared to a control group of kernels produced in silicone oil. Production in 1-bromotetradecane showed positive results, producing dense (93.8 %TD) and spherical (1.03 aspect ratio) kernels, but proper gelation did not occur in the other alternative forming fluids. With many of the YSZ kernels not properly gelling within the length of the column, this project further investigated the heat transfer properties of the forming fluids and precursor solution. A sensitivity study revealed that the heat transfer properties of the precursor solution have the strongest impact on gelation time. A COMSOL heat transfer model estimated an effective thermal diffusivity range for the YSZ precursor solution as 1.13 × 10⁻⁸ m²/s to 3.35 × 10⁻⁸ m²/s, which is an order of magnitude smaller than the value used in previous studies. 1-bromotetradecane is recommended for further investigation with the production of uranium-based kernels.

  12. Polarizable atomistic calculation of site energy disorder in amorphous Alq3.

    PubMed

    Nagata, Yuki

    2010-02-01

    A polarizable molecular dynamics simulation and calculation scheme for site energy disorder is presented in amorphous tris(8-hydroxyquinolinato)aluminum (Alq(3)) by means of the charge response kernel (CRK) method. The CRK fit to the electrostatic potential and the tight-binding approximation are introduced, which enables modeling of the polarizable electrostatic interaction for a large molecule systematically from an ab initio calculation. The site energy disorder for electron and hole transfers is calculated in amorphous Alq(3) and the effect of the polarization on the site energy disorder is discussed.

  13. Upgrade to iterative image reconstruction (IR) in abdominal MDCT imaging: a clinical study for detailed parameter optimization beyond vendor recommendations using the adaptive statistical iterative reconstruction environment (ASIR).

    PubMed

    Mueck, F G; Körner, M; Scherr, M K; Geyer, L L; Deak, Z; Linsenmaier, U; Reiser, M; Wirth, S

    2012-03-01

    To compare the image quality of dose-reduced 64-row abdominal CT reconstructed at different levels of adaptive statistical iterative reconstruction (ASIR) to full-dose baseline examinations reconstructed with filtered back-projection (FBP) in a clinical setting and upgrade situation. Abdominal baseline examinations (noise index NI = 29; LightSpeed VCT XT, GE) were intra-individually compared to follow-up studies on a CT with an ASIR option (NI = 43; Discovery HD750, GE), n = 42. Standard-kernel images were calculated with ASIR blendings of 0 - 100 % in slice and volume mode, respectively. Three experienced radiologists compared the image quality of these 567 sets to their corresponding full-dose baseline examination (- 2: diagnostically inferior, - 1: inferior, 0: equal, + 1: superior, + 2: diagnostically superior). Furthermore, a phantom was scanned. Statistical analysis used the Wilcoxon and Mann-Whitney U-tests and the intra-class correlation (ICC). The mean CTDIvol decreased from 19.7 ± 5.5 to 12.2 ± 4.7 mGy (p < 0.001). The ICC was 0.861. The total image quality of the dose-reduced ASIR studies was comparable to the baseline at ASIR 50 % in slice (p = 0.18) and ASIR 50 - 100 % in volume mode (p > 0.10). Volume mode performed 73 % slower than slice mode (p < 0.01). After the system upgrade, the vendor recommendation of ASIR 50 % in slice mode allowed for a dose reduction of 38 % in abdominal CT with comparable image quality and time expenditure. However, there is still further dose reduction potential for more complex reconstruction settings. © Georg Thieme Verlag KG Stuttgart · New York.

  14. From prompt gamma distribution to dose: a novel approach combining an evolutionary algorithm and filtering based on Gaussian-powerlaw convolutions.

    PubMed

    Schumann, A; Priegnitz, M; Schoene, S; Enghardt, W; Rohling, H; Fiedler, F

    2016-10-07

    Range verification and dose monitoring in proton therapy are considered highly desirable. Different methods have been developed worldwide, such as particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow for a verification of the proton range. However, quantification of the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By convolving depth dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained. To reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg peak in a water target.
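
    The forward (filtering) step described above can be sketched as a discrete convolution of a depth-dose profile with a filter kernel; the profile shape, depth grid, and kernel parameters below are placeholders rather than the ones derived in the paper:

        import numpy as np

        z = np.linspace(0.0, 200.0, 401)                   # depth grid in mm (hypothetical)
        depth_dose = np.exp(-((z - 120.0) / 25.0) ** 2)    # toy dose profile peaked at depth

        # Placeholder filter kernel combining a Gaussian core with a power-law tail.
        u = np.linspace(-50.0, 50.0, 201)
        kernel = np.exp(-(u / 5.0) ** 2) + 0.05 / (1.0 + np.abs(u)) ** 1.5
        kernel /= kernel.sum()

        # Forward model: prompt-gamma depth profile as the depth dose convolved with the kernel.
        prompt_gamma = np.convolve(depth_dose, kernel, mode="same")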

  15. Independent genetic control of maize (Zea mays L.) kernel weight determination and its phenotypic plasticity.

    PubMed

    Alvarez Prado, Santiago; Sadras, Víctor O; Borrás, Lucas

    2014-08-01

    Maize kernel weight (KW) is associated with the duration of the grain-filling period (GFD) and the rate of kernel biomass accumulation (KGR). It is also related to the dynamics of water and hence is physiologically linked to the maximum kernel water content (MWC), kernel desiccation rate (KDR), and moisture concentration at physiological maturity (MCPM). This work proposed that principles of phenotypic plasticity can help to consolidate the understanding of the environmental modulation and genetic control of these traits. For that purpose, a maize population of 245 recombinant inbred lines (RILs) was grown under different environmental conditions. Trait plasticity was calculated as the ratio of the variance of each RIL to the overall phenotypic variance of the population of RILs. This work found a hierarchy of plasticities: KDR ≈ GFD > MCPM > KGR > KW > MWC. There was no phenotypic or genetic correlation between traits per se and trait plasticities. MWC, the trait with the lowest plasticity, was the exception because common quantitative trait loci were found for the trait and its plasticity. Independent genetic control of a trait per se and genetic control of its plasticity is a condition for the independent evolution of traits and their plasticities. This allows breeders potentially to select for high or low plasticity in combination with high or low values of economically relevant traits. © The Author 2014. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
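
    Under the definition quoted above, a per-line plasticity value can be computed as a ratio of variances; a minimal sketch assuming a hypothetical data layout (rows = RILs, columns = environments) is:

        import numpy as np

        def trait_plasticity(trait_values):
            # trait_values: array of shape (n_rils, n_environments) for one trait.
            per_ril_variance = trait_values.var(axis=1, ddof=1)
            overall_variance = trait_values.var(ddof=1)
            return per_ril_variance / overall_variance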

  16. Norm overlap between many-body states: Uncorrelated overlap between arbitrary Bogoliubov product states

    NASA Astrophysics Data System (ADS)

    Bally, B.; Duguet, T.

    2018-02-01

    Background: State-of-the-art multi-reference energy density functional calculations require the computation of norm overlaps between different Bogoliubov quasiparticle many-body states. It is only recently that the efficient and unambiguous calculation of such norm kernels has become available under the form of Pfaffians [L. M. Robledo, Phys. Rev. C 79, 021302 (2009), 10.1103/PhysRevC.79.021302]. Recently developed particle-number-restored Bogoliubov coupled-cluster (PNR-BCC) and particle-number-restored Bogoliubov many-body perturbation (PNR-BMBPT) ab initio theories [T. Duguet and A. Signoracci, J. Phys. G 44, 015103 (2017), 10.1088/0954-3899/44/1/015103] make use of generalized norm kernels incorporating explicit many-body correlations. In PNR-BCC and PNR-BMBPT, the Bogoliubov states involved in the norm kernels differ specifically via a global gauge rotation. Purpose: The goal of this work is threefold. We wish (i) to propose and implement an alternative to the Pfaffian method to compute unambiguously the norm overlap between arbitrary Bogoliubov quasiparticle states, (ii) to extend the first point to explicitly correlated norm kernels, and (iii) to scrutinize the analytical content of the correlated norm kernels employed in PNR-BMBPT. Point (i) constitutes the purpose of the present paper while points (ii) and (iii) are addressed in a forthcoming paper. Methods: We generalize the method used in another work [T. Duguet and A. Signoracci, J. Phys. G 44, 015103 (2017), 10.1088/0954-3899/44/1/015103] in such a way that it is applicable to kernels involving arbitrary pairs of Bogoliubov states. The formalism is presently explicated in detail in the case of the uncorrelated overlap between arbitrary Bogoliubov states. The power of the method is numerically illustrated and benchmarked against known results on the basis of toy models of increasing complexity. Results: The norm overlap between arbitrary Bogoliubov product states is obtained under a closed-form expression allowing its computation without any phase ambiguity. The formula is physically intuitive, accurate, and versatile. It equally applies to norm overlaps between Bogoliubov states of even or odd number parity. Numerical applications illustrate these features and provide a transparent representation of the content of the norm overlaps. Conclusions: The complex norm overlap between arbitrary Bogoliubov states is computed, without any phase ambiguity, via elementary linear algebra operations. The method can be used in any configuration mixing of orthogonal and non-orthogonal product states. Furthermore, the closed-form expression extends naturally to correlated overlaps at play in PNR-BCC and PNR-BMBPT. As such, the straight overlap between Bogoliubov states is the zero-order reduction of more involved norm kernels to be studied in a forthcoming paper.

  17. Task-driven imaging in cone-beam computed tomography.

    PubMed

    Gang, G J; Stayman, J W; Ouadah, S; Ehtiati, T; Siewerdsen, J H

    Conventional workflow in interventional imaging often ignores a wealth of prior information about the patient anatomy and the imaging task. This work introduces a task-driven imaging framework that utilizes such information to prospectively design acquisition and reconstruction techniques for cone-beam CT (CBCT) in a manner that maximizes task-based performance in subsequent imaging procedures. The framework is employed in jointly optimizing tube current modulation, orbital tilt, and reconstruction parameters in filtered backprojection reconstruction for interventional imaging. Theoretical predictors of noise and resolution relate acquisition and reconstruction parameters to task-based detectability. Given a patient-specific prior image and specification of the imaging task, an optimization algorithm prospectively identifies the combination of imaging parameters that maximizes task-based detectability. Initial investigations were performed for a variety of imaging tasks in an elliptical phantom and an anthropomorphic head phantom. Optimization of tube current modulation and view-dependent reconstruction kernel was shown to have the greatest benefit for a directional task (e.g., identification of device or tissue orientation). The task-driven approach yielded techniques in which the dose and sharp kernels were concentrated in views contributing the most to the signal power associated with the imaging task. For example, detectability of a line pair detection task was improved by at least threefold compared to conventional approaches. For radially symmetric tasks, the task-driven strategy yielded results similar to a minimum variance strategy in the absence of kernel modulation. Optimization of the orbital tilt successfully avoided highly attenuating structures that can confound the imaging task by introducing noise correlations masquerading at spatial frequencies of interest. This work demonstrated the potential of a task-driven imaging framework to improve image quality and reduce dose beyond that achievable with conventional imaging approaches.

  18. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardy, David J., E-mail: dhardy@illinois.edu; Schulten, Klaus; Wolff, Matthew A.

    2016-03-21

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle–mesh Ewald method falls short.
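
    A minimal illustration of the kernel-splitting idea (not the B-spline interpolation machinery itself): the 1/r kernel is written as a short-range part that vanishes beyond a cutoff plus smoother long-range remainders handled on increasingly coarse grids. The even-polynomial softening used here is one common choice, and the cutoffs are arbitrary:

        import numpy as np

        def softened(r, a):
            # Smooth approximation of 1/r that is finite at r = 0 and equals 1/r for r >= a.
            s = np.minimum(r / a, 1.0)
            return np.where(r >= a, 1.0 / np.maximum(r, 1e-12),
                            (15.0 - 10.0 * s**2 + 3.0 * s**4) / (8.0 * a))

        r = np.linspace(1e-3, 16.0, 1000)
        a = 4.0
        short_range = 1.0 / r - softened(r, a)           # vanishes for r >= a, summed directly
        level_1 = softened(r, a) - softened(r, 2 * a)    # smoother part, coarse grid
        level_2 = softened(r, 2 * a)                     # smoothest part, top-level grid

        # The parts sum back to 1/r everywhere (up to floating-point rounding).
        assert np.allclose(short_range + level_1 + level_2, 1.0 / r)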

  19. Leptokurtic portfolio theory

    NASA Astrophysics Data System (ADS)

    Kitt, R.; Kalda, J.

    2006-03-01

    The question of the optimal portfolio is addressed. The conventional Markowitz portfolio optimisation is discussed and the shortcomings due to non-Gaussian security returns are outlined. A method is proposed to minimise the likelihood of extreme non-Gaussian drawdowns of the portfolio value. The theory is called Leptokurtic, because it minimises the effects from “fat tails” of returns. The leptokurtic portfolio theory provides an optimal portfolio for investors who define their risk-aversion as unwillingness to experience sharp drawdowns in asset prices. Two types of risks in asset returns are defined: a fluctuation risk, which has a Gaussian distribution, and a drawdown risk, which concerns the distribution tails. These risks are quantitatively measured by defining the “noise kernel”, an ellipsoidal cloud of points in the space of asset returns. The size of the ellipse is controlled with the threshold parameter: the larger the threshold parameter, the larger the returns that are accepted as normal fluctuations. The return vectors falling inside the kernel are used for the calculation of the fluctuation risk. Analogously, the data points falling outside the kernel are used for the calculation of the drawdown risk. As a result, the portfolio optimisation problem becomes three-dimensional: in addition to the return, there are two types of risks involved. The optimal portfolio for drawdown-averse investors is the one minimising the variance outside the noise kernel. The theory has been tested with MSCI North America, Europe and Pacific total return stock indices.
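
    A rough sketch of the split into fluctuation and drawdown risk described above, using a Mahalanobis-distance ellipsoid as the noise kernel (the threshold, data layout, and the assumption that enough points fall on each side are all placeholders):

        import numpy as np

        def split_risks(returns, threshold=2.0):
            # returns: array of shape (n_days, n_assets).
            mu = returns.mean(axis=0)
            inv_cov = np.linalg.inv(np.cov(returns, rowvar=False))
            d = returns - mu
            mahal2 = np.einsum("ij,jk,ik->i", d, inv_cov, d)   # squared Mahalanobis distance
            inside = mahal2 <= threshold ** 2                  # points inside the noise kernel
            fluctuation_cov = np.cov(returns[inside], rowvar=False)
            drawdown_cov = np.cov(returns[~inside], rowvar=False)
            return fluctuation_cov, drawdown_cov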

  20. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations.

    PubMed

    Hardy, David J; Wolff, Matthew A; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D

    2016-03-21

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.

  1. Multilevel summation with B-spline interpolation for pairwise interactions in molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Hardy, David J.; Wolff, Matthew A.; Xia, Jianlin; Schulten, Klaus; Skeel, Robert D.

    2016-03-01

    The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.

  2. TU-G-204-05: The Effects of CT Acquisition and Reconstruction Conditions On Computed Texture Feature Values of Lung Lesions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lo, P; Young, S; Kim, G

    2015-06-15

    Purpose: Texture features have been investigated as a biomarker of response and malignancy. Because these features reflect local differences in density, they may be influenced by acquisition and reconstruction parameters. The purpose of this study was to investigate the effects of radiation dose level and reconstruction method on features derived from lung lesions. Methods: With IRB approval, 33 lung tumor cases were identified from clinically indicated thoracic CT scans in which the raw projection (sinogram) data were available. Based on a previously-published technique, noise was added to the raw data to simulate reduced-dose versions of each case at 25%, 10% and 3% of the original dose. Original and simulated reduced dose projection data were reconstructed with conventional and two iterative-reconstruction settings, yielding 12 combinations of dose/recon conditions. One lesion from each case was contoured. At the reference condition (full dose, conventional recon), 17 lesions were randomly selected for repeat contouring (repeatability). For each lesion at each dose/recon condition, 151 texture measures were calculated. A paired differences approach was employed to compare feature variation from repeat contours at the reference condition to the variation observed in other dose/recon conditions (reproducibility). The ratio of standard deviation of the reproducibility to repeatability was used as the variation measure for each feature. Results: The mean variation (standard deviation) across dose levels and kernel was significantly different with a ratio of 2.24 (±5.85) across texture features (p=0.01). The mean variation (standard deviation) across dose levels with conventional recon was also significantly different with 2.30 (7.11) (p=0.025). The mean variation across reconstruction settings at the original dose showed a trend toward a difference, with 1.35 (2.60) across all features (p=0.09). Conclusion: Texture features varied considerably with variations in dose and reconstruction condition. Care should be taken to standardize these conditions when using texture as a quantitative feature. This effort was supported in part by a grant from the National Cancer Institute’s Quantitative Imaging Network (QIN): U01 CA181156; The UCLA Department of Radiology has a Master Research Agreement with Siemens Healthcare; Dr. McNitt-Gray has previously received research support from Siemens Healthcare.

  3. Antiasthmatic activity of Moringa oleifera Lam: A clinical study

    PubMed Central

    Agrawal, Babita; Mehta, Anita

    2008-01-01

    The present study was carried out to investigate the efficacy and safety of seed kernels of Moringa oleifera in the treatment of bronchial asthma. Twenty patients of either sex with mild-to-moderate asthma were given finely powdered dried seed kernels in a dose of 3 g for 3 weeks. The clinical efficacy with respect to symptoms and respiratory function was assessed using a spirometer prior to and at the end of the treatment. Hematological parameters were not changed markedly by treatment with M. oleifera. However, the majority of patients showed a significant increase in hemoglobin (Hb) values, and the erythrocyte sedimentation rate (ESR) was significantly reduced. Significant improvement was also observed in symptom score and severity of asthmatic attacks. Treatment with the drug for 3 weeks produced significant improvement in forced vital capacity, forced expiratory volume in one second, and peak expiratory flow rate values by 32.97 ± 6.03%, 30.05 ± 8.12%, and 32.09 ± 11.75%, respectively, in asthmatic subjects. Improvement was also observed in % predicted values. None of the patients showed any adverse effects with M. oleifera. The results of the present study suggest the usefulness of M. oleifera seed kernel in patients with bronchial asthma. PMID:21264158

  4. Gamma irradiation of peanut kernel to control mold growth and to diminish aflatoxin contamination

    NASA Astrophysics Data System (ADS)

    Y.-Y. Chiou, R.

    1996-09-01

    Peanut kernels inoculated with Aspergillus parasiticus conidia were gamma-irradiated at 0, 2.5, 5.0 and 10 kGy using a Co-60 source. Doses higher than 2.5 kGy were effective in retarding the outgrowth of A. parasiticus and reducing the population of natural mold contaminants. However, complete elimination of these molds was not achieved even at the dose of 10 kGy. After 4 wk of incubation of the inoculated kernels under humidified conditions, the aflatoxin levels produced by the surviving A. parasiticus were 69.12, 2.42, 57.36 and 22.28 μg/g, corresponding to the respective irradiation levels. The peroxide content of peanut oils prepared from the irradiated peanuts increased with irradiation dose. After storage, at each irradiation level, the peroxide content of peanuts stored at -14°C was lower than that of peanuts stored at ambient temperature. TBA values and CDHP contents of the oil increased with irradiation dose and changed only slightly after storage. However, fatty acid contents of the peanut oil varied within a limited range with irradiation dose and storage temperature. The SDS-PAGE protein pattern of the peanuts revealed no noticeable variation in protein subunits resulting from irradiation or storage.

  5. Simultaneous spectrophotometric determination of four metals by two kinds of partial least squares methods

    NASA Astrophysics Data System (ADS)

    Gao, Ling; Ren, Shouxin

    2005-10-01

    Simultaneous determination of Ni(II), Cd(II), Cu(II) and Zn(II) was studied by two methods, kernel partial least squares (KPLS) and wavelet packet transform partial least squares (WPTPLS), with xylenol orange and cetyltrimethyl ammonium bromide as reagents in a pH 9.22 borax-hydrochloric acid buffer medium. Two programs, PKPLS and PWPTPLS, were designed to perform the calculations. Data reduction was performed using kernel matrices and the wavelet packet transform, respectively. In the KPLS method, the size of the kernel matrix depends only on the number of samples, so the method is suitable for data matrices with many wavelengths and few samples. Wavelet packet representations of signals provide a local time-frequency description, so in the wavelet packet domain the quality of noise removal can be improved. In the WPTPLS method, the wavelet function and decomposition level were selected by optimization as Daubechies 12 and 5, respectively. Experimental results showed both methods to be successful even where there was severe overlap of spectra.
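
    The remark about the kernel matrix size can be illustrated with a plain linear kernel: the matrix is n × n regardless of how many wavelengths were recorded. This is a generic sketch, not the PKPLS program itself, and the data dimensions are invented:

        import numpy as np

        def linear_kernel_matrix(X):
            # X: (n_samples, n_wavelengths) spectra; K depends only on n_samples.
            return X @ X.T

        X = np.random.default_rng(1).normal(size=(20, 500))   # 20 mixtures, 500 wavelengths
        K = linear_kernel_matrix(X)
        print(K.shape)   # (20, 20)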

  6. OPC modeling by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Tsay, C. S.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.; Lin, B. J.

    2005-05-01

    Optical proximity correction (OPC) is usually used to pre-distort mask layouts to make the printed patterns as close to the desired shapes as possible. For model-based OPC, a lithographic model to predict critical dimensions after lithographic processing is needed. The model is usually obtained via a regression of parameters based on experimental data containing optical proximity effects. When the parameters involve a mix of continuous (optical and resist model) and discrete (number of kernels) sets, traditional numerical optimization methods may have difficulty with the model fitting. In this study, an artificial-intelligence optimization method was used to regress the parameters of the lithographic models for OPC. The implemented phenomenological models were constant-threshold models that combine diffused aerial image models with loading effects. Optical kernels decomposed from Hopkins' equation were used to calculate aerial images on the wafer. The numbers of optical kernels were likewise treated as regression parameters. This way, good regression results were obtained with different sets of optical proximity effect data.
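
    A toy illustration of the regression strategy described above: a genetic algorithm searching jointly over continuous model parameters and a discrete kernel count. The parameter names, ranges, and the fitness function are stand-ins; the real objective would be the critical-dimension error of the calibrated model against measured proximity data:

        import numpy as np

        rng = np.random.default_rng(0)

        def fitness(params):
            diffusion, threshold, n_kernels = params
            # Placeholder objective standing in for the RMS CD error of the model.
            return (diffusion - 25.0) ** 2 + (threshold - 0.3) ** 2 + 0.1 * abs(n_kernels - 6)

        def random_individual():
            return [rng.uniform(5, 60), rng.uniform(0.1, 0.6), int(rng.integers(2, 12))]

        def mutate(ind):
            return [ind[0] + rng.normal(0, 2.0),
                    ind[1] + rng.normal(0, 0.02),
                    int(np.clip(ind[2] + rng.integers(-1, 2), 2, 12))]

        population = [random_individual() for _ in range(40)]
        for generation in range(100):
            population.sort(key=fitness)
            survivors = population[:10]                      # elitist selection
            population = survivors + [mutate(survivors[rng.integers(0, len(survivors))])
                                      for _ in range(30)]

        best = min(population, key=fitness)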

  7. ASIC-based architecture for the real-time computation of 2D convolution with large kernel size

    NASA Astrophysics Data System (ADS)

    Shao, Rui; Zhong, Sheng; Yan, Luxin

    2015-12-01

    Bidimensional convolution is a low-level processing algorithm of interest in many areas, but its high computational cost constrains the size of the kernels, especially in real-time embedded systems. This paper presents a hardware architecture for the ASIC-based implementation of 2-D convolution with medium-to-large kernels. To improve the efficiency of on-chip storage resources and to reduce the off-chip memory bandwidth, a data-reuse cache structure is proposed: multi-block SPRAM caches image blocks across the image, and an on-chip ping-pong scheme takes full advantage of data reuse in the convolution calculation, leading to a new ASIC data-scheduling scheme and overall architecture. Experimental results show that the structure can perform real-time convolution with templates up to 40 × 32 in size, improves the utilization of on-chip memory bandwidth and on-chip memory resources, satisfies the conditions for maximizing data throughput, and reduces the required off-chip memory bandwidth.
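
    For reference, the operation being accelerated is ordinary 2-D convolution; a straightforward software version (with a placeholder image and an averaging template of the size quoted above) gives the result a streaming hardware design with line buffers and ping-pong caches would need to reproduce:

        import numpy as np
        from scipy.signal import convolve2d

        image = np.random.default_rng(0).random((512, 640)).astype(np.float32)
        kernel = np.ones((32, 40), dtype=np.float32) / (32 * 40)   # large averaging template

        # Software reference for the valid (fully overlapped) output region.
        reference = convolve2d(image, kernel, mode="valid")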

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, M; Choi, E; Chuong, M

    Purpose: To evaluate whether current radiobiological models can predict the normal liver complications of radioactive Yttrium-90 (90Y) selective internal radiation treatment (SIRT) for metastatic liver lesions based on post-infusion 90Y PET images. Methods: A total of 20 patients with metastatic liver tumors treated with SIRT who received a post-infusion 90Y-PET/CT scan were analyzed in this work. The 3D activity distribution of the PET images was converted into a 3D dose distribution via a kernel convolution process. The physical dose distribution was converted into the equivalent dose (EQ2) delivered at 2 Gy based on the linear-quadratic (LQ) model considering the dose rate effect. The biological endpoint of this work was radiation-induced liver disease (RILD). The NTCPs were calculated with four different repair times (T1/2-Liver-Repair = 0, 0.5, 1.0, 2.0 hr) and three published NTCP models (Lyman external-RT, Lyman 90Y-HCC-SIRT, parallel model) were compared to the incidence of RILD in the recruited patients to evaluate their ability to predict outcome. Results: The mean normal liver physical dose (avg. 51.9 Gy, range 31.9–69.8 Gy) is higher than the suggested liver dose constraint for external beam treatment (~30 Gy). However, none of the patients in our study developed RILD after the SIRT. The estimated probability of ‘no patient developing RILD’ obtained from the two Lyman models is 46.3% to 48.3% (T1/2-Liver-Repair = 0 hr) and <1% for all other repair times. For the parallel model, the estimated probability is 97.3% (0 hr), 51.7% (0.5 hr), 2.0% (1.0 hr) and <1% (2.0 hr). Conclusion: Molecular images providing the distribution of 90Y enable dose-volume-based dose/outcome analysis for SIRT. Current NTCP models fail to predict RILD complications in our patient population unless a very short repair time for the liver is assumed. The discrepancy between the Lyman 90Y-HCC-SIRT model predictions and the clinically observed outcomes further demonstrates the need for an NTCP model specific to metastatic liver SIRT.
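
    The activity-to-dose step described above, converting a 3-D activity map into a 3-D dose map with a dose-point kernel, can be sketched as an FFT-based convolution; the kernel below is a placeholder, not a measured 90Y dose-point kernel, and the volume is random data:

        import numpy as np
        from scipy.signal import fftconvolve

        # Cumulated activity per voxel (placeholder random volume).
        activity = np.random.default_rng(0).random((64, 64, 64))

        # Placeholder isotropic dose-point kernel on a small 3-D grid; a real calculation
        # would use a tabulated 90Y kernel (dose per decay as a function of distance).
        z, y, x = np.mgrid[-5:6, -5:6, -5:6]
        r = np.sqrt(x**2 + y**2 + z**2)
        kernel = np.exp(-r / 2.0)
        kernel /= kernel.sum()

        dose = fftconvolve(activity, kernel, mode="same")   # 3-D dose distribution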

  9. Electronic polarization effect on low-frequency infrared and Raman spectra of aprotic solvent: Molecular dynamics simulation study with charge response kernel by second order Møller-Plesset perturbation method

    NASA Astrophysics Data System (ADS)

    Isegawa, Miho; Kato, Shigeki

    2007-12-01

    Low-frequency infrared (IR) and depolarized Raman scattering (DRS) spectra of acetonitrile, methylene chloride, and acetone liquids are simulated via molecular dynamics calculations with the charge response kernel (CRK) model obtained at the second order Møller-Plesset perturbation (MP2) level. For this purpose, the analytical second derivative technique for the MP2 energy is employed to evaluate the CRK matrices. The calculated IR spectra reasonably agree with the experiments. In particular, the agreement is excellent for acetone because the present CRK model well reproduces the experimental polarizability in the gas phase. The importance of interaction-induced dipole moments in characterizing the spectral shapes is stressed. The DRS spectrum of acetone is mainly discussed because the experimental spectrum is available only for this molecule. The calculated spectrum is close to the experiment. A comparison of the present results with those from the multiple random telegraph model is also made. By decomposing the polarizability anisotropy time correlation function into the contributions from the permanent polarizability, the induced polarizability, and their cross term, a discrepancy from the previous calculations is observed in the sign of the permanent-induced cross-term contribution. The origin of this discrepancy is discussed by analyzing the correlation functions for acetonitrile.

  10. Topical Application of Apricot Kernel Extract Improves Dry Eye Symptoms in a Unilateral Exorbital Lacrimal Gland Excision Mouse

    PubMed Central

    Kim, Chan-Sik; Jo, Kyuhyung; Lee, Ik-Soo; Kim, Junghyun

    2016-01-01

    The purpose of this study was to investigate the therapeutic effects of topical application of apricot kernel extract (AKE) in a unilateral exorbital lacrimal gland excision mouse model of experimental dry eye. Dry eye was induced by surgical removal of the lacrimal gland. Eye drops containing 0.5 or 1 mg/mL AKE were administered twice a day from day 3 to day 7 after surgery. Tear fluid volume and corneal irregularity scores were determined. In addition, we examined the immunohistochemical expression level of Muc4. The topical administration of AKE dose-dependently improved all clinical dry eye symptoms by promoting the secretion of tear fluid and mucin. Thus, the results of this study indicate that AKE may be an efficacious topical agent for treating dry eye disease. PMID:27886047

  11. Topical Application of Apricot Kernel Extract Improves Dry Eye Symptoms in a Unilateral Exorbital Lacrimal Gland Excision Mouse.

    PubMed

    Kim, Chan-Sik; Jo, Kyuhyung; Lee, Ik-Soo; Kim, Junghyun

    2016-11-23

    The purpose of this study was to investigate the therapeutic effects of topical application of apricot kernel extract (AKE) in a unilateral exorbital lacrimal gland excision mouse model of experimental dry eye. Dry eye was induced by surgical removal of the lacrimal gland. Eye drops containing 0.5 or 1 mg/mL AKE were administered twice a day from day 3 to day 7 after surgery. Tear fluid volume and corneal irregularity scores were determined. In addition, we examined the immunohistochemical expression level of Muc4. The topical administration of AKE dose-dependently improved all clinical dry eye symptoms by promoting the secretion of tear fluid and mucin. Thus, the results of this study indicate that AKE may be an efficacious topical agent for treating dry eye disease.

  12. Building machine learning force fields for nanoclusters

    NASA Astrophysics Data System (ADS)

    Zeni, Claudio; Rossi, Kevin; Glielmo, Aldo; Fekete, Ádám; Gaston, Nicola; Baletto, Francesca; De Vita, Alessandro

    2018-06-01

    We assess Gaussian process (GP) regression as a technique to model interatomic forces in metal nanoclusters by analyzing the performance of 2-body, 3-body, and many-body kernel functions on a set of 19-atom Ni cluster structures. We find that 2-body GP kernels fail to provide faithful force estimates, despite succeeding in bulk Ni systems. However, both 3- and many-body kernels predict forces within an ~0.1 eV/Å average error even for small training datasets and achieve high accuracy even on out-of-sample, high-temperature structures. While training and testing on the same structure always provide satisfactory accuracy, cross-testing on dissimilar structures leads to higher prediction errors, posing an extrapolation problem. This can be cured using heterogeneous training on databases that contain more than one structure, which results in a good trade-off between versatility and overall accuracy. Starting from a 3-body kernel trained this way, we build an efficient non-parametric 3-body force field that allows accurate prediction of structural properties at finite temperatures, following a newly developed scheme [A. Glielmo et al., Phys. Rev. B 95, 214302 (2017)]. We use this to assess the thermal stability of Ni19 nanoclusters at a fraction of the cost of full ab initio calculations.
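
    As a concrete illustration of the regression technique described above, the following minimal Python sketch fits a Gaussian process with a generic RBF kernel to hypothetical per-atom descriptor vectors and force labels. It is only a sketch on invented data: it does not reproduce the 2-, 3-, or many-body kernels of the paper.

        # Minimal GP force-regression sketch with a generic RBF kernel.
        # X are hypothetical local-environment descriptors; y is a stand-in
        # for one force component. Not the paper's 2-/3-/many-body kernels.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(200, 8))
        y_train = np.sin(X_train).sum(axis=1)

        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                                      normalize_y=True)
        gp.fit(X_train, y_train)

        X_test = rng.normal(size=(50, 8))
        f_pred, f_std = gp.predict(X_test, return_std=True)
        print("mean predictive std (a rough error bar):", f_std.mean())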

  13. Modeling and analysis of UN TRISO fuel for LWR application using the PARFUME code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaise Collin

    The Idaho National Laboratory (INL) PARFUME (PARticle FUel ModEl) code was used to assess the overall fuel performance of uranium nitride (UN) tristructural isotropic (TRISO) ceramic fuel under irradiation conditions typical of a Light Water Reactor (LWR). The dimensional changes of the fuel particle layers and kernel were calculated, including the formation of an internal gap. The survivability of the UN TRISO particle was estimated depending on the strain behavior of the constituent materials at high fast fluence and burn-up. For nominal cases, internal gas pressure and representative thermal profiles across the kernel and layers were determined along with stress levels in the inner and outer pyrolytic carbon (IPyC/OPyC) and silicon carbide (SiC) layers. These parameters were then used to evaluate fuel particle failure probabilities. Results of the study show that the survivability of UN TRISO fuel under LWR irradiation conditions might only be guaranteed if the kernel and PyC swelling rates are limited at high fast fluence and burn-up. These material properties have large uncertainties at the irradiation levels expected to be reached by UN TRISO fuel in LWRs. Therefore, a large experimental effort would be needed to establish material properties, including kernel and PyC swelling rates, under these conditions before definitive conclusions can be drawn on the behavior of UN TRISO fuel in LWRs.

  14. The intratumoral distribution of radiolabeled 177Lu-BR96 monoclonal antibodies changes in relation to tumor histology over time in a syngeneic rat colon carcinoma model.

    PubMed

    Örbom, Anders; Eriksson, Sophie E; Elgström, Erika; Ohlsson, Tomas; Nilsson, Rune; Tennvall, Jan; Strand, Sven-Erik

    2013-08-01

    The therapeutic effect of radioimmunotherapy depends on the distribution of the absorbed dose in relation to viable cancer cells within the tumor, which in turn is a function of the activity distribution. The aim of this study was to investigate the distribution of (177)Lu-DOTA-BR96 monoclonal antibodies targeting the Lewis Y antigen over 7 d using a syngeneic rat model of colon carcinoma. Thirty-eight tumor-bearing rats were intravenously given 25 or 50 MBq of (177)Lu-DOTA-BR96 per kilogram of body weight and were sacrificed 2, 8, 24, 48, 72, 96, 120, or 168 h after injection, with activity measured in blood and tumor samples. Adjacent cryosections of each tumor were analyzed in 3 ways: imaging using a silicon-strip detector for digital autoradiography, staining for histologic characterization, or staining to determine the distribution of the antigen, vasculature, and proliferating cells using immunohistochemistry. Absorbed-dose rate distribution images at the moment of sacrifice were calculated using the activity distribution and a point-dose kernel. The correlations between antigen expression and both activity uptake and absorbed-dose rate were calculated for several regions of interest in each tumor. Nine additional animals with tumors were given unlabeled antibody to evaluate possible immunologic effects. At 2-8 h after injection, activity was found in the tumor margins; at 24 h, in viable antigen-expressing areas within the tumor; and at 48 h and later, increasingly in antigen-negative areas of granulation tissue. The correlation between antigen expression and both the mean activity and the absorbed-dose rate in regions of interest changed from positive to negative after 24 h after injection. Antigen-negative areas also increased over time in animals injected with unlabeled BR96, compared with untreated tumors. The results indicate that viable Lewis Y-expressing tumor cells are most efficiently treated during the initial uptake period. The activity then seems to remain in these initial uptake regions after the elimination of tumor cells and formation of granulation tissue. Further studies using these techniques could aid in determining the effects of the intratumoral activity distribution on overall therapeutic efficacy.
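
    The dose-rate estimation step described above (convolving an activity distribution with a point-dose kernel) can be sketched in a few lines of Python; the Gaussian kernel shape and the toy activity image below are placeholders, not the 177Lu kernel or autoradiography data of the study.

        # Sketch: absorbed-dose-rate image = activity image convolved with a
        # point-dose kernel. The isotropic Gaussian kernel is a placeholder.
        import numpy as np
        from scipy.signal import fftconvolve

        activity = np.zeros((64, 64))          # Bq per pixel, toy autoradiograph
        activity[20:30, 35:45] = 1e3           # a hot region near the tumor margin

        x = np.arange(-10, 11)
        xx, yy = np.meshgrid(x, x)
        kernel = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))   # placeholder kernel shape
        kernel /= kernel.sum()

        dose_rate = fftconvolve(activity, kernel, mode="same")
        print("peak dose rate (arbitrary units):", dose_rate.max())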

  15. Potential benefit of the CT adaptive statistical iterative reconstruction method for pediatric cardiac diagnosis

    NASA Astrophysics Data System (ADS)

    Miéville, Frédéric A.; Ayestaran, Paul; Argaud, Christophe; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Gudinchet, François; Bochud, François; Verdun, Francis R.

    2010-04-01

    Adaptive Statistical Iterative Reconstruction (ASIR) is a new image reconstruction technique recently introduced by General Electric (GE). This technique, when combined with a conventional filtered back-projection (FBP) approach, is able to improve image noise reduction. To quantify the image quality and dose reduction benefits provided by the ASIR method with respect to pure FBP, the standard deviation (SD), the modulation transfer function (MTF), the noise power spectrum (NPS), the image uniformity and the noise homogeneity were examined. Measurements were performed on a quality control phantom when varying the CT dose index (CTDIvol) and the reconstruction kernels. A 64-MDCT was employed and raw data were reconstructed with different percentages of ASIR on a CT console dedicated to ASIR reconstruction. Three radiologists also assessed a pediatric cardiac exam reconstructed with different ASIR percentages using the visual grading analysis (VGA) method. For the standard, soft and bone reconstruction kernels, the SD is reduced when the ASIR percentage increases up to 100%, with a higher benefit at low CTDIvol. MTF medium frequencies were slightly enhanced and modifications of the NPS shape curve were observed. However, for the pediatric cardiac CT exam, VGA scores indicate an upper limit to the ASIR benefit. 40% ASIR was observed to be the best trade-off between noise reduction and clinical realism of organ images. Using the phantom results, 40% ASIR corresponded to an estimated dose reduction of 30% under pediatric cardiac protocol conditions. In spite of this discrepancy between phantom and clinical results, the ASIR method is an important option when considering the reduction of radiation dose, especially for pediatric patients.

  16. Learning molecular energies using localized graph kernels

    DOE PAGES

    Ferré, Grégoire; Haut, Terry Scot; Barros, Kipton Marcos

    2017-03-21

    We report that recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. Finally, we benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
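
    As a rough illustration of the idea of comparing adjacency matrices of local atomic environments, the Python sketch below evaluates a textbook geometric random-walk graph kernel on the direct product graph; the decay parameter and the toy adjacency matrices are invented, and this is not the exact GRAPE formulation.

        # Geometric random-walk graph kernel between two local environments
        # represented by adjacency matrices A1 and A2:
        #   k(A1, A2) = sum_ij [(I - lam * kron(A1, A2))^-1]_ij
        # lam must be small enough for the walk series to converge.
        import numpy as np

        def random_walk_kernel(A1, A2, lam=0.05):
            Ax = np.kron(A1, A2)                  # direct product graph
            n = Ax.shape[0]
            v = np.linalg.solve(np.eye(n) - lam * Ax, np.ones(n))
            return np.ones(n) @ v                 # sums walks of all lengths

        A1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)   # toy environment
        A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
        print(random_walk_kernel(A1, A2))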

  17. Learning molecular energies using localized graph kernels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferré, Grégoire; Haut, Terry Scot; Barros, Kipton Marcos

    We report that recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. Finally, we benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.

  18. NARMER-1: a photon point-kernel code with build-up factors

    NASA Astrophysics Data System (ADS)

    Visonneau, Thierry; Pangault, Laurence; Malouch, Fadhel; Malvagi, Fausto; Dolci, Florence

    2017-09-01

    This paper presents an overview of NARMER-1, the new generation of photon point-kernel code developed by the Reactor Studies and Applied Mathematics Unit (SERMA) at the CEA Saclay Center. After a short introduction giving some historical background and the current development context of the code, the paper describes the principles implemented in the calculation and the physical quantities computed, and surveys the generic features: programming language, computer platforms, geometry package, sources description, etc. Moreover, specific and recent features are also detailed: exclusion sphere, tetrahedral meshes, parallel operations. Some points about verification and validation are then presented. Finally, we present tools that can help the user with operations such as visualization and pre-treatment.
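
    The basic estimate underlying any photon point-kernel code is the attenuated inverse-square flux from a point source, multiplied by a build-up factor; the Python sketch below shows that estimate for a single source behind a slab, with all numbers (source strength, attenuation coefficient, flux-to-dose factor, linear build-up model) chosen as illustrative placeholders rather than NARMER-1 data.

        # Point-kernel estimate with a build-up factor:
        #   dose_rate ~ k * S * B(mu*t) * exp(-mu*t) / (4*pi*r^2)
        # All numeric values below are placeholders.
        import math

        S = 1.0e10            # photons/s from the point source
        mu = 0.06             # 1/cm, shield attenuation coefficient
        t = 20.0              # cm of shield along the source-detector line
        r = 300.0             # cm, source-to-detector distance
        flux_to_dose = 1.0e-6 # dose-rate units per unit flux, placeholder

        mfp = mu * t
        buildup = 1.0 + mfp   # crude linear build-up model, placeholder
        flux = S * buildup * math.exp(-mfp) / (4.0 * math.pi * r**2)
        print("dose-rate estimate:", flux_to_dose * flux)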

  19. Computational methods for reactive transport modeling: A Gibbs energy minimization approach for multiphase equilibrium calculations

    NASA Astrophysics Data System (ADS)

    Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg

    2016-02-01

    We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods in unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.

  20. Evaluation of a semiautomated lung mass calculation technique for internal dosimetry applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busse, Nathan; Erwin, William; Pan, Tinsu

    2013-12-15

    Purpose: The authors sought to evaluate a simple, semiautomated lung mass estimation method using computed tomography (CT) scans obtained using a variety of acquisition techniques and reconstruction parameters for mass correction of medical internal radiation dose-based internal radionuclide radiation absorbed dose estimates. Methods: CT scans of 27 patients with lung cancer undergoing stereotactic body radiation therapy treatment planning with PET/CT were analyzed retrospectively. For each patient, free-breathing (FB) and respiratory-gated 4DCT scans were acquired. The 4DCT scans were sorted into ten respiratory phases, representing one complete respiratory cycle. An average CT reconstruction was derived from the ten-phase reconstructions. Mid-expiration breath-hold CT scans were acquired in the same session for many patients. Deep inspiration breath-hold diagnostic CT scans of many of the patients were obtained from different scanning sessions at similar time points to evaluate the effect of contrast administration and maximum inspiration breath-hold. Lung mass estimates were obtained using all CT scan types, and intercomparisons made to assess lung mass variation according to scan type. Lung mass estimates using the FB CT scans from PET/CT examinations of another group of ten male and ten female patients who were 21–30 years old and did not have lung disease were calculated and compared with reference lung mass values. To evaluate the effect of varying CT acquisition and reconstruction parameters on lung mass estimation, an anthropomorphic chest phantom was scanned and reconstructed with different CT parameters. CT images of the lungs were segmented using the OsiriX MD software program with a seed point of about −850 HU and an interval of 1000. Lung volume, and mean lung, tissue, and air HUs were recorded for each scan. Lung mass was calculated by assuming each voxel was a linear combination of only air and tissue. The specific gravity of the lung volume was calculated using the formula (lung HU − air HU)/(tissue HU − air HU), and mass = specific gravity × total volume × 1.04 g/cm³. Results: The range of calculated lung masses was 0.51–1.29 kg. The average male and female lung masses during FB CT were 0.80 and 0.71 kg, respectively. The calculated lung mass varied across the respiratory cycle but changed to a lesser degree than did lung volume measurements (7.3% versus 15.4%). Lung masses calculated using deep inspiration breath-hold and average CT were significantly larger (p < 0.05) than were some masses calculated using respiratory-phase and FB CT. Increased voxel size and smooth reconstruction kernels led to high lung mass estimates owing to partial volume effects. Conclusions: Organ mass correction is an important component of patient-specific internal radionuclide dosimetry. Lung mass calculation necessitates scan-based density correction to account for volume changes owing to respiration. The range of lung masses in the authors' patient population represents lung doses for the same absorbed energy differing from 25% below to 64% above the dose found using reference phantom organ masses. With proper management of acquisition parameters and selection of FB or mid-expiration breath-hold scans, lung mass estimates with about 10% population precision may be achieved.
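
    The mass calculation quoted above maps directly onto a few lines of Python; the sketch below applies the stated specific-gravity formula to placeholder HU values and a placeholder segmented volume.

        # Lung mass from CT, following the formula in the abstract:
        #   specific gravity = (lung HU - air HU) / (tissue HU - air HU)
        #   mass = specific gravity * segmented volume * 1.04 g/cm^3
        # The HU values and volume are placeholders, not patient data.
        lung_hu, air_hu, tissue_hu = -750.0, -1000.0, 50.0
        volume_cm3 = 4200.0

        specific_gravity = (lung_hu - air_hu) / (tissue_hu - air_hu)
        mass_g = specific_gravity * volume_cm3 * 1.04
        print(f"lung mass = {mass_g / 1000:.2f} kg")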

  1. Modeling silicon diode energy response factors for use in therapeutic photon beams.

    PubMed

    Eklund, Karin; Ahnesjö, Anders

    2009-10-21

    Silicon diodes have good spatial resolution, which makes them advantageous over ionization chambers for dosimetry in fields with high dose gradients. However, silicon diodes over-respond to low-energy photons, which are more abundant in scattered radiation and thus increase with larger fields and greater depths. We present a cavity-theory-based model for a general response function for silicon detectors at arbitrary positions within photon fields. The model uses photon and electron spectra calculated from fluence pencil kernels. The incident photons are treated according to their energy through a bipartition of the primary beam photon spectrum into low- and high-energy components. Primary electrons from the high-energy component are treated according to Spencer-Attix cavity theory. Low-energy primary photons together with all scattered photons are treated according to large cavity theory supplemented with an energy-dependent factor K(E) to compensate for energy variations in the electron equilibrium. The depth variation of the response for an unshielded silicon detector has been calculated for 5 × 5 cm², 10 × 10 cm², and 20 × 20 cm² fields in 6 and 15 MV beams and compared with measurements, showing that our model calculates response factors with deviations of less than 0.6%. An alternative method is also proposed, in which we show that one can use a correlation with the scatter factor to determine the detector response of silicon diodes with an error of less than 3% in 6 MV and 15 MV photon beams.

  2. Zika virus disease, microcephaly and Guillain-Barré syndrome in Colombia: epidemiological situation during 21 months of the Zika virus outbreak, 2015-2017.

    PubMed

    Méndez, Nelson; Oviedo-Pastrana, Misael; Mattar, Salim; Caicedo-Castro, Isaac; Arrieta, German

    2017-01-01

    The Zika virus disease (ZVD) has had a huge impact on public health in Colombia because of the number of people affected and the occurrence of Guillain-Barré syndrome (GBS) and microcephaly cases associated with ZVD. A retrospective descriptive study was carried out in which we analyzed the epidemiological situation of ZVD and its association with microcephaly and GBS during a 21-month period, from October 2015 to June 2017. The variables studied were: (i) ZVD cases, (ii) ZVD cases in pregnant women, (iii) laboratory-confirmed ZVD in pregnant women, (iv) ZVD cases associated with microcephaly, (v) laboratory-confirmed ZVD associated with microcephaly, and (vi) ZVD cases associated with GBS. Average numbers of cases, attack rates (AR) and proportions were also calculated. The studied variables were plotted by epidemiological weeks and months. The distribution of ZVD cases in Colombia was mapped across time using a kernel density estimator and the QGIS software; we adopted Kernel Ridge Regression (KRR) with a Gaussian kernel to estimate the number of Guillain-Barré cases given the number of ZVD cases. A total of 108,087 ZVD cases had been reported in Colombia, including 19,963 (18.5%) in pregnant women, 710 (0.66%) associated with microcephaly (AR, 4.87 cases per 10,000 live births) and 453 (0.42%) ZVD cases associated with GBS (AR, 41.9 GBS cases per 10,000 ZVD cases). The cases of GBS appear to have increased in parallel with the cases of ZVD, while cases of microcephaly appeared 5 months after recognition of the outbreak. The kernel density map shows that, throughout the study period, the states most affected by the Zika outbreak in Colombia were mainly the San Andrés and Providencia islands, Casanare, Norte de Santander, Arauca and Huila. The KRR shows that there is no proportional relationship between the number of GBS and ZVD cases. During cross-validation, the RMSE achieved for the second-order polynomial kernel, the linear kernel, the sigmoid kernel, and the Gaussian kernel was 9.15, 9.2, 10.7, and 7.2, respectively. This study updates the epidemiological analysis of the ZVD situation in Colombia, describes the geographical distribution of ZVD, and examines the functional relationship between ZVD cases and GBS.
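
    The kernel ridge regression step mentioned above is straightforward to sketch; the Python example below regresses weekly GBS counts on weekly ZVD counts with a Gaussian (RBF) kernel and reports a cross-validated RMSE. The synthetic counts and hyperparameters are placeholders, not the Colombian surveillance data.

        # Gaussian-kernel ridge regression of GBS counts on ZVD counts,
        # with cross-validated RMSE. Data and hyperparameters are synthetic.
        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        zvd_weekly = rng.integers(0, 5000, size=(90, 1)).astype(float)
        gbs_weekly = 2 + 0.003 * zvd_weekly[:, 0] + rng.normal(0, 2, 90)

        krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=1e-7)
        mse = -cross_val_score(krr, zvd_weekly, gbs_weekly,
                               scoring="neg_mean_squared_error", cv=5)
        print("cross-validated RMSE:", np.sqrt(mse).mean())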

  3. Evaluation of the adsorption potential of eco-friendly activated carbon prepared from cherry kernels for the removal of Pb2+, Cd2+ and Ni2+ from aqueous wastes.

    PubMed

    Pap, Sabolč; Radonić, Jelena; Trifunović, Snežana; Adamović, Dragan; Mihajlović, Ivana; Vojinović Miloradov, Mirjana; Turk Sekulić, Maja

    2016-12-15

    Development, characterization and evaluation of the efficiency of a cost-effective medium for the removal of Pb²⁺, Cd²⁺ and Ni²⁺ from aqueous systems were performed as a novel, eco-friendly solution for wastewater remediation. The precursors for the low-cost adsorbent were lignocellulosic raw materials (sweet/sour cherry kernels), which are industrial byproducts and components of organic solid waste. Activated carbon synthesis was carried out by thermochemical conversion (H₃PO₄, 500 °C) in the complete absence of an inert atmosphere. Characterization of the activated carbon was performed by elemental analysis, FTIR, SEM, EDX and BET. The BET surface area corresponds to 657.1 m² g⁻¹. The evaluation also included the influence of pH, contact time, solute concentration and adsorbent dose on the separation efficiency in batch operational mode. Equilibrium and kinetic studies of adsorption were performed. The maximum adsorption capacity of the activated carbon for Cd²⁺ ions was calculated from the Langmuir isotherm and found to be 198.7 mg g⁻¹. Adsorption of Pb²⁺ and Ni²⁺ was better described by the Freundlich model, with maximum adsorption capacities of 180.3 mg g⁻¹ and 76.27 mg g⁻¹, respectively. The results indicate that the pseudo-second-order model best describes the adsorption kinetic data. Based on the desorption study results, the activated carbon was successfully regenerated with HNO₃ for 3 cycles. In order to provide results for a basic cost-effectiveness analysis, competing-ion effects in a real sample were also evaluated. Copyright © 2016 Elsevier Ltd. All rights reserved.
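
    Fitting the Langmuir isotherm reported above reduces to a two-parameter nonlinear fit; the Python sketch below fits q = qmax*K*C/(1 + K*C) to synthetic equilibrium points (the data are invented, not the cherry-kernel measurements).

        # Langmuir isotherm fit, q = qmax*K*C / (1 + K*C), to synthetic data.
        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir(C, qmax, K):
            return qmax * K * C / (1.0 + K * C)

        C_eq = np.array([5, 10, 25, 50, 100, 200], float)     # mg/L at equilibrium
        q_eq = np.array([55, 90, 140, 168, 185, 193], float)  # mg/g adsorbed (synthetic)

        (qmax, K), _ = curve_fit(langmuir, C_eq, q_eq, p0=(200.0, 0.05))
        print(f"q_max = {qmax:.1f} mg/g, K = {K:.3f} L/mg")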

  4. Parameterized Micro-benchmarking: An Auto-tuning Approach for Complex Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Wenjing; Krishnamoorthy, Sriram; Agrawal, Gagan

    2012-05-15

    Auto-tuning has emerged as an important practical method for creating highly optimized implementations of key computational kernels and applications. However, the growing complexity of architectures and applications is creating new challenges for auto-tuning. Complex applications can involve a prohibitively large search space that precludes empirical auto-tuning. Similarly, architectures are becoming increasingly complicated, making it hard to model performance. In this paper, we focus on the challenge to auto-tuning presented by applications with a large number of kernels and kernel instantiations. While these kernels may share a somewhat similar pattern, they differ considerably in problem sizes and the exact computation performed. We propose and evaluate a new approach to auto-tuning which we refer to as parameterized micro-benchmarking. It is an alternative to the two existing classes of approaches to auto-tuning: analytical model-based and empirical search-based. Particularly, we argue that the former may not be able to capture all the architectural features that impact performance, whereas the latter might be too expensive for an application that has several different kernels. In our approach, different expressions in the application, different possible implementations of each expression, and the key architectural features, are used to derive a simple micro-benchmark and a small parameter space. This allows us to learn the most significant features of the architecture that can impact the choice of implementation for each kernel. We have evaluated our approach in the context of GPU implementations of tensor contraction expressions encountered in excited state calculations in quantum chemistry. We have focused on two aspects of GPUs that affect tensor contraction execution: memory access patterns and kernel consolidation. Using our parameterized micro-benchmarking approach, we obtain a speedup of up to 2 over the version that used default optimizations, but no auto-tuning. We demonstrate that observations made from microbenchmarks match the behavior seen from real expressions. In the process, we make important observations about the memory hierarchy of two of the most recent NVIDIA GPUs, which can be used in other optimization frameworks as well.

  5. Exact calculation of the time convolutionless master equation generator: Application to the nonequilibrium resonant level model

    NASA Astrophysics Data System (ADS)

    Kidon, Lyran; Wilner, Eli Y.; Rabani, Eran

    2015-12-01

    The generalized quantum master equation provides a powerful tool to describe the dynamics in quantum impurity models driven away from equilibrium. Two complementary approaches, one based on Nakajima-Zwanzig-Mori time-convolution (TC) and the other on the Tokuyama-Mori time-convolutionless (TCL) formulations provide a starting point to describe the time-evolution of the reduced density matrix. A key in both approaches is to obtain the so called "memory kernel" or "generator," going beyond second or fourth order perturbation techniques. While numerically converged techniques are available for the TC memory kernel, the canonical approach to obtain the TCL generator is based on inverting a super-operator in the full Hilbert space, which is difficult to perform and thus, nearly all applications of the TCL approach rely on a perturbative scheme of some sort. Here, the TCL generator is expressed using a reduced system propagator which can be obtained from system observables alone and requires the calculation of super-operators and their inverse in the reduced Hilbert space rather than the full one. This makes the formulation amenable to quantum impurity solvers or to diagrammatic techniques, such as the nonequilibrium Green's function. We implement the TCL approach for the resonant level model driven away from equilibrium and compare the time scales for the decay of the generator with that of the memory kernel in the TC approach. Furthermore, the effects of temperature, source-drain bias, and gate potential on the TCL/TC generators are discussed.

  6. Variation of biometric parameters in corn cobs under the influence of nitrogen fertilization

    NASA Astrophysics Data System (ADS)

    Gigel, Prisecaru; Florin, Sala

    2017-07-01

    Biometric parameters, as elements of productivity on corn cobs, together with plant density per unit area (ha), are essential in achieving production. The influence of differentiated nitrogen fertilization was evaluated at the level of the productivity elements of corn cobs of the Andreea hybrid. Biometric parameters of the corn cobs (total length, L; usable length, l; length not covered with kernels, lu; diameter at the base, Db; middle, Dm; and top of the cob, Dt; cob weight, Cw; grain weight, Gw) were directly influenced by the nitrogen doses. Regression analysis facilitated the prediction of grain weight, the main element of productivity, with varying statistical certainty based on nitrogen dose (R² = 0.962, p<0.01), the total length of corn cobs (R² = 0.985, p<0.01), the usable length of corn cobs (R² = 0.996, p<0.001), the diameter at the base of corn cobs (R² = 0.824, p<0.01), the diameter at the middle of corn cobs (R² = 0.807, p<0.01), the length not covered with kernels (R² = 0.624, p<0.01) and the diameter at the top of the corn cobs (R² = 0.384, p=0.015).

  7. Tracking diffusion of conditioning water in single wheat kernels of different hardnesses by near infrared hyperspectral imaging.

    PubMed

    Manley, Marena; du Toit, Gerida; Geladi, Paul

    2011-02-07

    The combination of near infrared (NIR) hyperspectral imaging and chemometrics was used to follow the diffusion of conditioning water over time in wheat kernels of different hardnesses. Conditioning was attempted with deionised water (dH₂O) and deuterium oxide (D₂O). The images were recorded at different conditioning times (0-36 h) from 1000 to 2498 nm with a line scan imaging system. After multivariate cleaning and spectral pre-processing (either multiplicative scatter correction or standard normal variate and Savitzky-Golay smoothing) six principal components (PCs) were calculated. These were studied interactively as score images and score plots. As no clear clusters were present in the score plots, changes in the score plots were investigated by means of classification gradients made within the respective PCs. Classes were selected in the direction of a PC (from positive to negative or negative to positive score values) in almost equal segments. Subsequently, loading line plots were used to provide a spectroscopic explanation of the classification gradients. It was shown that the first PC explained kernel curvature. PC3 was shown to be related to a moisture-starch contrast and could explain the progress of water uptake. The positive influence of protein was also observed. The behaviour of soft, hard and very hard kernels differed in this respect, with the uptake of water observed much earlier in the soft kernels than in the harder ones. The harder kernels also showed a stronger influence of protein in the loading line plots. Difference spectra showed interpretable changes over time for water but not for D₂O, which gave too low a signal in the wavelength range used. NIR hyperspectral imaging together with exploratory chemometrics, as detailed in this paper, may have wider applications than merely conditioning studies. Copyright © 2010 Elsevier B.V. All rights reserved.

  8. MO-G-17A-05: PET Image Deblurring Using Adaptive Dictionary Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valiollahzadeh, S; Clark, J; Mawlawi, O

    2014-06-15

    Purpose: The aim of this work is to deblur PET images while suppressing Poisson noise effects using adaptive dictionary learning (DL) techniques. Methods: The model that relates a blurred and noisy PET image to the desired image is described as a linear transform y = Hm + n, where m is the desired image, H is a blur kernel, n is Poisson noise and y is the blurred image. The approach we follow to recover m involves the sparse representation of y over a learned dictionary, since the image has many repeated patterns, edges, textures and smooth regions. The recovery is based on an optimization of a cost function having four major terms: an adaptive dictionary learning term, a sparsity term, a regularization term, and an MLEM Poisson noise estimation term. The optimization is solved by a variable splitting method that introduces additional variables. We simulated a 128×128 Hoffman brain PET image (baseline) with varying kernel types and sizes (Gaussian 9×9, σ=5.4mm; Uniform 5×5, σ=2.9mm) with additive Poisson noise (Blurred). Image recovery was performed once when the kernel type was included in the model optimization and once with the model blinded to kernel type. The recovered image was compared to the baseline as well as to another recovery algorithm, PIDSPLIT+ (Setzer et al.), by calculating PSNR (Peak SNR) and normalized average differences in pixel intensities (NADPI) of line profiles across the images. Results: For known kernel types, the PSNR of the Gaussian (Uniform) was 28.73 (25.1) and 25.18 (23.4) for DL and PIDSPLIT+, respectively. For blinded deblurring the PSNRs were 25.32 and 22.86 for DL and PIDSPLIT+, respectively. NADPI between baseline and DL, and between baseline and blurred, for the Gaussian kernel was 2.5 and 10.8, respectively. Conclusion: PET image deblurring using dictionary learning seems to be a good approach to restore image resolution in the presence of Poisson noise. GE Health Care.
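
    The degradation model y = Hm + n stated in the abstract is easy to reproduce; the Python sketch below blurs a toy phantom with a Gaussian kernel and applies Poisson noise. It covers only the forward model, not the adaptive dictionary-learning recovery, and the phantom and kernel width are placeholders.

        # Forward model y = Hm + n: Gaussian blur followed by Poisson noise.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)
        m = np.zeros((128, 128))
        m[40:90, 40:90] = 100.0                  # toy stand-in for a brain phantom

        blurred = gaussian_filter(m, sigma=2.3)  # H*m
        y = rng.poisson(blurred).astype(float)   # Poisson-noisy observation

        psnr = 10 * np.log10(m.max()**2 / np.mean((y - m)**2))
        print(f"PSNR of the blurred, noisy image: {psnr:.1f} dB")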

  9. Implementation of radiation shielding calculation methods. Volume 2: Seminar/Workshop notes

    NASA Technical Reports Server (NTRS)

    Capo, M. A.; Disney, R. K.

    1971-01-01

    Detailed descriptions are presented of the input data for each of the MSFC computer codes applied to the analysis of a realistic nuclear propelled vehicle. The analytical techniques employed include cross-section data preparation, one- and two-dimensional discrete ordinates transport, point-kernel, and single-scatter methods.

  10. Accuracy of a simplified method for shielded gamma-ray skyshine sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bassett, M.S.; Shultis, J.K.

    1989-11-01

    Rigorous transport or Monte Carlo methods for estimating far-field gamma-ray skyshine doses generally are computationally intensive. Consequently, several simplified techniques such as point-kernel methods and methods based on beam response functions have been proposed. For unshielded skyshine sources, these simplified methods have been shown to be quite accurate from comparisons to benchmark problems and to benchmark experimental results. For shielded sources, the simplified methods typically use exponential attenuation and photon buildup factors to describe the effect of the shield. However, the energy and directional redistribution of photons scattered in the shield is usually ignored, i.e., scattered photons are assumed to emerge from the shield with the same energy and direction as the uncollided photons. The accuracy of this shield treatment is largely unknown due to the paucity of benchmark results for shielded sources. In this paper, the validity of such a shield treatment is assessed by comparison to a composite method, which accurately calculates the energy and angular distribution of photons penetrating the shield.

  11. WE-EF-BRA-07: High Performance Preclinical Irradiation Through Optimized Dual Focal Spot Dose Painting and Online Virtual Isocenter Radiation Field Targeting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, J; Princess Margaret Cancer Centre, University Health Network, Toronto, CA; Lindsay, P

    Purpose: Advances in radiotherapy practice facilitated by collimation systems to shape radiation fields and image guidance to target these conformal beams have motivated proposals for more complex dose patterns to improve the therapeutic ratio. Recent progress in small animal radiotherapy platforms has provided the foundation to validate the efficacy of such interventions, but robustly delivering heterogeneous dose distributions at the scale and accuracy demanded by preclinical studies remains challenging. This work proposes a dual focal spot optimization method to paint spatially heterogeneous dose regions and an online virtual isocenter targeting method to accurately target the dose distributions. Methods: Two-dimensional dose kernels were empirically measured for the 1 mm diameter circular collimator with radiochromic film in a solid water phantom for the small and large x-ray focal spots on the X-RAD 225Cx microirradiator. These kernels were used in an optimization framework which determined a set of animal stage positions, beam-on times, and focal spot settings to optimally deliver a given desired dose distribution. An online method was developed which defined a virtual treatment isocenter based on a single image projection of the collimated radiation field. The method was demonstrated by optimization of a 6 mm circular 2 Gy target adjoining a 4 mm semicircular avoidance region. Results: The dual focal spot technique improved the optimized dose distribution, with the proportion of the avoidance region receiving more than 0.5 Gy reduced by 40% compared to the large focal spot technique. Targeting tests performed by irradiating ball bearing targets on radiochromic film pieces revealed that the online targeting method improved the three-dimensional accuracy from 0.48 mm to 0.15 mm. Conclusion: The dual focal spot optimization and online virtual isocenter targeting framework is a robust option for delivering dose at the preclinical level and provides a new experimental option for unique radiobiological investigations. This work is supported, in part, by the Natural Sciences and Engineering Research Council of Canada and a Mitacs-Accelerate fellowship. P.E. Lindsay and D.A. Jaffray are listed as inventors of the system described herein. This system has been licensed to Precision X-Ray Inc. for commercial development.
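
    One simplified way to read the dose-painting optimization described above is as a non-negative least-squares problem over beam-on times for a set of candidate stage positions; the toy Python sketch below solves that problem for synthetic 1-D kernels and a synthetic prescription. It is a simplification for illustration, not the authors' optimizer or measured X-RAD 225Cx kernels.

        # Toy dose painting as non-negative least squares: choose beam-on
        # times t >= 0 so that sum_j t_j * kernel_j matches a prescription.
        import numpy as np
        from scipy.optimize import nnls

        x = np.linspace(-5, 5, 101)                 # mm across the target
        positions = np.linspace(-3, 3, 13)          # candidate stage positions
        kernels = np.stack([np.exp(-(x - p)**2 / (2 * 0.5**2)) for p in positions],
                           axis=1)

        prescription = np.where(x < 0, 2.0, 0.1)    # target dose vs avoidance dose

        beam_on, residual = nnls(kernels, prescription)
        delivered = kernels @ beam_on
        print("residual norm:", residual, "max delivered dose:", delivered.max())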

  12. Modeling and analysis of UN TRISO fuel for LWR application using the PARFUME code

    NASA Astrophysics Data System (ADS)

    Collin, Blaise P.

    2014-08-01

    The Idaho National Laboratory (INL) PARFUME (PARticle FUel ModEl) code was used to assess the overall fuel performance of uranium nitride (UN) tristructural isotropic (TRISO) ceramic fuel under irradiation conditions typical of a Light Water Reactor (LWR). The dimensional changes of the fuel particle layers and kernel were calculated, including the formation of an internal gap. The survivability of the UN TRISO particle was estimated depending on the strain behavior of the constituent materials at high fast fluence and burn-up. For nominal cases, internal gas pressure and representative thermal profiles across the kernel and layers were determined along with stress levels in the inner and outer pyrolytic carbon (IPyC/OPyC) and silicon carbide (SiC) layers. These parameters were then used to evaluate fuel particle failure probabilities. Results of the study show that the survivability of UN TRISO fuel under LWR irradiation conditions might only be guaranteed if the kernel and PyC swelling rates are limited at high fast fluence and burn-up. These material properties have large uncertainties at the irradiation levels expected to be reached by UN TRISO fuel in LWRs. Therefore, a large experimental effort would be needed to establish material properties, including kernel and PyC swelling rates, under these conditions before definitive conclusions can be drawn on the behavior of UN TRISO fuel in LWRs.

  13. Time-frequency Features for Impedance Cardiography Signals During Anesthesia Using Different Distribution Kernels.

    PubMed

    Muñoz, Jesús Escrivá; Gambús, Pedro; Jensen, Erik W; Vallverdú, Montserrat

    2018-01-01

    This work investigates the time-frequency content of impedance cardiography signals during propofol-remifentanil anesthesia. In recent years, impedance cardiography (ICG) has gained much attention. However, ICG signals need further investigation. Time-Frequency Distributions (TFDs) with 5 different kernels are used to analyze ICG signals before the start of anesthesia and after the loss of consciousness. In total, ICG signals from one hundred and thirty-one consecutive patients undergoing major surgery under general anesthesia were analyzed. Several features were extracted from the calculated TFDs in order to characterize the time-frequency content of the ICG signals. Differences between those features before and after the loss of consciousness were studied. The Extended Modified Beta Distribution (EMBD) was the kernel for which most features showed statistically significant changes between before and after the loss of consciousness. Among all analyzed features, those based on entropy showed a sensitivity, specificity and area under the receiver operating characteristic curve above 60%. The anesthetic state of the patient is reflected in linear and non-linear features extracted from the TFDs of the ICG signals. In particular, the EMBD is a suitable kernel for the analysis of ICG signals and offers a great range of features which change according to the patient's anesthesia state in a statistically significant way. Schattauer GmbH.

  14. Investigations of Reactive Processes at Temperatures Relevant to the Hypersonic Flight Regime

    DTIC Science & Technology

    2014-10-31

    molecule is constructed based on high-level ab initio calculations and interpolated using the reproducing kernel Hilbert space (RKHS) method and... a potential energy surface (PES) for the ground state of the NO2 molecule is constructed based on high-level ab initio calculations and interpolated... between O(3P) and NO(2Π) at higher temperatures relevant to the hypersonic flight regime of reentering spacecraft. At a more fundamental level, we

  15. Data-based diffraction kernels for surface waves from convolution and correlation processes through active seismic interferometry

    NASA Astrophysics Data System (ADS)

    Chmiel, Malgorzata; Roux, Philippe; Herrmann, Philippe; Rondeleux, Baptiste; Wathelet, Marc

    2018-05-01

    We investigated the construction of diffraction kernels for surface waves using two-point convolution and/or correlation from land active seismic data recorded in the context of exploration geophysics. The high density of controlled sources and receivers, combined with the application of the reciprocity principle, allows us to retrieve two-dimensional phase-oscillation diffraction kernels (DKs) of surface waves between any two source or receiver points in the medium at each frequency (up to 15 Hz, at least). These DKs are purely data-based as no model calculations and no synthetic data are needed. They naturally emerge from the interference patterns of the recorded wavefields projected on the dense array of sources and/or receivers. The DKs are used to obtain multi-mode dispersion relations of Rayleigh waves, from which near-surface shear velocity can be extracted. Using convolution versus correlation with a grid of active sources is an important step in understanding the physics of the retrieval of surface wave Green's functions. This provides the foundation for future studies based on noise sources or active sources with a sparse spatial distribution.

  16. Infrared dim-small target tracking via singular value decomposition and improved Kernelized correlation filter

    NASA Astrophysics Data System (ADS)

    Qian, Kun; Zhou, Huixin; Rong, Shenghui; Wang, Bingjian; Cheng, Kuanhong

    2017-05-01

    Infrared small target tracking plays an important role in applications including military reconnaissance, early warning and terminal guidance. In this paper, an effective algorithm based on Singular Value Decomposition (SVD) and an improved Kernelized Correlation Filter (KCF) is presented for infrared small target tracking. First, the strength of the SVD-based algorithm is that it takes advantage of the image's global information to obtain a background estimate of the infrared image. A dim target is enhanced by subtracting the continuously updated background estimate from the original image. Second, the KCF algorithm is combined with a Gaussian Curvature Filter (GCF) to eliminate the excursion problem. The GCF is adopted to preserve edges and suppress noise in the base sample of the KCF algorithm, helping to calculate the classifier parameter for a small target. Finally, the target position is estimated from a response map, which is obtained via the kernelized classifier. Experimental results demonstrate that the presented algorithm performs favorably in terms of efficiency and accuracy, compared with several state-of-the-art algorithms.

  17. Energy modulated electron therapy: Design, implementation, and evaluation of a novel method of treatment planning and delivery

    NASA Astrophysics Data System (ADS)

    Al-Yahya, Khalid

    Energy modulated electron therapy (EMET) is a promising treatment modality that has the fundamental capabilities to enhance the treatment planning and delivery of superficially located targets. Although it offers advantages over x-ray intensity modulated radiation therapy (IMRT), EMET has not been widely implemented to the same level of accuracy, automation, and clinical routine as its x-ray counterpart. This lack of implementation is attributed to the absence of a remotely automated beam shaping system as well as the deficiency in dosimetric accuracy of clinical electron pencil beam algorithms in the presence of beam modifiers and tissue heterogeneities. In this study, we present a novel technique for treatment planning and delivery of EMET. The delivery is achieved using a prototype of an automated "few leaf electron collimator" (FLEC). It consists of four copper leaves driven by stepper motors which are synchronized with the x-ray jaws in order to form a series of collimated rectangular openings or "fieldlets". Based on Monte Carlo studies, the FLEC has been designed to serve as an accessory tool to the current accelerator equipment. The FLEC was constructed and its operation was fully automated and integrated with the accelerator through an in-house assembled control unit. The control unit is a portable computer system accompanied by customized software that delivers EMET plans after acquiring them from the optimization station. EMET plans are produced based on dose volume constraints that employ Monte Carlo pre-generated, patient-specific kernels, which are utilized by an in-house developed optimization algorithm. The structure of the optimization software is demonstrated. Using Monte Carlo techniques to calculate dose allows for accurate modeling of the collimation system as well as the patient's heterogeneous geometry, and takes their impact on optimization into account. The Monte Carlo calculations were validated by comparing them against output measurements with an ionization chamber. Comparisons with measurements using nearly energy-independent radiochromic films were performed to confirm the Monte Carlo calculation accuracy for 1-D and 2-D dose distributions. We investigated the clinical significance of EMET on cancer sites that are inherently difficult to plan with IMRT. Several parameters were used to analyze the treatment plans, and they show that EMET provides significant overall improvements over IMRT.

  18. Design, development and validation of software for modelling dietary exposure to food chemicals and nutrients.

    PubMed

    McNamara, C; Naddy, B; Rohan, D; Sexton, J

    2003-10-01

    The Monte Carlo computational system for stochastic modelling of dietary exposure to food chemicals and nutrients is presented. This system was developed through a European Commission-funded research project and is accessible as a Web-based application service. The system supports very significant complexity in the data sets used as model input, but provides a simple, general-purpose, linear kernel for model evaluation. Specific features of the system include the ability to enter (arbitrarily) complex mathematical or probabilistic expressions in each and every input data field, automatic bootstrapping on subjects and on subject food intake diaries, and custom kernels that apply brand information, such as market share and loyalty, to the calculation of food and chemical intake.
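
    A linear Monte Carlo intake kernel of the kind described above can be sketched in a few lines: per-subject intake is the sum over foods of consumption multiplied by chemical concentration, with both sampled from distributions. The foods, distributions, and parameter values below are invented placeholders, not data from the described system.

        # Linear Monte Carlo exposure kernel:
        #   intake = sum over foods of consumption * concentration
        import numpy as np

        rng = np.random.default_rng(42)
        n = 10_000   # simulated subjects

        consumption_g = {                         # daily consumption per food (g)
            "bread": rng.gamma(shape=2.0, scale=60.0, size=n),
            "milk":  rng.gamma(shape=2.0, scale=120.0, size=n),
        }
        concentration_mg_per_g = {                # chemical concentration per food
            "bread": rng.lognormal(mean=-6.0, sigma=0.5, size=n),
            "milk":  rng.lognormal(mean=-7.0, sigma=0.5, size=n),
        }

        intake_mg = sum(consumption_g[f] * concentration_mg_per_g[f]
                        for f in consumption_g)
        print("median intake:", np.median(intake_mg), "mg/day;",
              "97.5th percentile:", np.percentile(intake_mg, 97.5), "mg/day")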

  19. ELECTRON PROBE MICROANALYSIS OF IRRADIATED AND 1600°C SAFETY-TESTED AGR-1 TRISO FUEL PARTICLES WITH LOW AND HIGH RETAINED 110MAG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wright, Karen E.; van Rooyen, Isabella J.

    2016-11-01

    AGR-1 fuel Compact 4-3-3 achieved 18.63% FIMA and was exposed subsequently to a safety test at 1600°C. Two particles, AGR1-433-003 and AGR1-433-007, with measured-to-calculated 110mAg inventories of <22% and 100%, respectively, were selected for comparative electron microprobe analysis to determine whether the distribution or abundance of fission products differed proximally and distally from the deformed kernel in AGR1-433-003, and how this compared to fission product distribution in AGR1-433-007. On the deformed side of AGR1-433-003, Xe, Cs, I, Eu, Sr, and Te concentrations in the kernel buffer interface near the protruded kernel were up to six times higher than on the opposite, non-deformed side. At the SiC-inner pyrolytic carbon (IPyC) interface proximal to the deformed kernel, Pd and Ag concentrations were 1.2 wt% and 0.04 wt% respectively, whereas on the SiC-IPyC interface distal from the kernel deformation those elements measured 0.4 and 0.01 wt%, respectively. Palladium and Ag concentrations at the SiC-IPyC interface of AGR1-433-007 were 2.05 and 0.05 wt%, respectively. Rare earth element concentrations at the SiC-IPyC interface of AGR1-433-007 were a factor of ten higher than at the SiC-IPyC interfaces measured in particle AGR1-433-003. Palladium permeated the SiC layer of AGR1-433-007 and the non-deformed SiC layer of AGR1-433-003.

  20. Home range and space use patterns of flathead catfish during the summer-fall period in two Missouri streams

    USGS Publications Warehouse

    Vokoun, Jason C.; Rabeni, Charles F.

    2005-01-01

    Flathead catfish Pylodictis olivaris were radio-tracked in the Grand River and Cuivre River, Missouri, from late July until they moved to overwintering habitats in late October. Fish moved within a definable area, and although occasional long-distance movements occurred, the fish typically returned to the previously occupied area. Seasonal home range was calculated with the use of kernel density estimation, which can be interpreted as a probabilistic utilization distribution that documents the internal structure of the estimate by delineating portions of the range that were used a specified percentage of the time. A traditional linear range also was reported. Most flathead catfish (89%) had one 50% kernel-estimated core area, whereas 11% of the fish split their time between two core areas. Core areas were typically in the middle of the 90% kernel-estimated home range (58%), although several had core areas in upstream (26%) and downstream (16%) portions of the home range. Home-range size did not differ based on river, sex, or size and was highly variable among individuals. The median 95% kernel estimate was 1,085 m (range, 70–69,090 m) for all fish. The median 50% kernel-estimated core area was 135 m (10–2,260 m). The median linear range was 3,510 m (150–50,400 m). Fish pairs with core areas in the same and neighboring pools had static joint space use values of up to 49% (area of intersection index), indicating substantial overlap and use of the same area. However, all fish pairs had low dynamic joint space use values (<0.07; coefficient of association), indicating that fish pairs were temporally segregated, rarely occurring in the same location at the same time.
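
    The kernel-density home-range estimate used above can be sketched with a 2-D Gaussian KDE: fit the KDE to relocation points and find the density level that encloses 95% (or 50%) of the utilization distribution. The relocation coordinates below are synthetic, not the radio-tracking data.

        # Kernel-density home-range sketch: Gaussian KDE over relocations and
        # approximate 95% / 50% utilization-distribution thresholds.
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(3)
        relocs = rng.normal(loc=[0.0, 0.0], scale=[400.0, 60.0], size=(60, 2)).T

        kde = gaussian_kde(relocs)
        dens = kde(relocs)   # density evaluated at each relocation

        def utilization_threshold(densities, fraction):
            # density level at or above which `fraction` of relocations fall
            return np.quantile(densities, 1.0 - fraction)

        print("95% UD threshold:", utilization_threshold(dens, 0.95))
        print("50% UD (core area) threshold:", utilization_threshold(dens, 0.50))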

  1. Kernel abortion in maize: I. Carbohydrate concentration patterns and acid invertase activity of maize kernels induced to abort in vitro.

    PubMed

    Hanft, J M; Jones, R J

    1986-06-01

    Kernels cultured in vitro were induced to abort by high temperature (35 degrees C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35 degrees C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth.

  2. DICOM organ dose does not accurately represent calculated dose in mammography

    NASA Astrophysics Data System (ADS)

    Suleiman, Moayyad E.; Brennan, Patrick C.; McEntee, Mark F.

    2016-03-01

    This study aims to analyze the agreement between the mean glandular dose estimated by the mammography unit (organ dose) and the mean glandular dose calculated using the published method of Dance et al (calculated dose). Anonymised digital mammograms from 50 BreastScreen NSW centers were downloaded, and the exposure information required for the calculation of dose was extracted from the DICOM header along with the organ dose estimated by the system. Data from annual quality assurance tests for the included centers were collected and used to calculate the mean glandular dose for each mammogram. Bland-Altman analysis and a two-tailed paired t-test were used to study the agreement between calculated and organ dose and the significance of any differences. A total of 27,869 dose points from 40 centers were included in the study; the mean calculated dose and mean organ dose (± standard deviation) were 1.47 (±0.66) and 1.38 (±0.56) mGy, respectively. A statistically significant bias of 0.09 mGy (t = 69.25; p < 0.0001), with 95% limits of agreement between calculated and organ doses ranging from -0.34 to 0.52, was shown by Bland-Altman analysis, which indicates a small yet highly significant difference between the two means. The use of organ dose for dose audits risks over- or underestimating the calculated dose; hence, further work is needed to identify the causal agents for differences between organ and calculated doses and to generate a correction factor for organ dose.
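
    The Bland-Altman statistics and paired t-test reported above follow directly from the paired dose values; the Python sketch below computes the bias, the 95% limits of agreement, and the t-test on synthetic placeholder data rather than the BreastScreen doses.

        # Bland-Altman bias and limits of agreement, plus a paired t-test,
        # between calculated and DICOM-reported organ doses (synthetic data).
        import numpy as np
        from scipy.stats import ttest_rel

        rng = np.random.default_rng(7)
        calculated = rng.normal(1.47, 0.66, size=5000).clip(0.3)      # mGy
        organ = calculated - 0.09 + rng.normal(0, 0.22, size=5000)    # biased copy

        diff = calculated - organ
        bias = diff.mean()
        sd = diff.std(ddof=1)
        loa = (bias - 1.96 * sd, bias + 1.96 * sd)
        t_stat, p_value = ttest_rel(calculated, organ)

        print(f"bias = {bias:.2f} mGy, 95% LoA = {loa[0]:.2f} to {loa[1]:.2f} mGy")
        print(f"paired t = {t_stat:.1f}, p = {p_value:.2e}")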

  3. Design considerations for a Space Station radiation shield for protection from both man-made and natural sources

    NASA Technical Reports Server (NTRS)

    Bolch, Wesley E.; Peddicord, K. Lee; Felsher, Harry; Smith, Simon

    1994-01-01

    This study was conducted to analyze scenarios involving the use of nuclear-powered vehicles in the vicinity of a manned Space Station (SS) in low-earth orbit (LEO) and to quantify their radiological impact on the station crew. In limiting the radiation dose to crew members, mission planners may (1) shut the reactor down prior to reentry, (2) position the vehicle at a prescribed parking distance, and (3) deploy a radiation shield about the shutdown reactor. The current report focuses on the third option, in which point-kernel gamma-ray shielding calculations were performed for a variety of shield configurations for both nuclear electric propulsion (NEP) and nuclear thermal rocket (NTR) vehicles. For a returning NTR vehicle, calculations indicate that a 14.9 MT shield would be needed to limit the integrated crew exposure to no more than 0.05 Sv over a period of six months (25 percent of the allowable exposure to man-made radiation sources). During periods of low vehicular activity in LEO, the shield may be redeployed about the SS habitation module in order to decrease crew exposures to trapped-proton radiation by approximately a factor of 10. The corresponding shield mass required for deployment at a returning NEP vehicle is 2.21 MT. Additional scenarios examined include the radioactivation of various metals as might be found in tools used in EVA activities.

  4. Improved dose-volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    NASA Astrophysics Data System (ADS)

    Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.

    2013-06-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less smoothing at early time points post-radiopharmaceutical administration but more smoothing and fewer iterations at later time points when the total organ activity was lower. The results of this study demonstrate the importance of using optimal reconstruction and regularization parameters. Optimal results were obtained with different parameters at each time point, but using a single set of parameters for all time points produced near-optimal dose-volume histograms.
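
    As a rough illustration of the final step described above, a cumulative DVH can be tabulated from a voxel dose array restricted to an organ mask; the array shapes, dose values, and mask below are assumptions, not the phantom data:

    ```python
    # Cumulative DVH from a voxel dose array within an organ mask (illustrative sketch;
    # array names, sizes, and bin choices are assumptions, not the paper's implementation).
    import numpy as np

    dose = np.random.gamma(2.0, 1.5, size=(64, 64, 64))      # hypothetical absorbed dose per voxel (Gy)
    mask = np.zeros_like(dose, dtype=bool)
    mask[16:48, 16:48, 16:48] = True                         # hypothetical organ volume

    organ_dose = dose[mask]
    bins = np.linspace(0, organ_dose.max(), 200)
    # Fraction of the organ volume receiving at least each dose level
    dvh = [(organ_dose >= d).mean() * 100.0 for d in bins]

    for d, v in zip(bins[::50], dvh[::50]):
        print(f"V(>= {d:5.2f} Gy) = {v:5.1f}% of organ volume")
    ```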

  5. 7 CFR 810.602 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of flaxseed kernels that are badly ground-damaged, badly weather... instructions. Also, underdeveloped, shriveled, and small pieces of flaxseed kernels removed in properly... recleaning. (c) Heat-damaged kernels. Kernels and pieces of flaxseed kernels that are materially discolored...

  6. Nonlocal and Nonadiabatic Effects in the Charge-Density Response of Solids: A Time-Dependent Density-Functional Approach

    NASA Astrophysics Data System (ADS)

    Panholzer, Martin; Gatti, Matteo; Reining, Lucia

    2018-04-01

    The charge-density response of extended materials is usually dominated by the collective oscillation of electrons, the plasmons. Beyond this feature, however, intriguing many-body effects are observed. They cannot be described by one of the most widely used approaches for the calculation of dielectric functions, which is time-dependent density functional theory (TDDFT) in the adiabatic local density approximation (ALDA). Here, we propose an approximation to the TDDFT exchange-correlation kernel which is nonadiabatic and nonlocal. It is extracted from correlated calculations in the homogeneous electron gas, where we have tabulated it for a wide range of wave vectors and frequencies. A simple mean density approximation allows one to use it in inhomogeneous materials where the density varies on a scale of 1.6 rs or faster. This kernel contains effects that are completely absent in the ALDA; in particular, it correctly describes the double plasmon in the dynamic structure factor of sodium, and it shows the characteristic low-energy peak that appears in systems with low electronic density. It also leads to an overall quantitative improvement of spectra.

  7. Nonlocal and Nonadiabatic Effects in the Charge-Density Response of Solids: A Time-Dependent Density-Functional Approach.

    PubMed

    Panholzer, Martin; Gatti, Matteo; Reining, Lucia

    2018-04-20

    The charge-density response of extended materials is usually dominated by the collective oscillation of electrons, the plasmons. Beyond this feature, however, intriguing many-body effects are observed. They cannot be described by one of the most widely used approaches for the calculation of dielectric functions, which is time-dependent density functional theory (TDDFT) in the adiabatic local density approximation (ALDA). Here, we propose an approximation to the TDDFT exchange-correlation kernel which is nonadiabatic and nonlocal. It is extracted from correlated calculations in the homogeneous electron gas, where we have tabulated it for a wide range of wave vectors and frequencies. A simple mean density approximation allows one to use it in inhomogeneous materials where the density varies on a scale of 1.6 r_{s} or faster. This kernel contains effects that are completely absent in the ALDA; in particular, it correctly describes the double plasmon in the dynamic structure factor of sodium, and it shows the characteristic low-energy peak that appears in systems with low electronic density. It also leads to an overall quantitative improvement of spectra.

  8. Kernel Abortion in Maize 1

    PubMed Central

    Hanft, Jonathan M.; Jones, Robert J.

    1986-01-01

    Kernels cultured in vitro were induced to abort by high temperature (35°C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35°C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth. PMID:16664846

  9. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.

  10. 7 CFR 810.1202 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... kernels. Kernels, pieces of rye kernels, and other grains that are badly ground-damaged, badly weather.... Also, underdeveloped, shriveled, and small pieces of rye kernels removed in properly separating the...-damaged kernels. Kernels, pieces of rye kernels, and other grains that are materially discolored and...

  11. The Genetic Basis of Natural Variation in Kernel Size and Related Traits Using a Four-Way Cross Population in Maize.

    PubMed

    Chen, Jiafa; Zhang, Luyan; Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang

    2016-01-01

    Kernel size is an important component of grain yield in maize breeding programs. To extend the understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off between kernel size and yield components is discussed.

  12. The Genetic Basis of Natural Variation in Kernel Size and Related Traits Using a Four-Way Cross Population in Maize

    PubMed Central

    Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang

    2016-01-01

    Kernel size is an important component of grain yield in maize breeding programs. To extend the understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off between kernel size and yield components is discussed. PMID:27070143

  13. A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations

    DOE PAGES

    Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel; ...

    2017-06-01

    As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMM T by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.
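
    A minimal sketch of the SpMM operation at the core of the block eigensolver, written with SciPy's CSR format; the paper's optimized compressed sparse blocks (CSB) implementation is not reproduced, and the matrix size, density, and block width are arbitrary assumptions:

    ```python
    # Sparse matrix times multiple vectors (SpMM), sketched with SciPy's CSR format;
    # the optimized CSB kernels described in the record are not reproduced here.
    import numpy as np
    import scipy.sparse as sp

    n, k = 10_000, 8                       # matrix dimension and block width (assumed values)
    A = sp.random(n, n, density=1e-3, format="csr", random_state=0)
    A = (A + A.T) * 0.5                    # symmetrize, as for a sparse CI Hamiltonian
    X = np.random.rand(n, k)               # block of k vectors

    Y = A @ X                              # SpMM: one sweep over A updates all k vectors
    YT = A.T @ X                           # transpose product needed by block eigensolvers
    print(Y.shape, YT.shape)
    ```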

  14. Influence of velocity effects on the shape of N2 (and air) broadened H2O lines revisited with classical molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Ngo, N. H.; Tran, H.; Gamache, R. R.; Bermejo, D.; Domenech, J.-L.

    2012-08-01

    The modeling of the shape of H2O lines perturbed by N2 (and air) using the Keilson-Storer (KS) kernel for collision-induced velocity changes is revisited with classical molecular dynamics simulations (CMDS). The latter have been performed for a large number of molecules starting from intermolecular-potential surfaces. Contrary to the assumption made in a previous study [H. Tran, D. Bermejo, J.-L. Domenech, P. Joubert, R. R. Gamache, and J.-M. Hartmann, J. Quant. Spectrosc. Radiat. Transf. 108, 126 (2007)], 10.1016/j.jqsrt.2007.03.009, the results of these CMDS show that the velocity-orientation and -modulus changes statistically occur at the same time scale. This validates the use of a single memory parameter in the Keilson-Storer kernel to describe both the velocity-orientation and -modulus changes. The CMDS results also show that velocity- and rotational state-changing collisions are statistically partially correlated. A partially correlated speed-dependent Keilson-Storer model has thus been used to describe the line-shape. For this, the velocity changes KS kernel parameters have been directly determined from CMDS, while the speed-dependent broadening and shifting coefficients have been calculated with a semi-classical approach. Comparisons between calculated spectra and measurements of several lines of H2O broadened by N2 (and air) in the ν3 and 2ν1 + ν2 + ν3 bands for a wide range of pressure show very satisfactory agreement. The evolution of non-Voigt effects from Doppler to collisional regimes is also presented and discussed.

  15. Palmprint and Face Multi-Modal Biometric Recognition Based on SDA-GSVD and Its Kernelization

    PubMed Central

    Jing, Xiao-Yuan; Li, Sheng; Li, Wen-Qian; Yao, Yong-Fang; Lan, Chao; Lu, Jia-Sen; Yang, Jing-Yu

    2012-01-01

    When extracting discriminative features from multimodal data, current methods rarely concern themselves with the data distribution. In this paper, we present an assumption that is consistent with the viewpoint of discrimination, that is, a person's overall biometric data should be regarded as one class in the input space, and their different biometric data can form different Gaussian distributions, i.e., different subclasses. Hence, we propose a novel multimodal feature extraction and recognition approach based on subclass discriminant analysis (SDA). Specifically, one person's different bio-data are treated as different subclasses of one class, and a transformed space is calculated, where the difference among subclasses belonging to different persons is maximized, and the difference within each subclass is minimized. Then, the obtained multimodal features are used for classification. Two solutions are presented to overcome the singularity problem encountered in calculation, which are using PCA preprocessing and employing the generalized singular value decomposition (GSVD) technique, respectively. Further, we provide nonlinear extensions of SDA-based multimodal feature extraction, that is, the feature fusion based on KPCA-SDA and KSDA-GSVD. In KPCA-SDA, we first apply Kernel PCA on each single modality before performing SDA, while in KSDA-GSVD, we directly perform Kernel SDA to fuse multimodal data by applying GSVD to avoid the singularity problem. For simplicity, two typical types of biometric data are considered in this paper, i.e., palmprint data and face data. Compared with several representative multimodal biometrics recognition methods, experimental results show that our approaches outperform related multimodal recognition methods and KSDA-GSVD achieves the best recognition performance. PMID:22778600

  16. A fast algorithm for forward-modeling of gravitational fields in spherical coordinates with 3D Gauss-Legendre quadrature

    NASA Astrophysics Data System (ADS)

    Zhao, G.; Liu, J.; Chen, B.; Guo, R.; Chen, L.

    2017-12-01

    Forward modeling of gravitational fields at large scale requires considering the curvature of the Earth and evaluating Newton's volume integral in spherical coordinates. To acquire fast and accurate gravitational effects for subsurface structures, the subsurface mass distribution is usually discretized into small spherical prisms (called tesseroids). The gravity fields of tesseroids are generally calculated numerically. One of the commonly used numerical methods is the 3D Gauss-Legendre quadrature (GLQ). However, the traditional GLQ integration suffers from low computational efficiency and relatively poor accuracy when the observation surface is close to the source region. We developed a fast, high-accuracy 3D GLQ integration based on the equivalence of the kernel matrix, adaptive discretization, and parallelization using OpenMP. The kernel-matrix equivalence strategy increases efficiency and reduces memory consumption by calculating and storing identical matrix elements in each kernel matrix only once. In this method, the adaptive discretization strategy is used to improve the accuracy. The numerical investigations show that the execution time of the proposed method is reduced by two orders of magnitude compared with the traditional method without these optimized strategies. High-accuracy results can also be guaranteed no matter how close the computation points are to the source region. In addition, the algorithm dramatically reduces the memory requirement by a factor of N compared with the traditional method, where N is the number of discretizations of the source region in the longitudinal direction. This makes large-scale gravity forward modeling and inversion with fine discretization possible.
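
    For illustration, a 3D Gauss-Legendre quadrature of Newton's volume integral over a small rectangular prism is sketched below; the tesseroid kernels, kernel-matrix equivalence, and adaptive discretization of the paper are not reproduced, and all geometry and density values are assumptions:

    ```python
    # 3D Gauss-Legendre quadrature of the vertical attraction of a homogeneous box
    # (illustrative only; not the paper's spherical-coordinate tesseroid implementation).
    import numpy as np

    G = 6.674e-11                                   # gravitational constant (SI)

    def glq_nodes(a, b, n):
        x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
        return 0.5 * (b - a) * x + 0.5 * (a + b), 0.5 * (b - a) * w

    def gz_prism(obs, rho, xr, yr, zr, n=4):
        """Vertical attraction of a homogeneous box at point `obs` via an n^3-point GLQ."""
        xs, wx = glq_nodes(*xr, n)
        ys, wy = glq_nodes(*yr, n)
        zs, wz = glq_nodes(*zr, n)
        gz = 0.0
        for xi, wxi in zip(xs, wx):
            for yi, wyi in zip(ys, wy):
                for zi, wzi in zip(zs, wz):
                    r = np.sqrt((obs[0]-xi)**2 + (obs[1]-yi)**2 + (obs[2]-zi)**2)
                    gz += wxi * wyi * wzi * (obs[2] - zi) / r**3
        return G * rho * gz

    # Observation point 100 m above a 100 x 100 x 50 m source volume of density 2670 kg/m^3
    print(gz_prism((0.0, 0.0, 100.0), 2670.0, (-50, 50), (-50, 50), (-60, -10)))
    ```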

  17. Palmprint and face multi-modal biometric recognition based on SDA-GSVD and its kernelization.

    PubMed

    Jing, Xiao-Yuan; Li, Sheng; Li, Wen-Qian; Yao, Yong-Fang; Lan, Chao; Lu, Jia-Sen; Yang, Jing-Yu

    2012-01-01

    When extracting discriminative features from multimodal data, current methods rarely concern themselves with the data distribution. In this paper, we present an assumption that is consistent with the viewpoint of discrimination, that is, a person's overall biometric data should be regarded as one class in the input space, and their different biometric data can form different Gaussian distributions, i.e., different subclasses. Hence, we propose a novel multimodal feature extraction and recognition approach based on subclass discriminant analysis (SDA). Specifically, one person's different bio-data are treated as different subclasses of one class, and a transformed space is calculated, where the difference among subclasses belonging to different persons is maximized, and the difference within each subclass is minimized. Then, the obtained multimodal features are used for classification. Two solutions are presented to overcome the singularity problem encountered in calculation, which are using PCA preprocessing and employing the generalized singular value decomposition (GSVD) technique, respectively. Further, we provide nonlinear extensions of SDA-based multimodal feature extraction, that is, the feature fusion based on KPCA-SDA and KSDA-GSVD. In KPCA-SDA, we first apply Kernel PCA on each single modality before performing SDA, while in KSDA-GSVD, we directly perform Kernel SDA to fuse multimodal data by applying GSVD to avoid the singularity problem. For simplicity, two typical types of biometric data are considered in this paper, i.e., palmprint data and face data. Compared with several representative multimodal biometrics recognition methods, experimental results show that our approaches outperform related multimodal recognition methods and KSDA-GSVD achieves the best recognition performance.

  18. A High Performance Block Eigensolver for Nuclear Configuration Interaction Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aktulga, Hasan Metin; Afibuzzaman, Md.; Williams, Samuel

    As on-node parallelism increases and the performance gap between the processor and the memory system widens, achieving high performance in large-scale scientific applications requires an architecture-aware design of algorithms and solvers. We focus on the eigenvalue problem arising in nuclear Configuration Interaction (CI) calculations, where a few extreme eigenpairs of a sparse symmetric matrix are needed. Here, we consider a block iterative eigensolver whose main computational kernels are the multiplication of a sparse matrix with multiple vectors (SpMM), and tall-skinny matrix operations. We then present techniques to significantly improve the SpMM and the transpose operation SpMM T by using the compressed sparse blocks (CSB) format. We achieve 3-4× speedup on the requisite operations over good implementations with the commonly used compressed sparse row (CSR) format. We develop a performance model that allows us to correctly estimate the performance of our SpMM kernel implementations, and we identify cache bandwidth as a potential performance bottleneck beyond DRAM. We also analyze and optimize the performance of LOBPCG kernels (inner product and linear combinations on multiple vectors) and show up to 15× speedup over using high performance BLAS libraries for these operations. The resulting high performance LOBPCG solver achieves 1.4× to 1.8× speedup over the existing Lanczos solver on a series of CI computations on high-end multicore architectures (Intel Xeons). We also analyze the performance of our techniques on an Intel Xeon Phi Knights Corner (KNC) processor.

  19. 7 CFR 810.802 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of grain kernels for which standards have been established under the.... (d) Heat-damaged kernels. Kernels and pieces of grain kernels for which standards have been...

  20. Effect of different CT scanners and settings on femoral failure loads calculated by finite element models.

    PubMed

    Eggermont, Florieke; Derikx, Loes C; Free, Jeffrey; van Leeuwen, Ruud; van der Linden, Yvette M; Verdonschot, Nico; Tanck, Esther

    2018-03-06

    In a multi-center patient study, using different CT scanners, CT-based finite element (FE) models are utilized to calculate failure loads of femora with metastases. Previous studies showed that using different CT scanners can result in different outcomes. This study aims to quantify the effects of (i) different CT scanners; (ii) different CT protocols with variations in slice thickness, field of view (FOV), and reconstruction kernel; and (iii) air between calibration phantom and patient, on Hounsfield Units (HU), bone mineral density (BMD), and FE failure load. Six cadaveric femora were scanned on four CT scanners. Scans were made with multiple CT protocols and with or without an air gap between the body model and calibration phantom. HU and calibrated BMD were determined in cortical and trabecular regions of interest. Non-linear isotropic FE models were constructed to calculate failure load. Mean differences between CT scanners varied up to 7% in cortical HU, 6% in trabecular HU, 6% in cortical BMD, 12% in trabecular BMD, and 17% in failure load. Changes in slice thickness and FOV had little effect (≤4%), while reconstruction kernels had a larger effect on HU (16%), BMD (17%), and failure load (9%). Air between the body model and calibration phantom slightly decreased the HU, BMD, and failure loads (≤8%). In conclusion, this study showed that quantitative analysis of CT images acquired with different CT scanners, and particularly reconstruction kernels, can induce relatively large differences in HU, BMD, and failure loads. Additionally, if possible, air artifacts should be avoided. © 2018 The Authors. Journal of Orthopaedic Research® Published by Wiley Periodicals, Inc. on behalf of the Orthopaedic Research Society.

  1. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  2. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  3. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  4. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  5. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    PubMed

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
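
    A minimal sketch of forming a weighted composite kernel and solving for KELM output weights, assuming fixed base-kernel weights in place of the QPSO optimization; the data, weights, and parameters below are illustrative assumptions:

    ```python
    # Weighted composite kernel for a kernel ELM, sketched with made-up weights and data;
    # the QPSO optimization of weights and kernel parameters is not included.
    import numpy as np

    def gaussian_kernel(X, Y, gamma=0.5):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def poly_kernel(X, Y, degree=2, c=1.0):
        return (X @ Y.T + c) ** degree

    X = np.random.rand(100, 6)                      # hypothetical e-nose feature vectors
    T = np.eye(4)[np.random.randint(0, 4, 100)]     # one-hot class targets

    w = np.array([0.7, 0.3])                        # base-kernel weights (would come from QPSO)
    K = w[0] * gaussian_kernel(X, X) + w[1] * poly_kernel(X, X)

    C = 10.0                                        # regularization parameter
    beta = np.linalg.solve(K + np.eye(len(X)) / C, T)   # standard KELM output-weight solution
    scores = K @ beta                                   # in-sample decision values
    print(scores.shape)
    ```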

  6. Extraction of azadirachtin A from neem seed kernels by supercritical fluid and its evaluation by HPLC and LC/MS.

    PubMed

    Ambrosino, P; Fresa, R; Fogliano, V; Monti, S M; Ritieni, A

    1999-12-01

    A new supercritical extraction methodology was applied to extract azadirachtin A (AZA-A) from neem seed kernels. Supercritical and liquid carbon dioxide (CO(2)) were used as extractive agents in a three-separation-stage supercritical pilot plant. Subcritical conditions were tested too. Comparisons were carried out by calculating the efficiency of the pilot plant with respect to the milligrams per kilogram of seeds (ms/mo) of AZA-A extracted. The most convenient extraction was gained using an ms/mo ratio of 119 rather than 64. For supercritical extraction, a separation of cuticular waxes from oil was set up in the pilot plant. HPLC and electrospray mass spectroscopy were used to monitor the yield of AZA-A extraction.

  7. Production of near-full density uranium nitride microspheres with a hot isostatic press

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McMurray, Jacob W.; Kiggans, Jr., Jim O.; Helmreich, Grant W.

    Depleted uranium nitride (UN) kernels with diameters ranging from 420 to 858 microns and theoretical densities (TD) between 87 and 91 percent were postprocessed using a hot isostatic press (HIP) in an argon gas medium. This treatment was shown to increase the TD to above 97%. Uranium nitride is highly reactive with oxygen. Therefore, a novel crucible design was implemented to remove impurities in the argon gas via in situ gettering to avoid oxidation of the UN kernels. The density before and after each HIP procedure was calculated from average weight, volume, and ellipticity determined with established characterization techniques for particles. Furthermore, micrographs confirmed the nearly full densification of the particles using the gettering approach and HIP processing parameters investigated in this work.

  8. The accuracy of the out-of-field dose calculations using a model based algorithm in a commercial treatment planning system

    NASA Astrophysics Data System (ADS)

    Wang, Lilie; Ding, George X.

    2014-07-01

    The out-of-field dose can be clinically important as it relates to the dose to organs at risk, although the accuracy of its calculation in commercial radiotherapy treatment planning systems (TPSs) receives less attention. This study evaluates the uncertainties of out-of-field dose calculated with a model-based dose calculation algorithm, the anisotropic analytical algorithm (AAA), implemented in a commercial radiotherapy TPS, Varian Eclipse V10, by using Monte Carlo (MC) simulations, in which the entire accelerator head is modeled including the multi-leaf collimators. The MC calculated out-of-field doses were validated by experimental measurements. The dose calculations were performed in a water phantom as well as in CT-based patient geometries, and both static and highly modulated intensity-modulated radiation therapy (IMRT) fields were evaluated. We compared the calculated out-of-field doses, defined as lower than 5% of the prescription dose, in four H&N cancer patients and two lung cancer patients treated with volumetric modulated arc therapy (VMAT) and IMRT techniques. The results show that the discrepancy of calculated out-of-field dose profiles between AAA and MC depends on the depth and is generally less than 1% for in-water phantom comparisons and for CT-based patient dose calculations with static fields and IMRT. In cases of VMAT plans, the difference between AAA and MC is <0.5%. The clinical impact of this error on the calculated organ doses was analyzed using dose-volume histograms. Although the AAA algorithm significantly underestimated the out-of-field doses, the clinical impact on the calculated organ doses in out-of-field regions may not be significant in practice due to very low out-of-field doses relative to the target dose.

  9. Modeling and Analysis of FCM UN TRISO Fuel Using the PARFUME Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaise Collin

    2013-09-01

    The PARFUME (PARticle Fuel ModEl) modeling code was used to assess the overall fuel performance of uranium nitride (UN) tri-structural isotropic (TRISO) ceramic fuel in the framework of the design and development of Fully Ceramic Matrix (FCM) fuel. A specific model of a TRISO particle with a UN kernel was developed with PARFUME, and its behavior was assessed in irradiation conditions typical of a Light Water Reactor (LWR). The calculations were used to assess the dimensional changes of the fuel particle layers and kernel, including the formation of an internal gap. The survivability of the UN TRISO particle was estimated depending on the strain behavior of the constituent materials at high fast fluence and burn-up. For nominal cases, internal gas pressure and representative thermal profiles across the kernel and layers were determined along with stress levels in the pyrolytic carbon (PyC) and silicon carbide (SiC) layers. These parameters were then used to evaluate fuel particle failure probabilities. Results of the study show that the survivability of UN TRISO fuel under LWR irradiation conditions might only be guaranteed if the kernel and PyC swelling rates are limited at high fast fluence and burn-up. These material properties are unknown at the irradiation levels expected to be reached by UN TRISO fuel in LWRs. Therefore, more effort is needed to determine them and to conclude definitively on the applicability of FCM fuel to LWRs.

  10. QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility

    NASA Astrophysics Data System (ADS)

    Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.

    2013-08-01

    One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision-making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps, i.e. the spatial probability of a future vent opening given the past eruptive activity of a volcano. This challenging issue is generally tackled using probabilistic methods that calculate a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source Geographic Information System Quantum GIS, that is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the user to select an appropriate method for evaluating the bandwidth of the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input datasets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is here shown through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
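
    A minimal sketch of the kernel density estimation step, assuming hypothetical vent coordinates and SciPy's Silverman bandwidth rule in place of QVAST's bandwidth selection methods:

    ```python
    # 2D Gaussian kernel density estimate of vent-opening probability (illustrative sketch;
    # vent coordinates and the bandwidth rule are assumptions, not QVAST's implementation).
    import numpy as np
    from scipy.stats import gaussian_kde

    vents = np.random.rand(2, 60) * 30_000            # hypothetical past vent locations (m)
    kde = gaussian_kde(vents, bw_method="silverman")  # bandwidth rule stands in for QVAST's options

    gx, gy = np.mgrid[0:30_000:200j, 0:30_000:200j]
    pdf = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
    pdf /= pdf.sum()                                  # normalize so grid cells sum to 1 (spatial probability)
    print(pdf.max(), pdf.sum())
    ```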

  11. Optimization method of superpixel analysis for multi-contrast Jones matrix tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa K.; Miura, Masahiro; Yasuno, Yoshiaki

    2017-02-01

    Local statistics are widely utilized for quantification and image processing of OCT. For example, the local mean is used to reduce speckle, and the local variation of the polarization state (degree of polarization uniformity, DOPU) is used to visualize melanin. Conventionally, these statistics are calculated in a rectangular kernel whose size is uniform over the image. However, the fixed size and shape of the kernel result in a tradeoff between image sharpness and statistical accuracy. A superpixel is a cluster of pixels generated by grouping image pixels based on spatial proximity and similarity of signal values. Superpixels have varying sizes and flexible shapes that preserve the tissue structure. Here we demonstrate a new superpixel method tailored for multifunctional Jones matrix OCT (JM-OCT). This new method forms the superpixels by clustering image pixels in a 6-dimensional (6-D) feature space (two spatial dimensions and four optical-feature dimensions). All image pixels were clustered based on their spatial proximity and optical feature similarity. The optical features are scattering, OCT-A, birefringence, and DOPU. The method is applied to retinal OCT. Generated superpixels preserve tissue structures such as retinal layers, sclera, vessels, and retinal pigment epithelium. Hence, superpixels can be utilized as local-statistics kernels, which would be more suitable than a uniform rectangular kernel. The superpixelized image can also be used for further image processing and analysis; since it reduces the number of pixels to be analyzed, it reduces the computational cost of such processing.

  12. Quantitative assessment of the accuracy of dose calculation using pencil beam and Monte Carlo algorithms and requirements for clinical quality assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, Imad, E-mail: iali@ouhsc.edu; Ahmad, Salahuddin

    2013-10-01

    To compare the doses calculated using the BrainLAB pencil beam (PB) and Monte Carlo (MC) algorithms for tumors located in various sites including the lung and to evaluate quality assurance procedures required for the verification of the accuracy of dose calculation. The dose-calculation accuracy of PB and MC was also assessed quantitatively with measurement using ionization chamber and Gafchromic films placed in solid water and heterogeneous phantoms. The dose was calculated using PB convolution and MC algorithms in the iPlan treatment planning system from BrainLAB. The dose calculation was performed on the patient's computed tomography images with lesions in various treatment sites including 5 lungs, 5 prostates, 4 brains, 2 head and necks, and 2 paraspinal tissues. A combination of conventional, conformal, and intensity-modulated radiation therapy plans was used in dose calculation. The leaf sequence from intensity-modulated radiation therapy plans or beam shapes from conformal plans and monitor units and other planning parameters calculated by the PB were identical for calculating dose with MC. Heterogeneity correction was considered in both PB and MC dose calculations. Dose-volume parameters such as V95 (volume covered by 95% of prescription dose), dose distributions, and gamma analysis were used to evaluate the calculated dose by PB and MC. The measured doses by ionization chamber and EBT GAFCHROMIC film in solid water and heterogeneous phantoms were used to quantitatively assess the accuracy of dose calculated by PB and MC. The dose-volume histograms and dose distributions calculated by PB and MC in the brain, prostate, paraspinal, and head and neck were in good agreement with one another (within 5%) and provided acceptable planning target volume coverage. However, dose distributions of the patients with lung cancer had large discrepancies. For a plan optimized with PB, the dose coverage was shown as clinically acceptable, whereas in reality, the MC showed a systematic lack of dose coverage. The dose calculated by PB for lung tumors was overestimated by up to 40%. An interesting feature that was observed is that despite large discrepancies in dose-volume histogram coverage of the planning target volume between PB and MC, the point doses at the isocenter (center of the lesions) calculated by both algorithms were within 7% even for lung cases. The dose distributions measured with EBT GAFCHROMIC films in heterogeneous phantoms showed large discrepancies of nearly 15% lower than PB at interfaces between heterogeneous media, where these lower doses measured by the film were in agreement with those by MC. The doses (V95) calculated by MC and PB agreed within 5% for treatment sites with small tissue heterogeneities such as the prostate, brain, head and neck, and paraspinal tumors. Considerable discrepancies, up to 40%, were observed in the dose-volume coverage between MC and PB in lung tumors, which may affect clinical outcomes. The discrepancies between MC and PB increased for 15 MV compared with 6 MV, indicating the importance of implementation of accurate clinical treatment planning such as MC. The comparison of point doses is not representative of the discrepancies in dose coverage and might be misleading in evaluating the accuracy of dose calculation between PB and MC. Thus, the clinical quality assurance procedures required to verify the accuracy of dose calculation using PB and MC need to consider measurements of 2- and 3-dimensional dose distributions rather than a single point measurement, using heterogeneous phantoms instead of homogeneous water-equivalent phantoms.
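
    As a small worked example of the V95 metric used above, the fraction of a PTV receiving at least 95% of the prescription dose can be computed from a voxel dose grid; the dose array, PTV mask, and prescription below are assumptions:

    ```python
    # V95 (fraction of the PTV receiving at least 95% of the prescription dose),
    # sketched from a voxel dose array; names and values are illustrative assumptions.
    import numpy as np

    prescription = 60.0                                      # Gy
    dose = np.random.normal(60.0, 2.5, size=(40, 40, 40))    # hypothetical calculated dose grid
    ptv = np.zeros_like(dose, dtype=bool)
    ptv[10:30, 10:30, 10:30] = True                          # hypothetical planning target volume

    v95 = (dose[ptv] >= 0.95 * prescription).mean() * 100.0
    print(f"V95 = {v95:.1f}% of the PTV")
    ```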

  13. Comparison of optimized single and multifield irradiation plans of antiproton, proton and carbon ion beams.

    PubMed

    Bassler, Niels; Kantemiris, Ioannis; Karaiskos, Pantelis; Engelke, Julia; Holzscheiter, Michael H; Petersen, Jørgen B

    2010-04-01

    Antiprotons have been suggested as a possibly superior modality for radiotherapy, due to the energy released when antiprotons annihilate, which enhances the Bragg peak and introduces a high-LET component to the dose. However, concerns are expressed about the inferior lateral dose distribution caused by the annihilation products. We use the Monte Carlo code FLUKA to generate depth-dose kernels for protons, antiprotons, and carbon ions. Using these we then build virtual treatment plans optimized according to ICRU recommendations for the different beam modalities, which then are recalculated with FLUKA. Dose-volume histograms generated from these plans can be used to compare the different irradiations. The enhancement in physical and possibly biological dose from annihilating antiprotons can significantly lower the dose in the entrance channel; but only at the expense of a diffuse low dose background from long-range secondary particles. Lateral dose distributions are improved using active beam delivery methods, instead of flat fields. Dose-volume histograms for different treatment scenarios show that antiprotons have the potential to reduce the volume of normal tissue receiving medium to high dose, however, in the low dose region antiprotons are inferior to both protons and carbon ions. This limits the potential usage to situations where dose to normal tissue must be reduced as much as possible. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  14. Classification With Truncated Distance Kernel.

    PubMed

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be used directly in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves performance similar to or better than that of the radial basis function kernel with its parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
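
    A minimal sketch of a truncated L1-distance kernel of the form k(x, y) = max(rho - ||x - y||_1, 0), together with a check that the resulting Gram matrix can have negative eigenvalues; the data and the choice of rho are assumptions, not values from the brief:

    ```python
    # Truncated L1-distance (TL1) kernel, k(x, y) = max(rho - ||x - y||_1, 0), as a sketch;
    # rho here is simply set proportional to the dimension of the unit-scaled data (an assumption).
    import numpy as np

    def tl1_kernel(X, Y, rho):
        d1 = np.abs(X[:, None, :] - Y[None, :, :]).sum(-1)   # pairwise L1 distances
        return np.maximum(rho - d1, 0.0)                      # zero outside the local region

    X = np.random.rand(200, 5)
    K = tl1_kernel(X, X, rho=0.7 * X.shape[1])

    # The kernel is not positive semidefinite in general: some eigenvalues may be negative.
    eigvals = np.linalg.eigvalsh(K)
    print(eigvals.min(), eigvals.max())
    ```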

  15. The influence of the dose calculation resolution of VMAT plans on the calculated dose for eye lens and optic pathway.

    PubMed

    Park, Jong Min; Park, So-Yeon; Kim, Jung-In; Carlson, Joel; Kim, Jin Ho

    2017-03-01

    To investigate the effect of the dose calculation grid size on calculated dose-volumetric parameters for eye lenses and optic pathways. A total of 30 patients treated using the volumetric modulated arc therapy (VMAT) technique were retrospectively selected. For each patient, dose distributions were calculated with calculation grids ranging from 1 to 5 mm at 1 mm intervals. Identical structures were used for VMAT planning. The changes in dose-volumetric parameters according to the size of the calculation grid were investigated. Compared to dose calculation with a 1 mm grid, the maximum doses to the eye lens with calculation grids of 2, 3, 4 and 5 mm increased by 0.2 ± 0.2 Gy, 0.5 ± 0.5 Gy, 0.9 ± 0.8 Gy and 1.7 ± 1.5 Gy on average, respectively. The Spearman correlation coefficient between dose gradients near structures and the differences between doses calculated with the 1 mm grid and those calculated with the 5 mm grid was 0.380 (p < 0.001). For accurate calculation of dose distributions, as well as efficiency, a grid size of 2 mm appears to be the most appropriate choice.

  16. SU-F-T-450: The Investigation of Radiotherapy Quality Assurance and Automatic Treatment Planning Based On the Kernel Density Estimation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, J; Fan, J; Hu, W

    Purpose: To develop a fast automatic algorithm based on two-dimensional kernel density estimation (2D KDE) to predict the dose-volume histogram (DVH), which can be employed for the investigation of radiotherapy quality assurance and automatic treatment planning. Methods: We propose a machine learning method that uses previous treatment plans to predict the DVH. The key to the approach is the framing of the DVH in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of the dose and the predictive features. The joint distribution provides an estimation of the conditional probability of the dose given the values of the predictive features. For the new patient, the prediction consists of estimating the distribution of the predictive features and marginalizing the conditional probability from the training over this distribution. Integrating the resulting probability distribution for the dose yields an estimation of the DVH. The 2D KDE is implemented to predict the joint probability distribution of the training set and the distribution of the predictive features for the new patient. Two variables, the signed minimal distance from each OAR (organ at risk) voxel to the target boundary and its opening angle with respect to the origin of the voxel coordinate system, are considered as the predictive features to represent the OAR-target spatial relationship. The feasibility of our method has been demonstrated with rectum, breast, and head-and-neck cancer cases by comparing the predicted DVHs with the planned ones. Results: Consistent results were found between these two DVHs for each cancer site, and the average of the relative point-wise differences is about 5%, within the clinically acceptable extent. Conclusion: According to the results of this study, our method can be used to predict a clinically acceptable DVH and has the ability to evaluate the quality and consistency of treatment planning.

  17. Development of an accurate EPID-based output measurement and dosimetric verification tool for electron beam therapy.

    PubMed

    Ding, Aiping; Xing, Lei; Han, Bin

    2015-07-01

    To develop an efficient and robust tool for output measurement and absolute dose verification of electron beam therapy by using a high spatial-resolution and high frame-rate amorphous silicon flat panel electronic portal imaging device (EPID). The dosimetric characteristics of the EPID, including saturation, linearity, and ghosting effect, were first investigated on a Varian Clinac 21EX accelerator. The response kernels of the individual pixels of the EPID to all available electron energies (6, 9, 12, 16, and 20 MeV) were calculated by using Monte Carlo (MC) simulations, which formed the basis for deconvolving EPID raw images into the incident electron fluence map. The two-dimensional (2D) dose distribution at reference depths in water was obtained by using the constructed fluence map with an MC-simulated pencil beam kernel, with consideration of the geometric and structural information of the EPID. Output factor measurements were carried out with the EPID at a nominal source-surface distance of 100 cm for 2 × 2, 3 × 3, 6 × 6, 10 × 10, and 15 × 15 cm(2) fields for all available electron energies, and the results were compared with those measured in a solid water phantom using film and a Farmer-type ion chamber. The dose distributions at a reference depth specific to each energy and the flatness and symmetry of the 10 × 10 cm(2) electron beam were also measured using the EPID, and the results were compared with ion chamber array and water scan measurements. Finally, three patient cases with various field sizes and irregular cutout shapes were also investigated. EPID-measured dose changed linearly with the monitor units and showed little ghosting effect for dose rates up to 600 MU/min. The flatness and symmetry measured with the EPID were found to be consistent with ion chamber array and water scan measurements. The EPID-measured output factors for standard square fields of 2 × 2, 3 × 3, 6 × 6, 10 × 10, 15 × 15 cm(2) agreed with film and ion chamber measurements. The average discrepancy between EPID and ion chamber/film measurements was 0.81% ± 0.60% (SD) and 1.34% ± 0.75%, respectively. For the three clinical cases, the difference in output between the EPID- and ion chamber array measured values was found to be 1.13% ± 0.11%, 0.54% ± 0.10%, and 0.74% ± 0.11%, respectively. Furthermore, the γ-index analysis showed excellent agreement between the EPID- and ion chamber array measured dose distributions: 100% of the pixels passed the criteria of 3%/3 mm. When the γ-index was set to be 2%/2 mm, the pass rate was found to be 99.0% ± 0.07%, 98.2% ± 0.14%, and 100% for the three cases. The EPID dosimetry system developed in this work provides an accurate and reliable tool for routine output measurement and dosimetric verification of electron beam therapy. Coupled with its portability and ease of use, the proposed system promises to replace the current film-based approach for fast and reliable assessment of small and irregular electron field dosimetry.
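
    As a hedged illustration of the kernel-based reconstruction chain described above, the sketch below deconvolves a detector response from a raw image by FFT and then convolves the result with a dose kernel; the Gaussian kernels, image, and regularization are stand-ins, not the paper's Monte Carlo kernels or algorithm:

    ```python
    # FFT-based deconvolution of a detector response followed by convolution with a dose kernel
    # (illustrative sketch only; kernels here are made-up Gaussians, not MC-calculated kernels).
    import numpy as np

    def gaussian2d(shape, sigma):
        y, x = np.indices(shape) - np.array(shape)[:, None, None] // 2
        g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return g / g.sum()

    raw = np.random.rand(128, 128)                   # hypothetical EPID raw image
    detector_kernel = gaussian2d(raw.shape, 3.0)     # stand-in for the pixel response kernel
    dose_kernel = gaussian2d(raw.shape, 5.0)         # stand-in for the pencil-beam dose kernel

    eps = 1e-3                                       # regularization to stabilize the deconvolution
    F = np.fft.fft2(raw)
    H = np.fft.fft2(np.fft.ifftshift(detector_kernel))
    fluence = np.real(np.fft.ifft2(F * np.conj(H) / (np.abs(H)**2 + eps)))   # Wiener-style deconvolution

    dose = np.real(np.fft.ifft2(np.fft.fft2(fluence) * np.fft.fft2(np.fft.ifftshift(dose_kernel))))
    print(dose.shape)
    ```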

  18. SU-E-T-374: Evaluation and Verification of Dose Calculation Accuracy with Different Dose Grid Sizes for Intracranial Stereotactic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, C; Schultheiss, T

    Purpose: In this study, we aim to evaluate the effect of dose grid size on the accuracy of calculated dose for small lesions in intracranial stereotactic radiosurgery (SRS), and to verify dose calculation accuracy with radiochromic film dosimetry. Methods: 15 intracranial lesions from previous SRS patients were retrospectively selected for this study. The planning target volume (PTV) ranged from 0.17 to 2.3 cm^3. A commercial treatment planning system was used to generate SRS plans using the volumetric modulated arc therapy (VMAT) technique with two arc fields. Two convolution-superposition-based dose calculation algorithms (Anisotropic Analytical Algorithm and Acuros XB algorithm) were used to calculate the volume dose distribution with dose grid sizes ranging from 1 mm to 3 mm in 0.5 mm steps. First, while the plan monitor units (MU) were kept constant, PTV dose variations were analyzed. Second, with 95% of the PTV covered by the prescription dose, variations of the plan MUs as a function of dose grid size were analyzed. Radiochromic films were used to compare the delivered dose and profile with the dose distributions calculated with different dose grid sizes. Results: The dose to the PTV, in terms of the mean, maximum, and minimum dose, showed a steady decrease with increasing dose grid size using both algorithms. With 95% of the PTV covered by the prescription dose, the total MU increased with increasing dose grid size in most of the plans. Radiochromic film measurements showed better agreement with dose distributions calculated with the 1-mm dose grid size. Conclusion: Dose grid size has a significant impact on the calculated dose distribution in intracranial SRS treatment planning with small target volumes. Using the default dose grid size could lead to underestimation of the delivered dose. A small dose grid size should be used to ensure calculation accuracy and agreement with QA measurements.

  19. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach

    PubMed Central

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-01-01

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202

  20. Photon Counting Computed Tomography With Dedicated Sharp Convolution Kernels: Tapping the Potential of a New Technology for Stent Imaging.

    PubMed

    von Spiczak, Jochen; Mannil, Manoj; Peters, Benjamin; Hickethier, Tilman; Baer, Matthias; Henning, André; Schmidt, Bernhard; Flohr, Thomas; Manka, Robert; Maintz, David; Alkadhi, Hatem

    2018-05-23

    The aims of this study were to assess the value of a dedicated sharp convolution kernel for photon counting detector (PCD) computed tomography (CT) for coronary stent imaging and to evaluate to which extent iterative reconstructions can compensate for potential increases in image noise. For this in vitro study, a phantom simulating coronary artery stenting was prepared. Eighteen different coronary stents were expanded in plastic tubes of 3 mm diameter. Tubes were filled with diluted contrast agent, sealed, and immersed in oil calibrated to an attenuation of -100 HU simulating epicardial fat. The phantom was scanned in a modified second generation 128-slice dual-source CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Erlangen, Germany) equipped with both a conventional energy integrating detector and PCD. Image data were acquired using the PCD part of the scanner with 48 × 0.25 mm slices, a tube voltage of 100 kVp, and tube current-time product of 100 mAs. Images were reconstructed using a conventional convolution kernel for stent imaging with filtered back-projection (B46) and with sinogram-affirmed iterative reconstruction (SAFIRE) at level 3 (I463). For comparison, a dedicated sharp convolution kernel with filtered back-projection (D70) and SAFIRE level 3 (Q703) and level 5 (Q705) was used. The D70 and Q70 kernels were specifically designed for coronary stent imaging with PCD CT by optimizing the image modulation transfer function and the separation of contrast edges. Two independent, blinded readers evaluated subjective image quality (Likert scale 0-3, where 3 = excellent), in-stent diameter difference, in-stent attenuation difference, mathematically defined image sharpness, and noise of each reconstruction. Interreader reliability was calculated using Goodman and Kruskal's γ and intraclass correlation coefficients (ICCs). Differences in image quality were evaluated using a Wilcoxon signed-rank test. Differences in in-stent diameter difference, in-stent attenuation difference, image sharpness, and image noise were tested using a paired-sample t test corrected for multiple comparisons. Interreader and intrareader reliability were excellent (γ = 0.953, ICCs = 0.891-0.999, and γ = 0.996, ICCs = 0.918-0.999, respectively). Reconstructions using the dedicated sharp convolution kernel yielded significantly better results regarding image quality (B46: 0.4 ± 0.5 vs D70: 2.9 ± 0.3; P < 0.001), in-stent diameter difference (1.5 ± 0.3 vs 1.0 ± 0.3 mm; P < 0.001), and image sharpness (728 ± 246 vs 2069 ± 411 CT numbers/voxel; P < 0.001). Regarding in-stent attenuation difference, no significant difference was observed between the 2 kernels (151 ± 76 vs 158 ± 92 CT numbers; P = 0.627). Noise was significantly higher in all sharp convolution kernel images but was reduced by 41% and 59% by applying SAFIRE levels 3 and 5, respectively (B46: 16 ± 1, D70: 111 ± 3, Q703: 65 ± 2, Q705: 46 ± 2 CT numbers; P < 0.001 for all comparisons). A dedicated sharp convolution kernel for PCD CT imaging of coronary stents yields superior qualitative and quantitative image characteristics compared with conventional reconstruction kernels. Resulting higher noise levels in sharp kernel PCD imaging can be partially compensated with iterative image reconstruction techniques.

  1. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
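
    As an illustration of the fractional power polynomial idea described above, the following sketch (plain NumPy, with a random stand-in for the Gabor feature matrix; the kernel degree, data sizes, and component count are invented and are not the paper's settings) builds the Gram matrix and keeps only kernel PCA components with positive eigenvalues so the projections remain real:

```python
import numpy as np

def frac_poly_kernel(X, Y, d=0.8):
    """Fractional power polynomial 'kernel' k(x, y) = (x . y)^d.
    For non-integer d this need not be positive semidefinite."""
    G = X @ Y.T
    return np.sign(G) * np.abs(G) ** d

def kernel_pca(X, d=0.8, n_components=10):
    K = frac_poly_kernel(X, X, d)
    n = K.shape[0]
    # Center the Gram matrix in feature space.
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    evals, evecs = np.linalg.eigh(Kc)
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]
    # Keep only components with positive eigenvalues, as the paper describes.
    keep = evals > 1e-10
    evals = evals[keep][:n_components]
    evecs = evecs[:, keep][:, :n_components]
    return Kc @ (evecs / np.sqrt(evals))     # projected training data

X = np.random.rand(100, 64)                  # stand-in for Gabor feature vectors
Z = kernel_pca(X, d=0.8, n_components=5)
print(Z.shape)
```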

  2. SU-F-T-441: Dose Calculation Accuracy in CT Images Reconstructed with Artifact Reduction Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ng, C; Chan, S; Lee, F

Purpose: Accuracy of radiotherapy dose calculation in patients with surgical implants is complicated by two factors: the accuracy of the CT numbers and the accuracy of the dose calculation itself. We compared measured dose with dose calculated on CT images reconstructed with FBP and with an artifact reduction algorithm (OMAR, Philips) for a phantom with high-density inserts. Dose calculations were done with Varian AAA and AcurosXB. Methods: A phantom was constructed from solid water in which two titanium or stainless steel rods could be inserted. The phantom was scanned with the Philips Brilliance Big Bore CT. Image reconstruction was done with FBP and OMAR. Two 6 MV single-field photon plans were constructed for each phantom. Radiochromic films were placed at different locations to measure the dose deposited. One plan had normal incidence on the titanium/steel rods; in the second plan, the beam was at near-glancing incidence on the metal rods. Measurements were then compared with dose calculated with AAA and AcurosXB. Results: The use of OMAR images slightly improved the dose calculation accuracy. The agreement between measured and calculated dose was best with AXB on images reconstructed with OMAR. Dose calculated on the titanium phantom had better agreement with measurement. Large discrepancies were seen at points directly above and below the high-density inserts. Both AAA and AXB underestimated the dose directly above the metal surface and overestimated the dose below the metal surface. Doses measured downstream of the metal were all within 3% of calculated values. Conclusion: When doing treatment planning for patients with metal implants, care must be taken to acquire correct CT images to improve dose calculation accuracy. Moreover, large discrepancies between measured and calculated dose were observed at the metal/tissue interface. Care must be taken in estimating the dose in critical structures that come into contact with metals.

  3. Dose specification for radiation therapy: dose to water or dose to medium?

    NASA Astrophysics Data System (ADS)

    Ma, C.-M.; Li, Jinsheng

    2011-05-01

The Monte Carlo method enables accurate dose calculation for radiation therapy treatment planning and has been implemented in some commercial treatment planning systems. Unlike conventional dose calculation algorithms that provide patient dose information in terms of dose to water with variable electron density, the Monte Carlo method calculates the energy deposition in different media and expresses dose to a medium. This paper discusses the differences between dose calculated using water with different electron densities and dose calculated for different biological media, as well as the clinical issues in dose specification, including dose prescription and plan evaluation using dose to water and dose to medium. We demonstrate that conventional photon dose calculation algorithms compute doses similar to those simulated by Monte Carlo using water with different electron densities, which are close (<4% difference) to doses to media but significantly different (up to 11%) from doses to water converted from doses to media following American Association of Physicists in Medicine (AAPM) Task Group 105 recommendations. Our results suggest that, for consistency with previous radiation therapy experience, Monte Carlo photon algorithms should report dose to medium for radiotherapy dose prescription, treatment plan evaluation and treatment outcome analysis.
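
    A minimal sketch of the dose-to-medium to dose-to-water conversion that the cited AAPM Task Group 105 approach describes, D_w = D_m × s_{w,m}; the stopping-power-ratio values below are rough illustrative placeholders for a 6 MV spectrum, not the report's data:

```python
# Illustrative water-to-medium mass collision stopping-power ratios
# (placeholder values for demonstration only -- not TG-105 tabulated data).
STOPPING_POWER_RATIO_W_M = {
    "water": 1.00,
    "soft_tissue": 1.01,
    "lung": 1.00,
    "cortical_bone": 1.11,
}

def dose_to_water(dose_to_medium_Gy, medium):
    """Convert Monte Carlo dose-to-medium to dose-to-water: D_w = D_m * s_{w,m}."""
    return dose_to_medium_Gy * STOPPING_POWER_RATIO_W_M[medium]

# Example: a roughly 11% shift for bone is the size of difference the paper
# reports between dose-to-medium and converted dose-to-water.
print(dose_to_water(2.00, "cortical_bone"))   # Gy
```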

  4. A multi-label learning based kernel automatic recommendation method for support vector machine.

    PubMed

    Zhang, Xueying; Song, Qinbao

    2015-01-01

Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences in the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.
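
    A small sketch of the recommendation step under stated assumptions: the meta-feature matrix, the kernel labels, and the random-forest base learner are all placeholders, and scikit-learn's generic multi-label wrapper stands in for the multi-label methods evaluated in the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

# Hypothetical meta-knowledge base: each row holds one data set's meta-features;
# each column of Y flags a kernel that performed acceptably on that data set.
kernels = ["linear", "poly", "rbf", "sigmoid", "laplace"]
X_meta = np.random.rand(132, 5)                       # 132 data sets x 5 meta-features
Y_meta = (np.random.rand(132, len(kernels)) > 0.6).astype(int)

# Multi-label model: one binary classifier per candidate kernel.
model = MultiOutputClassifier(RandomForestClassifier(n_estimators=100))
model.fit(X_meta, Y_meta)

# Recommend kernels for a new data set from its meta-feature vector.
new_dataset = np.random.rand(1, 5)
flags = model.predict(new_dataset)[0]
print([k for k, f in zip(kernels, flags) if f == 1])
```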

  5. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences in the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance. PMID:25893896

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Bush, K; Han, B

Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4 based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence, and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor of ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of the deterministic method and the accuracy of MC, providing a practical tool for high performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.

  7. Chemical properties and oxidative stability of Arjan (Amygdalus reuteri) kernel oil as emerging edible oil.

    PubMed

    Tavakoli, Javad; Emadi, Teymour; Hashemi, Seyed Mohammad Bagher; Mousavi Khaneghah, Amin; Munekata, Paulo Eduardo Sichetti; Lorenzo, Jose Manuel; Brnčić, Mladen; Barba, Francisco J

    2018-05-01

The oxidative stability and chemical composition of Amygdalus reuteri kernel oil (ARKO) were evaluated and compared to those of Amygdalus scoparia kernel oil (ASKO) and extra virgin olive oil (EVOO) during and after holding in an oven (170 °C for 8 h). The oxidative stability analysis was carried out by measuring the changes in conjugated dienes, carbonyl and acid values as well as the oil/oxidative stability index and their correlation with the antioxidant compounds (tocopherols, polyphenols, and sterol compounds). Oleic acid was the predominant fatty acid of ARKO (65.5%). The calculated oxidizability values and iodine values of ARKO, ASKO and EVOO were 3.29 and 100.0, 3.24 and 101.4, and 2.00 and 81.9, respectively. Due to the high wax content (4.5% and 3.3%, respectively), the saponification numbers of ARKO and ASKO (96.4 and 99.8, respectively) were lower than that of EVOO (169.7). ARKO had the highest oxidative stability, followed by ASKO and EVOO. Therefore, ARKO can be introduced as a new source of edible oil with high oxidative stability. Copyright © 2018. Published by Elsevier Ltd.

  8. Forced Ignition Study Based On Wavelet Method

    NASA Astrophysics Data System (ADS)

    Martelli, E.; Valorani, M.; Paolucci, S.; Zikoski, Z.

    2011-05-01

The control of ignition in a rocket engine is a critical problem for combustion chamber design. Therefore it is essential to fully understand the mechanism of ignition during its earliest stages. In this paper the characteristics of flame kernel formation and initial propagation in a hydrogen-argon-oxygen mixing layer are studied using 2D direct numerical simulations with detailed chemistry and transport properties. The flame kernel is initiated by adding an energy deposition source term in the energy equation. The effect of unsteady strain rate is studied by imposing a 2D turbulence velocity field, which is initialized by means of a synthetic field. An adaptive wavelet method, based on interpolating wavelets, is used in this study to solve the compressible reactive Navier-Stokes equations. This method provides an alternative means to refine the computational grid points according to local demands of the physical solution. The present simulations show that in the very early instants the kernel perturbed by the turbulent field is characterized by an increased burning area and a slightly increased radical formation. In addition, the calculations show that the wavelet technique yields a significant reduction in the number of degrees of freedom necessary to achieve a prescribed solution accuracy.

  9. Small-scale modification to the lensing kernel

    NASA Astrophysics Data System (ADS)

    Hadzhiyska, Boryana; Spergel, David; Dunkley, Joanna

    2018-02-01

Calculations of the cosmic microwave background (CMB) lensing power implemented into the standard cosmological codes such as camb and class usually treat the surface of last scatter as an infinitely thin screen. However, since the CMB anisotropies are smoothed out on scales smaller than the diffusion length due to the effect of Silk damping, the photons which carry information about the small-scale density distribution come from slightly earlier times than the standard recombination time. The dominant effect is the scale dependence of the mean redshift associated with the fluctuations during recombination. We find that fluctuations at k = 0.01 Mpc^-1 come from a characteristic redshift of z ≈ 1090, while fluctuations at k = 0.3 Mpc^-1 come from a characteristic redshift of z ≈ 1130. We then estimate the corrections to the lensing kernel and the related power spectra due to this effect. We conclude that neglecting it would result in a deviation from the true value of the lensing kernel at the half percent level at small CMB scales. For an all-sky, noise-free experiment, this corresponds to a ~0.1σ shift in the observed temperature power spectrum on small scales (2500 ≲ l ≲ 4000).
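
    For context, the thin-screen convergence kernel that such codes implement for a flat universe with a single source plane at comoving distance χ* (a textbook expression, not the paper's corrected kernel) is

```latex
% Thin-screen CMB lensing convergence kernel (flat universe, units with c = 1),
% with a single source plane at comoving distance \chi_*:
\kappa(\hat{\mathbf{n}})
  = \frac{3\,\Omega_m H_0^2}{2}
    \int_0^{\chi_*} d\chi\,
    \frac{\chi\,(\chi_* - \chi)}{\chi_*}\,
    \frac{\delta\!\big(\chi\hat{\mathbf{n}},\, z(\chi)\big)}{a(\chi)}
```

    The correction discussed above effectively makes χ* depend on the wavenumber k, because small-scale fluctuations decouple slightly earlier than the mean recombination redshift.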

  10. Heat kernel and Weyl anomaly of Schrödinger invariant theory

    NASA Astrophysics Data System (ADS)

    Pal, Sridip; Grinstein, Benjamín

    2017-12-01

We propose a method inspired by discrete light cone quantization to determine the heat kernel for a Schrödinger field theory (Galilean boost invariant with z = 2 anisotropic scaling symmetry) living in d+1 dimensions, coupled to a curved Newton-Cartan background, starting from a heat kernel of a relativistic conformal field theory (z = 1) living in d+2 dimensions. We use this method to show that the Schrödinger field theory of a complex scalar field cannot have any Weyl anomalies. To be precise, we show that the Weyl anomaly $A^{G}_{d+1}$ for the Schrödinger theory is related to the Weyl anomaly of a free relativistic scalar CFT $A^{R}_{d+2}$ via $A^{G}_{d+1} = 2\pi\,\delta(m)\,A^{R}_{d+2}$, where m is the charge of the scalar field under the particle number symmetry. We provide further evidence of the vanishing anomaly by evaluating Feynman diagrams in all orders of perturbation theory. We present an explicit calculation of the anomaly using a regulated Schrödinger operator, without using the null cone reduction technique. We generalize our method to show that a similar result holds for theories with a single time derivative and with even z > 2.

  11. 7 CFR 981.7 - Edible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...

  12. Kernel K-Means Sampling for Nyström Approximation.

    PubMed

    He, Li; Zhang, Hong

    2018-05-01

A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which is shown in our work to minimize the upper bound of the matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both Gaussian and polynomial kernels, on real-world data sets and image segmentation tasks show the superiority of the proposed method over state-of-the-art methods.
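
    A rough sketch of the Nyström construction with k-means landmarks (here ordinary k-means on the input data with an RBF kernel is used as a simplification of kernel k-means sampling; the sizes and γ are arbitrary):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def nystrom_kmeans(X, m=50, gamma=0.5):
    """Nystrom approximation of an RBF kernel matrix using k-means centers
    as landmark points (a simplification of kernel k-means sampling)."""
    centers = KMeans(n_clusters=m, n_init=10).fit(X).cluster_centers_
    C = rbf_kernel(X, centers, gamma=gamma)         # n x m cross-kernel
    W = rbf_kernel(centers, centers, gamma=gamma)   # m x m landmark kernel
    return C @ np.linalg.pinv(W) @ C.T              # approximate n x n Gram matrix

X = np.random.rand(500, 10)
K_approx = nystrom_kmeans(X)
K_exact = rbf_kernel(X, X, gamma=0.5)
# Relative Frobenius-norm approximation error.
print(np.linalg.norm(K_exact - K_approx, "fro") / np.linalg.norm(K_exact, "fro"))
```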

  13. Genotype-based dosage of acenocoumarol in highly-sensitive geriatric patients.

    PubMed

    Lozano, Roberto; Franco, María-Esther; López, Luis; Moneva, Juan-José; Carrasco, Vicente; Pérez-Layo, Maria-Angeles

    2015-03-01

Our aim was to determine the acenocoumarol dose requirement in highly sensitive geriatric patients, based on a minimum of genotype (VKORC1 and CYP2C9) data. We used a Gaussian kernel density estimation test to identify patients highly sensitive to the drug and the PHARMACHIP®-Cuma test (Progenika Biopharma, SA, Grifols, Spain) to determine the CYP2C9 and VKORC1 genotypes. All highly sensitive geriatric patients were taking ≤5.6 mg/week of acenocoumarol (AC), and 86% of these patients presented the following genotypes: CYP2C9*1/*3 or CYP2C9*1/*2 plus VKORC1 A/G, CYP2C9*3/*3, or VKORC1 A/A. The VKORC1 A and CYP2C9*2 and/or *3 allelic variants strongly influence the AC dose requirement of highly sensitive geriatric patients. These patients display an acenocoumarol dose requirement of ≤5.6 mg/week.

  14. Theoretical detection threshold of the proton-acoustic range verification technique.

    PubMed

    Ahmad, Moiz; Xiang, Liangzhong; Yousefi, Siavash; Xing, Lei

    2015-10-01

Range verification in proton therapy using the proton-acoustic signal induced in the Bragg peak was investigated for typical clinical scenarios. The signal generation and detection processes were simulated in order to determine the signal-to-noise limits. An analytical model was used to calculate the dose distribution and local pressure rise (per proton) for beams of different energy (100 and 160 MeV) and spot widths (1, 5, and 10 mm) in a water phantom. In this method, the acoustic waves propagating from the Bragg peak were generated by the general 3D pressure wave equation implemented using a finite element method. Various beam pulse widths (0.1-10 μs) were simulated by convolving the acoustic waves with Gaussian kernels. A realistic PZT ultrasound transducer (5 cm diameter) was simulated with a Butterworth bandpass filter with consideration of random noise based on a model of thermal noise in the transducer. The signal-to-noise ratio on a per-proton basis was calculated, determining the minimum number of protons required to generate a detectable pulse. The maximum spatial resolution of the proton-acoustic imaging modality was also estimated from the signal spectrum. The calculated noise in the transducer was 12-28 mPa, depending on the transducer central frequency (70-380 kHz). The minimum number of protons detectable by the technique was on the order of 3-30 × 10^6 per pulse, with 30-800 mGy dose per pulse at the Bragg peak. Wider pulses produced signal with lower acoustic frequencies, with 10 μs pulses producing signals with frequency less than 100 kHz. The proton-acoustic process was simulated using a realistic model and the minimal detection limit was established for proton-acoustic range validation. These limits correspond to a best case scenario with a single large detector with no losses and detector thermal noise as the sensitivity limiting factor. Our study indicated practical proton-acoustic range verification may be feasible with approximately 5 × 10^6 protons/pulse and beam current.
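
    The pulse-width step described above can be sketched as a convolution of a per-proton pressure trace with Gaussian kernels of different widths, followed by a crude protons-for-detectability estimate; the waveform, sample rate, and amplitudes below are invented placeholders rather than the paper's finite-element solution:

```python
import numpy as np

fs = 10e6                                    # sample rate, 10 MHz
t = np.arange(0, 40e-6, 1 / fs)              # 40-microsecond window
# Stand-in per-proton acoustic trace (not the paper's FEM solution).
p_single = 1e-9 * np.exp(-((t - 10e-6) / 2e-6) ** 2) * np.sin(2 * np.pi * 150e3 * t)

def convolve_with_gaussian(signal, sigma_s):
    """Smear the instantaneous response with a Gaussian beam pulse of width sigma (s)."""
    k = np.arange(-4 * sigma_s, 4 * sigma_s, 1 / fs)
    kernel = np.exp(-0.5 * (k / sigma_s) ** 2)
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")

noise_rms = 20e-3                            # Pa; ~12-28 mPa transducer noise per the abstract
for pulse in (0.1e-6, 1e-6, 10e-6):
    smeared = convolve_with_gaussian(p_single, pulse)
    n_protons = noise_rms / np.abs(smeared).max()   # protons needed for SNR = 1
    print(f"{pulse * 1e6:.1f} us pulse -> ~{n_protons:.2e} protons for SNR = 1")
```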

  15. Theoretical detection threshold of the proton-acoustic range verification technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmad, Moiz; Yousefi, Siavash; Xing, Lei, E-mail: lei@stanford.edu

    2015-10-15

Purpose: Range verification in proton therapy using the proton-acoustic signal induced in the Bragg peak was investigated for typical clinical scenarios. The signal generation and detection processes were simulated in order to determine the signal-to-noise limits. Methods: An analytical model was used to calculate the dose distribution and local pressure rise (per proton) for beams of different energy (100 and 160 MeV) and spot widths (1, 5, and 10 mm) in a water phantom. In this method, the acoustic waves propagating from the Bragg peak were generated by the general 3D pressure wave equation implemented using a finite element method. Various beam pulse widths (0.1–10 μs) were simulated by convolving the acoustic waves with Gaussian kernels. A realistic PZT ultrasound transducer (5 cm diameter) was simulated with a Butterworth bandpass filter with consideration of random noise based on a model of thermal noise in the transducer. The signal-to-noise ratio on a per-proton basis was calculated, determining the minimum number of protons required to generate a detectable pulse. The maximum spatial resolution of the proton-acoustic imaging modality was also estimated from the signal spectrum. Results: The calculated noise in the transducer was 12–28 mPa, depending on the transducer central frequency (70–380 kHz). The minimum number of protons detectable by the technique was on the order of 3–30 × 10^6 per pulse, with 30–800 mGy dose per pulse at the Bragg peak. Wider pulses produced signal with lower acoustic frequencies, with 10 μs pulses producing signals with frequency less than 100 kHz. Conclusions: The proton-acoustic process was simulated using a realistic model and the minimal detection limit was established for proton-acoustic range validation. These limits correspond to a best case scenario with a single large detector with no losses and detector thermal noise as the sensitivity limiting factor. Our study indicated practical proton-acoustic range verification may be feasible with approximately 5 × 10^6 protons/pulse and beam current.

  16. Theoretical detection threshold of the proton-acoustic range verification technique

    PubMed Central

    Ahmad, Moiz; Xiang, Liangzhong; Yousefi, Siavash; Xing, Lei

    2015-01-01

Purpose: Range verification in proton therapy using the proton-acoustic signal induced in the Bragg peak was investigated for typical clinical scenarios. The signal generation and detection processes were simulated in order to determine the signal-to-noise limits. Methods: An analytical model was used to calculate the dose distribution and local pressure rise (per proton) for beams of different energy (100 and 160 MeV) and spot widths (1, 5, and 10 mm) in a water phantom. In this method, the acoustic waves propagating from the Bragg peak were generated by the general 3D pressure wave equation implemented using a finite element method. Various beam pulse widths (0.1–10 μs) were simulated by convolving the acoustic waves with Gaussian kernels. A realistic PZT ultrasound transducer (5 cm diameter) was simulated with a Butterworth bandpass filter with consideration of random noise based on a model of thermal noise in the transducer. The signal-to-noise ratio on a per-proton basis was calculated, determining the minimum number of protons required to generate a detectable pulse. The maximum spatial resolution of the proton-acoustic imaging modality was also estimated from the signal spectrum. Results: The calculated noise in the transducer was 12–28 mPa, depending on the transducer central frequency (70–380 kHz). The minimum number of protons detectable by the technique was on the order of 3–30 × 10^6 per pulse, with 30–800 mGy dose per pulse at the Bragg peak. Wider pulses produced signal with lower acoustic frequencies, with 10 μs pulses producing signals with frequency less than 100 kHz. Conclusions: The proton-acoustic process was simulated using a realistic model and the minimal detection limit was established for proton-acoustic range validation. These limits correspond to a best case scenario with a single large detector with no losses and detector thermal noise as the sensitivity limiting factor. Our study indicated practical proton-acoustic range verification may be feasible with approximately 5 × 10^6 protons/pulse and beam current. PMID:26429247

  17. Visible Motion Blur

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor); Ahumada, Albert J. (Inventor)

    2014-01-01

A method of measuring motion blur is disclosed comprising obtaining a moving edge temporal profile r_1(k) of an image of a high-contrast moving edge, calculating the masked local contrast m_1(k) for r_1(k) and the masked local contrast m_2(k) for an ideal step edge waveform r_2(k) with the same amplitude as r_1(k), and calculating the measure of motion blur Psi as a difference function. The masked local contrasts are calculated using a set of convolution kernels scaled to simulate the performance of the human visual system, and Psi is measured in units of just-noticeable differences.
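
    A heavily simplified sketch of the blur measure: it substitutes a plain local-contrast definition for the patented HVS-calibrated masking kernels, so the output is not in just-noticeable-difference units; only the overall structure (contrast of the measured edge versus an ideal step, then a difference) follows the description above:

```python
import numpy as np

def local_contrast(profile, kernel_width=5):
    """Simplified 'masked' local contrast: deviation from a locally smoothed mean.
    (The actual invention uses HVS-calibrated convolution kernels.)"""
    kernel = np.ones(kernel_width) / kernel_width
    local_mean = np.convolve(profile, kernel, mode="same")
    return np.abs(profile - local_mean)

def motion_blur_measure(edge_profile):
    # Ideal step edge with the same amplitude as the measured profile.
    ideal_step = np.where(np.arange(edge_profile.size) < edge_profile.size // 2,
                          edge_profile.min(), edge_profile.max())
    m1 = local_contrast(edge_profile)
    m2 = local_contrast(ideal_step)
    return np.sum(np.abs(m2 - m1))           # larger value -> more visible blur

# Example: a moving edge smeared over several samples versus a crisp step.
x = np.linspace(-1, 1, 200)
blurred_edge = 1 / (1 + np.exp(-x / 0.04))
print(motion_blur_measure(blurred_edge))
```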

  18. Employing OpenCL to Accelerate Ab Initio Calculations on Graphics Processing Units.

    PubMed

    Kussmann, Jörg; Ochsenfeld, Christian

    2017-06-13

    We present an extension of our graphics processing units (GPU)-accelerated quantum chemistry package to employ OpenCL compute kernels, which can be executed on a wide range of computing devices like CPUs, Intel Xeon Phi, and AMD GPUs. Here, we focus on the use of AMD GPUs and discuss differences as compared to CUDA-based calculations on NVIDIA GPUs. First illustrative timings are presented for hybrid density functional theory calculations using serial as well as parallel compute environments. The results show that AMD GPUs are as fast or faster than comparable NVIDIA GPUs and provide a viable alternative for quantum chemical applications.

  19. Exploiting graph kernels for high performance biomedical relation extraction.

    PubMed

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods, when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task specific heuristic or rules. In comparison, the state of the art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule based system employing task specific post processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with APG kernel that attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM performed better than APG kernel for the BioInfer dataset, in the Area Under Curve (AUC) measure (74% vs 69%). However, for all the other PPI datasets, namely AIMed, HPRD50, IEPA and LLL, ASM is substantially outperformed by the APG kernel in F-score and AUC measures. We demonstrate a high performance Chemical Induced Disease relation extraction, without employing external knowledge sources or task specific heuristics. Our work shows that graph kernels are effective in extracting relations that are expressed in multiple sentences. We also show that the graph kernels, namely the ASM and APG kernels, substantially outperform the tree kernels. Among the graph kernels, we showed the ASM kernel as effective for biomedical relation extraction, with comparable performance to the APG kernel for datasets such as the CID-sentence level relation extraction and BioInfer in PPI. Overall, the APG kernel is shown to be significantly more accurate than the ASM kernel, achieving better performance on most datasets.

  20. 7 CFR 810.2202 - Definition of other terms.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kernels, foreign material, and shrunken and broken kernels. The sum of these three factors may not exceed... the removal of dockage and shrunken and broken kernels. (g) Heat-damaged kernels. Kernels, pieces of... sample after the removal of dockage and shrunken and broken kernels. (h) Other grains. Barley, corn...

  1. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  2. 7 CFR 51.1415 - Inedible kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or otherwise...

  3. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  4. Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.

    PubMed

    Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit

    2018-02-13

    Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. HADOC: a computer code for calculation of external and inhalation doses from acute radionuclide releases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strenge, D.L.; Peloquin, R.A.

The computer code HADOC (Hanford Acute Dose Calculations) is described and instructions for its use are presented. The code calculates external dose from air submersion and inhalation doses following acute radionuclide releases. Atmospheric dispersion is calculated using the Hanford model with options to determine maximum conditions. Building wake effects and terrain variation may also be considered. Doses are calculated using dose conversion factors supplied in a data library. Doses are reported for one- and fifty-year dose commitment periods for the maximum individual and the regional population (within 50 miles). The fractional contributions to dose by radionuclide and exposure mode are also printed if requested.

  6. Unconventional protein sources: apricot seed kernels.

    PubMed

    Gabrial, G N; El-Nahry, F I; Awadalla, M Z; Girgis, S M

    1981-09-01

Hamawy apricot seed kernels (sweet), Amar apricot seed kernels (bitter) and treated Amar apricot kernels (bitterness removed) were evaluated biochemically. All kernels were found to be high in fat (42.2-50.91%), protein (23.74-25.70%) and fiber (15.08-18.02%). Phosphorus, calcium, and iron were determined in all experimental samples. The three different apricot seed kernels were used for an extensive study including the qualitative determination of the amino acid constituents by acid hydrolysis, quantitative determination of some amino acids, and biological evaluation of the kernel proteins in order to use them as new protein sources. Weanling albino rats failed to grow on diets containing the Amar apricot seed kernels due to low food consumption because of their bitterness, although no loss in weight occurred. The Protein Efficiency Ratio data and blood analysis results showed the Hamawy apricot seed kernels to be higher in biological value than the treated apricot seed kernels. The Net Protein Ratio data, which account for both weight maintenance and growth, showed the treated apricot seed kernels to be higher in biological value than both Hamawy and Amar kernels; the Net Protein Ratio values for the latter two kernels were nearly equal.

  7. The Impact of Monte Carlo Dose Calculations on Intensity-Modulated Radiation Therapy

    NASA Astrophysics Data System (ADS)

    Siebers, J. V.; Keall, P. J.; Mohan, R.

    The effect of dose calculation accuracy for IMRT was studied by comparing different dose calculation algorithms. A head and neck IMRT plan was optimized using a superposition dose calculation algorithm. Dose was re-computed for the optimized plan using both Monte Carlo and pencil beam dose calculation algorithms to generate patient and phantom dose distributions. Tumor control probabilities (TCP) and normal tissue complication probabilities (NTCP) were computed to estimate the plan outcome. For the treatment plan studied, Monte Carlo best reproduces phantom dose measurements, the TCP was slightly lower than the superposition and pencil beam results, and the NTCP values differed little.
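
    The abstract does not state which outcome models were used; two standard choices for such estimates are the Poisson-LQ tumor control probability and the Lyman-Kutcher-Burman normal tissue complication probability, shown here only as illustrative forms:

```latex
% Illustrative standard models (not necessarily those used in the study).
% Poisson-LQ tumor control probability over DVH bins (v_i, D_i):
\mathrm{TCP} = \prod_i \exp\!\left[-N_0\, v_i\, e^{-\alpha D_i - \beta D_i^2}\right]
% Lyman-Kutcher-Burman normal tissue complication probability:
\mathrm{NTCP} = \Phi\!\left(\frac{D_{\mathrm{eff}} - TD_{50}}{m\, TD_{50}}\right),
\qquad
D_{\mathrm{eff}} = \Big(\sum_i v_i\, D_i^{1/n}\Big)^{\!n}
```

    Here v_i and D_i are the fractional volume and dose of DVH bin i, N_0 is the initial clonogen number, Φ is the standard normal cumulative distribution, and TD_50, m, and n are the usual LKB parameters.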

  8. An introduction to kernel-based learning algorithms.

    PubMed

    Müller, K R; Mika, S; Rätsch, G; Tsuda, K; Schölkopf, B

    2001-01-01

    This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.

  9. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  10. Design of CT reconstruction kernel specifically for clinical lung imaging

    NASA Astrophysics Data System (ADS)

    Cody, Dianna D.; Hsieh, Jiang; Gladish, Gregory W.

    2005-04-01

In this study we developed a new reconstruction kernel specifically for chest CT imaging. An experimental flat-panel CT scanner was used on large dogs to produce "ground-truth" reference chest CT images. These dogs were also examined using a clinical 16-slice CT scanner. We concluded from the dog images acquired on the clinical scanner that the loss of subtle lung structures was due mostly to the presence of the background noise texture when using currently available reconstruction kernels. This qualitative evaluation of the dog CT images prompted the design of a new reconstruction kernel. This new kernel consisted of the combination of a low-pass and a high-pass kernel to produce a new reconstruction kernel, called the "Hybrid" kernel. The performance of this Hybrid kernel fell between the two kernels on which it was based, as expected. This Hybrid kernel was also applied to a set of 50 patient data sets; the analysis of these clinical images is underway. We are hopeful that this Hybrid kernel will produce clinical images with an acceptable tradeoff of lung detail, reliable HU, and image noise.
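
    A toy sketch of the kernel-combination idea: blend a smooth and a sharp reconstruction kernel in the frequency domain; the filter shapes and weight below are invented for illustration and are not the paper's kernel design:

```python
import numpy as np

def hybrid_kernel(low_pass, high_pass, weight=0.5):
    """Blend a smooth and a sharp CT reconstruction kernel in the frequency
    domain (a simplified stand-in for the paper's 'Hybrid' kernel)."""
    return weight * low_pass + (1.0 - weight) * high_pass

# Hypothetical 1D kernel frequency responses on a normalized frequency axis.
f = np.linspace(0, 1, 256)
ramp = f                                            # ideal ramp filter
low_pass = ramp * np.cos(np.pi * f / 2) ** 2        # smooth ("standard") kernel
high_pass = ramp * (1 + 0.8 * f ** 2)               # edge-enhancing ("sharp") kernel
hybrid = hybrid_kernel(low_pass, high_pass, weight=0.5)

# The blended response sits between its parents, trading noise for lung detail.
print(low_pass[128], hybrid[128], high_pass[128])
```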

  11. Quality changes in macadamia kernel between harvest and farm-gate.

    PubMed

    Walton, David A; Wallace, Helen M

    2011-02-01

    Macadamia integrifolia, Macadamia tetraphylla and their hybrids are cultivated for their edible kernels. After harvest, nuts-in-shell are partially dried on-farm and sorted to eliminate poor-quality kernels before consignment to a processor. During these operations, kernel quality may be lost. In this study, macadamia nuts-in-shell were sampled at five points of an on-farm postharvest handling chain from dehusking to the final storage silo to assess quality loss prior to consignment. Shoulder damage, weight of pieces and unsound kernel were assessed for raw kernels, and colour, mottled colour and surface damage for roasted kernels. Shoulder damage, weight of pieces and unsound kernel for raw kernels increased significantly between the dehusker and the final silo. Roasted kernels displayed a significant increase in dark colour, mottled colour and surface damage during on-farm handling. Significant loss of macadamia kernel quality occurred on a commercial farm during sorting and storage of nuts-in-shell before nuts were consigned to a processor. Nuts-in-shell should be dried as quickly as possible and on-farm handling minimised to maintain optimum kernel quality. 2010 Society of Chemical Industry.

  12. Determination of the spatial resolution required for the HEDR dose code. Hanford Environmental Dose Reconstruction Project: Dose code recovery activities, Calculation 007

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Napier, B.A.; Simpson, J.C.

    1992-12-01

A series of scoping calculations has been undertaken to evaluate the doses that may have been received by individuals living in the vicinity of the Hanford site. This scoping calculation (Calculation 007) examined the spatial distribution of potential doses resulting from releases in the year 1945. This study builds on the work initiated in the first scoping calculation, of iodine in cow's milk; the third scoping calculation, which added additional pathways; the fifth calculation, which addressed the uncertainty of the dose estimates at a point; and the sixth calculation, which extrapolated the doses throughout the atmospheric transport domain. A projection of dose to representative individuals throughout the proposed HEDR atmospheric transport domain was prepared on the basis of the HEDR source term. Addressed in this calculation were the contributions to iodine-131 thyroid dose of infants from (1) air submersion and groundshine external dose, (2) inhalation, (3) ingestion of soil by humans, (4) ingestion of leafy vegetables, (5) ingestion of other vegetables and fruits, (6) ingestion of meat, (7) ingestion of eggs, and (8) ingestion of cows' milk from Feeding Regime 1 as described in scoping calculation 001.

  13. Evaluation of the Eclipse eMC algorithm for bolus electron conformal therapy using a standard verification dataset.

    PubMed

    Carver, Robert L; Sprunger, Conrad P; Hogstrom, Kenneth R; Popple, Richard A; Antolak, John A

    2016-05-08

The purpose of this study was to evaluate the accuracy and calculation speed of electron dose distributions calculated by the Eclipse electron Monte Carlo (eMC) algorithm for use with bolus electron conformal therapy (ECT). The recent commercial availability of bolus ECT technology requires further validation of the eMC dose calculation algorithm. eMC-calculated electron dose distributions for bolus ECT have been compared to previously measured TLD-dose points throughout patient-based cylindrical phantoms (retromolar trigone and nose), whose axial cross sections were based on the mid-PTV (planning treatment volume) CT anatomy. The phantoms consisted of SR4 muscle substitute, SR4 bone substitute, and air. The treatment plans were imported into the Eclipse treatment planning system, and electron dose distributions calculated using 1% and < 0.2% statistical uncertainties. The accuracy of the dose calculations using moderate smoothing and no smoothing was evaluated. Dose differences (eMC-calculated less measured dose) were evaluated in terms of absolute dose difference, where 100% equals the given dose, as well as distance to agreement (DTA). Dose calculations were also evaluated for calculation speed. Results from the eMC for the retromolar trigone phantom using 1% statistical uncertainty without smoothing showed that calculated dose at 89% (41/46) of the measured TLD-dose points was within 3% dose difference or 3 mm DTA of the measured value. The average dose difference was -0.21%, and the net standard deviation was 2.32%. Differences as large as 3.7% occurred immediately distal to the mandible bone. Results for the nose phantom, using 1% statistical uncertainty without smoothing, showed that calculated dose at 93% (53/57) of the measured TLD-dose points was within 3% dose difference or 3 mm DTA. The average dose difference was 1.08%, and the net standard deviation was 3.17%. Differences as large as 10% occurred lateral to the nasal air cavities. Including smoothing had insignificant effects on the accuracy of the retromolar trigone phantom calculations, but reduced the accuracy of the nose phantom calculations in the high-gradient dose areas. Dose calculation times with 1% statistical uncertainty for the retromolar trigone and nose treatment plans were 30 s and 24 s, respectively, using 16 processors (Intel Xeon E5-2690, 2.9 GHz) on a framework agent server (FAS). In comparison, the eMC was significantly more accurate than the pencil beam algorithm (PBA). The eMC has comparable accuracy to the pencil beam redefinition algorithm (PBRA) used for bolus ECT planning and has acceptably low dose calculation times. The eMC accuracy decreased when smoothing was used in high-gradient dose regions. The eMC accuracy was consistent with that previously reported for the eMC electron dose algorithm and shows that the algorithm is suitable for clinical implementation of bolus ECT.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

Katsuta, Y; Tohoku University Graduate School of Medicine, Sendai, Miyagi; Kadoya, N

Purpose: In this study, we developed a system to calculate a three-dimensional (3D) dose that reflects the dosimetric error caused by leaf miscalibration for head-and-neck and prostate volumetric modulated arc therapy (VMAT) in real time, without an additional treatment planning system calculation. Methods: An original Clarkson-based dosimetric error calculation system for leaf miscalibration was developed in MATLAB (MathWorks, Natick, MA). Our program first calculates, with Clarkson dose calculation, point doses at the isocenter for the baseline VMAT plan and for a modified plan generated by introducing MLC errors that enlarged the aperture size by 1.0 mm. Second, the error-induced 3D dose is generated by transforming the TPS baseline 3D dose using the calculated point doses. Results: Mean computing time was less than 5 seconds. For seven head-and-neck and prostate plans, the 3D gamma passing rates (0.5%/2 mm, global) between our method and the TPS-calculated error-induced 3D dose were 97.6±0.6% and 98.0±0.4%. The percentage dose changes in the dose-volume histogram parameters were 0.1±0.5% and 0.4±0.3% for mean dose to the target volume, and −0.2±0.5% and 0.2±0.3% for generalized equivalent uniform dose of the target volume. Conclusion: The erroneous 3D dose calculated by our method is useful for checking the dosimetric error caused by leaf miscalibration before pretreatment patient QA dosimetry checks.
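
    A hypothetical sketch of a Clarkson-style sector-integration step applied to a leaf-miscalibration check: the scatter table, aperture radii, and error magnitude are placeholders, and this is not the authors' MATLAB implementation:

```python
import numpy as np

def clarkson_point_dose(sector_radii_cm, sar_table, primary_dose_Gy):
    """Clarkson-style sector integration: average a tabulated scatter term over
    equal angular sectors of the aperture and add it to the primary dose.
    'sar_table' maps sector radius (cm) to a scatter contribution."""
    radii_cm, sar_values = sar_table
    scatter = np.interp(sector_radii_cm, radii_cm, sar_values).mean()
    return primary_dose_Gy * (1.0 + scatter)

# Hypothetical scatter table and MLC-defined sector radii (36 sectors).
sar_table = (np.array([0.0, 2.0, 5.0, 10.0]), np.array([0.0, 0.05, 0.12, 0.20]))
baseline_radii = np.full(36, 4.0)            # cm, nominal aperture
widened_radii = baseline_radii + 0.05        # 1.0 mm aperture enlargement -> +0.5 mm per bank

d_base = clarkson_point_dose(baseline_radii, sar_table, primary_dose_Gy=2.0)
d_err = clarkson_point_dose(widened_radii, sar_table, primary_dose_Gy=2.0)
print(f"isocenter dose change from leaf error: {100 * (d_err / d_base - 1):.2f}%")
```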

  15. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally because an ever-increasing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.

  16. A high-order strong stability preserving Runge-Kutta method for three-dimensional full waveform modeling and inversion of anelastic models

    NASA Astrophysics Data System (ADS)

    Wang, N.; Shen, Y.; Yang, D.; Bao, X.; Li, J.; Zhang, W.

    2017-12-01

Accurate and efficient forward modeling methods are important for high-resolution full waveform inversion. Compared with the elastic case, solving the anelastic wave equation requires more computational time because of the need to compute additional material-independent anelastic functions. A numerical scheme with a large Courant-Friedrichs-Lewy (CFL) condition number enables us to use a large time step to simulate wave propagation, which improves computational efficiency. In this work, we apply the fourth-order strong stability preserving Runge-Kutta method with an optimal CFL coefficient to solve the anelastic wave equation. We use a fourth-order DRP/opt MacCormack scheme for the spatial discretization, and we approximate the rheological behavior of the Earth using the generalized Maxwell body model. With a larger CFL condition number, we find that the computational efficiency is significantly improved compared with the traditional fourth-order Runge-Kutta method. Then, we apply the scattering-integral method for calculating travel time and amplitude sensitivity kernels with respect to velocity and attenuation structures. For each source, we carry out one forward simulation and save the time-dependent strain tensor. For each station, we carry out three 'backward' simulations for the three components and save the corresponding strain tensors. The sensitivity kernels at each point in the medium are the convolution of the two sets of strain tensors. Finally, we show several synthetic tests to verify the effectiveness of the strong stability preserving Runge-Kutta method in generating accurate synthetics in full waveform modeling, and in generating accurate strain tensors for calculating sensitivity kernels at regional and global scales.
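
    For illustration, the convex-combination structure that makes these schemes strong stability preserving is easiest to see in the classic third-order Shu-Osher method (the paper uses a fourth-order variant with an optimized CFL coefficient); the advection operator and parameters below are a toy example:

```python
import numpy as np

def ssp_rk3_step(u, L, dt):
    """One step of the classic third-order strong-stability-preserving
    Runge-Kutta scheme of Shu and Osher, written as convex combinations
    of forward-Euler substeps."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

# Example: linear advection u_t + a u_x = 0 with a periodic first-order upwind operator.
nx, a = 200, 1.0
dx = 1.0 / nx
x = np.arange(nx) * dx
u = np.exp(-200 * (x - 0.5) ** 2)
L = lambda v: -a * (v - np.roll(v, 1)) / dx       # upwind spatial difference
dt = 0.9 * dx / a                                  # CFL number 0.9
for _ in range(100):
    u = ssp_rk3_step(u, L, dt)
print(u.max())
```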

  17. The intrinsic matter bispectrum in ΛCDM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tram, Thomas; Crittenden, Robert; Koyama, Kazuya

    2016-05-01

We present a fully relativistic calculation of the matter bispectrum at second order in cosmological perturbation theory assuming a Gaussian primordial curvature perturbation. For the first time we perform a full numerical integration of the bispectrum for both baryons and cold dark matter using the second-order Einstein-Boltzmann code, SONG. We review previous analytical results and provide an improved analytic approximation for the second-order kernel in Poisson gauge which incorporates Newtonian nonlinear evolution, relativistic initial conditions, the effect of radiation at early times and the cosmological constant at late times. Our improved kernel provides a percent level fit to the full numerical result at late times for most configurations, including both equilateral shapes and the squeezed limit. We show that baryon acoustic oscillations leave an imprint in the matter bispectrum, making a significant impact on squeezed shapes.
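
    For reference, the Newtonian Einstein-de Sitter second-order kernel and the tree-level bispectrum it generates, which the paper's improved relativistic kernel generalizes (textbook expressions, not the paper's result), are

```latex
% Newtonian Einstein-de Sitter second-order density kernel:
F_2(\mathbf{k}_1, \mathbf{k}_2)
  = \frac{5}{7}
  + \frac{\mu}{2}\left(\frac{k_1}{k_2} + \frac{k_2}{k_1}\right)
  + \frac{2}{7}\,\mu^2,
\qquad
\mu = \frac{\mathbf{k}_1 \cdot \mathbf{k}_2}{k_1 k_2}
% Tree-level matter bispectrum built from this kernel:
B(k_1, k_2, k_3) = 2\, F_2(\mathbf{k}_1, \mathbf{k}_2)\, P(k_1)\, P(k_2)
  + \text{2 cyclic permutations}
```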

  18. Online polarimetry of the Nuclotron internal deuteron and proton beams

    NASA Astrophysics Data System (ADS)

    Isupov, A. Yu

    2017-12-01

The spin studies at Nuclotron require fast and precise determination of the deuteron and proton beam polarization. For these purposes a new, powerful VME-based data acquisition (DAQ) system has been designed for the Deuteron Spin Structure setup placed at the Nuclotron Internal Target Station. The DAQ system is built using the netgraph-based data acquisition and processing framework ngdp. The software dealing with the VME hardware is a set of netgraph nodes in the form of loadable kernel modules, so it works in the operating-system kernel context. The nodes specific to the current implementation and the user-context utilities are described. The online representation of events by ROOT classes allows us to generalize the code for histogram filling and polarization calculations. The DAQ system was successfully used during the 53rd and 54th Nuclotron runs, and its suitability for online polarimetry is demonstrated.

  19. An efficient parallel algorithm for matrix-vector multiplication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendrickson, B.; Leland, R.; Plimpton, S.

The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n x n matrix on p processors, the communication cost of this algorithm is O(n/√p + log(p)), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.

  20. Calculation of Organ Doses for a Large Number of Patients Undergoing CT Examinations.

    PubMed

    Bahadori, Amir; Miglioretti, Diana; Kruger, Randell; Flynn, Michael; Weinmann, Sheila; Smith-Bindman, Rebecca; Lee, Choonsik

    2015-10-01

    The objective of our study was to develop an automated calculation method to provide organ dose assessment for a large cohort of pediatric and adult patients undergoing CT examinations. We adopted two dose libraries that were previously published: the volume CT dose index-normalized organ dose library and the tube current-exposure time product (100 mAs)-normalized weighted CT dose index library. We developed an algorithm to calculate organ doses using the two dose libraries and the CT parameters available from DICOM data. We calculated organ doses for pediatric (n = 2499) and adult (n = 2043) CT examinations randomly selected from four health care systems in the United States and compared the adult organ doses with the values calculated from the ImPACT calculator. The median brain dose was 20 mGy (pediatric) and 24 mGy (adult), and the brain dose was greater than 40 mGy for 11% (pediatric) and 18% (adult) of the head CT studies. Both the National Cancer Institute (NCI) and ImPACT methods provided similar organ doses (median discrepancy < 20%) for all organs except the organs located close to the scanning boundaries. The visual comparisons of scanning coverage and phantom anatomies revealed that the NCI method, which is based on realistic computational phantoms, provides more accurate organ doses than the ImPACT method. The automated organ dose calculation method developed in this study reduces the time needed to calculate doses for a large number of patients. We have successfully used this method for a variety of CT-related studies including retrospective epidemiologic studies and CT dose trend analysis studies.
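
    At its core, the library-based method multiplies a scan-specific CTDIvol by a phantom-derived, organ-specific conversion coefficient. The sketch below illustrates that lookup-and-multiply step only; the coefficient values, dictionary keys, and function names are invented placeholders, not entries from the published NCI libraries.

    ```python
    # Hypothetical CTDIvol-normalized organ dose coefficients (mGy per mGy of CTDIvol),
    # indexed by (phantom, organ). Real values come from the published dose library.
    ORGAN_COEFF = {
        ("adult_male", "brain"): 0.95,
        ("adult_male", "lung"): 0.02,
    }

    def organ_dose(phantom, organ, ctdi_vol_mGy):
        """Organ dose estimate: library coefficient times the scan's CTDIvol."""
        return ORGAN_COEFF[(phantom, organ)] * ctdi_vol_mGy

    # Example: a head CT reporting a CTDIvol of 45 mGy
    print(organ_dose("adult_male", "brain", 45.0))   # about 42.8 mGy
    ```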

  1. Lung dynamic MRI deblurring using low-rank decomposition and dictionary learning.

    PubMed

    Gou, Shuiping; Wang, Yueyue; Wu, Jiaolong; Lee, Percy; Sheng, Ke

    2015-04-01

    Lung dynamic MRI (dMRI) has emerged as an appealing tool to quantify lung motion for both planning and treatment guidance purposes. However, this modality can result in blurry images due to the intrinsically low signal-to-noise ratio in the lung and spatial/temporal interpolation. The image blurring could adversely affect image processing that depends on the availability of fine landmarks. The purpose of this study is to reduce dMRI blurring using image postprocessing. To enhance the image quality and exploit the spatiotemporal continuity of dMRI sequences, a low-rank decomposition and dictionary learning (LDDL) method was employed to deblur lung dMRI and enhance the conspicuity of lung blood vessels. Fifty continuous 2D coronal dMRI frames acquired with a steady state free precession sequence were obtained from five subjects, including two healthy volunteers and three lung cancer patients. In LDDL, the lung dMRI was decomposed into sparse and low-rank components. Dictionary learning was employed to estimate the blurring kernel based on the whole image, the low-rank component, or the sparse component of the first image in the lung MRI sequence. Deblurring was performed on the whole image sequence using deconvolution based on the estimated blur kernel. The deblurring results were quantified using an automated blood vessel extraction method based on the classification of Hessian matrix filtered images. Accuracy of automated extraction was calculated using manual segmentation of the blood vessels as the ground truth. In the pilot study, LDDL based on the blurring kernel estimated from the sparse component led to performance superior to the other ways of kernel estimation. LDDL consistently improved image contrast and fine feature conspicuity of the original MRI without introducing artifacts. The accuracy of automated blood vessel extraction was on average increased by 16% using manual segmentation as the ground truth. Image blurring in dMRI images can be effectively reduced using a low-rank decomposition and dictionary learning method with kernels estimated from the sparse component.
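
    As a crude illustration of the low-rank/sparse split that LDDL builds on (not the authors' implementation, which couples the split with dictionary learning), one can take a truncated SVD of the stacked frame matrix as the low-rank part and soft-threshold the residual as the sparse part:

    ```python
    import numpy as np

    def low_rank_sparse_split(frames, rank=3, thresh=0.05):
        """frames: array (n_frames, ny, nx). Returns (low_rank, sparse), same shape.

        One-shot split: truncated SVD for the slowly varying background,
        soft-thresholded residual for the sparse (fine-detail) component.
        """
        n, ny, nx = frames.shape
        X = frames.reshape(n, -1)                          # each row is one frame
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]           # rank-r approximation
        R = X - L
        S = np.sign(R) * np.maximum(np.abs(R) - thresh, 0.0)   # soft threshold
        return L.reshape(n, ny, nx), S.reshape(n, ny, nx)
    ```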

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iwai, P; Lins, L Nadler

    Purpose: There is a lack of studies with significant cohort data about patients using a pacemaker (PM), implanted cardioverter defibrillator (ICD) or cardiac resynchronization therapy (CRT) device undergoing radiotherapy. There is no literature comparing the cumulative doses delivered to those cardiac implanted electronic devices (CIED) calculated by different algorithms, nor studies comparing doses with and without heterogeneity correction. The aim of this study was to evaluate the influence of the algorithms Pencil Beam Convolution (PBC), Analytical Anisotropic Algorithm (AAA) and Acuros XB (AXB), as well as heterogeneity correction, on risk categorization of patients. Methods: A retrospective analysis of 19 3DCRT or IMRT plans of 17 patients was conducted, calculating the dose delivered to the CIED using three different calculation algorithms. Doses were evaluated with and without heterogeneity correction for comparison. Risk categorization of the patients was based on their CIED dependency and the cumulative dose in the devices. Results: Total estimated doses at the CIED calculated by AAA or AXB were higher than those calculated by PBC in 56% of the cases. On average, the doses at the CIED calculated by AAA and AXB were higher than those calculated by PBC (29% and 4% higher, respectively). The maximum difference between doses calculated by each algorithm was about 1 Gy, with or without heterogeneity correction. Maximum dose values calculated with heterogeneity correction were equal to or higher than those obtained without heterogeneity correction in 84% of the cases with PBC, 77% with AAA and 67% with AXB. Conclusion: The dose calculation algorithm and heterogeneity correction did not change the risk categorization. Since higher estimated doses delivered to the CIED do not compromise the treatment precautions to be taken, it is recommended that the most sophisticated algorithm available be used, with heterogeneity correction, to predict the dose at the CIED.

  3. On the thermodynamic properties of thermal plasma in the flame kernel of hydrocarbon/air premixed gases

    NASA Astrophysics Data System (ADS)

    Askari, Omid; Beretta, Gian Paolo; Eisazadeh-Far, Kian; Metghalchi, Hameed

    2016-07-01

    Thermodynamic properties of hydrocarbon/air plasma mixtures at ultra-high temperatures must be precisely calculated due to their important influence on flame kernel formation and propagation in combusting flows and spark discharge applications. A new algorithm based on the complete chemical equilibrium assumption is developed to calculate the ultra-high temperature plasma composition and thermodynamic properties, including enthalpy, entropy, Gibbs free energy, specific heat at constant pressure, specific heat ratio, speed of sound, mean molar mass, and degree of ionization. The method is applied to compute the thermodynamic properties of H2/air and CH4/air plasma mixtures for different temperatures (1000-100 000 K), different pressures (10^-6 to 100 atm), and different fuel/air equivalence ratios within the flammability limit. In calculating the individual thermodynamic properties of the atomic species needed to compute the complete equilibrium composition, the Debye-Huckel cutoff criterion has been used for terminating the series expression of the electronic partition function, so as to capture the pressure-dependent reduction of the ionization potential and the strong dependence of the electronic partition function, and hence of the thermodynamic properties of the atomic species, on the number of energy levels taken into account. Partition functions have been calculated using tabulated data for available atomic energy levels. The Rydberg and Ritz extrapolation and interpolation laws have been used for energy levels which are not observed. The calculated plasma properties are then presented as functions of temperature, pressure and equivalence ratio, in terms of a new set of thermodynamically self-consistent correlations that are shown to provide very accurate fits suitable for efficient use in CFD simulations. Comparisons with existing data for air plasma show excellent agreement.
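
    A minimal sketch of the truncated partition-function evaluation described here, with a simple fixed energy cutoff standing in for the Debye-Huckel criterion; the level data, the cutoff value, and the function name are placeholders for illustration.

    ```python
    import numpy as np

    K_B = 8.617333262e-5  # Boltzmann constant in eV/K

    def electronic_partition_function(g, E, T, E_cut):
        """Z = sum_i g_i * exp(-E_i / kT), truncated at the cutoff energy E_cut (eV).

        g, E : statistical weights and level energies (eV) of one atomic species.
        In the paper the cutoff follows the Debye-Huckel lowering of the
        ionization potential; here it is simply passed in.
        """
        g = np.asarray(g, dtype=float)
        E = np.asarray(E, dtype=float)
        keep = E < E_cut
        return np.sum(g[keep] * np.exp(-E[keep] / (K_B * T)))

    # Toy example with made-up levels for an atom at 15,000 K
    print(electronic_partition_function([2, 6, 10], [0.0, 10.2, 12.1], 15000.0, 13.0))
    ```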

  4. Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima blume)

    NASA Astrophysics Data System (ADS)

    Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.

    2016-08-01

    Anthraquinones (AQS) represent a group of secondary metabolites in plants. AQS occur naturally in plants and microorganisms. In a previous study, we found that AQS were produced by the enzymatic browning reaction in Chinese chestnut kernels. To find out whether the non-enzymatic browning reaction in the kernels could also produce AQS, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatically browned kernels, and browned kernels, and the contents of AQS were determined. High performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained a high amount of AQS. Thus, we confirmed that AQS could be produced during both the enzymatic and non-enzymatic browning processes. Rhein and emodin were the main components of AQS in the browned kernels.

  5. Considerations for applying VARSKIN mod 2 to skin dose calculations averaged over 10 cm2.

    PubMed

    Durham, James S

    2004-02-01

    VARSKIN Mod 2 is a DOS-based computer program that calculates the dose to skin from beta and gamma contamination either directly on skin or on material in contact with skin. The default area for calculating the dose is 1 cm2. Recently, the U.S. Nuclear Regulatory Commission issued new guidelines for calculating shallow dose equivalent from skin contamination that require the dose to be averaged over 10 cm2. VARSKIN Mod 2 was not fully designed to calculate beta or gamma dose estimates averaged over 10 cm2, even though the program allows the user to calculate doses averaged over 10 cm2. This article explains why VARSKIN Mod 2 overestimates the beta dose when applied to 10 cm2 areas, describes a manual method for correcting the overestimate, and explains how to perform reasonable gamma dose calculations averaged over 10 cm2. The article also describes upgrades underway in Varskin 3.

  6. Green's functions for dislocations in bonded strips and related crack problems

    NASA Technical Reports Server (NTRS)

    Ballarini, R.; Luo, H. A.

    1990-01-01

    Green's functions are derived for the plane elastostatics problem of a dislocation in a bimaterial strip. Using these fundamental solutions as kernels, various problems involving cracks in a bimaterial strip are analyzed using singular integral equations. For each problem considered, stress intensity factors are calculated for several combinations of the parameters which describe loading, geometry and material mismatch.

  7. Macroscopic and microscopic components of exchange-correlation interactions

    NASA Astrophysics Data System (ADS)

    Sottile, F.; Karlsson, K.; Reining, L.; Aryasetiawan, F.

    2003-11-01

    We consider two commonly used approaches for the ab initio calculation of optical-absorption spectra, namely, many-body perturbation theory based on Green’s functions and time-dependent density-functional theory (TDDFT). The former leads to the two-particle Bethe-Salpeter equation that contains a screened electron-hole interaction. We approximate this interaction in various ways, and discuss in particular the results obtained for a local contact potential. This, in fact, allows us to straightforwardly make the link to the TDDFT approach, and to discuss the exchange-correlation kernel fxc that corresponds to the contact exciton. Our main results, illustrated in the examples of bulk silicon, GaAs, argon, and LiF, are the following. (i) The simple contact exciton model, used on top of an ab initio calculated band structure, yields reasonable absorption spectra. (ii) Qualitatively extremely different fxc can be derived approximatively from the same Bethe-Salpeter equation. These kernels can however yield very similar spectra. (iii) A static fxc, both with or without a long-range component, can create transitions in the quasiparticle gap. To the best of our knowledge, this is the first time that TDDFT has been shown to be able to reproduce bound excitons.

  8. Preliminary scattering kernels for ethane and triphenylmethane at cryogenic temperatures

    NASA Astrophysics Data System (ADS)

    Cantargi, F.; Granada, J. R.; Damián, J. I. Márquez

    2017-09-01

    Two potential cold moderator materials were studied: ethane and triphenylmethane. The first, ethane (C2H6), is an organic compound which is very interesting from the neutronic point of view, in some respects better than liquid methane for producing subthermal neutrons, not only because it remains in the liquid phase over a wider temperature range (Tf = 90.4 K, Tb = 184.6 K), but also because of its high protonic density together with a frequency spectrum with a low rotational energy band. The second, triphenylmethane, is a hydrocarbon with formula C19H16 which has already been proposed as a good candidate for a cold moderator. Following one of the main research topics of the Neutron Physics Department of Centro Atómico Bariloche, we present here two ways to estimate the frequency spectrum which is needed to feed the NJOY nuclear data processing system in order to generate the scattering law of each desired material. For ethane, molecular dynamics computer simulations were performed, while for triphenylmethane existing experimental and calculated data were used to produce a new scattering kernel. With these models, cross section libraries were generated and applied to neutron spectra calculation.

  9. Broken rice kernels and the kinetics of rice hydration and texture during cooking.

    PubMed

    Saleh, Mohammed; Meullenet, Jean-Francois

    2013-05-01

    During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear as to how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work intended to study the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels forming treatments of 0, 40, 150, 350 or 1000 g kg(-1) broken kernels ratio. Rice samples were then cooked and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, rice sample texture became increasingly softer (P < 0.05) but the unbroken kernels became significantly harder. Moisture content and moisture uptake rate were positively correlated, and cooked rice hardness was negatively correlated to the percentage of broken kernels in rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreiner, S.; Paschal, C.B.; Galloway, R.L.

    Four methods of producing maximum intensity projection (MIP) images were studied and compared. Three of the projection methods differ in the interpolation kernel used for ray tracing. The interpolation kernels include nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation. The fourth projection method is a voxel projection method that is not explicitly a ray-tracing technique. The four algorithms' performance was evaluated using a computer-generated model of a vessel and using real MR angiography data. The evaluation centered around how well an algorithm transferred an object's width to the projection plane. The voxel projection algorithm does not suffer from artifacts associated with the nearest neighbor algorithm. Also, a speed-up in the calculation of the projection is seen with the voxel projection method. Linear interpolation dramatically improves the transfer of width information from the 3D MRA data set over both nearest neighbor and voxel projection methods. Even though the cubic convolution interpolation kernel is theoretically superior to the linear kernel, it did not project widths more accurately than linear interpolation. A possible advantage to the nearest neighbor interpolation is that the size of small vessels tends to be exaggerated in the projection plane, thereby increasing their visibility. The results confirm that the way in which an MIP image is constructed has a dramatic effect on the information contained in the projection. The construction method must be chosen with the knowledge that the clinical information in the 2D projections in general will be different from that contained in the original 3D data volume. 27 refs., 16 figs., 2 tabs.
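
    To make the interpolation-kernel comparison concrete, the sketch below samples one ray through a 3D volume with scipy's map_coordinates, where spline order 0, 1, and 3 loosely stand in for nearest-neighbor, linear, and cubic interpolation, and takes the maximum along the ray. This is a simplified stand-in for the projection methods above, not the paper's implementation.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def mip_along_ray(volume, start, direction, n_samples=256, order=1):
        """Maximum intensity along one ray through `volume` (z, y, x order).

        start, direction : 3-vectors in voxel coordinates. `order` selects the
        interpolation kernel: 0 = nearest neighbor, 1 = linear, 3 = cubic.
        """
        start = np.asarray(start, dtype=float)
        direction = np.asarray(direction, dtype=float)
        direction = direction / np.linalg.norm(direction)
        t = np.arange(n_samples, dtype=float)
        pts = start[:, None] + direction[:, None] * t      # shape (3, n_samples)
        samples = map_coordinates(volume, pts, order=order, mode='constant')
        return samples.max()

    # A full MIP image repeats this for one ray per pixel of the projection plane.
    ```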

  11. Testing in Microbiome-Profiling Studies with MiRKAT, the Microbiome Regression-Based Kernel Association Test

    PubMed Central

    Zhao, Ni; Chen, Jun; Carroll, Ian M.; Ringel-Kulka, Tamar; Epstein, Michael P.; Zhou, Hua; Zhou, Jin J.; Ringel, Yehuda; Li, Hongzhe; Wu, Michael C.

    2015-01-01

    High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Distance-based analysis is a popular strategy for evaluating the overall association between microbiome diversity and outcome, wherein the phylogenetic distance between individuals’ microbiome profiles is computed and tested for association via permutation. Despite their practical popularity, distance-based approaches suffer from important challenges, especially in selecting the best distance and extending the methods to alternative outcomes, such as survival outcomes. We propose the microbiome regression-based kernel association test (MiRKAT), which directly regresses the outcome on the microbiome profiles via the semi-parametric kernel machine regression framework. MiRKAT allows for easy covariate adjustment and extension to alternative outcomes while non-parametrically modeling the microbiome through a kernel that incorporates phylogenetic distance. It uses a variance-component score statistic to test for the association with analytical p value calculation. The model also allows simultaneous examination of multiple distances, alleviating the problem of choosing the best distance. Our simulations demonstrated that MiRKAT provides correctly controlled type I error and adequate power in detecting overall association. “Optimal” MiRKAT, which considers multiple candidate distances, is robust in that it suffers from little power loss in comparison to when the best distance is used and can achieve tremendous power gain in comparison to when a poor distance is chosen. Finally, we applied MiRKAT to real microbiome datasets to show that microbial communities are associated with smoking and with fecal protease levels after confounders are controlled for. PMID:25957468
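
    A minimal sketch of the two ingredients at the heart of MiRKAT: converting a pairwise distance matrix into a kernel by Gower centering, and forming an (unadjusted) variance-component score statistic. Covariate adjustment and the analytical p value calculation of the actual package are omitted; this is an illustration only.

    ```python
    import numpy as np

    def distance_to_kernel(D):
        """Kernel from an n x n distance matrix: K = -0.5 * J D^2 J,
        where J = I - 11'/n is the centering matrix (Gower's transformation)."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        return -0.5 * J @ (D ** 2) @ J

    def score_statistic(y, K):
        """Unadjusted variance-component score statistic Q = r' K r,
        where r are the residuals of the outcome about its mean."""
        r = y - y.mean()
        return float(r @ K @ r)
    ```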

  12. QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility

    NASA Astrophysics Data System (ADS)

    Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.

    2013-11-01

    One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps (i.e., the spatial probability of a future vent opening given the past eruptive activity of a volcano). This challenging issue is generally tackled using probabilistic methods that use the calculation of a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source geographic information system Quantum GIS, which is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the selection of an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input data sets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is here shown through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
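
    A minimal sketch of the kernel density estimation step underlying such susceptibility maps: an isotropic Gaussian kernel with a user-supplied bandwidth evaluated on a grid of candidate vent locations. The bandwidth-selection methods offered by QVAST are not reproduced; names and shapes are assumptions.

    ```python
    import numpy as np

    def gaussian_kde_2d(vents_xy, grid_xy, bandwidth):
        """Spatial PDF of future vent opening estimated from past vent locations.

        vents_xy : (n, 2) past vent coordinates; grid_xy : (m, 2) evaluation points;
        bandwidth : smoothing parameter h of the isotropic Gaussian kernel.
        """
        diff = grid_xy[:, None, :] - vents_xy[None, :, :]          # (m, n, 2)
        sq_dist = np.sum(diff ** 2, axis=-1)                       # (m, n)
        k = np.exp(-0.5 * sq_dist / bandwidth ** 2)
        return k.sum(axis=1) / (2.0 * np.pi * bandwidth ** 2 * len(vents_xy))
    ```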

  13. Monte Carlo evaluation of RapidArc™ oropharynx treatment planning strategies for sparing of midline structures

    NASA Astrophysics Data System (ADS)

    Bush, K.; Zavgorodni, S.; Gagne, I.; Townson, R.; Ansbacher, W.; Beckham, W.

    2010-08-01

    The aim of the study was to perform the Monte Carlo (MC) evaluation of RapidArc™ (Varian Medical Systems, Palo Alto, CA) dose calculations for four oropharynx midline sparing planning strategies. Six patients with squamous cell cancer of the oropharynx were each planned with four RapidArc head and neck treatment strategies consisting of single and double photon arcs. In each case, RTOG0522 protocol objectives were used during planning optimization. Dose calculations performed with the analytical anisotropic algorithm (AAA) are compared against BEAMnrc/DOSXYZnrc dose calculations for the 24-plan dataset. Mean dose and dose-to-98%-of-structure-volume (D98%) were used as metrics in the evaluation of dose to planning target volumes (PTVs). Mean dose and dose-to-2%-of-structure-volume (D2%) were used to evaluate dose differences within organs at risk (OAR). Differences in the conformity index (CI) and the homogeneity index (HI) as well as 3D dose distributions were also observed. AAA calculated PTV mean dose, D98%, and HIs showed very good agreement with MC dose calculations within the 0.8% MC (statistical) calculation uncertainty. Regional node volume (PTV-80%) mean dose and D98% were found to be overestimated (1.3%, σ = 0.8% and 2.3%, σ = 0.8%, respectively) by the AAA with respect to MC calculations. Mean dose and D2% to OAR were also observed to be consistently overestimated by the AAA. Increasing dose calculation differences were found in planning strategies exhibiting a higher overall fluence modulation. From the plan dataset, the largest local dose differences were observed in heavily shielded regions and within the esophageal and sinus cavities. AAA dose calculations as implemented in RapidArc™ demonstrate excellent agreement with MC calculations in unshielded regions containing moderate inhomogeneities. Acceptable agreement is achieved in regions of increased MLC shielding. Differences in dose are attributed to inaccuracies in the AAA-modulated fluence modeling, modeling of material inhomogeneities and dose deposition within low-density materials. The use of MC dose calculations leads to the same general conclusion as using AAA that a two arc delivery with limited collimator opening can provide the greatest amount of midline sparing compared to the other techniques investigated.

  14. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels, and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show a clear gain compared to several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.

  15. Multineuron spike train analysis with R-convolution linear combination kernel.

    PubMed

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Predicting complex traits using a diffusion kernel on genetic markers with an application to dairy cattle and wheat data

    PubMed Central

    2013-01-01

    Background Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
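
    For context, a generic diffusion kernel over any graph with Laplacian L is the matrix exponential K = exp(-beta * L); the paper's kernel is a particular instance defined on the discrete space of marker genotypes. The sketch below shows only the generic construction, with the graph and the beta value left as assumptions.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def diffusion_kernel(adjacency, beta=1.0):
        """Diffusion kernel K = expm(-beta * L) for a graph given by its
        adjacency matrix, where L is the combinatorial graph Laplacian."""
        A = np.asarray(adjacency, dtype=float)
        L = np.diag(A.sum(axis=1)) - A
        return expm(-beta * L)
    ```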

  17. Influence of dose calculation algorithms on the predicted dose distribution and NTCP values for NSCLC patients.

    PubMed

    Nielsen, Tine B; Wieslander, Elinore; Fogliata, Antonella; Nielsen, Morten; Hansen, Olfred; Brink, Carsten

    2011-05-01

    To investigate differences in calculated doses and normal tissue complication probability (NTCP) values between different dose algorithms. Six dose algorithms from four different treatment planning systems were investigated: Eclipse AAA, Oncentra MasterPlan Collapsed Cone and Pencil Beam, Pinnacle Collapsed Cone, and XiO Multigrid Superposition and Fast Fourier Transform Convolution. Twenty NSCLC patients treated in the period 2001-2006 at the same accelerator were included, and the accelerator used for the treatments was modeled in the different systems. The treatment plans were recalculated with the same number of monitor units and beam arrangements across the dose algorithms. Dose volume histograms of the GTV, PTV, combined lungs (excluding the GTV), and heart were exported and evaluated. NTCP values for heart and lungs were calculated using the relative seriality model and the LKB model, respectively. Furthermore, NTCP for the lungs was calculated from two different model parameter sets. Calculations and evaluations were performed both including and excluding density corrections. Statistically significant differences were found between the calculated doses to heart, lung, and targets across the algorithms. Mean lung dose and V20 are not very sensitive to a change between the investigated dose calculation algorithms. However, the dose levels for the PTV averaged over the patient population vary by up to 11%. The predicted NTCP values for pneumonitis vary between 0.20 and 0.24 or 0.35 and 0.48 across the investigated dose algorithms, depending on the chosen model parameter set. The influence of the use of density correction in the dose calculation on the predicted NTCP values depends on the specific dose calculation algorithm and the model parameter set. For fixed values of these, the changes in NTCP can be up to 45%. Calculated NTCP values for pneumonitis are more sensitive to the choice of algorithm than mean lung dose and V20, which are also commonly used for plan evaluation. The NTCP values for heart complication are, in this study, not very sensitive to the choice of algorithm. Dose calculations based on density corrections result in quite different NTCP values than calculations without density corrections. It is therefore important when working with NTCP planning to use NTCP parameter values based on calculations and treatments similar to those for which the NTCP is of interest.
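
    For reference, a minimal sketch of the LKB model used here for lung NTCP: a generalized EUD reduction of the DVH followed by a probit function. The parameter values and the toy DVH below are placeholders, not the parameter sets used in the study.

    ```python
    import numpy as np
    from scipy.stats import norm

    def lkb_ntcp(doses, volumes, n, m, td50):
        """LKB NTCP from a differential DVH.

        doses   : dose bin values (Gy); volumes : fractional volume per bin (sums to 1)
        n, m, td50 : LKB volume-effect, slope, and 50% tolerance-dose parameters.
        """
        geud = np.sum(volumes * doses ** (1.0 / n)) ** n      # generalized EUD
        t = (geud - td50) / (m * td50)
        return norm.cdf(t)

    # Toy DVH: half the lung at 5 Gy, half at 20 Gy, with placeholder parameters
    print(lkb_ntcp(np.array([5.0, 20.0]), np.array([0.5, 0.5]), n=1.0, m=0.4, td50=30.0))
    ```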

  18. Patient-specific CT dosimetry calculation: a feasibility study.

    PubMed

    Fearon, Thomas; Xie, Huchen; Cheng, Jason Y; Ning, Holly; Zhuge, Ying; Miller, Robert W

    2011-11-15

    Current estimation of radiation dose from computed tomography (CT) scans on patients has relied on the measurement of Computed Tomography Dose Index (CTDI) in standard cylindrical phantoms, and calculations based on mathematical representations of "standard man". Radiation dose to both adult and pediatric patients from a CT scan has been a concern, as noted in recent reports. The purpose of this study was to investigate the feasibility of adapting a radiation treatment planning system (RTPS) to provide patient-specific CT dosimetry. A radiation treatment planning system was modified to calculate patient-specific CT dose distributions, which can be represented by dose at specific points within an organ of interest, as well as organ dose-volumes (after image segmentation) for a GE Light Speed Ultra Plus CT scanner. The RTPS calculation algorithm is based on a semi-empirical, measured correction-based algorithm, which has been well established in the radiotherapy community. Digital representations of the physical phantoms (virtual phantom) were acquired with the GE CT scanner in axial mode. Thermoluminescent dosimeter (TLDs) measurements in pediatric anthropomorphic phantoms were utilized to validate the dose at specific points within organs of interest relative to RTPS calculations and Monte Carlo simulations of the same virtual phantoms (digital representation). Congruence of the calculated and measured point doses for the same physical anthropomorphic phantom geometry was used to verify the feasibility of the method. The RTPS algorithm can be extended to calculate the organ dose by calculating a dose distribution point-by-point for a designated volume. Electron Gamma Shower (EGSnrc) codes for radiation transport calculations developed by National Research Council of Canada (NRCC) were utilized to perform the Monte Carlo (MC) simulation. In general, the RTPS and MC dose calculations are within 10% of the TLD measurements for the infant and child chest scans. With respect to the dose comparisons for the head, the RTPS dose calculations are slightly higher (10%-20%) than the TLD measurements, while the MC results were within 10% of the TLD measurements. The advantage of the algebraic dose calculation engine of the RTPS is a substantially reduced computation time (minutes vs. days) relative to Monte Carlo calculations, as well as providing patient-specific dose estimation. It also provides the basis for a more elaborate reporting of dosimetric results, such as patient specific organ dose volumes after image segmentation.

  19. 7 CFR 981.9 - Kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...

  20. An SVM model with hybrid kernels for hydrological time series

    NASA Astrophysics Data System (ADS)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use one single type of kernel function, e.g., radial basis kernel function. Provided that there are several featured kernel functions available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of radial basis kernel and polynomial kernel for the forecast of monthly flowrate in two gaging stations using SVM approach. The results indicate significant improvement in the accuracy of predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such hybrid kernel approach for SVM applications.
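
    A minimal sketch of such a hybrid kernel, a fixed-weight linear combination of an RBF and a polynomial kernel, plugged into scikit-learn's SVR as a custom kernel callable. The weight, hyperparameters, and the toy flow-like series are placeholders, not the paper's configuration.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

    def hybrid_kernel(X, Y, w=0.7, gamma=0.1, degree=2):
        """K = w * RBF + (1 - w) * polynomial; a convex combination of two
        positive semi-definite kernels is itself a valid kernel."""
        return w * rbf_kernel(X, Y, gamma=gamma) + (1 - w) * polynomial_kernel(X, Y, degree=degree)

    # Toy monthly-flow style data: predict the next value from the previous three
    rng = np.random.default_rng(0)
    series = np.sin(np.arange(120) / 6.0) + 0.1 * rng.standard_normal(120)
    X = np.array([series[i:i + 3] for i in range(117)])
    y = series[3:]
    model = SVR(kernel=hybrid_kernel, C=10.0).fit(X, y)
    print(model.predict(X[:5]))
    ```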

  1. Improving spot-scanning proton therapy patient specific quality assurance with HPlusQA, a second-check dose calculation engine.

    PubMed

    Mackin, Dennis; Li, Yupeng; Taylor, Michael B; Kerr, Matthew; Holmes, Charles; Sahoo, Narayan; Poenisch, Falk; Li, Heng; Lii, Jim; Amos, Richard; Wu, Richard; Suzuki, Kazumichi; Gillin, Michael T; Zhu, X Ronald; Zhang, Xiaodong

    2013-12-01

    The purpose of this study was to validate the use of HPlusQA, spot-scanning proton therapy (SSPT) dose calculation software developed at The University of Texas MD Anderson Cancer Center, as second-check dose calculation software for patient-specific quality assurance (PSQA). The authors also showed how HPlusQA can be used within the current PSQA framework. The authors compared the dose calculations of HPlusQA and the Eclipse treatment planning system with 106 planar dose measurements made as part of PSQA. To determine the relative performance and the degree of correlation between HPlusQA and Eclipse, the authors compared calculated with measured point doses. Then, to determine how well HPlusQA can predict when the comparisons between Eclipse calculations and the measured dose will exceed tolerance levels, the authors compared gamma index scores for HPlusQA versus Eclipse with those of measured doses versus Eclipse. The authors introduce the αβγ transformation as a way to more easily compare gamma scores. The authors compared measured and calculated dose planes using the relative depth, z∕R × 100%, where z is the depth of the measurement and R is the proton beam range. For relative depths less than 80%, both Eclipse and HPlusQA calculations were within 2 cGy of dose measurements on average. When the relative depth was greater than 80%, the agreement between the calculations and measurements fell to 4 cGy. For relative depths less than 10%, the Eclipse and HPlusQA dose discrepancies showed a negative correlation, -0.21. Otherwise, the correlation between the dose discrepancies was positive and as large as 0.6. For the dose planes in this study, HPlusQA correctly predicted when Eclipse had and had not calculated the dose to within tolerance 92% and 79% of the time, respectively. In 4 of 106 cases, HPlusQA failed to predict when the comparison between measurement and Eclipse's calculation had exceeded the tolerance levels of 3% for dose and 3 mm for distance-to-agreement. The authors found HPlusQA to be reasonably effective (79% ± 10%) in determining when the comparison between measured dose planes and the dose planes calculated by the Eclipse treatment planning system had exceeded the acceptable tolerance levels. When used as described in this study, HPlusQA can reduce the need for patient specific quality assurance measurements by 64%. The authors believe that the use of HPlusQA as a dose calculation second check can increase the efficiency and effectiveness of the QA process.

  2. Multiple kernels learning-based biological entity relationship extraction method.

    PubMed

    Dongliang, Xu; Jingchang, Pan; Bailing, Wang

    2017-09-20

    Automatically extracting protein entity interaction information from the biomedical literature can help to build protein relation networks and design new drugs. More than 20 million literature abstracts are included in MEDLINE, the most authoritative textual database in the field of biomedicine, and the collection grows exponentially over time. This rapid expansion of the biomedical literature can be difficult to absorb or analyze manually. Thus, efficient and automated search engines are necessary to explore the biomedical literature using text mining techniques. The P, R, and F values of the tag graph method on the AIMed corpus are 50.82, 69.76, and 58.61%, respectively. The P, R, and F values of the tag graph kernel method on the other four evaluation corpora are 2-5% higher than those of the all-paths graph kernel. The P, R, and F values of the two fusion methods of the feature kernel and tag graph kernel are 53.43, 71.62, and 61.30% and 55.47, 70.29, and 60.37%, respectively, indicating that the performance of the two kernel fusion methods is better than that of a single kernel. In comparison with the all-paths graph kernel method, the tag graph kernel method is superior in terms of overall performance. Experiments show that the performance of the multi-kernel method is better than that of the three separate single-kernel methods and the dual-mutually fused kernel method used here on five corpus sets.

  3. Influence of CT contrast agent on dose calculation of intensity modulated radiation therapy plan for nasopharyngeal carcinoma.

    PubMed

    Lee, F K-H; Chan, C C-L; Law, C-K

    2009-02-01

    Contrast enhanced computed tomography (CECT) has been used for delineation of treatment targets in radiotherapy. The change in Hounsfield units due to the injected contrast agent may affect radiation dose calculation. We investigated this effect on intensity modulated radiotherapy (IMRT) of nasopharyngeal carcinoma (NPC). Dose distributions of 15 IMRT plans were recalculated on CECT. Dose statistics for organs at risk (OAR) and treatment targets were recorded for the plain CT-calculated and CECT-calculated plans. Statistical significance of the differences was evaluated. Correlations were also tested among the magnitude of the calculated dose difference, tumor size and level of contrast enhancement. Differences in nodal mean/median dose were statistically significant, but small (approximately 0.15 Gy for a 66 Gy prescription). In the vicinity of the carotid arteries, the difference in calculated dose was also statistically significant, but only with a mean of approximately 0.2 Gy. We did not observe any significant correlation between the difference in the calculated dose and the tumor size or level of enhancement. The results implied that the calculated dose difference was clinically insignificant and may be acceptable for IMRT planning.

  4. 7 CFR 51.2295 - Half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...

  5. 7 CFR 810.206 - Grades and grade requirements for barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... weight per bushel (pounds) Sound barley (percent) Maximum Limits of— Damaged kernels 1 (percent) Heat damaged kernels (percent) Foreign material (percent) Broken kernels (percent) Thin barley (percent) U.S... or otherwise of distinctly low quality. 1 Includes heat-damaged kernels. Injured-by-frost kernels and...

  6. Determination of the spatial resolution required for the HEDR dose code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Napier, B.A.; Simpson, J.C.

    1992-12-01

    A series of scoping calculations has been undertaken to evaluate the doses that may have been received by individuals living in the vicinity of the Hanford site. This scoping calculation (Calculation 007) examined the spatial distribution of potential doses resulting from releases in the year 1945. This study builds on the work initiated in the first scoping calculation, of iodine in cow's milk; the third scoping calculation, which added additional pathways; the fifth calculation, which addressed the uncertainty of the dose estimates at a point; and the sixth calculation, which extrapolated the doses throughout the atmospheric transport domain. A projection of dose to representative individuals throughout the proposed HEDR atmospheric transport domain was prepared on the basis of the HEDR source term. Addressed in this calculation were the contributions to iodine-131 thyroid dose of infants from (1) air submersion and groundshine external dose, (2) inhalation, (3) ingestion of soil by humans, (4) ingestion of leafy vegetables, (5) ingestion of other vegetables and fruits, (6) ingestion of meat, (7) ingestion of eggs, and (8) ingestion of cows' milk from Feeding Regime 1 as described in scoping calculation 001.

  7. 7 CFR 51.1449 - Damage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...

  8. 7 CFR 51.1449 - Damage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...

  9. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will not...

  10. 7 CFR 51.2296 - Three-fourths half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...

  11. The Classification of Diabetes Mellitus Using Kernel k-means

    NASA Astrophysics Data System (ADS)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means was developed from the k-means algorithm; it uses kernel learning to handle data that are not linearly separable, which is where it differs from common k-means. The performance of kernel k-means in detecting diabetes mellitus was also compared with the SOM algorithm. The experimental results show that kernel k-means performs well and much better than SOM.
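
    A minimal sketch of kernel k-means with an RBF kernel, where distances to cluster means are computed implicitly in feature space. This is a generic illustration under assumed parameter values, not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel

    def kernel_kmeans(X, k=2, gamma=0.5, n_iter=50, seed=0):
        """Cluster the rows of X with kernel k-means using an RBF kernel."""
        K = rbf_kernel(X, gamma=gamma)                     # Gram matrix, shape (n, n)
        n = K.shape[0]
        rng = np.random.default_rng(seed)
        labels = rng.integers(0, k, size=n)
        for _ in range(n_iter):
            dist = np.full((n, k), np.inf)
            for c in range(k):
                idx = labels == c
                if not idx.any():
                    continue
                m = idx.sum()
                # ||phi(x_i) - mean_c||^2, dropping the K_ii term (constant over c)
                dist[:, c] = -2.0 * K[:, idx].sum(axis=1) / m + K[np.ix_(idx, idx)].sum() / m ** 2
            new_labels = dist.argmin(axis=1)
            if np.array_equal(new_labels, labels):
                break
            labels = new_labels
        return labels
    ```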

  12. UNICOS Kernel Internals Application Development

    NASA Technical Reports Server (NTRS)

    Caredo, Nicholas; Craw, James M. (Technical Monitor)

    1995-01-01

    Having an understanding of UNICOS Kernel Internals is valuable information. However, having the knowledge is only half the value. The second half comes with knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be utilized. This paper discusses the intricacies of developing utilities that utilize kernel information. In addition, algorithms, logic, and code will be discussed for accessing kernel information. Code segments will be provided that demonstrate how to locate and read kernel structures. Types of applications that can utilize kernel information will also be discussed.

  13. Detection of maize kernels breakage rate based on K-means clustering

    NASA Astrophysics Data System (ADS)

    Yang, Liang; Wang, Zhuo; Gao, Lei; Bai, Xiaoping

    2017-04-01

    To optimize the recognition accuracy and detection efficiency of maize kernel breakage detection, this paper uses computer vision technology to detect maize kernel breakage based on the K-means clustering algorithm. First, the collected RGB images are converted into Lab images; then the clarity of the original images is evaluated by the energy function of the Sobel 8-direction gradient. Finally, maize kernel breakage is detected using different pixel acquisition equipment and different shooting angles. Broken maize kernels are identified by the color difference between intact kernels and broken kernels. The clarity evaluation of the original images and the different shooting angles are used to verify that image clarity and shooting angle have a direct influence on the feature extraction. The results show that the K-means clustering algorithm can distinguish broken maize kernels effectively.
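
    A minimal sketch of the color-space conversion and pixel clustering step described above; the library choices, the cluster count, and the function name are assumptions for illustration.

    ```python
    import numpy as np
    from skimage import color
    from sklearn.cluster import KMeans

    def segment_kernels_by_color(rgb_image, n_clusters=3, seed=0):
        """Cluster the pixels of an RGB maize-kernel image in Lab color space.

        Returns a label image; broken and intact kernel material fall into
        different clusters when their Lab colors differ enough.
        """
        lab = color.rgb2lab(rgb_image)                     # (h, w, 3) Lab image
        pixels = lab.reshape(-1, 3)
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(pixels)
        return km.labels_.reshape(rgb_image.shape[:2])
    ```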

  14. Modeling adaptive kernels from probabilistic phylogenetic trees.

    PubMed

    Nicotra, Luca; Micheli, Alessio

    2009-01-01

    Modeling phylogenetic interactions is an open issue in many computational biology problems. In the context of gene function prediction we introduce a class of kernels for structured data leveraging on a hierarchical probabilistic modeling of phylogeny among species. We derive three kernels belonging to this setting: a sufficient statistics kernel, a Fisher kernel, and a probability product kernel. The new kernels are used in the context of support vector machine learning. The kernels' adaptivity is obtained through the estimation of the parameters of a tree-structured model of evolution, using as observed data phylogenetic profiles encoding the presence or absence of specific genes in a set of fully sequenced genomes. We report results obtained in the prediction of the functional class of the proteins of the budding yeast Saccharomyces cerevisiae which compare favorably to a standard vector-based kernel and to a non-adaptive tree kernel function. A further comparative analysis is performed in order to assess the impact of the different components of the proposed approach. We show that the key features of the proposed kernels are the adaptivity to the input domain and the ability to deal with structured data interpreted through a graphical model representation.

  15. Aflatoxin and nutrient contents of peanut collected from local market and their processed foods

    NASA Astrophysics Data System (ADS)

    Ginting, E.; Rahmianna, A. A.; Yusnawan, E.

    2018-01-01

    Peanut is susceptible to aflatoxin contamination, and the source of the peanuts as well as the processing method considerably affect the aflatoxin content of the products. Therefore, a study of the aflatoxin and nutrient contents of peanuts collected from a local market and of their processed foods was performed. Good kernels of peanut were prepared into fried peanut, pressed-fried peanut, peanut sauce, peanut press cake, fermented peanut press cake (tempe) and fried tempe, while blended kernels (good and poor kernels) were processed into peanut sauce and tempe, and poor kernels were only processed into tempe. The results showed that good and blended kernels, which had a high proportion of sound/intact kernels (82.46% and 62.09%), contained 9.8-9.9 ppb of aflatoxin B1, while a slightly higher level was seen in poor kernels (12.1 ppb). However, the moisture, ash, protein, and fat contents of the kernels were similar, as were those of the products. Peanut tempe and fried tempe showed the highest increase in protein content, while decreased fat contents were seen in all products. The increase in aflatoxin B1 of peanut tempe was largest when prepared from poor kernels, followed by blended kernels and good kernels. However, it decreased by an average of 61.2% after deep-frying. Excluding peanut tempe and fried tempe, aflatoxin B1 levels in all products derived from good kernels were below the permitted level (15 ppb). This suggests that sorting peanut kernels as ingredients, followed by heat processing, would decrease the aflatoxin content in the products.

  16. Partial Deconvolution with Inaccurate Blur Kernel.

    PubMed

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.

  17. Dose computation for therapeutic electron beams

    NASA Astrophysics Data System (ADS)

    Glegg, Martin Mackenzie

    The accuracy of electron dose calculations performed by two commercially available treatment planning computers, Varian Cadplan and Helax TMS, has been assessed. Measured values of absorbed dose delivered by a Varian 2100C linear accelerator, under a wide variety of irradiation conditions, were compared with doses calculated by the treatment planning computers. Much of the motivation for this work was provided by a requirement to verify the accuracy of calculated electron dose distributions in situations encountered clinically at Glasgow's Beatson Oncology Centre. Calculated dose distributions are required in a significant minority of electron treatments, usually in cases involving treatment to the head and neck. Here, therapeutic electron beams are subject to factors which may cause non-uniformity in the distribution of dose, and which may complicate the calculation of dose. The beam shape is often irregular, the beam may enter the patient at an oblique angle or at an extended source to skin distance (SSD), tissue inhomogeneities can alter the dose distribution, and tissue equivalent material (such as wax) may be added to reduce dose to critical organs. Technological advances have allowed the current generation of treatment planning computers to implement dose calculation algorithms with the ability to model electron beams in these complex situations. These calculations have, however, yet to be verified by measurement. This work has assessed the accuracy of calculations in a number of specific instances. Chapter two contains a comparison of measured and calculated planar electron isodose distributions. Three situations were considered: oblique incidence, incidence on an irregular surface (such as that which would be arise from the use of wax to reduce dose to spinal cord), and incidence on a phantom containing a small air cavity. Calculations were compared with measurements made by thermoluminescent dosimetry (TLD) in a WTe electron solid water phantom. Chapter three assesses the planning computers' ability to model electron beam penumbra at extended SSD. Calculations were compared with diode measurements in a water phantom. Further measurements assessed doses in the junction region produced by abutting an extended SSD electron field with opposed photon fields. Chapter four describes an investigation of the size and shape of the region enclosed by the 90% isodose line when produced by limiting the electron beam with square and elliptical apertures. The 90% isodose line was chosen because clinical treatments are often prescribed such that a given volume receives at least 90% dose. Calculated and measured dose distributions were compared in a plane normal to the beam central axis. Measurements were made by film dosimetry. While chapters two to four examine relative doses, chapter five assesses the accuracy of absolute dose (or output) calculations performed by the planning computers. Output variation with SSD and field size was examined. Two further situations already assessed for the distribution of relative dose were also considered: an obliquely incident field, and a field incident on an irregular surface. The accuracy of calculations was assessed against criteria stipulated by the International Commission on Radiation Units and Measurement (ICRU). The Varian Cadplan and Helax TMS treatment planning systems produce acceptable accuracy in the calculation of relative dose from therapeutic electron beams in most commonly encountered situations. 
When interpreting clinical dose distributions, however, knowledge of the limitations of the calculation algorithm employed by each system is required in order to identify the minority of situations where results are not accurate. The calculation of absolute dose is too inaccurate to implement in a clinical environment. (Abstract shortened by ProQuest.).

  18. Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency.

    PubMed

    Zhang, Ying-Ying; Yang, Cai; Zhang, Ping

    2017-05-01

    In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on Riemannian manifolds. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary on the image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly against the background. Finally, three post-processing methods (highlight-inhibition function, context-based saliency weighting, and graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.
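
    As a rough illustration of the Log-Euclidean kernel mentioned above, the following Python sketch evaluates a Gaussian kernel between two symmetric positive definite region covariance matrices in the Log-Euclidean metric; the bandwidth sigma and the toy matrices are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_kernel(C1, C2, sigma=1.0):
    """Gaussian kernel on SPD matrices with the Log-Euclidean metric:
    k(C1, C2) = exp(-||logm(C1) - logm(C2)||_F^2 / (2 sigma^2))."""
    d = logm(C1) - logm(C2)
    return float(np.exp(-np.linalg.norm(d, 'fro') ** 2 / (2.0 * sigma ** 2)))

# Toy region covariances (symmetric positive definite by construction).
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)); C1 = A @ A.T + 5 * np.eye(5)
B = rng.normal(size=(5, 5)); C2 = B @ B.T + 5 * np.eye(5)
print(log_euclidean_kernel(C1, C2, sigma=2.0))
```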

  19. Effect of one step KOH activation and CaO modified carbon in transesterification reaction

    NASA Astrophysics Data System (ADS)

    Yacob, Abd Rahim; Zaki, Muhammad Azam Muhammad

    2017-11-01

    In this work, one-step activation was introduced using potassium hydroxide (KOH) and calcium oxide (CaO) modified palm kernel shells. Various concentrations of calcium oxide were used as catalyst while maintaining the same concentration of potassium hydroxide to activate and impregnate the palm kernel shell before calcination at 500°C for 5 hours. All the prepared samples were characterized using Fourier Transform Infrared (FTIR) spectroscopy and Field Emission Scanning Electron Microscopy (FESEM). FTIR analysis of raw palm kernel shell showed the presence of various functional groups. However, after activation, most of the functional groups were eliminated. The basic strength of the prepared samples was determined using the back-titration method. The samples were then used as heterogeneous base catalysts for the transesterification reaction of rice bran oil with methanol. Analysis of the products was performed using Gas Chromatography with Flame Ionization Detection (GC-FID) to calculate the percentage conversion to biodiesel. This study shows that as the loading of the one-step-activated potassium- and calcium-oxide-doped carbon increases, the basic strength also increases, followed by an increase in biodiesel production. The optimization study shows that the optimum biodiesel production was obtained at 8 wt% catalyst loading and a 9:1 methanol-to-oil molar ratio at 65°C for 6 hours, which gives a conversion of up to 95%.

  20. Genome-wide association analysis identifies loci governing mercury accumulation in maize.

    PubMed

    Zhao, Zhan; Fu, Zhongjun; Lin, Yanan; Chen, Hao; Liu, Kun; Xing, Xiaolong; Liu, Zonghua; Li, Weihua; Tang, Jihua

    2017-03-21

    Owing to the rapid development of urbanisation and industrialisation, heavy metal pollution has become a widespread environmental problem. Maize planted on mercury (Hg)-polluted soil can absorb and accumulate Hg in its edible parts, posing a potential threat to human health. To understand the genetic mechanism of Hg accumulation in maize, we performed a genome-wide association study using a mixed linear model on an association population consisting of 230 maize inbred lines with abundant genetic variation. The order of relative Hg concentrations in different maize tissues was as follows: leaves > bracts > stems > axes > kernels. Combining the two locations, a total of 37 significant single-nucleotide polymorphisms (SNPs) associated with kernels, 12 with axes, 13 with stems, 27 with bracts and 23 with leaves were detected at p < 0.0001. The SNPs significantly associated with kernels, axes, stems, bracts and leaves explained 6.96%-10.56%, 7.19%-15.87%, 7.11%-10.19%, 7.16%-8.71% and 6.91%-9.17% of the phenotypic variation, respectively. Among the significant SNPs, nine co-localised with previously detected quantitative trait loci. This study will aid in the selection of Hg-accumulation inbred lines that satisfy the need for pollution-safe cultivars while maintaining maize production.

  1. Dose calculation and verification of the Vero gimbal tracking treatment delivery

    NASA Astrophysics Data System (ADS)

    Prasetio, H.; Wölfelschneider, J.; Ziegler, M.; Serpa, M.; Witulla, B.; Bert, C.

    2018-02-01

    The Vero linear accelerator delivers dynamic tumor tracking (DTT) treatment using a gimbal motion. However, the availability of treatment planning systems (TPS) to simulate DTT is limited. This study aims to implement and verify the gimbal tracking beam geometry in the dose calculation. Gimbal tracking was implemented by rotating the reference CT outside the TPS according to the ring, gantry, and gimbal tracking position obtained from the tracking log file. The dose was calculated using these rotated CTs. The geometric accuracy was verified by comparing calculated and measured film response using a ball bearing phantom. The dose was verified by comparing calculated 2D dose distributions and film measurements in a ball bearing and a homogeneous phantom using a gamma criterion of 2%/2 mm. The effect of implementing the gimbal tracking beam geometry in a 3D patient data dose calculation was evaluated using dose volume histograms (DVH). Geometrically, the gimbal tracking implementation accuracy was  <0.94 mm. The isodose lines agreed with the film measurement. The largest dose difference of 9.4% was observed at maximum tilt positions with an isocenter and target separation of 17.51 mm. Dosimetrically, gamma passing rates were  >98.4%. The introduction of the gimbal tracking beam geometry in the dose calculation shifted the DVH curves by 0.05%-1.26% for the phantom geometry and by 5.59% for the patient CT dataset. This study successfully demonstrates a method to incorporate the gimbal tracking beam geometry into dose calculations. By combining CT rotation and MU distribution according to the log file, the TPS was able to simulate the Vero tracking treatment dose delivery. The DVH analysis from the gimbal tracking dose calculation revealed changes in the dose distribution during gimbal DTT that are not visible with static dose calculations.

  2. Acceleration of intensity-modulated radiotherapy dose calculation by importance sampling of the calculation matrices.

    PubMed

    Thieke, Christian; Nill, Simeon; Oelfke, Uwe; Bortfeld, Thomas

    2002-05-01

    In inverse planning for intensity-modulated radiotherapy, the dose calculation is a crucial element limiting both the maximum achievable plan quality and the speed of the optimization process. One way to integrate accurate dose calculation algorithms into inverse planning is to precalculate the dose contribution of each beam element to each voxel for unit fluence. These precalculated values are stored in a big dose calculation matrix. Then the dose calculation during the iterative optimization process consists merely of matrix look-up and multiplication with the actual fluence values. However, because the dose calculation matrix can become very large, this ansatz requires a lot of computer memory and is still very time consuming, making it impractical for clinical routine without further modifications. In this work we present a new method to significantly reduce the number of entries in the dose calculation matrix. The method utilizes the fact that a photon pencil beam has a rapid radial dose falloff, and has very small dose values for the most part. In this low-dose part of the pencil beam, the dose contribution to a voxel is only integrated into the dose calculation matrix with a certain probability. Normalization with the reciprocal of this probability preserves the total energy, even though many matrix elements are omitted. Three probability distributions were tested to find the most accurate one for a given memory size. The sampling method is compared with the use of a fully filled matrix and with the well-known method of just cutting off the pencil beam at a certain lateral distance. A clinical example of a head and neck case is presented. It turns out that a sampled dose calculation matrix with only 1/3 of the entries of the fully filled matrix does not sacrifice the quality of the resulting plans, whereas the cutoff method results in a suboptimal treatment plan.
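
    The sampling idea described above can be sketched as follows: low-dose entries of a pencil-beam column are kept only with a certain probability, and kept values are rescaled by the reciprocal of that probability so that the expected total energy is preserved. The keep-probability rule and the threshold below are illustrative assumptions; the paper itself compared three probability distributions.

```python
import numpy as np
from scipy import sparse

def sample_dose_column(dose, threshold_frac=0.01, rng=None):
    """Thin out the low-dose tail of one pencil-beam dose column.

    Entries above threshold_frac * max(dose) are always kept; entries below
    are kept with probability proportional to their dose and rescaled by 1/p
    so the expected total energy is preserved."""
    rng = np.random.default_rng() if rng is None else rng
    dose = np.asarray(dose, dtype=float)
    cutoff = threshold_frac * dose.max()
    keep_prob = np.clip(dose / cutoff, 0.0, 1.0)   # 1 for high-dose voxels
    kept = rng.random(dose.shape) < keep_prob
    sampled = np.zeros_like(dose)
    sampled[kept] = dose[kept] / keep_prob[kept]   # reweight to conserve energy
    return sparse.csr_matrix(sampled)

# Example: a synthetic pencil-beam column with a rapid radial falloff.
r = np.linspace(0, 10, 10_000)
column = np.exp(-2.0 * r)
sparse_col = sample_dose_column(column)
print(column.sum(), sparse_col.sum(), sparse_col.nnz)  # totals agree on average
```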

  3. Reactive Collisions and Final State Analysis in Hypersonic Flight Regime

    DTIC Science & Technology

    2016-09-13

    Kelvin.[7] The gas-phase, surface reactions and energy transfer at these temperatures are essentially uncharacterized and the experimental methodologies...high temperatures (1000 to 20000 K) and compared with results from experimentally derived thermodynamic quantities from the NASA CEA (NASA Chemical...with a reproducing kernel Hilbert space (RKHS) method[13] combined with Legendre polynomials; (2) quasi-classical trajectory (QCT) calculations to study

  4. Ion Channel Conductance Measurements on a Silicon-Based Platform

    DTIC Science & Technology

    2006-01-01

    calculated using the molecular dynamics code, GROMACS. Reasonable agreement is obtained in the simulated versus measured conductance over the range of...measurements of the lipid giga-seal characteristics have been performed, including AC conductance measurements and statistical analysis in order to...Dynamics kernel self-consistently coupled to Poisson equations using a P3M force field scheme and the GROMACS description of protein structure and

  5. IR-Improved DGLAP-CS Theory

    DOE PAGES

    Ward, B. F. L.

    2008-01-01

    We show that it is possible to improve the infrared aspects of the standard treatment of the DGLAP-CS evolution theory to take into account a large class of higher-order corrections that significantly improve the precision of the theory for any given level of fixed-order calculation of its respective kernels. We illustrate the size of the effects we resum using the moments of the parton distributions.

  6. Optimization of the kernel functions in a probabilistic neural network analyzing the local pattern distribution.

    PubMed

    Galleske, I; Castellanos, J

    2002-05-01

    This article proposes a procedure for the automatic determination of the elements of the covariance matrix of the Gaussian kernel function of probabilistic neural networks. Two matrices, a rotation matrix and a matrix of variances, can be calculated by analyzing the local environment of each training pattern. Their combination forms the covariance matrix of each training pattern. This automation has two advantages: first, it frees the neural network designer from having to specify the complete covariance matrix, and second, it results in a network with better generalization ability than the original model. A variation of the well-known two-spiral problem and real-world examples from the UCI Machine Learning Repository show not only a classification rate better than that of the original probabilistic neural network but also that this model can outperform other well-known classification techniques.
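
    A minimal sketch of the idea, assuming a simple k-nearest-neighbour estimate of each pattern's local covariance (the paper derives a rotation matrix and a matrix of variances instead); the per-pattern Gaussian kernels are then summed per class in the usual probabilistic-neural-network fashion. Names and parameter values are illustrative.

```python
import numpy as np

def local_covariances(X, k=5, reg=1e-3):
    """Per-pattern covariance estimated from each pattern's k nearest
    neighbours (a simple stand-in for the local-environment analysis)."""
    covs = []
    for x in X:
        d = np.linalg.norm(X - x, axis=1)
        nn = X[np.argsort(d)[1:k + 1]]          # skip the pattern itself
        covs.append(np.cov(nn, rowvar=False) + reg * np.eye(X.shape[1]))
    return covs

def pnn_scores(x, X, y, covs):
    """Probabilistic-neural-network class scores with pattern-specific
    Gaussian kernels N(x; x_i, Sigma_i)."""
    scores = {}
    for xi, yi, Si in zip(X, y, covs):
        diff = x - xi
        norm = 1.0 / np.sqrt((2 * np.pi) ** len(x) * np.linalg.det(Si))
        k = norm * np.exp(-0.5 * diff @ np.linalg.solve(Si, diff))
        scores[yi] = scores.get(yi, 0.0) + k
    return scores

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
covs = local_covariances(X, k=5)
print(pnn_scores(np.array([3.5, 3.5]), X, y, covs))
```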

  7. Quantitative evaluation of first-order retardation corrections to the quarkonium spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brambilla, N.; Prosperi, G.M.

    1992-08-01

    We evaluate numerically first-order retardation corrections for some charmonium and bottomonium masses under the usual assumption of a Bethe-Salpeter purely scalar confinement kernel. The result depends strictly on the use of an additional effective potential to express the corrections (rather than to resort to Kato perturbation theory) and on an appropriate regularization prescription. The kernel has been chosen in order to reproduce in the instantaneous approximation a semirelativistic potential suggested by the Wilson loop method. The calculations are performed for two sets of parameters determined by fits in potential theory. The corrections turn out to be typically of the order of a few hundred MeV and depend on an additional scale parameter introduced in the regularization. A conjecture existing in the literature on the origin of the constant term in the potential is also discussed.

  8. A new treatment of nonlocality in scattering process

    NASA Astrophysics Data System (ADS)

    Upadhyay, N. J.; Bhagwat, A.; Jain, B. K.

    2018-01-01

    Nonlocality in the scattering potential leads to an integro-differential equation. In this equation nonlocality enters through an integral over the nonlocal potential kernel. The resulting Schrödinger equation is usually handled by approximating the r, r′ dependence of the nonlocal kernel. The present work proposes a novel method to solve the integro-differential equation. The method, using the mean value theorem of integral calculus, converts the nonhomogeneous term to a homogeneous term. The effective local potential in this equation turns out to be energy independent, but has relative angular momentum dependence. This method is accurate and valid for any form of nonlocality. As illustrative examples, the total and differential cross sections for neutron scattering off ¹²C, ⁵⁶Fe and ¹⁰⁰Mo nuclei are calculated with this method in the low energy region (up to 10 MeV) and are found to be in reasonable accord with the experiments.

  9. A novel equivalent definition of Caputo fractional derivative without singular kernel and superconvergent analysis

    NASA Astrophysics Data System (ADS)

    Liu, Zhengguang; Li, Xiaoli

    2018-05-01

    In this article, we present a new second-order finite difference discrete scheme for a fractal mobile/immobile transport model based on an equivalent transformative Caputo formulation. The new transformative formulation takes the singular kernel away to make the integral calculation more efficient. Furthermore, this definition is also effective where α is a positive integer. Besides, the T-Caputo derivative also helps us to increase the convergence rate of the discretization of the α-order (0 < α < 1) Caputo derivative from O(τ^(2-α)) to O(τ^(3-α)), where τ is the time step. For numerical analysis, a Crank-Nicolson finite difference scheme to solve the fractal mobile/immobile transport model is introduced and analyzed. The unconditional stability and a priori estimates of the scheme are given rigorously. Moreover, the applicability and accuracy of the scheme are demonstrated by numerical experiments to support our theoretical analysis.
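
    For orientation, the classical L1 discretization of the α-order (0 < α < 1) Caputo derivative, whose O(τ^(2-α)) accuracy is the baseline that the T-Caputo reformulation improves upon, can be sketched as follows; this is the standard scheme, not the transformative formulation of the paper.

```python
import numpy as np
from math import gamma

def caputo_l1(u, tau, alpha):
    """Standard L1 approximation of the Caputo derivative of order
    alpha in (0,1) at every grid point t_n = n*tau."""
    coef = tau ** (-alpha) / gamma(2.0 - alpha)
    out = np.zeros(len(u))
    for n in range(1, len(u)):
        j = np.arange(n)
        b = (j + 1) ** (1.0 - alpha) - j ** (1.0 - alpha)
        du = u[n - j] - u[n - j - 1]          # backward differences
        out[n] = coef * np.sum(b * du)
    return out

# Check against the exact Caputo derivative of u(t) = t^2:
# D^alpha t^2 = 2 t^{2-alpha} / Gamma(3-alpha).
alpha, tau = 0.4, 1e-3
t = np.arange(0, 1 + tau, tau)
approx = caputo_l1(t ** 2, tau, alpha)
exact = 2.0 * t ** (2.0 - alpha) / gamma(3.0 - alpha)
print(np.max(np.abs(approx[1:] - exact[1:])))   # small discretization error
```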

  10. Full-wave effects on shear wave splitting

    NASA Astrophysics Data System (ADS)

    Lin, Yu-Pin; Zhao, Li; Hung, Shu-Huei

    2014-02-01

    Seismic anisotropy in the mantle plays an important role in our understanding of the Earth's internal dynamics, and shear wave splitting has always been a key observable in the investigation of seismic anisotropy. To date the interpretation of shear wave splitting in terms of anisotropy has been largely based on ray-theoretical modeling of a single vertically incident plane SKS or SKKS wave. In this study, we use sensitivity kernels of shear wave splitting to anisotropic parameters calculated by the normal-mode theory to demonstrate that the interference of SKS with other phases of similar arrival times, near-field effect, and multiple reflections in the crust lead to significant variations of SKS splitting with epicentral distance. The full-wave kernels not only widen the possibilities in the source-receiver geometry in making shear wave splitting measurements but also provide the capability for tomographic inversion to resolve vertical and lateral variations in the anisotropic structures.

  11. Determining the multi-scale hedge ratios of stock index futures using the lower partial moments method

    NASA Astrophysics Data System (ADS)

    Dai, Jun; Zhou, Haigang; Zhao, Shaoquan

    2017-01-01

    This paper considers a multi-scale futures hedge strategy that minimizes lower partial moments (LPM). To do this, wavelet analysis is adopted to decompose time series data into different components. Next, different parametric estimation methods with known distributions are applied to calculate the LPM of hedged portfolios, which is the key to determining multi-scale hedge ratios over different time scales. These parametric methods are then compared with the prevailing nonparametric kernel metric method. Empirical results indicate that in the China Securities Index 300 (CSI 300) index futures and spot markets, hedge ratios and hedge efficiency estimated by the nonparametric kernel metric method are inferior to those estimated by parametric hedging models based on the features of the sequence distributions. In addition, if minimum LPM is selected as the hedge target, the hedging period, degree of risk aversion, and target return can each affect the multi-scale hedge ratios and hedge efficiency.
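
    The lower partial moment and the minimum-LPM hedge ratio can be sketched as below; the grid search, target return and toy return series are assumptions for illustration, and the wavelet decomposition into time scales used in the paper is omitted.

```python
import numpy as np

def lpm(returns, target=0.0, order=2):
    """Lower partial moment: E[max(target - R, 0)^order]."""
    shortfall = np.maximum(target - returns, 0.0)
    return np.mean(shortfall ** order)

def min_lpm_hedge_ratio(spot, futures, target=0.0, order=2, grid=None):
    """Grid-search the hedge ratio h minimizing the LPM of (spot - h * futures)."""
    grid = np.linspace(0.0, 2.0, 2001) if grid is None else grid
    lpms = [lpm(spot - h * futures, target, order) for h in grid]
    return grid[int(np.argmin(lpms))]

rng = np.random.default_rng(7)
futures = rng.normal(0.0, 0.012, 1000)
spot = 0.9 * futures + rng.normal(0.0, 0.004, 1000)   # correlated toy returns
print(min_lpm_hedge_ratio(spot, futures, target=0.0, order=2))
```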

  12. Real-time auto-adaptive margin generation for MLC-tracked radiotherapy

    NASA Astrophysics Data System (ADS)

    Glitzner, M.; Fast, M. F.; de Senneville, B. Denis; Nill, S.; Oelfke, U.; Lagendijk, J. J. W.; Raaymakers, B. W.; Crijns, S. P. M.

    2017-01-01

    In radiotherapy, abdominal and thoracic sites are candidates for performing motion tracking. With real-time control it is possible to adjust the multileaf collimator (MLC) position to the target position. However, positions are not perfectly matched, and position errors arise from system delays and the complicated response of the electromechanical MLC system. Although it is possible to compensate for part of these errors by using predictors, residual errors remain and need to be compensated to retain target coverage. This work presents a method to statistically describe tracking errors and to automatically derive a patient-specific, per-segment margin to compensate for the arising underdosage on-line, i.e. during plan delivery. The statistics of the geometric error between intended and actual machine position are derived using kernel density estimators. Subsequently a margin is calculated on-line according to a selected coverage parameter, which determines the amount of accepted underdosage. The margin is then applied to the actual segment to accommodate the positioning errors in the enlarged segment. The proof-of-concept was tested in an on-line tracking experiment and showed the ability to recover underdosages for two test cases, increasing V90% in the underdosed area by about 47% and 41%, respectively. The dose model used was able to predict the loss of dose due to tracking errors and could be used to infer the necessary margins. The implementation had a running time of 23 ms, which is compatible with the real-time requirements of MLC tracking systems. The auto-adaptivity to machine and patient characteristics makes the technique a generic yet intuitive candidate to avoid underdosages due to MLC tracking errors.
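
    A one-dimensional sketch of the margin recipe, assuming a Gaussian kernel density estimate of the absolute tracking errors and a coverage parameter that sets the accepted underdosage; the per-segment, two-dimensional machinery of the paper is not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde

def tracking_margin(errors_mm, coverage=0.95, grid_mm=np.linspace(0, 10, 2001)):
    """Margin from a kernel density estimate of geometric tracking errors:
    the smallest symmetric margin whose KDE mass reaches the requested coverage."""
    kde = gaussian_kde(np.abs(errors_mm))
    cdf = np.array([kde.integrate_box_1d(0.0, m) for m in grid_mm])
    idx = min(int(np.searchsorted(cdf, coverage)), len(grid_mm) - 1)
    return float(grid_mm[idx])

rng = np.random.default_rng(3)
errors = rng.normal(0.0, 1.2, 500) + 0.3 * np.sin(np.arange(500) / 20)  # residual MLC errors (mm)
print(tracking_margin(errors, coverage=0.95))
```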

  13. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  14. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  15. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  16. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  17. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  18. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  19. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  20. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  1. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  2. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  3. Wavelet SVM in Reproducing Kernel Hilbert Space for hyperspectral remote sensing image classification

    NASA Astrophysics Data System (ADS)

    Du, Peijun; Tan, Kun; Xing, Xiaoshi

    2010-12-01

    Combining Support Vector Machines (SVM) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, SVM faces the bottleneck of kernel parameter selection, which results in time-consuming computation and low classification accuracy. The wavelet kernel in RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. Implications for semiparametric estimation are also proposed in this paper. Airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing imagery with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to evaluate the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. In contrast with some traditional classifiers, including Spectral Angle Mapping (SAM) and Minimum Distance Classification (MDC), and an SVM classifier using the Radial Basis Function kernel, the proposed wavelet SVM classifier using the wavelet kernel function in Reproducing Kernel Hilbert Space is capable of clearly improving classification accuracy.
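
    A minimal sketch of a wavelet-kernel SVM, assuming a Morlet-type translation-invariant wavelet kernel rather than the Coiflet kernel reported as best in the paper; the dilation parameter a and the toy data are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def wavelet_kernel(X, Z, a=1.0):
    """Translation-invariant wavelet kernel with a Morlet-type mother wavelet:
    K(x, z) = prod_i cos(1.75 (x_i - z_i)/a) * exp(-(x_i - z_i)^2 / (2 a^2))."""
    diff = X[:, None, :] - Z[None, :, :]
    return np.prod(np.cos(1.75 * diff / a) * np.exp(-diff ** 2 / (2 * a ** 2)), axis=2)

# Toy two-class "spectra"; real use would feed per-pixel hyperspectral vectors.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (50, 16)), rng.normal(1.5, 1, (50, 16))])
y = np.array([0] * 50 + [1] * 50)
clf = SVC(kernel=lambda A, B: wavelet_kernel(A, B, a=2.0)).fit(X, y)
print(clf.score(X, y))
```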

  4. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    PubMed

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed as MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness in benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
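
    The core trace-ratio step can be sketched with the standard iterative scheme (update the ratio, then take the leading eigenvectors of A − λB); the full MKL-TR kernel-weight learning loop is not shown, and the matrices below merely stand in for kernel-derived scatter matrices.

```python
import numpy as np

def trace_ratio(A, B, dim, n_iter=50, tol=1e-10):
    """Maximize tr(W^T A W) / tr(W^T B W) over orthonormal W (n x dim):
    iterate lambda <- current ratio, then W <- top eigenvectors of A - lambda*B."""
    n = A.shape[0]
    W = np.linalg.qr(np.random.default_rng(0).normal(size=(n, dim)))[0]
    lam = 0.0
    for _ in range(n_iter):
        lam_new = np.trace(W.T @ A @ W) / np.trace(W.T @ B @ W)
        vals, vecs = np.linalg.eigh(A - lam_new * B)
        W = vecs[:, np.argsort(vals)[::-1][:dim]]       # top-dim eigenvectors
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return W, lam

# Toy symmetric matrices standing in for between- and within-class scatter.
rng = np.random.default_rng(2)
M = rng.normal(size=(8, 8)); A = M @ M.T
N = rng.normal(size=(8, 8)); B = N @ N.T + 8 * np.eye(8)
W, lam = trace_ratio(A, B, dim=2)
print(lam)
```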

  5. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature

    PubMed Central

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

    Automatic extraction of protein-protein interaction (PPI) pairs from biomedical literature is a widely examined task in biological information extraction. Currently, many kernel-based approaches, such as the linear kernel, tree kernel, graph kernel and combinations of multiple kernels, have achieved promising results in the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction which exploits both syntactic (structural) and semantic vector information, known as the Distributed Smoothed Tree Kernel (DSTK). DSTK comprises distributed trees carrying syntactic information along with distributional semantic vectors representing the semantic information of the sentences or phrases. To generate a robust machine learning model, a composition of a feature-based kernel and the DSTK was combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used to evaluate the performance of our system. Experimental results show that our system achieves a better F-score on all five corpora compared to other state-of-the-art systems. PMID:29099838

  6. Hadamard Kernel SVM with applications for breast cancer outcome predictions.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong

    2017-12-21

    Breast cancer is one of the leading causes of death for women. It is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome predictions. Kernel SVM, with its discriminative power in dealing with small-sample pattern recognition problems, has attracted a lot of attention. But how to select or construct an appropriate kernel for a specified problem still needs further investigation. Here we propose a novel kernel (the Hadamard kernel) in conjunction with Support Vector Machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard kernel outperforms the classical kernels and the correlation kernel in terms of area under the ROC curve (AUC) on a number of real-world data sets adopted to test the performance of the different methods. Hadamard kernel SVM is effective for breast cancer predictions, either in terms of prognosis or diagnosis. It may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.

  7. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature.

    PubMed

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

    Automatic extraction of protein-protein interaction (PPI) pairs from biomedical literature is a widely examined task in biological information extraction. Currently, many kernel-based approaches, such as the linear kernel, tree kernel, graph kernel and combinations of multiple kernels, have achieved promising results in the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction which exploits both syntactic (structural) and semantic vector information, known as the Distributed Smoothed Tree Kernel (DSTK). DSTK comprises distributed trees carrying syntactic information along with distributional semantic vectors representing the semantic information of the sentences or phrases. To generate a robust machine learning model, a composition of a feature-based kernel and the DSTK was combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used to evaluate the performance of our system. Experimental results show that our system achieves a better F-score on all five corpora compared to other state-of-the-art systems.

  8. Stereotactic, Single-Dose Irradiation of Lung Tumors: A Comparison of Absolute Dose and Dose Distribution Between Pencil Beam and Monte Carlo Algorithms Based on Actual Patient CT Scans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Huixiao; Lohr, Frank; Fritz, Peter

    2010-11-01

    Purpose: Dose calculation based on pencil beam (PB) algorithms has its shortcomings predicting dose in tissue heterogeneities. The aim of this study was to compare dose distributions of clinically applied non-intensity-modulated radiotherapy 15-MV plans for stereotactic body radiotherapy between voxel Monte Carlo (XVMC) calculation and PB calculation for lung lesions. Methods and Materials: To validate XVMC, one treatment plan was verified in an inhomogeneous thorax phantom with EDR2 film (Eastman Kodak, Rochester, NY). Both measured and calculated (PB and XVMC) dose distributions were compared regarding profiles and isodoses. Then, 35 lung plans originally created for clinical treatment by PB calculation with the Eclipse planning system (Varian Medical Systems, Palo Alto, CA) were recalculated by XVMC (investigational implementation in PrecisePLAN [Elekta AB, Stockholm, Sweden]). Clinically relevant dose-volume parameters for target and lung tissue were compared and analyzed statistically. Results: The XVMC calculation agreed well with film measurements (<1% difference in lateral profile), whereas the deviation between PB calculation and film measurements was up to +15%. On analysis of 35 clinical cases, the mean dose, minimal dose and coverage dose value for 95% volume of gross tumor volume were 1.14 ± 1.72 Gy, 1.68 ± 1.47 Gy, and 1.24 ± 1.04 Gy lower by XVMC compared with PB, respectively (prescription dose, 30 Gy). The volume covered by the 9 Gy isodose of lung was 2.73% ± 3.12% higher when calculated by XVMC compared with PB. The largest differences were observed for small lesions circumferentially encompassed by lung tissue. Conclusions: Pencil beam dose calculation overestimates dose to the tumor and underestimates lung volumes exposed to a given dose consistently for 15-MV photons. The degree of difference between XVMC and PB is tumor size and location dependent. Therefore XVMC calculation is helpful to further optimize treatment planning.

  9. Estimating the uncertainty of calculated out-of-field organ dose from a commercial treatment planning system.

    PubMed

    Wang, Lilie; Ding, George X

    2018-06-12

    Therapeutic radiation to cancer patients is accompanied by unintended radiation to organs outside the treatment field. It is known that model-based dose algorithms have limitations in calculating out-of-field doses. This study evaluated the out-of-field dose calculated by the Varian Eclipse treatment planning system (v.11 with the AAA algorithm) in realistic treatment plans with the goal of estimating the uncertainties of calculated organ doses. Photon beam phase-space files for the TrueBeam linear accelerator were provided by Varian. These were used as incident sources in EGSnrc Monte Carlo simulations of radiation transport through the downstream jaws and MLC. Dynamic movements of the MLC leaves were fully modeled based on treatment plans using IMRT or VMAT techniques. The Monte Carlo calculated out-of-field doses were then compared with those calculated by Eclipse. The dose comparisons were performed for different beam energies and treatment sites, including head-and-neck, lung, and pelvis. For 6 MV (FF/FFF), 10 MV (FF/FFF), and 15 MV (FF) beams, Eclipse underestimated out-of-field local doses by 30%-50% compared with Monte Carlo calculations when the local dose was <1% of the prescribed dose. The accuracy of out-of-field dose calculations using Eclipse is improved when the collimator jaws are set at the smallest possible aperture for the MLC openings. The Eclipse system consistently underestimates out-of-field dose by a factor of 2 for all beam energies studied at local dose levels of less than 1% of the prescribed dose. These findings are useful in providing information on the uncertainties of out-of-field organ doses calculated by the Eclipse treatment planning system. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  10. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    PubMed

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is always symmetric and positive, always provides 1.0 for self-similarity, and it can directly be used with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed, three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license and can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
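
    To illustrate the raw ingredient, the sketch below extracts LZW code words from two sequences and scores their overlap; this Jaccard-style similarity is only an illustration of code-word reuse, not necessarily the published LZW-Kernel function.

```python
def lzw_code_words(seq):
    """Phrases added to the LZW dictionary while compressing one sequence."""
    single_chars = set(seq)
    words, w = set(), ""
    for c in seq:
        wc = w + c
        if wc in single_chars or wc in words:
            w = wc
        else:
            words.add(wc)      # new code word learned from this sequence
            w = c
    return words

def lzw_similarity(s1, s2):
    """Code-word overlap (Jaccard) between two sequences -- an illustrative
    alignment-free similarity based on LZW code words."""
    w1, w2 = lzw_code_words(s1), lzw_code_words(s2)
    return len(w1 & w2) / len(w1 | w2)

print(lzw_similarity("MKVLAAGLLLLA", "MKVLGAGLLALA"))   # related toy sequences
print(lzw_similarity("MKVLAAGLLLLA", "QQWERTYPSDNC"))   # unrelated toy sequences
```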

  11. Magnetic field of the Earth

    NASA Astrophysics Data System (ADS)

    Popov, Aleksey

    2013-04-01

    The magnetic field of the Earth is of global importance for life on Earth. The mainstream geophysical explanation is that the Earth's magnetic field arises from the transformation of the kinetic energy of motions of molten iron in the liquid core into magnetic energy, and that the heating of the Earth's core occurs through the radioactive decay of elements, with the release of thermal energy. Mainstream science does not explain the causes of the westward drift of the magnetic dipole by about 0.2° per year, or the drift of lithospheric plates and continents. The author offers an alternative to the existing "geodynamo" theory: the theory "The magnetic field of the Earth", built on physical laws. The Earth's magnetic field is generated by the motion of the electric charge located in the liquid core as the Earth rotates. The field is calculated using the Biot-Savart law for a ring electric current: dB = (μ₀/4π)·(I dl × r̂)/r². The magnetic induction in the Earth's core is B = 2.58 Gs. According to Faraday's law of electromagnetic induction, rotation of the Earth's iron core in this magnetic field produces an electromotive force (emf) that moves electrons from the centre of the core towards the mantle, giving rise to radial electric currents. The magnetic field is amplified by the iron of the mantle and the core. As a result of the radial electric field, electrons flow from the centre of the core into a charged layer; the central part of the core then carries a positive electric charge, which creates an inverse magnetic field B_inv and an inverse emf_inv. When emf_inv = emf and B_inv = B, an inversion of the Earth's magnetic field occurs. It is an observed fact that the Earth's magnetic dipole drifts westward by approximately 0.2° of longitude per year. The radial electric currents, acting together with the Earth's main magnetic field, turn the core, in agreement with the laws of electromagnetism. By the left-hand rule, if the magnetic field in the core is directed into the drawing and the electric currents are directed towards the axis of rotation of the Earth, the force acts clockwise (to the west). The force causing the drift of the core follows from Ampere's law, F = BIl·sin α. This powerful force, about 3.5 × 10¹² newtons, drives the drift of the central part of the core by 0.2° of longitude per year to the west, and is also the engine of the mechanism that moves the plates together with the continents. The core of the Earth completes one westward circulation around the terrestrial axis in about 2000 years. The linear speed of rotation of the core relative to the mantle at the core-mantle boundary is V = ωR = ω × 3.471 × 10⁶ m = 3.818 × 10⁻⁴ m/s = 33 m/day = 12 km/year, where ω is the angular speed of the core relative to the mantle. Given the high viscosity of the mantle, the powerful energy of the rotating core drags the mantle and the lithospheric plates and produces their collisions, as a result of which earthquakes and volcanoes occur. The continents of North and South America separate from Europe and Africa by several centimetres every year. The Atlantic Ocean was formed by the movement of these plates at such a speed over 200 million years, which is not a long time compared with the age of the Earth of several billion years. The westward drift of the core is the principal cause of the slowing of the Earth's rotation.
The flow of radial electric currents releases, according to the Joule-Lenz law, a quantity of heat Q = I²Rt = IUt, about 6.92 × 10¹⁷ calories of thermal energy per year. This determines the heating of the core and of the Earth as a whole. In the valley of the Mid-Atlantic Ridge, with its numerous volcanoes, lava flows constantly and thereby warms the waters of the Atlantic Ocean; the warm Gulf Stream current is a fact. The thawing of permafrost and of the ice of the Arctic Ocean, and of the glaciers of Greenland and Antarctica, is confirmation that the heat of the Earth governs the character of the thawing of glaciers and permafrost. This is global warming. In the author's version, the periods of inversion of the Earth's magnetic field determine the cycles of the ice ages. During inversions of the magnetic field, when B = 0, the radial electric currents are small or absent and the release of thermal energy is minimal or zero; this is the beginning of the cooling of the Earth and the onset of an ice age, with the disappearance of the warm Gulf Stream current that warms northern Europe and Canada. The drift of the Earth's magnetic dipole in the direction opposite to the Earth's rotation confirms the drift of the Earth's core in the direction opposite to the Earth's rotation, and supports the theory "The magnetic field of the Earth". The author continues to develop the theory "The magnetic field of the Earth" and invites geophysicists to take part in it.

  12. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations.

    PubMed

    Davidson, Scott E; Cui, Jing; Kry, Stephen; Deasy, Joseph O; Ibbott, Geoffrey S; Vicic, Milos; White, R Allen; Followill, David S

    2016-08-01

    A dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model and which was previously reported, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications so that variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans that have been submitted to the clinical trial community from any institution. The calculation tool would provide for a more meaningful outcome analysis. The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLD). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurements. While this tool can be applied in general use for a particular linac model, specifically it was developed to provide a singular methodology to independently assess treatment plan dose distributions from those clinical institutions participating in National Cancer Institute trials.

  13. SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, M; Jiang, S; Lu, W

    Purpose: To propose a hybrid method that combines advantages of the model-based and measurement-based methods for independent dose calculation. Model-based dose calculation, such as collapsed-cone convolution/superposition (CCCS) or the Monte Carlo method, models dose deposition in the patient body accurately; however, due to lack of detailed knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in case of hardware changes. On the contrary, the measurement-based method characterizes the beam properties accurately but lacks the capability of dose deposition modeling in heterogeneous media. Methods: We used a standard CCCS calculator, which is commissioned by published data, as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions were also calculated using the CCCS for the same setup. The differences between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement-based calculator; here we used a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: 1. calculate D-model using CCCS; 2. calculate D-ΔDRT using ΔDRT; 3. combine the results: D = D-model + D-ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and an IMRT plan. The results were compared to dose calculated by the treatment planning system (TPS). The agreement of the hybrid and the TPS was within 3%, 3 mm for over 98% of the volume for phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as those for the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple commissioning criteria for an independent dose calculator.

  14. SU-F-T-151: Measurement Evaluation of Skin Dose in Scanning Proton Beam Therapy for Breast Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, J; Nichols, E; Strauss, D

    Purpose: To measure the skin dose and compare it with the calculated dose from a treatment planning system (TPS) for breast cancer treatment using scanning proton beam therapy (SPBT). Methods: A single en-face-beam SPBT plan was generated by a commercial TPS for two breast cancer patients. The treatment volumes were the entire breasts (218 cc and 1500 cc) prescribed to 50.4 Gy (RBE) in 28 fractions. A range shifter of 5 cm water equivalent thickness was used. The organ at risk (skin) was defined to be 5 mm thick from the surface. The skin doses were measured in water with an ADCL-calibrated parallel plate (PP) chamber. The measured data were compared with the values calculated in the TPS. Skin dose calculations can be subject to uncertainties created by the definition of the external contour and the limitations of correction-based algorithms, such as proton convolution superposition. Hence, the external contours were expanded by 0, 3 mm and 1 cm to include additional pixels for the dose calculation. In addition, to examine the effect of the cloth gown on the skin dose, the skin dose measurements were conducted with and without the gown. Results: On average the measured skin dose was 4% higher than the calculated values. At deeper depths, the measured and calculated doses were in better agreement (<2%). Large discrepancies occurred for the dose calculated without external expansion due to volume averaging. The addition of the gown only increased the measured skin dose by 0.4%. Conclusion: The implemented TPS underestimated the skin dose for breast treatments. Superficial dose calculation without external expansion would result in large errors for SPBT for breast cancer.

  15. Improving spot-scanning proton therapy patient specific quality assurance with HPlusQA, a second-check dose calculation engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackin, Dennis; Li, Yupeng; Taylor, Michael B.

    Purpose: The purpose of this study was to validate the use of HPlusQA, spot-scanning proton therapy (SSPT) dose calculation software developed at The University of Texas MD Anderson Cancer Center, as second-check dose calculation software for patient-specific quality assurance (PSQA). The authors also showed how HPlusQA can be used within the current PSQA framework. Methods: The authors compared the dose calculations of HPlusQA and the Eclipse treatment planning system with 106 planar dose measurements made as part of PSQA. To determine the relative performance and the degree of correlation between HPlusQA and Eclipse, the authors compared calculated and measured point doses. Then, to determine how well HPlusQA can predict when the comparisons between Eclipse calculations and the measured dose will exceed tolerance levels, the authors compared gamma index scores for HPlusQA versus Eclipse with those of measured doses versus Eclipse. The authors introduce the αβγ transformation as a way to more easily compare gamma scores. Results: The authors compared measured and calculated dose planes using the relative depth, z/R × 100%, where z is the depth of the measurement and R is the proton beam range. For relative depths less than 80%, both Eclipse and HPlusQA calculations were within 2 cGy of dose measurements on average. When the relative depth was greater than 80%, the agreement between the calculations and measurements fell to 4 cGy. For relative depths less than 10%, the Eclipse and HPlusQA dose discrepancies showed a negative correlation, -0.21. Otherwise, the correlation between the dose discrepancies was positive and as large as 0.6. For the dose planes in this study, HPlusQA correctly predicted when Eclipse had and had not calculated the dose to within tolerance 92% and 79% of the time, respectively. In 4 of 106 cases, HPlusQA failed to predict when the comparison between measurement and Eclipse's calculation had exceeded the tolerance levels of 3% for dose and 3 mm for distance-to-agreement. Conclusions: The authors found HPlusQA to be reasonably effective (79% ± 10%) in determining when the comparison between measured dose planes and the dose planes calculated by the Eclipse treatment planning system had exceeded the acceptable tolerance levels. When used as described in this study, HPlusQA can reduce the need for patient-specific quality assurance measurements by 64%. The authors believe that the use of HPlusQA as a dose calculation second check can increase the efficiency and effectiveness of the QA process.
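
    For reference, a brute-force global gamma index in one dimension, as commonly used to compare calculated and measured dose planes; the 3%/3 mm criteria match the tolerance quoted above, but the αβγ transformation introduced by the authors is not reproduced.

```python
import numpy as np

def gamma_index_1d(x_meas, d_meas, x_calc, d_calc, dta_mm=3.0, dd_frac=0.03):
    """Global 1D gamma index: for each measured point, the minimum combined
    distance-to-agreement / dose-difference metric over all calculated points.
    Dose difference is normalized to the maximum measured dose (global gamma)."""
    d_ref = dd_frac * d_meas.max()
    gammas = np.empty_like(d_meas, dtype=float)
    for i, (xm, dm) in enumerate(zip(x_meas, d_meas)):
        dist = (x_calc - xm) / dta_mm
        diff = (d_calc - dm) / d_ref
        gammas[i] = np.sqrt(dist ** 2 + diff ** 2).min()
    return gammas

x = np.linspace(0, 100, 201)                       # positions in mm
measured = 2.0 * np.exp(-((x - 50) / 20) ** 2)     # toy profile (Gy)
calculated = 2.0 * np.exp(-((x - 51) / 20) ** 2)   # 1 mm shifted calculation
g = gamma_index_1d(x, measured, x, calculated)
print(f"pass rate (gamma <= 1): {np.mean(g <= 1.0):.1%}")
```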

  16. Evaluating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Wilton, Donald R.; Champagne, Nathan J.

    2008-01-01

    Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, D; O’Connell, D; Lamb, J

    Purpose: To demonstrate real-time dose calculation of free-breathing MRI guided Co-60 treatments, using a motion model and Monte-Carlo dose calculation to accurately account for the interplay between irregular breathing motion and an IMRT delivery. Methods: ViewRay Co-60 dose distributions were optimized on ITVs contoured from free-breathing CT images of lung cancer patients. Each treatment plan was separated into 0.25 s segments, accounting for the MLC positions and beam angles at each time point. A voxel-specific motion model derived from multiple fast-helical free-breathing CTs and deformable registration was calculated for each patient. 3D images for every 0.25 s of a simulated treatment were generated in real time, here using a bellows signal as a surrogate to accurately account for breathing irregularities. Monte-Carlo dose calculation was performed every 0.25 s of the treatment, with the number of histories in each calculation scaled to give an overall 1% statistical uncertainty. Each dose calculation was deformed back to the reference image using the motion model and accumulated. The static and real-time dose calculations were compared. Results: Image generation was performed in real time at 4 frames per second (GPU). Monte-Carlo dose calculation was performed at approximately 1 frame per second (CPU), giving a total calculation time of approximately 30 minutes per treatment. Results show both cold- and hot-spots in and around the ITV, and increased dose to the contralateral lung as the tumor moves in and out of the beam during treatment. Conclusion: An accurate motion model combined with a fast Monte-Carlo dose calculation allows almost real-time dose calculation of a free-breathing treatment. When combined with sagittal 2D-cine-mode MRI during treatment to update the motion model in real time, this will allow the true delivered dose of a treatment to be calculated, providing a useful tool for adaptive planning and assessing the effectiveness of gated treatments.

  18. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    PubMed Central

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
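
    A rough sketch of a composite-kernel score statistic, assuming a continuous trait without covariates and an equal-weight combination of a linear and an IBS-style genotype kernel; obtaining a p-value requires the null distribution of Q (a mixture of chi-squares), which is not shown.

```python
import numpy as np

def linear_kernel(G):
    return G @ G.T

def ibs_like_kernel(G):
    """Identity-by-state style kernel for 0/1/2 genotype codes."""
    n, p = G.shape
    K = np.zeros((n, n))
    for i in range(n):
        K[i] = np.sum(2.0 - np.abs(G - G[i]), axis=1) / (2.0 * p)
    return K

def km_score_statistic(y, G, kernels, weights=None):
    """Kernel-machine score statistic Q = r^T K r with a composite kernel
    formed as a weighted average of the candidate kernels; r are residuals
    after removing the mean (no covariates in this toy sketch)."""
    weights = np.full(len(kernels), 1.0 / len(kernels)) if weights is None else weights
    K = sum(w * k(G) for w, k in zip(weights, kernels))
    r = y - y.mean()
    return float(r @ K @ r)

rng = np.random.default_rng(11)
G = rng.integers(0, 3, size=(200, 25)).astype(float)   # 200 subjects, 25 SNPs
y = 0.3 * G[:, 3] + rng.normal(size=200)               # trait with one causal SNP
print(km_score_statistic(y, G, [linear_kernel, ibs_like_kernel]))
```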

  19. Combined multi-kernel head computed tomography images optimized for depicting both brain parenchyma and bone.

    PubMed

    Takagi, Satoshi; Nagase, Hiroyuki; Hayashi, Tatsuya; Kita, Tamotsu; Hayashi, Katsumi; Sanada, Shigeru; Koike, Masayuki

    2014-01-01

    The hybrid convolution kernel technique for computed tomography (CT) is known to enable the depiction of an image set using different window settings. Our purpose was to decrease the number of artifacts in the hybrid convolution kernel technique for head CT and to determine whether our improved combined multi-kernel head CT images enabled diagnosis as a substitute for both brain (low-pass kernel-reconstructed) and bone (high-pass kernel-reconstructed) images. Forty-four patients with nondisplaced skull fractures were included. Our improved multi-kernel images were generated so that pixels of >100 Hounsfield units in both the brain and bone images were assigned the CT values of the bone images, and all other pixels were assigned the CT values of the brain images. Three radiologists compared the improved multi-kernel images with the bone images. The improved multi-kernel images and brain images were identically displayed on the brain window settings. All three radiologists agreed that the improved multi-kernel images on the bone window settings were sufficient for diagnosing skull fractures in all patients. This improved multi-kernel technique has a simple algorithm and is practical for clinical use. Thus, simplified head CT examinations and fewer images that need to be stored can be expected.
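
    The combination rule stated in the abstract reduces to a simple per-pixel selection, sketched below; the 100 HU threshold comes from the abstract, while the toy arrays are illustrative.

```python
import numpy as np

def combine_multi_kernel(brain_hu, bone_hu, threshold_hu=100.0):
    """Combined image per the rule in the abstract: where both the brain
    (low-pass kernel) and bone (high-pass kernel) reconstructions exceed the
    threshold, take the bone-kernel value; elsewhere keep the brain-kernel value."""
    use_bone = (brain_hu > threshold_hu) & (bone_hu > threshold_hu)
    return np.where(use_bone, bone_hu, brain_hu)

# Toy 1D "slice" in Hounsfield units: soft tissue around a bone segment.
brain = np.array([30.0, 40.0, 35.0, 900.0, 950.0, 45.0])
bone = np.array([-10.0, 60.0, 20.0, 1400.0, 1500.0, 10.0])
print(combine_multi_kernel(brain, bone))
```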

  20. Exchange and correlation effects on plasmon dispersions and Coulomb drag in low-density electron bilayers

    NASA Astrophysics Data System (ADS)

    Badalyan, S. M.; Kim, C. S.; Vignale, G.; Senatore, G.

    2007-03-01

    We investigate the effect of exchange and correlation (XC) on the plasmon spectrum and the Coulomb drag between spatially separated low-density two-dimensional electron layers. We adopt a different approach, which employs dynamic XC kernels in the calculation of the bilayer plasmon spectra and of the plasmon-mediated drag, and static many-body local field factors in the calculation of the particle-hole contribution to the drag. The spectrum of bilayer plasmons and the drag resistivity are calculated in a broad range of temperatures taking into account both intra- and interlayer correlation effects. We observe that both plasmon modes are strongly affected by XC corrections. After the inclusion of the complex dynamic XC kernels, a decrease of the electron density induces shifts of the plasmon branches in opposite directions. This is in stark contrast with the tendency observed within random phase approximation that both optical and acoustical plasmons move away from the boundary of the particle-hole continuum with a decrease in the electron density. We find that the introduction of XC corrections results in a significant enhancement of the transresistivity and qualitative changes in its temperature dependence. In particular, the large high-temperature plasmon peak that is present in the random phase approximation is found to disappear when the XC corrections are included. Our numerical results at low temperatures are in good agreement with the results of recent experiments by Kellogg [Solid State Commun. 123, 515 (2002)].
