Sample records for scatter kernel superposition

  1. Improved scatter correction using adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Sun, M.; Star-Lack, J. M.

    2010-11-01

    Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach, whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS), requiring spatial-domain convolutions, and (2) fast adaptive scatter kernel superposition (fASKS), where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
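    The SKS idea, and the Fourier-space convolution that makes fASKS fast, can be sketched in one dimension with numpy. The Gaussian kernel shape, scatter amplitude, and fixed-point iteration below are illustrative assumptions, not the kernel forms or update scheme used in the paper:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # Normalized 1D stand-in for a slab-derived scatter kernel.
    x = np.arange(size) - size // 2
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def kernel_fft(kernel, n):
    # Place the centered kernel at index 0 with circular wrap-around,
    # so multiplication in Fourier space is circular convolution
    # with the centered kernel.
    padded = np.zeros(n)
    padded[: kernel.size] = kernel
    padded = np.roll(padded, -(kernel.size // 2))
    return np.fft.fft(padded)

def sks_deconvolve(measured, kernel, amplitude=0.3, n_iter=50):
    # Fixed-point SKS estimate: scatter is modeled as
    # amplitude * (primary convolved with the kernel); each pass
    # re-estimates the primary from the measured projection.
    K = kernel_fft(kernel, measured.size)
    primary = measured.astype(float).copy()
    scatter = np.zeros_like(primary)
    for _ in range(n_iter):
        scatter = amplitude * np.real(np.fft.ifft(np.fft.fft(primary) * K))
        primary = measured - scatter
    return primary, scatter
```

    Because the forward model here is linear, the iteration converges to the exact primary when the scatter amplitude is below one; adapting the kernel to local thickness, as ASKS does, replaces the single fixed kernel above.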

  2. Scatter correction for cone-beam computed tomography using self-adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Xie, Shi-Peng; Luo, Li-Min

    2012-06-01

    The authors propose a combined scatter reduction and correction method to improve image quality in cone beam computed tomography (CBCT). The scatter kernel superposition (SKS) method has been used in previous studies; the present method differs in that a scatter detecting blocker (SDB) is placed between the x-ray source and the scanned object to model a self-adaptive scatter kernel. The scatter kernel parameters are first evaluated using the SDB, and the scatter distribution is then isolated based on SKS; removing this scatter distribution improves image quality. The results show that the method effectively reduces scatter artifacts: it increases image contrast and reduces the magnitude of cupping. The accuracy of the SKS technique is significantly improved by the self-adaptive scatter kernel. The method is computationally efficient, easy to implement, and provides scatter correction from a single scan acquisition.

  3. Correction of scatter in megavoltage cone-beam CT

    NASA Astrophysics Data System (ADS)

    Spies, L.; Ebert, M.; Groh, B. A.; Hesse, B. M.; Bortfeld, T.

    2001-03-01

    The role of scatter in a cone-beam computed tomography system using the therapeutic beam of a medical linear accelerator and a commercial electronic portal imaging device (EPID) is investigated. A scatter correction method is presented which is based on a superposition of Monte Carlo generated scatter kernels. The kernels are adapted to both the spectral response of the EPID and the dimensions of the phantom being scanned. The method is part of a calibration procedure which converts the measured transmission data acquired for each projection angle into water-equivalent thicknesses. Tomographic reconstruction of the projections then yields an estimate of the electron density distribution of the phantom. It is found that scatter produces cupping artefacts in the reconstructed tomograms. Furthermore, reconstructed electron densities deviate greatly (by about 30%) from their expected values. The scatter correction method removes the cupping artefacts and decreases the deviations from 30% down to about 8%.

  4. Implementation and validation of collapsed cone superposition for radiopharmaceutical dosimetry of photon emitters

    NASA Astrophysics Data System (ADS)

    Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud

    2015-10-01

    Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for the electron, positron and photon components, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. EDKw are linearly scaled by radiological distance, taking into account tissue density heterogeneities. The implementation was tested on 100, 300 and 600 keV mono-energetic photons and on 18F, 99mTc, 131I and 177Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft tissue, lung and bone. The figures of merit were the γ (3%, 3 mm) and γ (5%, 5 mm) criteria, evaluated by comparing 80 absorbed dose (AD) points per phantom between Monte Carlo simulations and the CC algorithms. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always over 99%. Results with SCC were still good but slightly worse: at least 97% of AD values satisfied the γ (5%, 5 mm) criterion, except for 99mTc with the lung/bone interface (57%). The CC superposition method is a good alternative to Monte Carlo simulations for radiopharmaceutical dosimetry while reducing computational complexity.
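    The γ (dose tolerance %, distance-to-agreement mm) figure of merit used above can be sketched in one dimension. This is a generic global-normalization gamma analysis, not the authors' implementation:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm,
                    dose_tol=0.05, dist_tol_mm=5.0):
    # For each reference point, gamma is the minimum over all evaluated
    # points of the combined dose-difference / distance metric; a point
    # passes when gamma <= 1. Dose differences are normalized globally
    # to dose_tol * max reference dose.
    norm = dose_tol * dose_ref.max()
    x = np.arange(dose_ref.size) * spacing_mm
    gammas = np.empty(dose_ref.size)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - di) / norm
        dx = (x - xi) / dist_tol_mm
        gammas[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return np.mean(gammas <= 1.0)
```

    The γ (5%, 5 mm) results quoted in the abstract correspond to the fraction of AD points returned by such an analysis with `dose_tol=0.05` and `dist_tol_mm=5.0`.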

  5. Implementation and validation of collapsed cone superposition for radiopharmaceutical dosimetry of photon emitters.

    PubMed

    Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud

    2015-10-21

    Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for the electron, positron and photon components, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. EDKw are linearly scaled by radiological distance, taking into account tissue density heterogeneities. The implementation was tested on 100, 300 and 600 keV mono-energetic photons and on (18)F, (99m)Tc, (131)I and (177)Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft tissue, lung and bone. The figures of merit were the γ (3%, 3 mm) and γ (5%, 5 mm) criteria, evaluated by comparing 80 absorbed dose (AD) points per phantom between Monte Carlo simulations and the CC algorithms. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always over 99%. Results with SCC were still good but slightly worse: at least 97% of AD values satisfied the γ (5%, 5 mm) criterion, except for (99m)Tc with the lung/bone interface (57%). The CC superposition method is a good alternative to Monte Carlo simulations for radiopharmaceutical dosimetry while reducing computational complexity.

  6. Rapid scatter estimation for CBCT using the Boltzmann transport equation

    NASA Astrophysics Data System (ADS)

    Sun, Mingshan; Maslowski, Alex; Davis, Ian; Wareing, Todd; Failla, Gregory; Star-Lack, Josh

    2014-03-01

    Scatter in cone-beam computed tomography (CBCT) is a significant problem that degrades image contrast, uniformity and CT number accuracy. One means of estimating and correcting for detected scatter is through an iterative deconvolution process known as scatter kernel superposition (SKS). While the SKS approach is efficient, clinically significant errors on the order of 2-4% (20-40 HU) still remain. We have previously shown that the kernel method can be improved by perturbing the kernel parameters based on reference data provided by limited Monte Carlo simulations of a first-pass reconstruction. In this work, we replace the Monte Carlo modeling with a deterministic Boltzmann solver (AcurosCTS) to generate the reference scatter data in a dramatically reduced time. In addition, the algorithm is improved so that instead of adjusting kernel parameters, we directly perturb the SKS scatter estimates. Studies were conducted on simulated data and on a large pelvis phantom scanned on a tabletop system. The new method reduced average reconstruction errors (relative to a reference scan) from 2.5% to 1.8%, and significantly improved visualization of low-contrast objects. In total, 24 projections were simulated with an AcurosCTS execution time of 22 sec/projection using an 8-core computer. We have ported AcurosCTS to the GPU, and current run-times are approximately 4 sec/projection using two GPUs running in parallel.

  7. Evaluation of a scattering correction method for high energy tomography

    NASA Astrophysics Data System (ADS)

    Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel

    2018-01-01

    One of the main drawbacks of Cone Beam Computed Tomography (CBCT) is the contribution of photons scattered by the object and the detector. Scattered photons are deflected from their original path after interacting with the object, and their contribution simply adds to the transmitted intensity. The resulting overestimation of the measured intensity corresponds to an underestimation of absorption, producing artifacts such as cupping, shading and streaks in the reconstructed images. Moreover, the scattered radiation biases quantitative tomographic reconstruction (for example, atomic number and mass density measurement with the dual-energy technique). The effect can be significant, and difficult to correct, for large objects in the MeV energy range because of the higher Scatter to Primary Ratio (SPR). Additionally, incident high-energy photons scattered by the Compton effect are more forward directed and hence more likely to reach the detector, and in the MeV range the contribution of photons produced by pair production and bremsstrahlung also becomes important. We propose an evaluation of a scatter correction technique based on the Scatter Kernel Superposition (SKS) method, using continuously thickness-adapted kernels. Analytical parameterizations of the scatter kernels are derived in terms of material thickness, forming continuously thickness-adapted kernel maps used to correct the projections. This approach has proved efficient in producing better sampling of the kernels with respect to object thickness, and it is applicable over a wide range of imaging conditions. Since no extra hardware is required, it is particularly advantageous where experimental complexity must be avoided. The approach has previously been tested successfully in the energy range 100 keV - 6 MeV. In this paper, the kernels are simulated using MCNP in order to account for both photon and electron processes in the scattered radiation contribution. We present scatter correction results on a large object scanned with a 9 MeV linear accelerator.
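    A minimal sketch of thickness-adapted kernel superposition: pixels are binned by estimated material thickness, each bin's contribution is convolved with a kernel whose amplitude and width follow thickness laws, and the results are superposed. The amplitude and width laws below are illustrative placeholders, not the parameterizations fitted in the paper:

```python
import numpy as np

def thickness_adapted_scatter(projection, thickness_mm, n_bins=4):
    # Group pixels into thickness bins; each bin gets its own kernel.
    edges = np.linspace(thickness_mm.min(), thickness_mm.max() + 1e-9,
                        n_bins + 1)
    x = np.arange(projection.size) - projection.size // 2
    scatter = np.zeros(projection.size)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (thickness_mm >= lo) & (thickness_mm < hi)
        t = 0.5 * (lo + hi)
        amp = 0.02 * t         # assumed: scatter fraction grows with thickness
        sigma = 5.0 + 0.1 * t  # assumed: kernel broadens with thickness
        kernel = amp * np.exp(-0.5 * (x / sigma) ** 2) \
                 / (sigma * np.sqrt(2.0 * np.pi))
        scatter += np.convolve(projection * mask, kernel, mode="same")
    return scatter
```

    Increasing `n_bins` approaches the continuously thickness-adapted limit the abstract describes, at proportional cost in convolutions.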

  8. Dosimetric verification of radiation therapy including intensity modulated treatments, using an amorphous-silicon electronic portal imaging device

    NASA Astrophysics Data System (ADS)

    Chytyk-Praznik, Krista Joy

    Radiation therapy is continuously increasing in complexity due to technological innovation in delivery techniques, necessitating thorough dosimetric verification. Comparing accurately predicted portal dose images to measured images obtained during patient treatment can determine if a particular treatment was delivered correctly. The goal of this thesis was to create a method to predict portal dose images that was versatile and accurate enough to use in a clinical setting. All measured images in this work were obtained with an amorphous silicon electronic portal imaging device (a-Si EPID), but the technique is applicable to any planar imager. A detailed, physics-motivated fluence model was developed to characterize fluence exiting the linear accelerator head. The model was further refined using results from Monte Carlo simulations and schematics of the linear accelerator. The fluence incident on the EPID was converted to a portal dose image through a superposition of Monte Carlo-generated, monoenergetic dose kernels specific to the a-Si EPID. Predictions of clinical IMRT fields with no patient present agreed with measured portal dose images within 3% and 3 mm. The dose kernels were applied ignoring the geometrically divergent nature of incident fluence on the EPID. A computational investigation into this parallel dose kernel assumption determined its validity under clinically relevant situations. Introducing a patient or phantom into the beam required the portal image prediction algorithm to account for patient scatter and attenuation. Primary fluence was calculated by attenuating raylines cast through the patient CT dataset, while scatter fluence was determined through the superposition of pre-calculated scatter fluence kernels. Total dose in the EPID was calculated by convolving the total predicted incident fluence with the EPID-specific dose kernels. The algorithm was tested on water slabs with square fields, agreeing with measurement within 3% and 3 mm. 
    The method was then applied to five prostate and six head-and-neck IMRT treatment courses (~1900 clinical images). Deviations between the predicted and measured images were quantified. The portal dose image prediction model developed in this thesis work has been shown to be accurate, and it was demonstrated to be able to verify patients' delivered radiation treatments.

  9. Real-time dose computation: GPU-accelerated source modeling and superposition/convolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacques, Robert; Wong, John; Taylor, Russell

    Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high-resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material-centric approach, which increased performance by over 80%. Superposition performance was improved by ~24%, to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation. Pinnacle³ times were 8.3 and 94 s, respectively, on an AMD (Sunnyvale, CA) Opteron 254 (two cores, 2.8 GHz). Conclusions: The authors have completed a comprehensive, GPU-accelerated dose engine in order to provide a substantial performance gain over CPU-based implementations. Real-time dose computation is feasible with the accuracy levels of the superposition/convolution algorithm.
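    The core superposition/convolution step (dose as a TERMA-weighted superposition of an energy deposition kernel) can be sketched in one dimension. The attenuation coefficient, fluence and kernel shapes are illustrative, not the dual-source model of the paper:

```python
import numpy as np

def terma(mu, fluence0, spacing_cm):
    # Total energy released per unit mass along a ray: attenuated
    # primary fluence times the local attenuation coefficient.
    depth = np.cumsum(mu) * spacing_cm
    return mu * fluence0 * np.exp(-depth)

def superpose(terma_vals, kernel):
    # Dose as a superposition of the deposition kernel placed at every
    # interaction site, weighted by the local TERMA.
    n = terma_vals.size
    c = kernel.size // 2
    dose = np.zeros(n)
    for i, t in enumerate(terma_vals):
        lo, hi = max(0, i - c), min(n, i + c + 1)
        dose[lo:hi] += t * kernel[lo - i + c: hi - i + c]
    return dose
```

    GPU implementations such as the one described above restructure this same sum (collapsed cone directions, multi-resolution sampling) so that it parallelizes; the physics is unchanged.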

  10. Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, L.L.; Hendricks, J.S.

    1983-01-01

    The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation, but leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays.
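    The exponential transform can be illustrated with a toy deep-penetration problem: sampling free paths from a stretched exponential and correcting by the likelihood ratio leaves the estimator unbiased while concentrating samples in the deep region. The absorbing-slab model below is illustrative, not the implementation discussed in the paper:

```python
import numpy as np

def transmission_estimate(mu, L, mu_star, n, rng):
    # Estimate the uncollided transmission exp(-mu*L) through a slab of
    # thickness L. Free paths are drawn from a biased exponential with
    # parameter mu_star < mu (longer flights), and each history carries
    # the weight f(d)/g(d) of true over biased distance-to-collision pdfs.
    d = rng.exponential(1.0 / mu_star, n)
    w = (mu * np.exp(-mu * d)) / (mu_star * np.exp(-mu_star * d))
    return np.mean(w * (d >= L))
```

    With `mu_star < mu` many more histories reach depth L than in an analog simulation, and the weights shrink correspondingly, reducing variance for the same sample count; biasing the scattering kernel as well, as the record describes, extends the same weighting idea to the collision angle.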

  11. Compactness and robustness: Applications in the solution of integral equations for chemical kinetics and electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Zhou, Yajun

    This thesis employs the topological concept of compactness to deduce robust solutions to two integral equations arising from chemistry and physics: the inverse Laplace problem in chemical kinetics and the vector wave scattering problem in dielectric optics. The inverse Laplace problem occurs in the quantitative understanding of biological processes that exhibit complex kinetic behavior: different subpopulations of transition events from the "reactant" state to the "product" state follow distinct reaction rate constants, which results in a weighted superposition of exponential decay modes. Reconstruction of the rate constant distribution from kinetic data is often critical for mechanistic understandings of chemical reactions related to biological macromolecules. We devise a "phase function approach" to recover the probability distribution of rate constants from decay data in the time domain. The robustness (numerical stability) of this reconstruction algorithm builds upon the continuity of the transformations connecting the relevant function spaces that are compact metric spaces. The robust "phase function approach" not only is useful for the analysis of heterogeneous subpopulations of exponential decays within a single transition step, but also is generalizable to the kinetic analysis of complex chemical reactions that involve multiple intermediate steps. A quantitative characterization of light scattering is central to many meteorological, optical, and medical applications. We give a rigorous treatment to electromagnetic scattering on arbitrarily shaped dielectric media via the Born equation: an integral equation with a strongly singular convolution kernel that corresponds to a non-compact Green operator. By constructing a quadratic polynomial of the Green operator that cancels out the kernel singularity and satisfies the compactness criterion, we reveal the universality of a real resonance mode in dielectric optics.
Meanwhile, exploiting the properties of compact operators, we outline the geometric and physical conditions that guarantee a robust solution to the light scattering problem, and devise an asymptotic solution to the Born equation of electromagnetic scattering for arbitrarily shaped dielectric in a non-perturbative manner.
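    The weighted superposition of exponential decays described above can be inverted numerically. A common baseline (not the thesis's phase-function approach) is to discretize the rate axis and solve a nonnegative least-squares problem; this sketch assumes scipy is available:

```python
import numpy as np
from scipy.optimize import nnls

def rate_spectrum(t, decay, rates):
    # Discretized inverse Laplace transform: decay(t) is modeled as
    # sum_j w_j * exp(-rates[j] * t) with w_j >= 0, and the weights
    # are recovered by nonnegative least squares.
    A = np.exp(-np.outer(t, rates))
    weights, _ = nnls(A, decay)
    return weights
```

    On noisy data this problem is badly ill-conditioned and needs regularization; the compactness arguments in the thesis are precisely about controlling that instability.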

  12. THERMOS. 30-Group ENDF/B Scattered Kernels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCrosson, F.J.; Finch, D.R.

    1973-12-01

    These data are 30-group THERMOS thermal scattering kernels for P0 to P5 Legendre orders for every temperature of every material from s(alpha,beta) data stored in the ENDF/B library. These scattering kernels were generated using the FLANGE2 computer code. To test the kernels, the integral properties of each set of kernels were determined by a precision integration of the diffusion length equation and compared to experimental measurements of these properties. In general, the agreement was very good. Details of the methods used and results obtained are contained in the reference. The scattering kernels are organized into a two-volume magnetic tape library from which they may be retrieved easily for use in any 30-group THERMOS library.

  13. A model of primary and scattered photon fluence for mammographic x-ray image quantification

    NASA Astrophysics Data System (ADS)

    Tromans, Christopher E.; Cocker, Mary R.; Brady, Sir Michael

    2012-10-01

    We present an efficient method to calculate the primary and scattered x-ray photon fluence components of a mammographic image. This can be used for a range of clinically important purposes, including estimation of breast density, personalized image display, and quantitative mammogram analysis. The method is based on models of the x-ray tube, the digital detector, and a novel ray tracer which models the diverging beam emanating from the focal spot. The tube model includes consideration of the anode heel effect, and empirical corrections for wear and manufacturing tolerances. The detector model is empirical, being based on a family of transfer functions that cover the range of beam qualities and compressed breast thicknesses encountered clinically. The scatter estimation utilizes optimal information sampling and interpolation (to yield a clinically usable computation time) of scatter calculated using fundamental physics relations. A scatter kernel arising around each primary ray is calculated, and these are summed by superposition to form the scatter image. Beam quality, spatial position in the field (in particular the air-boundary effect arising from the depletion of scatter contribution from the surroundings), and the possible presence of a grid are considered, as is tissue composition, using an iterative refinement procedure. We present numerous validation results that use a purpose-designed tissue-equivalent step wedge phantom. The average differences between actual acquisitions and modelled pixel intensities observed across the adipose-to-fibroglandular attenuation range vary between 5% and 7% depending on beam quality, and for a single beam quality are 2.09% and 3.36%, respectively, with and without a grid.

  14. Anisotropic hydrodynamics with a scalar collisional kernel

    NASA Astrophysics Data System (ADS)

    Almaalol, Dekrayat; Strickland, Michael

    2018-04-01

    Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2 ↔ 2 scattering kernel in scalar λφ⁴ theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.

  15. Dosimetric effects of seed anisotropy and interseed attenuation for 103Pd and 125I prostate implants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chibani, Omar; Williamson, Jeffrey F.; Todor, Dorin

    2005-08-15

    A Monte Carlo study is carried out to quantify the effects of seed anisotropy and interseed attenuation for 103Pd and 125I prostate implants. Two idealized and two real prostate implants are considered. Full Monte Carlo simulation (FMCS) of implants (seeds are physically and simultaneously simulated) is compared with isotropic point-source dose-kernel superposition (PSKS) and line-source dose-kernel superposition (LSKS) methods. For clinical pre- and post-procedure implants, the dose to the different structures (prostate, rectum wall, and urethra) is calculated. The discretized volumes of these structures are reconstructed using transrectal ultrasound contours. Local dose differences (PSKS versus FMCS and LSKS versus FMCS) are investigated. The dose contributions from primary versus scattered photons are calculated separately. For 103Pd, the average absolute total dose difference between FMCS and PSKS can be as high as 7.4% for the idealized model and 6.1% for the clinical pre-procedure implant. The total dose difference is lower for 125I: 4.4% for the idealized model and 4.6% for a clinical post-procedure implant. Average absolute dose differences between LSKS and FMCS are less significant for both seed models: 3% to 3.6% for the idealized models and 2.9% to 3.2% for the clinical plans. Dose differences between PSKS and FMCS are due to the absence of both seed anisotropy and interseed attenuation modeling in the PSKS approach. LSKS accounts for seed anisotropy but not for the interseed effect, leading to systematically overestimated dose values in comparison with the more accurate FMCS method. For both idealized and clinical implants, the dose from scattered photons represents less than one-third of the total dose. For all studied cases, LSKS prostate DVHs overestimate D90 by 2% to 5% because of the missing interseed attenuation effect. PSKS and LSKS predictions of V150 and V200 are overestimated by up to 9% in comparison with the FMCS results. Finally, effects of seed anisotropy and interseed attenuation must be viewed in the context of other significant sources of dose uncertainty, namely seed orientation, source misplacement, prostate morphological changes, and tissue heterogeneity.
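    The DVH metrics quoted above have simple definitions that are easy to compute from a dose array over the structure's voxels; this generic sketch is not tied to the study's software:

```python
import numpy as np

def d90(dose):
    # D90: minimum dose received by the hottest 90% of the volume,
    # i.e. the 10th percentile of the voxel dose distribution.
    return np.percentile(dose, 10.0)

def v_metric(dose, prescription, pct):
    # V{pct}: fraction of the volume receiving at least pct% of the
    # prescription dose (V150 -> pct=150, V200 -> pct=200).
    return np.mean(dose >= prescription * pct / 100.0)
```

    Because D90 sits on the steep part of a prostate DVH, a small systematic dose overestimate (such as the missing interseed attenuation in LSKS) translates directly into the 2-5% D90 bias reported.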

  16. Estimation of biological parameters of marine organisms using linear and nonlinear acoustic scattering model-based inversion methods.

    PubMed

    Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H

    2016-05-01

    The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously the animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate, first, the abundance and, second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and by employing a nonlinear inversion involving a scattering model-based kernel.
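    When the inversion kernel is held constant, the linear inversion reduces to a least-squares solve, and its sensitivity can be gauged from the kernel's singular values, as the abstract suggests. A sketch with an assumed two-class, five-frequency kernel (the numbers are illustrative, not measured cross-sections):

```python
import numpy as np

def invert_abundance(sv, kernel):
    # Linear inversion: solve kernel @ n = sv for abundances n, where
    # kernel[i, j] is the modeled backscattering cross-section of
    # scatterer class j at frequency i (assumed constant).
    n, *_ = np.linalg.lstsq(kernel, sv, rcond=None)
    return n

def kernel_condition(kernel):
    # Ratio of largest to smallest singular value: large values flag
    # the near-singular kernels that destabilize the linear inversion.
    s = np.linalg.svd(kernel, compute_uv=False)
    return s[0] / s[-1]
```

    The nonlinear method in the paper goes further by letting the kernel itself depend on shape and tilt parameters that are fit jointly with the abundances.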

  17. Data consistency-driven scatter kernel optimization for x-ray cone-beam CT

    NASA Astrophysics Data System (ADS)

    Kim, Changhwan; Park, Miran; Sung, Younghun; Lee, Jaehak; Choi, Jiyoung; Cho, Seungryong

    2015-08-01

    Accurate and efficient scatter correction is essential for acquiring high-quality x-ray cone-beam CT (CBCT) images for various applications. This study was conducted to demonstrate the feasibility of using the data consistency condition (DCC) as a criterion for scatter kernel optimization in scatter deconvolution methods in CBCT. Because data consistency in the mid-plane of CBCT is primarily challenged by scatter, we used the DCC to assess the degree of scatter correction and to steer the update in iterative kernel optimization. By means of the parallel-beam DCC via fan-parallel rebinning, we iteratively optimized the scatter kernel parameters, using a particle swarm optimization algorithm for its computational efficiency and excellent convergence. The proposed method was validated by a simulation study using the XCAT numerical phantom and by experimental studies using the ACS head phantom and the pelvic part of the Rando phantom. The results showed that the proposed method can effectively improve the accuracy of deconvolution-based scatter correction. Quantitative assessment of image quality showed that the optimally selected scatter kernel restores image contrast to up to 99.5%, 94.4%, and 84.4% of the scatter-free values, and structural similarity (SSIM) to up to 96.7%, 90.5%, and 87.8%, in the XCAT, ACS head phantom, and pelvis phantom studies, respectively. The proposed method can achieve accurate and efficient scatter correction from a single cone-beam scan without the need for any auxiliary hardware or additional experimentation.
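    A minimal global-best particle swarm optimizer of the kind used for the kernel-parameter search can be sketched as follows. The quadratic test cost stands in for the data-consistency residual, and the swarm parameters are conventional defaults rather than the paper's settings:

```python
import numpy as np

def pso_minimize(cost, bounds, n_particles=20, n_iter=100, rng=None):
    # Global-best PSO: each particle tracks its personal best, and the
    # swarm shares one global best; velocities mix inertia with pulls
    # toward both bests. `cost` maps a parameter vector to a scalar.
    rng = rng or np.random.default_rng(0)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    g = pbest[pbest_cost.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        g = pbest[pbest_cost.argmin()].copy()
    return g, pbest_cost.min()
```

    In the paper's setting the parameter vector would hold the scatter kernel parameters and the cost would measure the DCC violation of the corrected projections.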

  18. Multigroup computation of the temperature-dependent Resonance Scattering Model (RSM) and its implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghrayeb, S. Z.; Ouisloumen, M.; Ougouag, A. M.

    2012-07-01

    A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied.

  19. Investigation of various energy deposition kernel refinements for the convolution/superposition method

    PubMed Central

    Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.

    2013-01-01

    Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). 
For our spatially variant polyenergetic kernels, we found that depth was the most dominant factor affecting the pattern of energy deposition; however, the effects of field size and off-axis distance were not negligible. For the material-specific kernels, we found that as the density of the material increased, more energy was deposited laterally by charged particles, as opposed to in the forward direction. Thus, density scaling of water kernels becomes a worse approximation as the density and the effective atomic number of the material differ more from water. Implementation of spatially variant, polyenergetic kernels increased the percent depth dose value at 25 cm depth by 2.1%–5.8% depending on the field size, while implementation of titanium kernels gave 4.9% higher dose upstream of the metal cavity (i.e., higher backscatter dose) and 8.2% lower dose downstream of the cavity. Conclusions: Of the various kernel refinements investigated, inclusion of depth-dependent and metal-specific kernels into the C/S method has the greatest potential to improve dose calculation accuracy. Implementation of spatially variant polyenergetic kernels resulted in a harder depth dose curve and thus has the potential to affect beam modeling parameters obtained in the commissioning process. For metal implants, the C/S algorithms generally underestimate the dose upstream and overestimate the dose downstream of the implant. Implementation of a metal-specific kernel mitigated both of these errors. PMID:24320507
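
    The difference between a single polyenergetic kernel and spatially variant kernels can be illustrated with a toy 1D convolution per depth slab. This sketch is not the collapsed-cone algorithm itself: the Gaussian kernels, the exponential TERMA, and the depth-broadening rate are all hypothetical, chosen only to show how a depth-dependent kernel changes the superposition.

```python
import numpy as np

def gaussian_kernel(x, sigma):
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()                    # normalised so energy is conserved

# Toy phantom: a pencil beam along the central axis, TERMA decaying with depth.
nx, nz = 64, 32
x = np.arange(nx) - nx // 2
terma = np.zeros((nz, nx))
terma[:, nx // 2] = np.exp(-0.05 * np.arange(nz))

# (1) Conventional approach: one polyenergetic kernel at every depth.
k0 = gaussian_kernel(x, sigma=2.0)
dose_single = np.array([np.convolve(row, k0, mode="same") for row in terma])

# (2) Spatially variant kernels: the lateral spread (hypothetically) grows
# with depth, mimicking spectral change and scatter build-up.
dose_variant = np.array([
    np.convolve(row, gaussian_kernel(x, sigma=2.0 + 0.05 * z), mode="same")
    for z, row in enumerate(terma)
])
```

    Both variants deposit the same total energy per slab, but the depth-dependent kernels broaden the profile and lower the peak at depth, which is the qualitative effect the study quantifies.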

  20. SU-E-T-510: Calculation of High Resolution and Material-Specific Photon Energy Deposition Kernels.

    PubMed

    Huang, J; Childress, N; Kry, S

    2012-06-01

    To calculate photon energy deposition kernels (EDKs) used for convolution/superposition dose calculation at a higher resolution than the original Mackie et al. 1988 kernels, and to calculate material-specific kernels that describe how energy is transported and deposited by secondary particles when the incident photon interacts in a material other than water. The high-resolution EDKs for various incident photon energies were generated using the EGSnrc user code EDKnrc, which forces incident photons to interact at the center of a 60 cm radius sphere of water. The simulation geometry is essentially the same as in the original Mackie calculation but with a greater number of scoring voxels (48 radial, 144 angular bins). For the material-specific EDKs, incident photons were forced to interact at the center of a 1 mm radius sphere of material (lung, cortical bone, silver, or titanium) surrounded by a 60 cm radius water sphere, using the original scoring voxel geometry implemented by Mackie et al. 1988 (24 radial, 48 angular bins). Our Monte Carlo-calculated high-resolution EDKs showed excellent agreement with the Mackie kernels, with our kernels providing more information about energy deposition close to the interaction site. Furthermore, our EDKs resulted in smoother dose deposition functions due to the finer resolution and greater number of simulation histories. The material-specific EDK results show that the angular distribution of energy deposition is different for incident photons interacting in different materials. Calculated from the angular dose distribution for 300 keV incident photons, the expected polar angle for dose deposition, ⟨θ⟩, is 28.6° for water, 33.3° for lung, 36.0° for cortical bone, 44.6° for titanium, and 58.1° for silver, showing a dependence on the material in which the primary photon interacts. 
These high resolution and material-specific EDKs have implications for convolution/superposition dose calculations in heterogeneous patient geometries, especially at material interfaces. © 2012 American Association of Physicists in Medicine.
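
    The expected polar angle quoted above is, in effect, a solid-angle-weighted mean of the tallied angular dose distribution. A minimal sketch, assuming a hypothetical forward-peaked D(θ) in place of a tallied EDK:

```python
import numpy as np

def trapezoid(y, x):
    # simple trapezoidal rule (avoids depending on a particular NumPy version)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def expected_polar_angle(theta, dose):
    # <theta> = int theta D(theta) sin(theta) dtheta / int D(theta) sin(theta) dtheta,
    # where sin(theta) is the solid-angle weight of each polar bin.
    w = dose * np.sin(theta)
    return trapezoid(theta * w, theta) / trapezoid(w, theta)

theta = np.linspace(0.0, np.pi, 2001)
# Hypothetical forward-peaked angular dose distribution standing in for a
# tallied EDK; the material dependence enters only through the shape of D(theta).
dose = np.exp(-theta / 0.5)
mean_angle_deg = np.degrees(expected_polar_angle(theta, dose))
```

    A more forward-peaked D(θ) (as for low-Z materials) drives ⟨θ⟩ down; a laterally spread distribution (as reported for silver) drives it up.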

  1. Non-parametric wall model and methods of identifying boundary conditions for moments in gas flow equations

    NASA Astrophysics Data System (ADS)

    Liao, Meng; To, Quy-Dong; Léonard, Céline; Monchiet, Vincent

    2018-03-01

    In this paper, we use the molecular dynamics simulation method to study gas-wall boundary conditions. Discrete scattering information of gas molecules at the wall surface is obtained from collision simulations. The collision data can be used to identify the accommodation coefficients for parametric wall models such as the Maxwell and Cercignani-Lampis scattering kernels. Since these scattering kernels are built from a limited number of accommodation coefficients and assume predefined distributions, we adopt non-parametric statistical methods to construct the kernel and overcome these limitations. Unlike parametric kernels, non-parametric kernels require no parameters (i.e. accommodation coefficients) and no predefined distribution. We also propose approaches to derive directly the Navier friction and Kapitza thermal resistance coefficients, as well as other interface coefficients associated with moment equations, from the non-parametric kernels. The methods are applied successfully to systems composed of CH4 or CO2 and graphite, which are of interest to the petroleum industry.
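
    The non-parametric construction can be sketched as a conditional density estimate over collision pairs. In this illustrative example the "collision data" are synthetic (a hypothetical partially accommodating reflection rule, not MD output), and the Gaussian window bandwidth `h` is an assumed value:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "collision data" standing in for MD output: incident tangential
# velocity v_in and reflected velocity v_out, in thermal velocity units
# (hypothetical partially accommodating rule with accommodation alpha).
alpha, n = 0.8, 5000
v_in = rng.normal(0.0, 1.0, n)
v_out = (1.0 - alpha) * v_in + rng.normal(0.0, np.sqrt(alpha), n)

def scattering_kernel(vi, vo_grid, v_in_s, v_out_s, h=0.15):
    # Non-parametric estimate of B(v_out | v_in = vi): a Gaussian-window
    # conditional density built directly from collision pairs, with no
    # accommodation coefficient and no predefined distribution.
    w = np.exp(-0.5 * ((v_in_s - vi) / h) ** 2)   # weight collisions near vi
    dens = np.array([np.sum(w * np.exp(-0.5 * ((v_out_s - vo) / h) ** 2))
                     for vo in vo_grid])
    dv = vo_grid[1] - vo_grid[0]
    return dens / (dens.sum() * dv)               # normalise to unit area

vo_grid = np.linspace(-4.0, 4.0, 401)
b = scattering_kernel(1.0, vo_grid, v_in, v_out)
cond_mean = float(np.sum(vo_grid * b) * (vo_grid[1] - vo_grid[0]))
```

    With real MD collision data, the same estimate yields the kernel without assuming any functional form; slip and jump coefficients can then be computed as moments of this kernel.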

  2. The spatial sensitivity of Sp converted waves-kernels and their applications

    NASA Astrophysics Data System (ADS)

    Mancinelli, N. J.; Fischer, K. M.

    2017-12-01

    We have developed a framework for improved imaging of strong lateral variations in crust and upper mantle seismic discontinuity structure using teleseismic S-to-P (Sp) scattered waves. In our framework, we rapidly compute scattered-wave sensitivities to velocity perturbations in a one-dimensional background model using ray-theoretical methods to account for timing, scattering, and geometrical spreading effects. The kernels accurately describe the amplitude and phase information of a scattered waveform, which we confirm by benchmarking against kernels derived from numerical solutions of the wave equation. The kernels demonstrate that the amplitude of an Sp converted wave at a given time is sensitive to structure along a quasi-hyperbolic curve, such that structure far from the direct ray path can influence the measurements. We use synthetic datasets to explore two potential applications of the scattered-wave sensitivity kernels. First, we back-project scattered energy to its origin using the kernel adjoint operator. This approach successfully images mantle interfaces at depths of 120-180 km with up to 20 km of vertical relief over lateral distances of 100 km (i.e., undulations with a maximum 20% grade) when station spacing is 10 km. Adjacent measurements sum coherently at nodes where gradients in seismic properties occur, and destructively interfere at nodes lacking gradients. In cases where the station spacing is greater than 10 km, the destructive interference can be incomplete, and smearing along the isochrons can occur. We demonstrate, however, that model smoothing can dampen these artifacts. This method is relatively fast and accurately retrieves the positions of the interfaces, but it generally does not retrieve the strength of the velocity perturbations. Therefore, in our second approach, we invert directly for velocity perturbations from our reference model using an iterative conjugate-directions scheme.

  3. SU-E-J-135: Feasibility of Using Quantitative Cone Beam CT for Proton Adaptive Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jingqian, W; Wang, Q; Zhang, X

    2015-06-15

    Purpose: To investigate the feasibility of using scatter-corrected cone beam CT (CBCT) for proton adaptive planning. Methods: A phantom study was used to evaluate the CT number difference between the planning CT (pCT), quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units using the adaptive scatter kernel superposition (ASKS) technique, and raw CBCT (rCBCT). After confirming the CT number accuracy, prostate patients, each with a pCT and several sets of weekly CBCT images, were investigated for this study. Spot-scanning proton treatment plans were independently generated on pCT, qCBCT and rCBCT. The treatment plans were then recalculated on all images. Dose-volume histogram (DVH) parameters and gamma analysis were used to compare dose distributions. Results: The phantom study suggested that Hounsfield unit accuracy for different materials is within 20 HU for qCBCT and over 250 HU for rCBCT. For prostate patients, proton dose could be calculated accurately on qCBCT but not on rCBCT. When the original plan was recalculated on qCBCT, tumor coverage was maintained when the anatomy was consistent with the pCT. However, large dose variations were observed when the patient anatomy changed. Adaptive planning using qCBCT was able to recover tumor coverage and reduce dose to normal tissue. Conclusion: It is feasible to use quantitative CBCT (qCBCT) with scatter correction and calibrated Hounsfield units for proton dose calculation and adaptive planning in proton therapy. Partly supported by Varian Medical Systems.
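
    The gamma analysis used above to compare dose distributions can be sketched in 1D. This is a minimal global gamma implementation with hypothetical 3%/3 mm criteria and toy depth-dose curves, not the clinical software's calculation:

```python
import numpy as np

def gamma_index_1d(x, dose_ref, dose_eval, dose_tol=0.03, dist_tol=3.0):
    # Global gamma: for each reference point, minimise the combined
    # dose-difference / distance-to-agreement metric over all evaluated points.
    d_max = dose_ref.max()
    gamma = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - di) / (dose_tol * d_max)   # normalised dose difference
        dx = (x - xi) / dist_tol                     # normalised distance (mm)
        gamma[i] = np.sqrt(np.min(dd ** 2 + dx ** 2))
    return gamma

x = np.linspace(0.0, 100.0, 201)        # hypothetical depth axis, mm
dose_ref = np.exp(-0.010 * x)           # toy depth-dose curves
dose_eval = np.exp(-0.0102 * x)         # slightly different attenuation
g = gamma_index_1d(x, dose_ref, dose_eval)
pass_rate = float(np.mean(g <= 1.0))
```

    Points with gamma ≤ 1 pass the combined criterion; the pass rate over the distribution is the usual summary statistic.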

  4. Direct Measurement of Wave Kernels in Time-Distance Helioseismology

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.

    2006-01-01

    Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel contains features similar to those of a theoretical damping kernel but not of a source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.

  5. Boundary conditions for gas flow problems from anisotropic scattering kernels

    NASA Astrophysics Data System (ADS)

    To, Quy-Dong; Vu, Van-Huyen; Lauriat, Guy; Léonard, Céline

    2015-10-01

    The paper presents an interface model for gas flowing through a channel constituted of anisotropic wall surfaces. Using anisotropic scattering kernels and the Chapman-Enskog phase density, the boundary conditions (BCs) for velocity and temperature, including the velocity slip and temperature jump discontinuities at the wall, are obtained. Two scattering kernels, the Dadzie-Méolans (DM) kernel and the generalized anisotropic Cercignani-Lampis (ACL) kernel, are examined in the present paper, yielding simple BCs at the wall-fluid interface. With these two kernels, we rigorously recover the analytical expression for orientation-dependent slip shown in our previous works [Pham et al., Phys. Rev. E 86, 051201 (2012) and To et al., J. Heat Transfer 137, 091002 (2015)], which is in good agreement with molecular dynamics simulation results. More importantly, our models include both the thermal transpiration effect and new equations for the temperature jump. While the same expression, depending on the two tangential accommodation coefficients, is obtained for the slip velocity, the DM and ACL temperature equations are significantly different. The derived BC equations associated with these two kernels are of interest for gas simulations since they are able to capture the direction-dependent slip behavior of anisotropic interfaces.

  6. Milne problem for non-absorbing medium with extremely anisotropic scattering kernel in the case of specular and diffuse reflecting boundaries

    NASA Astrophysics Data System (ADS)

    Güleçyüz, M. Ç.; Şenyiğit, M.; Ersoy, A.

    2018-01-01

    The Milne problem is studied in one-speed neutron transport theory using the linearly anisotropic scattering kernel, which combines forward and backward scattering (extremely anisotropic scattering), for a non-absorbing medium with specular and diffuse reflection boundary conditions. To calculate the extrapolated endpoint for the Milne problem, the Legendre polynomial approximation (PN method) is applied, and numerical results are tabulated for selected cases as a function of different degrees of anisotropic scattering. Finally, some results are discussed and compared with existing results in the literature.

  7. Analytical calculation of proton linear energy transfer in voxelized geometries including secondary protons

    NASA Astrophysics Data System (ADS)

    Sanchez-Parcerisa, D.; Cortés-Giraldo, M. A.; Dolney, D.; Kondrla, M.; Fager, M.; Carabe, A.

    2016-02-01

    In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm-1) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.

  8. Analytical calculation of proton linear energy transfer in voxelized geometries including secondary protons.

    PubMed

    Sanchez-Parcerisa, D; Cortés-Giraldo, M A; Dolney, D; Kondrla, M; Fager, M; Carabe, A

    2016-02-21

    In order to integrate radiobiological modelling with clinical treatment planning for proton radiotherapy, we extended our in-house treatment planning system FoCa with a 3D analytical algorithm to calculate linear energy transfer (LET) in voxelized patient geometries. Both active scanning and passive scattering delivery modalities are supported. The analytical calculation is much faster than the Monte-Carlo (MC) method and it can be implemented in the inverse treatment planning optimization suite, allowing us to create LET-based objectives in inverse planning. The LET was calculated by combining a 1D analytical approach including a novel correction for secondary protons with pencil-beam type LET-kernels. Then, these LET kernels were inserted into the proton-convolution-superposition algorithm in FoCa. The analytical LET distributions were benchmarked against MC simulations carried out in Geant4. A cohort of simple phantom and patient plans representing a wide variety of sites (prostate, lung, brain, head and neck) was selected. The calculation algorithm was able to reproduce the MC LET to within 6% (1 standard deviation) for low-LET areas (under 1.7 keV μm(-1)) and within 22% for the high-LET areas above that threshold. The dose and LET distributions can be further extended, using radiobiological models, to include radiobiological effectiveness (RBE) calculations in the treatment planning system. This implementation also allows for radiobiological optimization of treatments by including RBE-weighted dose constraints in the inverse treatment planning process.

  9. Wilson loops and QCD/string scattering amplitudes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makeenko, Yuri; Olesen, Poul

    2009-07-15

    We generalize modern ideas about the duality between Wilson loops and scattering amplitudes in N=4 super Yang-Mills theory to large-N QCD by deriving a general relation between QCD meson scattering amplitudes and Wilson loops. We then investigate properties of the open-string disk amplitude integrated over reparametrizations. When the Wilson loop is approximated by the area behavior, we find that the QCD scattering amplitude is a convolution of the standard Koba-Nielsen integrand and a kernel. As usual, poles originate from the first factor, whereas no (momentum-dependent) poles can arise from the kernel. We show that the kernel becomes a constant when the number of external particles becomes large. The usual Veneziano amplitude then emerges in the kinematical regime where the Wilson loop can be reliably approximated by the area behavior. In this case, we obtain a direct duality between Wilson loops and scattering amplitudes when spatial variables and momenta are interchanged, in analogy with the N=4 super Yang-Mills case.

  10. A simple and fast method for computing the relativistic Compton Scattering Kernel for radiative transfer

    NASA Technical Reports Server (NTRS)

    Kershaw, David S.; Prasad, Manoj K.; Beason, J. Douglas

    1986-01-01

    The Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution is analytically reduced to a single integral, which can then be rapidly evaluated in a variety of ways. A particularly fast method for numerically computing this single integral is presented. This is, to the authors' knowledge, the first correct computation of the Compton scattering kernel.
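
    The building block of such a kernel is the Klein-Nishina cross section; the kernel itself then follows from averaging over a relativistic Maxwellian, which the paper reduces to a single quadrature (not reproduced here). A sketch of the standard closed-form total cross section with its Thomson limit:

```python
import numpy as np

R_E = 2.8179403262e-13                  # classical electron radius, cm
SIGMA_T = 8.0 * np.pi * R_E ** 2 / 3.0  # Thomson cross section, cm^2

def sigma_klein_nishina(x):
    # Total Klein-Nishina cross section (cm^2) for a photon of energy
    # x = h*nu / (m_e c^2); standard closed form.
    t = np.log1p(2.0 * x)
    return 2.0 * np.pi * R_E ** 2 * (
        (1.0 + x) / x ** 2 * (2.0 * (1.0 + x) / (1.0 + 2.0 * x) - t / x)
        + t / (2.0 * x)
        - (1.0 + 3.0 * x) / (1.0 + 2.0 * x) ** 2
    )
```

    As x → 0 this recovers the Thomson cross section, and it falls off at high energy; the thermal kernel of the paper additionally requires the angle and energy average over a relativistic Maxwellian electron distribution.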

  11. Propagation and Directional Scattering of Ocean Waves in the Marginal Ice Zone and Neighboring Seas

    DTIC Science & Technology

    2015-09-30

    expected to be the average of the kernel for 10 s and 12 s. This means that we should be able to calculate empirical formulas for the scattering kernel...floe packing. Thus, establish a way to incorporate what has been done by Squire and co-workers into the wave model paradigm (in which the phase of the...cases observed by Kohout et al. (2014) in Antarctica. vii. Validation: We are planning validation tests for the wave-ice scattering/attenuation model by

  12. ENDF/B-THERMOS; 30-group ENDF/B scattering kernels. [Auxiliary program written in FORTRAN IV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCrosson, F.J.; Finch, D.R.

    These data are 30-group THERMOS thermal scattering kernels for P0 to P5 Legendre orders for every temperature of every material from S(alpha,beta) data stored in the ENDF/B library. These scattering kernels were generated using the FLANGE2 computer code. To test the kernels, the integral properties of each set of kernels were determined by a precision integration of the diffusion length equation and compared to experimental measurements of these properties. In general, the agreement was very good. Details of the methods used and results obtained are contained in the reference. The scattering kernels are organized into a two-volume magnetic tape library from which they may be retrieved easily for use in any 30-group THERMOS library. The contents of the tapes are as follows.

    Volume I (material, ZA, temperatures in degrees K):
    - Molecular H2O, 100.0: 296, 350, 400, 450, 500, 600, 800, 1000
    - Molecular D2O, 101.0: 296, 350, 400, 450, 500, 600, 800, 1000
    - Graphite, 6000.0: 296, 400, 500, 600, 700, 800, 1000, 1200, 1600, 2000
    - Polyethylene, 205.0: 296, 350
    - Benzene, 106.0: 296, 350, 400, 450, 500, 600, 800, 1000

    Volume II (material, ZA, temperatures in degrees K):
    - Zr bound in ZrHx, 203.0: 296, 400, 500, 600, 700, 800, 1000, 1200
    - H bound in ZrHx, 230.0: 296, 400, 500, 600, 700, 800, 1000, 1200
    - Beryllium-9, 4009.0: 296, 400, 500, 600, 700, 800, 1000, 1200
    - Beryllium Oxide, 200.0: 296, 400, 500, 600, 700, 800, 1000, 1200
    - Uranium Dioxide, 207.0: 296, 400, 500, 600, 700, 800, 1000, 1200

    Auxiliary program written in FORTRAN IV; the retrieval program requires one tape drive and a small amount of high-speed core.

  13. Transient radiative transfer in a scattering slab considering polarization.

    PubMed

    Yi, Hongliang; Ben, Xun; Tan, Heping

    2013-11-04

    Both transient and polarization characteristics must be considered for a complete and correct description of short-pulse laser transfer in a scattering medium. A Monte Carlo (MC) method combined with a time-shift and superposition principle is developed to simulate transient vector (polarized) radiative transfer in a scattering medium. The transient vector radiative transfer matrix (TVRTM) is defined to describe the transient polarization behavior of a short-pulse laser propagating in the scattering medium. According to the definition of reflectivity, a new criterion for reflection at a Fresnel surface is presented. To improve computational efficiency and accuracy, the time-shift and superposition principle is applied to the MC model for transient vector radiative transfer. The results for transient scalar radiative transfer and steady-state vector radiative transfer are compared with those in the published literature, and excellent agreement is observed, which validates the present model. Finally, transient radiative transfer is simulated considering the polarization effect of a short-pulse laser in a scattering medium, and the distributions of the Stokes vector in angular and temporal space are presented.

  14. Direct Demonstration of the Concept of Unrestricted Effective-Medium Approximation

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Zhanna M.; Zakharova, Nadezhda T.

    2014-01-01

    The modified unrestricted effective-medium refractive index is defined as one that yields accurate values of a representative set of far-field scattering characteristics (including the scattering matrix) for an object made of randomly heterogeneous materials. We validate the concept of the modified unrestricted effective-medium refractive index by comparing numerically exact superposition T-matrix results for a spherical host randomly filled with a large number of identical small inclusions and Lorenz-Mie results for a homogeneous spherical counterpart. A remarkable quantitative agreement between the superposition T-matrix and Lorenz-Mie scattering matrices over the entire range of scattering angles demonstrates unequivocally that the modified unrestricted effective-medium refractive index is a sound (albeit still phenomenological) concept provided that the size parameter of the inclusions is sufficiently small and their number is sufficiently large. Furthermore, it appears that in cases when the concept of the modified unrestricted effective-medium refractive index works, its actual value is close to that predicted by the Maxwell-Garnett mixing rule.

  15. Unified connected theory of few-body reaction mechanisms in N-body scattering theory

    NASA Technical Reports Server (NTRS)

    Polyzou, W. N.; Redish, E. F.

    1978-01-01

    A unified treatment of different reaction mechanisms in nonrelativistic N-body scattering is presented. The theory is based on connected-kernel integral equations that are expected to become compact for reasonable constraints on the potentials. The operators T±^ab(A) are approximate transition operators that describe scattering proceeding through an arbitrary reaction mechanism A. These operators are uniquely determined by a connected-kernel equation and satisfy an optical theorem consistent with the choice of reaction mechanism. Connected-kernel equations relating T±^ab(A) to the full T±^ab allow correction of the approximate solutions for any ignored process to any order. This theory gives a unified treatment of all few-body reaction mechanisms with the same dynamic simplicity as a model calculation, but it can include complicated reaction mechanisms involving overlapping configurations where it is difficult to formulate models.

  16. Reconstruction of transient vibration and sound radiation of an impacted plate using time domain plane wave superposition method

    NASA Astrophysics Data System (ADS)

    Geng, Lin; Zhang, Xiao-Zheng; Bi, Chuan-Xing

    2015-05-01

    The time domain plane wave superposition method is extended to reconstruct the transient pressure field radiated by an impacted plate and the normal acceleration of the plate. In the extended method, the pressure measured on the hologram plane is expressed as a superposition of time convolutions between the time-wavenumber normal acceleration spectrum on a virtual source plane and the time domain propagation kernel relating the pressure on the hologram plane to the normal acceleration spectrum on the virtual source plane. By performing an inverse operation, the normal acceleration spectrum on the virtual source plane can be obtained through an iterative solving process, and then taken as the input to reconstruct the whole pressure field and the normal acceleration of the plate. An experiment with a clamped rectangular steel plate impacted by a steel ball is presented. The experimental results demonstrate that the extended method is effective in visualizing the transient vibration and sound radiation of an impacted plate in both the time and space domains, thus providing important information for an overall understanding of the vibration and sound radiation of the plate.
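
    The forward model (time convolution with a propagation kernel) and the iterative inverse operation can be illustrated with a discrete 1D analogue. Here the kernel taps, the impact-like signal, and the plain Landweber iteration are all hypothetical simplifications of the paper's method:

```python
import numpy as np

nt = 200
h = np.zeros(nt)
h[:4] = [1.0, 0.5, 0.25, 0.1]          # hypothetical discrete propagation kernel

a_true = np.zeros(nt)
a_true[40], a_true[90] = 1.0, -0.5     # impact-like source-plane acceleration

# Forward model: "hologram" pressure = time convolution of the kernel with
# the acceleration, written as a lower-triangular (causal) Toeplitz matrix.
K = np.zeros((nt, nt))
for j in range(nt):
    K[j:, j] = h[: nt - j]
p = K @ a_true

# Inverse operation via Landweber iteration, a <- a + mu * K^T (p - K a):
# a simple stand-in for the paper's iterative solving process.
mu = 1.0 / np.linalg.norm(K, 2) ** 2
a = np.zeros(nt)
for _ in range(2000):
    a += mu * (K.T @ (p - K @ a))
```

    With noise-free data and a well-conditioned kernel the iteration recovers the source signal; in practice the iteration count also acts as a regularizer against measurement noise.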

  17. The spatial sensitivity of Sp converted waves—scattered-wave kernels and their applications to receiver-function migration and inversion

    NASA Astrophysics Data System (ADS)

    Mancinelli, N. J.; Fischer, K. M.

    2018-03-01

    We characterize the spatial sensitivity of Sp converted waves to improve constraints on lateral variations in uppermost-mantle velocity gradients, such as the lithosphere-asthenosphere boundary (LAB) and the mid-lithospheric discontinuities. We use SPECFEM2D to generate 2-D scattering kernels that relate perturbations from an elastic half-space to Sp waveforms. We then show that these kernels can be well approximated using ray theory, and develop an approach to calculating kernels for layered background models. As proof of concept, we show that lateral variations in uppermost-mantle discontinuity structure are retrieved by implementing these scattering kernels in the first iteration of a conjugate-directions inversion algorithm. We evaluate the performance of this technique on synthetic seismograms computed for 2-D models with undulations on the LAB of varying amplitude, wavelength and depth. The technique reliably images the position of discontinuities with dips <35° and horizontal wavelengths >100-200 km. In cases of mild topography on a shallow LAB, the relative brightness of the LAB and Moho converters approximately agrees with the ratio of velocity contrasts across the discontinuities. Amplitude retrieval degrades at deeper depths. For dominant periods of 4 s, the minimum station spacing required to produce unaliased results is 5 km, but the application of a Gaussian filter can improve discontinuity imaging where station spacing is greater.

  18. P- and S-wave Receiver Function Imaging with Scattering Kernels

    NASA Astrophysics Data System (ADS)

    Hansen, S. M.; Schmandt, B.

    2017-12-01

    Full waveform inversion provides a flexible approach to the seismic parameter estimation problem and can account for the full physics of wave propagation using numerical simulations. However, this approach requires significant computational resources due to the demanding nature of solving the forward and adjoint problems. This issue is particularly acute for temporary passive-source seismic experiments (e.g. PASSCAL) that have traditionally relied on teleseismic earthquakes as sources, resulting in a global-scale forward problem. Various approximation strategies have been proposed to reduce the computational burden, such as hybrid methods that embed a heterogeneous regional-scale model in a 1D global model. In this study, we focus specifically on the problem of scattered wave imaging (migration) using both P- and S-wave receiver function data. The proposed method relies on body-wave scattering kernels that are derived from the adjoint data sensitivity kernels typically used for full waveform inversion. The forward problem is approximated using ray theory, yielding a computationally efficient imaging algorithm that can resolve dipping and discontinuous velocity interfaces in 3D. From the imaging perspective, this approach is closely related to elastic reverse time migration. An energy-stable finite-difference method is used to simulate elastic wave propagation in a 2D hypothetical subduction zone model. The resulting synthetic P- and S-wave receiver function datasets are used to validate the imaging method. The kernel images are compared with those generated by the Generalized Radon Transform (GRT) and Common Conversion Point stacking (CCP) methods. These results demonstrate the potential of the kernel imaging approach to constrain lithospheric structure in complex geologic environments with sufficiently dense recordings of teleseismic data. 
This is demonstrated using a receiver function dataset from the Central California Seismic Experiment which shows several dipping interfaces related to the tectonic assembly of this region. Figure 1. Scattering kernel examples for three receiver function phases. A) direct P-to-s (Ps), B) direct S-to-p and C) free-surface PP-to-s (PPs).

  19. Inverse problems and coherence

    NASA Astrophysics Data System (ADS)

    Baltes, H. P.; Ferwerda, H. A.

    1981-03-01

    A summary of current inverse problems of statistical optics is presented together with a short guide to the pertinent review-type literature. The retrieval of structural information from the far-zone degree of coherence and the average intensity distribution of radiation scattered by a superposition of random and periodic scatterers is discussed.

  20. A simple method for computing the relativistic Compton scattering kernel for radiative transfer

    NASA Technical Reports Server (NTRS)

    Prasad, M. K.; Kershaw, D. S.; Beason, J. D.

    1986-01-01

    Correct computation of the Compton scattering kernel (CSK), defined to be the Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution, is reported. The CSK is analytically reduced to a single integral, which can then be rapidly evaluated using a power series expansion, asymptotic series, and rational approximation for sigma(s). The CSK calculation has application to production codes that aim at understanding certain astrophysical, laser fusion, and nuclear weapons effects phenomena.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kieselmann, J; Bartzsch, S; Oelfke, U

    Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging due to the arising dose gradients and therapeutically important dose ranges. Monte Carlo (MC) simulations, often used as the gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel-based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While the peak doses of all dose calculation methods agreed within 4%, the proposed approach surpassed the simple convolution algorithm in scatter-dose accuracy by a factor of up to 3. In a treatment geometry similar to possible future clinical situations, differences between Monte Carlo and the differential geometry algorithm were less than 3%. At the same time the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve the accuracy of CA-based dose calculation, especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20×20 mm² field on a 3.4 GHz processor with 8 GB of RAM remained below 15 minutes. Parallelisation and optimisation of the algorithm could lead to further significant reductions in calculation time.
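
    The kernel superposition step common to these algorithms can be sketched as a 2D FFT convolution of an interaction map with a dose-spread kernel. This is a hedged toy illustration with a Gaussian kernel and a homogeneous medium, not the differential-geometry warping described in the abstract; all names and parameters below are our own.

```python
import numpy as np

def kernel_superposition(terma, kernel):
    """Conventional convolution dose estimate: dose = TERMA (*) kernel.
    Illustration only; inhomogeneity-aware algorithms adapt or warp the
    kernel near tissue interfaces, which this homogeneous sketch ignores."""
    # Zero-pad to the full linear-convolution size to avoid FFT wrap-around.
    s = [t + k - 1 for t, k in zip(terma.shape, kernel.shape)]
    dose = np.fft.irfft2(np.fft.rfft2(terma, s) * np.fft.rfft2(kernel, s), s)
    # Crop back to the original grid ("same"-size convolution).
    i0, j0 = kernel.shape[0] // 2, kernel.shape[1] // 2
    return dose[i0:i0 + terma.shape[0], j0:j0 + terma.shape[1]]

terma = np.zeros((64, 64)); terma[32, 32] = 1.0   # single point interaction
y, x = np.mgrid[-7:8, -7:8]
kernel = np.exp(-(x**2 + y**2) / 8.0)             # toy Gaussian dose-spread kernel
kernel /= kernel.sum()                            # normalise so energy is conserved
dose = kernel_superposition(terma, kernel)
```

    Because the kernel is normalised, the deposited dose integrates to the deposited energy and peaks at the interaction site.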

  2. Light Scattering by Wavelength-Sized Particles "Dusted" with Subwavelength-Sized Grains

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.

    2011-01-01

    The numerically exact superposition T-matrix method is used to compute the scattering cross sections and the Stokes scattering matrix for polydisperse spherical particles covered with a large number of much smaller grains. We show that the optical effect of the presence of microscopic dust on the surfaces of wavelength-sized, weakly absorbing particles is much less significant than that of a major overall asphericity of the particle shape.

  3. Multi-Group Formulation of the Temperature-Dependent Resonance Scattering Model and its Impact on Reactor Core Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghrayeb, Shadi Z.; Ougouag, Abderrafi M.; Ouisloumen, Mohamed

    2014-01-01

    A multi-group formulation for the exact neutron elastic scattering kernel is developed. It incorporates the neutron up-scattering effects stemming from the thermal motion of lattice atoms and accounts for them within the resulting effective nuclear cross-section data. The effects pertain essentially to resonant scattering off heavy nuclei. The formulation, implemented into a standalone code, produces effective nuclear scattering data that are then supplied directly to the DRAGON lattice physics code, where the effects on Doppler reactivity and neutron flux are demonstrated. Correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. The results show an increase in the values of the Doppler temperature feedback coefficients of up to -10% for UOX and MOX LWR fuels compared to the corresponding values derived using the traditional asymptotic elastic scattering kernel. This paper also summarizes the results obtained on this topic to date.

  4. Investigation on the properties of the formation and coherence of intense fringe near nonlinear medium slab

    NASA Astrophysics Data System (ADS)

    Hu, Yonghua; Qiu, Yaqiong; Li, Yang; Shi, Lin

    2018-03-01

    A near-medium intense (NMI) fringe is a type of intense fringe that can form near a Kerr medium during high-power laser beam propagation. The formation properties of NMI fringes and the relations between NMI fringes and the relevant parameters are systematically investigated. It is found that the co-existence of two wire-like phase-type scatterers in the incident beam spot is mainly responsible for the high intensity of the NMI fringe. The formation process is analyzed from the viewpoint of coherent superposition, and it is demonstrated that the NMI fringe is formed by the coherent superposition of localized bright fringes in the exit field of the Kerr medium slab. The fluctuations of NMI fringe properties with beam wavelength, scatterer spacing and object distance are studied, the coherence of the NMI fringe is revealed, and the approximate periodicity with which remarkable NMI fringes appear as these parameters vary is obtained. In particular, the intensity of the NMI fringe is found to be very sensitive to the scatterer spacing. In addition, the laws governing how NMI fringe properties are changed by the modulation properties of the scatterers and the medium thickness are demonstrated.

  5. Possibility to Probe Negative Values of a Wigner Function in Scattering of a Coherent Superposition of Electronic Wave Packets by Atoms.

    PubMed

    Karlovets, Dmitry V; Serbo, Valeriy G

    2017-10-27

    Within a plane-wave approximation in scattering, an incoming wave packet's Wigner function stays positive everywhere, which obscures such purely quantum phenomena as nonlocality and entanglement. With the advent of the electron microscopes with subnanometer-sized beams, one can enter a genuinely quantum regime where the latter effects become only moderately attenuated. Here we show how to probe negative values of the Wigner function in scattering of a coherent superposition of two Gaussian packets with a nonvanishing impact parameter between them (a Schrödinger's cat state) by atomic targets. For hydrogen in the ground 1s state, a small parameter of the problem, a ratio a/σ_{⊥} of the Bohr radius a to the beam width σ_{⊥}, is no longer vanishing. We predict an azimuthal asymmetry of the scattered electrons, which is found to be up to 10%, and argue that it can be reliably detected. The production of beams with the not-everywhere-positive Wigner functions and the probing of such quantum effects can open new perspectives for noninvasive electron microscopy, quantum tomography, particle physics, and so forth.

  6. JaSTA-2: Second version of the Java Superposition T-matrix Application

    NASA Astrophysics Data System (ADS)

    Halder, Prithish; Das, Himadri Sekhar

    2017-12-01

    In this article, we announce the development of a new version of the Java Superposition T-matrix App (JaSTA-2) to study the light scattering properties of porous aggregate particles. It has been developed using Netbeans 7.1.2, a Java integrated development environment (IDE). JaSTA uses the double-precision superposition T-matrix codes for multi-sphere clusters in random orientation developed by Mackowski and Mishchenko (1996). The new version offers two options for the input parameters: (i) single wavelength and (ii) multiple wavelengths. The first option (which retains the applicability of the older version of JaSTA) calculates the light scattering properties of aggregates of spheres for a single wavelength at a given instant of time, whereas the second option can execute the code for multiple wavelengths in a single run. JaSTA-2 provides convenient and quicker data analysis, which can be used in diverse fields like planetary science, atmospheric physics, nanoscience, etc. This version of the software is developed for the Linux platform only, and it can be operated over all the cores of a processor using the multi-threading option.

  7. Active impulsive noise control using maximum correntropy with adaptive kernel size

    NASA Astrophysics Data System (ADS)

    Lu, Lu; Zhao, Haiquan

    2017-03-01

    Active noise control (ANC) based on the principle of superposition is an attractive method to attenuate noise signals. However, impulsive noise in ANC systems degrades the performance of the controller. In this paper, a filtered-x recursive maximum correntropy (FxRMC) algorithm is proposed based on the maximum correntropy criterion (MCC) to reduce the effect of outliers. The proposed FxRMC algorithm does not require any a priori information about the noise characteristics and outperforms the filtered-x least mean square (FxLMS) algorithm for impulsive noise. Meanwhile, in order to adjust the kernel size of the FxRMC algorithm online, a recursive approach is proposed that takes into account past estimates of the error signal over a sliding window. Simulation and experimental results in the context of active impulsive noise control demonstrate that the proposed algorithms achieve much better performance than existing algorithms in various noise environments.
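
    The core idea of an MCC-based adaptive update (the Gaussian correntropy kernel suppressing large-error outliers that would derail an LMS update) can be sketched as a plain system-identification filter. This is a hedged toy, not the filtered-x variant with a secondary path and not the recursive kernel-size adaptation of the abstract; all names, the fixed kernel size, and the plant coefficients are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
plant = np.array([0.5, -0.3, 0.2])        # hypothetical unknown system
N, L, mu, sigma = 4000, 3, 0.05, 1.0      # samples, taps, step size, kernel size
x = rng.standard_normal(N)                # white excitation
w = np.zeros(L)                           # adaptive weights
for n in range(L, N):
    u = x[n - L + 1:n + 1][::-1]          # regressor (most recent sample first)
    d = plant @ u
    if rng.random() < 0.01:               # occasional large impulsive outlier
        d += 50.0 * rng.standard_normal()
    e = d - w @ u
    # MCC update: the factor exp(-e^2 / 2*sigma^2) goes to zero for large
    # errors, suppressing outliers, unlike plain LMS whose update grows
    # linearly with the error.
    w += mu * np.exp(-e**2 / (2 * sigma**2)) * e * u
```

    After convergence the weights track the plant despite the 1% outlier contamination, which would leave a plain LMS filter with a much larger residual misadjustment.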

  8. New KF-PP-SVM classification method for EEG in brain-computer interfaces.

    PubMed

    Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian

    2014-01-01

    Classification methods are a crucial direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel Fisher, posterior probability, and support vector machine) classification method is developed. Its detailed process entails the use of common spatial patterns to obtain features, from which the within-class scatter is calculated. The scatter is then added into the kernel function of a radial basis function to construct a new kernel, which is integrated into the SVM to obtain a new classification model. Finally, the output of the SVM is calculated based on posterior probability and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected in the laboratory are processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results show that the overall average improvements arising from the use of the KF-PP-SVM scheme over the KF-SVM, PP-SVM and SVM schemes are 2.49%, 5.83% and 6.49%, respectively.

  9. Sub-second pencil beam dose calculation on GPU for adaptive proton therapy.

    PubMed

    da Silva, Joakim; Ansorge, Richard; Jena, Rajesh

    2015-06-21

    Although proton therapy delivered using scanned pencil beams has the potential to produce better dose conformity than conventional radiotherapy, the created dose distributions are more sensitive to anatomical changes and patient motion. Therefore, the introduction of adaptive treatment techniques where the dose can be monitored as it is being delivered is highly desirable. We present a GPU-based dose calculation engine relying on the widely used pencil beam algorithm, developed for on-line dose calculation. The calculation engine was implemented from scratch, with each step of the algorithm parallelized and adapted to run efficiently on the GPU architecture. To ensure fast calculation, it employs several application-specific modifications and simplifications, and a fast scatter-based implementation of the computationally expensive kernel superposition step. The calculation time for a skull base treatment plan using two beam directions was 0.22 s on an Nvidia Tesla K40 GPU, whereas a test case of a cubic target in water from the literature took 0.14 s to calculate. The accuracy of the patient dose distributions was assessed by calculating the γ-index with respect to a gold standard Monte Carlo simulation. The passing rates were 99.2% and 96.7%, respectively, for the 3%/3 mm and 2%/2 mm criteria, matching those produced by a clinical treatment planning system.
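
    The γ-index evaluation used above to compare dose distributions can be sketched in 1D. This is a hedged toy (global normalisation, no interpolation, invented Gaussian profiles); clinical tools interpolate between grid points and work in 3D.

```python
import numpy as np

def gamma_index(ref, eval_, x, dose_tol=0.03, dist_tol=3.0):
    """1D global gamma index: for each reference point, the minimum over
    evaluated points of sqrt((dose diff / dose tol)^2 + (distance / dist tol)^2).
    A point passes when gamma <= 1."""
    dmax = ref.max()                          # global normalisation dose
    g = np.empty_like(ref)
    for i, (xi, di) in enumerate(zip(x, ref)):
        dd = (eval_ - di) / (dose_tol * dmax) # dose differences, in tolerances
        dr = (x - xi) / dist_tol              # distances, in tolerances
        g[i] = np.sqrt(dd**2 + dr**2).min()
    return g

x = np.linspace(0, 100, 201)                  # positions in mm
ref = np.exp(-((x - 50) / 20) ** 2)           # toy reference dose profile
ev = np.exp(-((x - 50.5) / 20) ** 2)          # evaluation profile, shifted 0.5 mm
passing = (gamma_index(ref, ev, x) <= 1).mean() * 100
```

    A 0.5 mm shift is well inside the 3%/3 mm tolerance, so every point passes; a shift beyond 3 mm would start to fail points on the profile's steep flanks.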

  10. SU-E-T-329: Dosimetric Impact of Implementing Metal Artifact Reduction Methods and Metal Energy Deposition Kernels for Photon Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, J; Followill, D; Howell, R

    2015-06-15

    Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: the use of CT metal artifact reduction methods and the implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction, dose calculations were performed using baseline, uncorrected images and three metal artifact reduction methods: Philips O-MAR, GE's monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed-cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than for Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water-kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals. Thus, these two strategies do have the potential to improve accuracy for patients with metal implants in certain scenarios. This work was supported by Public Health Service grants CA 180803 and CA 10953 awarded by the National Cancer Institute, United States Department of Health and Human Services, and in part by Mobius Medical Systems.

  11. Effects of absorption on multiple scattering by random particulate media: exact results.

    PubMed

    Mishchenko, Michael I; Liu, Li; Hovenier, Joop W

    2007-10-01

    We employ the numerically exact superposition T-matrix method to perform extensive computations of electromagnetic scattering by a volume of discrete random medium densely filled with increasingly absorbing as well as non-absorbing particles. Our numerical data demonstrate that increasing absorption diminishes and nearly extinguishes certain optical effects such as depolarization and coherent backscattering and increases the angular width of coherent backscattering patterns. This result corroborates the multiple-scattering origin of such effects and further demonstrates the heuristic value of the concept of multiple scattering even in application to densely packed particulate media.

  12. UAV remote sensing atmospheric degradation image restoration based on multiple scattering APSF estimation

    NASA Astrophysics Data System (ADS)

    Qiu, Xiang; Dai, Ming; Yin, Chuan-li

    2017-09-01

    Unmanned aerial vehicle (UAV) remote imaging is affected by bad weather, and the obtained images suffer from low contrast, complex texture and blurring. In this paper, we propose a blind deconvolution model based on multiple-scattering atmosphere point spread function (APSF) estimation to recover the remote sensing image. According to Narasimhan's analytical theory, a new multiple scattering restoration model is established based on the improved dichromatic model. Then, using the L0-norm sparse priors of the gradient and dark channel to estimate the APSF blur kernel, the fast Fourier transform is used to recover the original clear image by Wiener filtering. Compared with other state-of-the-art methods, the proposed method can correctly estimate the blur kernel, effectively remove atmospheric degradation, preserve image detail and improve the quality evaluation indexes.
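
    The Wiener-filtering recovery step can be sketched in the frequency domain. This is a hedged illustration with a toy box blur standing in for the estimated APSF and an assumed SNR constant; it is not the paper's full blind-deconvolution pipeline.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Frequency-domain Wiener filter: X = conj(H) * Y / (|H|^2 + 1/SNR).
    Sketch only; the cited method first estimates the APSF blur kernel
    from L0 sparse priors before this restoration step."""
    H = np.fft.fft2(psf, blurred.shape)       # blur transfer function
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(X))

img = np.zeros((32, 32)); img[16, 16] = 1.0   # toy sharp scene: one bright pixel
psf = np.ones((3, 3)) / 9.0                   # toy 3x3 box blur (stand-in for APSF)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, img.shape)))
restored = wiener_deconvolve(blurred, psf, snr=1e6)
```

    With a high assumed SNR the filter approaches a direct inverse and the point source is recovered almost exactly; lowering `snr` trades sharpness for noise suppression.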

  13. Electromagnetic Scattering by Spheroidal Volumes of Discrete Random Medium

    NASA Technical Reports Server (NTRS)

    Dlugach, Janna M.; Mishchenko, Michael I.

    2017-01-01

    We use the superposition T-matrix method to compare the far-field scattering matrices generated by spheroidal and spherical volumes of discrete random medium having the same volume and populated by identical spherical particles. Our results fully confirm the robustness of the previously identified coherent and diffuse scattering regimes and associated optical phenomena exhibited by spherical particulate volumes and support their explanation in terms of the interference phenomenon coupled with the order-of-scattering expansion of the far-field Foldy equations. We also show that increasing non-sphericity of particulate volumes causes discernible (albeit less pronounced) optical effects in forward and backscattering directions and explain them in terms of the same interference/multiple-scattering phenomenon.

  14. TH-C-BRD-04: Beam Modeling and Validation with Triple and Double Gaussian Dose Kernel for Spot Scanning Proton Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, S; Takayanagi, T; Fujii, Y

    2014-06-15

    Purpose: To present the validity of our beam modeling with double and triple Gaussian dose kernels for spot scanning proton beams at the Nagoya Proton Therapy Center. This study investigates the conformance between measurements and calculation results in absolute dose for the two types of beam kernel. Methods: A dose kernel is one of the important input data required by the treatment planning software. The dose kernel is the 3D dose distribution of an infinitesimal pencil beam of protons in water and consists of integral depth doses and lateral distributions. We adopted double and triple Gaussian models as the lateral distribution in order to account for the large-angle scattering due to nuclear reactions, by fitting the simulated in-water lateral dose profile of a needle proton beam at various depths. The fitted parameters were interpolated as a function of depth in water and stored in a separate look-up table for each beam energy. The process of beam modeling is based on the method of MDACC [X.R. Zhu 2013]. Results: Comparison between the absolute doses calculated by the double Gaussian model and those measured at the center of the SOBP shows that the difference increases up to 3.5% in the high-energy region, because large-angle scattering due to nuclear reactions is not sufficiently considered at intermediate depths in the double Gaussian model. When the triple Gaussian dose kernel is employed, the measured absolute dose at the center of the SOBP agrees with calculation within ±1% regardless of the SOBP width and maximum range. Conclusion: We have demonstrated beam modeling of dose distributions employing double and triple Gaussian dose kernels. A treatment planning system with the triple Gaussian dose kernel has been successfully verified and applied to patient treatment with the spot scanning technique at the Nagoya Proton Therapy Center.

  15. MCViNE- An object oriented Monte Carlo neutron ray tracing simulation package

    DOE PAGES

    Lin, J. Y. Y.; Smith, Hillary L.; Granroth, Garrett E.; ...

    2015-11-28

    MCViNE (Monte-Carlo VIrtual Neutron Experiment) is an open-source Monte Carlo (MC) neutron ray-tracing software package for performing computer modeling and simulations that mirror real neutron scattering experiments. We exploited the close similarity between how instrument components are designed and operated and how such components can be modeled in software. For example, we used object-oriented programming concepts to represent neutron scatterers and detector systems, and recursive algorithms to implement multiple scattering. Combining these features in MCViNE allows one to handle sophisticated neutron scattering problems in modern instruments, including, for example, neutron detection by complex detector systems, and single and multiple scattering events in a variety of samples and sample environments. In addition, MCViNE can use simulation components from linear-chain-based MC ray tracing packages, which facilitates porting instrument models from those codes. Furthermore, it allows for components written solely in Python, which expedites prototyping of new components. These developments have enabled detailed simulations of neutron scattering experiments, with non-trivial samples, for time-of-flight inelastic instruments at the Spallation Neutron Source. Examples of such simulations for powder and single-crystal samples with various scattering kernels, including kernels for phonon and magnon scattering, are presented. As a result, with simulations that closely reproduce experimental results, scattering mechanisms can be turned on and off to determine how they contribute to the measured scattering intensities, improving our understanding of the underlying physics.

  16. A new treatment of nonlocality in scattering process

    NASA Astrophysics Data System (ADS)

    Upadhyay, N. J.; Bhagwat, A.; Jain, B. K.

    2018-01-01

    Nonlocality in the scattering potential leads to an integro-differential equation, in which nonlocality enters through an integral over the nonlocal potential kernel. The resulting Schrödinger equation is usually handled by approximating the r, r′ dependence of the nonlocal kernel. The present work proposes a novel method to solve the integro-differential equation. The method, using the mean value theorem of integral calculus, converts the nonhomogeneous term into a homogeneous term. The effective local potential in this equation turns out to be energy independent but dependent on the relative angular momentum. This method is accurate and valid for any form of nonlocality. As illustrative examples, the total and differential cross sections for neutron scattering off ¹²C, ⁵⁶Fe and ¹⁰⁰Mo nuclei are calculated with this method in the low energy region (up to 10 MeV) and are found to be in reasonable accord with experiment.
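
    Schematically (in our own notation, not necessarily the paper's exact equations), the mean-value-theorem step described above reads:

```latex
% Partial-wave equation with a nonlocal kernel K_l(r, r'):
u_l''(r) + \left[ k^2 - U_{\mathrm{loc}}(r) - \frac{l(l+1)}{r^2} \right] u_l(r)
  = \int_0^{\infty} K_l(r, r') \, u_l(r') \, dr'
% Mean value theorem: for some \bar{r}_l(r) within the support of K_l,
\int_0^{\infty} K_l(r, r') \, u_l(r') \, dr'
  = u_l(\bar{r}_l) \int_0^{\infty} K_l(r, r') \, dr'
% so the nonhomogeneous term can be absorbed into an effective local,
% l-dependent (but energy-independent) potential:
U_{\mathrm{eff}}^{(l)}(r) = \frac{u_l(\bar{r}_l)}{u_l(r)}
  \int_0^{\infty} K_l(r, r') \, dr'
```

    The key point is that the integral term collapses to a multiplicative potential, turning the integro-differential equation back into an ordinary differential equation for each partial wave.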

  17. SU-F-SPS-06: Implementation of a Back-Projection Algorithm for 2D in Vivo Dosimetry with An EPID System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez Reyes, B; Rodriguez Perez, E; Sosa Aquino, M

    Purpose: To implement a back-projection algorithm for 2D dose reconstruction for in vivo dosimetry in radiation therapy using an amorphous-silicon Electronic Portal Imaging Device (EPID). Methods: The EPID system was used to determine the dose-response function, pixel sensitivity map, exponential scatter kernels and beam hardening correction for the back-projection algorithm. All measurements were made with a 6 MV beam. A 2D dose reconstruction of an irradiated water phantom (30×30×30 cm³) was performed to verify the implementation of the algorithm. A gamma index evaluation between the 2D reconstructed dose and that calculated with a treatment planning system (TPS) was carried out. Results: A linear fit was found for the dose-response function. The pixel sensitivity map has radial symmetry and was calculated from a profile of the pixel sensitivity variation. The parameters for the scatter kernels were determined for the 6 MV beam only. The primary dose was estimated by applying the scatter kernel within the EPID and the scatter kernel within the patient. The beam hardening coefficient is σBH = 3.788×10⁻⁴ cm² and the effective linear attenuation coefficient is µAC = 0.06084 cm⁻¹. Of the points evaluated, 95% had γ values no greater than unity, with gamma criteria of ΔD = 3% and Δd = 3 mm, within the 50% isodose surface. Conclusion: The use of EPID systems proved to be a fast tool for in vivo dosimetry, but the implementation is more complex than that developed for pre-treatment dose verification; therefore, a simpler method should be investigated. The accuracy of this method could be improved by modifying the algorithm to compare lower isodose curves.
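
    One small piece of such a back-projection chain, inverting an exponential transmission model with the effective attenuation coefficient quoted in the abstract, can be sketched as follows. The function and its role in the pipeline are our own hedged illustration; the full algorithm also applies the scatter-kernel and beam-hardening corrections before this step.

```python
import math

# Effective linear attenuation coefficient from the abstract, in cm^-1.
MU_AC = 0.06084

def radiological_thickness(transmission):
    """Invert the simple transmission model T = exp(-mu * t) for the
    water-equivalent thickness t (cm). Illustration only."""
    return -math.log(transmission) / MU_AC

# A 20 cm water path attenuates the beam to exp(-mu * 20); inverting
# the model recovers the 20 cm thickness.
t = radiological_thickness(math.exp(-MU_AC * 20.0))
```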

  18. Sub-second pencil beam dose calculation on GPU for adaptive proton therapy

    NASA Astrophysics Data System (ADS)

    da Silva, Joakim; Ansorge, Richard; Jena, Rajesh

    2015-06-01

    Although proton therapy delivered using scanned pencil beams has the potential to produce better dose conformity than conventional radiotherapy, the created dose distributions are more sensitive to anatomical changes and patient motion. Therefore, the introduction of adaptive treatment techniques where the dose can be monitored as it is being delivered is highly desirable. We present a GPU-based dose calculation engine relying on the widely used pencil beam algorithm, developed for on-line dose calculation. The calculation engine was implemented from scratch, with each step of the algorithm parallelized and adapted to run efficiently on the GPU architecture. To ensure fast calculation, it employs several application-specific modifications and simplifications, and a fast scatter-based implementation of the computationally expensive kernel superposition step. The calculation time for a skull base treatment plan using two beam directions was 0.22 s on an Nvidia Tesla K40 GPU, whereas a test case of a cubic target in water from the literature took 0.14 s to calculate. The accuracy of the patient dose distributions was assessed by calculating the γ-index with respect to a gold standard Monte Carlo simulation. The passing rates were 99.2% and 96.7%, respectively, for the 3%/3 mm and 2%/2 mm criteria, matching those produced by a clinical treatment planning system.

  19. The HCO+-H2 van der Waals interaction: Potential energy and scattering

    NASA Astrophysics Data System (ADS)

    Massó, H.; Wiesenfeld, L.

    2014-11-01

    We compute the rigid-body, four-dimensional interaction potential between HCO+ and H2. The ab initio energies are obtained at the coupled-cluster single double triple level of theory, corrected for Basis Set Superposition Errors. The ab initio points are fit onto the spherical basis relevant for quantum scattering. We present elastic and rotationally inelastic coupled channels scattering between low lying rotational levels of HCO+ and para-/ortho-H2. Results are compared with similar earlier computations with He or isotropic para-H2 as the projectile. Computations agree with earlier pressure broadening measurements.

  20. The HCO⁺-H₂ van der Waals interaction: potential energy and scattering.

    PubMed

    Massó, H; Wiesenfeld, L

    2014-11-14

    We compute the rigid-body, four-dimensional interaction potential between HCO(+) and H2. The ab initio energies are obtained at the coupled-cluster single double triple level of theory, corrected for Basis Set Superposition Errors. The ab initio points are fit onto the spherical basis relevant for quantum scattering. We present elastic and rotationally inelastic coupled channels scattering between low lying rotational levels of HCO(+) and para-/ortho-H2. Results are compared with similar earlier computations with He or isotropic para-H2 as the projectile. Computations agree with earlier pressure broadening measurements.

  1. Radiative transfer theory for a random distribution of low velocity spheres as resonant isotropic scatterers

    NASA Astrophysics Data System (ADS)

    Sato, Haruo; Hayakawa, Toshihiko

    2014-10-01

    Short-period seismograms of earthquakes are complex, especially beneath volcanoes, where the S-wave mean free path is short and low velocity bodies composed of melt or fluid are expected, in addition to random velocity inhomogeneities, as scattering sources. Resonant scattering inherent in a low velocity body involves the trapping and release of waves with a delay time. Focusing on this delay-time phenomenon, we must seriously consider multiple resonant scattering processes. Since wave phases are complex in such a scattering medium, the radiative transfer theory has often been used to synthesize the variation of the mean square (MS) amplitude of waves; however, resonant scattering has not been well incorporated into the conventional radiative transfer theory. Here, as a simple mathematical model, we study the sequence of isotropic resonant scattering of a scalar wavelet by low velocity spheres at low frequencies, where the inside velocity is supposed to be sufficiently low. We first derive the total scattering cross-section per time for each order of scattering as the convolution kernel representing the decaying scattering response. Then, for a random and uniform distribution of such identical resonant isotropic scatterers, we build the propagator of the MS amplitude by using causality, a geometrical spreading factor and the scattering loss. Using those propagators and convolution kernels, we formulate the radiative transfer equation for spherically impulsive radiation from a point source. The synthesized MS amplitude time trace shows a dip just after the direct arrival, a delayed swelling, and then a decaying tail at large lapse times. The delayed swelling is a prominent effect of resonant scattering. The spatial distribution of synthesized MS amplitude shows a swelling near the source region and approaches a bell shape, like a diffusion solution, at large lapse times.

  2. Indetermination of particle sizing by laser diffraction in the anomalous size ranges

    NASA Astrophysics Data System (ADS)

    Pan, Linchao; Ge, Baozhen; Zhang, Fugen

    2017-09-01

    The laser diffraction method is widely used to measure particle size distributions. It is generally accepted that the scattering angle becomes smaller with increasing particle size, so that the main peak of the scattered energy distribution in a laser diffraction instrument shifts to smaller angles. This principle forms the foundation of the laser diffraction method. However, it is not entirely correct for non-absorbing particles in certain size ranges, which are called anomalous size ranges. Here, we derive analytical formulae for the bounds of the anomalous size ranges and discuss the influence of the width of the size segments on the signature of the Mie scattering kernel. This anomalous signature of the Mie scattering kernel results in an indetermination of the particle size distribution when measured by laser diffraction instruments in the anomalous size ranges. Using the singular-value decomposition method, we interpret the mechanism of this indetermination in detail and then validate its existence with inversion simulations.
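
    The singular-value-decomposition diagnosis of such an indetermination can be sketched on a toy kernel matrix. The sinc²-based columns, angle grid, and size bins below are all hypothetical stand-ins (a real study would compute the columns with a Mie code); two nearly degenerate size bins mimic an anomalous range in which different sizes scatter almost identically.

```python
import numpy as np

# Toy forward model: kernel matrix K maps particle-size bins to
# detector-angle bins. Sizes 20.0 and 20.02 are chosen to be nearly
# degenerate, imitating an anomalous size range.
angles = np.linspace(0.01, 0.5, 40)            # detector angles, arbitrary units
sizes = np.array([10.0, 20.0, 20.02, 40.0])    # hypothetical particle size bins
K = np.array([[np.sinc(a * s) ** 2 for s in sizes] for a in angles])

# A tiny trailing singular value flags a direction in size space that the
# instrument barely sees: inverting for the size distribution is
# indeterminate along that direction.
U, sv, Vt = np.linalg.svd(K, full_matrices=False)
condition_number = sv[0] / sv[-1]
```

    A huge condition number means any measurement noise is amplified along the near-null direction, which is exactly the mechanism behind the indetermination the abstract describes.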

  3. Thin optical display panel

    DOEpatents

    Veligdan, James Thomas

    1997-01-01

    An optical display includes a plurality of optical waveguides each including a cladding bound core for guiding internal display light between first and second opposite ends by total internal reflection. The waveguides are stacked together to define a collective display thickness. Each of the cores includes a heterogeneous portion defining a light scattering site disposed longitudinally between the first and second ends. Adjacent ones of the sites are longitudinally offset from each other for forming a longitudinal internal image display over the display thickness upon scattering of internal display light thereagainst for generating a display image. In a preferred embodiment, the waveguides and scattering sites are transparent for transmitting therethrough an external image in superposition with the display image formed by scattering the internal light off the scattering sites for defining a heads up display.

  4. Simulation of the Effect of Internal Waves on Acoustic Propagation

    NASA Astrophysics Data System (ADS)

    Ko, D. S.

    2005-05-01

    An acoustic radiation transport model with a Monte Carlo solution has been developed and applied to study the effect of internal-wave-induced random oceanic fluctuations on deep-ocean acoustic propagation. Refraction in the ocean sound channel is handled by bi-cubic spline interpolation of discrete deterministic ray paths in angle(energy)-range-depth coordinates. Scattering by random internal-wave fluctuations is accomplished by sampling a power-law scattering kernel with the rejection method. Results from numerical experiments show that the mean positions of acoustic rays are significantly displaced toward the sound channel axis owing to the asymmetry of the scattering kernel. The spreading of ray depths and angles about the means depends strongly on frequency. The envelope of the ray displacement spreading is found to be proportional to the square root of range, which differs from the "3/2 law" found in the non-channel case. Suppression of the spreading is due to the anisotropy of the fluctuations and especially to the presence of the sound channel itself.
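Rejection sampling from a power-law kernel, as used in the abstract above, can be sketched in a few lines. The exponent and angular cutoffs below are hypothetical placeholders, not the values of the cited internal-wave model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical power-law scattering kernel p(theta) ~ theta**(-mu) on
# [th_min, th_max]; a truncated power law needs finite cutoffs.
mu, th_min, th_max = 2.0, 1e-3, 1e-1

def sample_rejection(n):
    """Draw n scattering angles from the power law by rejection sampling."""
    out = []
    p_max = th_min ** (-mu)                  # kernel maximum on the interval
    while len(out) < n:
        th = rng.uniform(th_min, th_max, size=n)   # uniform proposals
        u = rng.uniform(0.0, p_max, size=n)        # uniform envelope heights
        out.extend(th[u < th ** (-mu)])            # accept under the curve
    return np.array(out[:n])

samples = sample_rejection(20000)
```

The uniform proposal makes the acceptance rate low for steep kernels (about 1% here); a power-law proposal sampled by inversion would be far more efficient, but the rejection form mirrors the method named in the abstract.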

  5. Adhesion of Mineral and Soot Aerosols can Strongly Affect their Scattering and Absorption Properties

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Jana M.

    2012-01-01

    We use the numerically exact superposition T-matrix method to compute the optical cross sections and the Stokes scattering matrix for polydisperse mineral aerosols (modeled as homogeneous spheres) covered with a large number of much smaller soot particles. These results are compared with the Lorenz-Mie results for a uniform external mixture of mineral and soot aerosols. We show that the effect of soot particles adhering to large mineral particles can be to change the extinction and scattering cross sections and the asymmetry parameter quite substantially. The effect on the phase function and degree of linear polarization can be equally significant.

  6. Random medium model for cusping of plane waves.

    PubMed

    Li, Jia; Korotkova, Olga

    2017-09-01

    We introduce a model for a three-dimensional (3D) Schell-type stationary medium whose degree of correlation of the scattering potential satisfies the Fractional Multi-Gaussian (FMG) function. Compared with the scattered profile produced by a Gaussian Schell-model (GSM) medium, the Fractional Multi-Gaussian Schell-model (FMGSM) medium gives rise to a sharp concave intensity apex in the scattered field. This implies that the FMGSM medium also delivers a larger power in the bucket (PIB) than a Gaussian medium in the forward scattering direction, making it a better candidate than the GSM medium for generating highly focused (cusp-like) scattered profiles in the far zone. Compared with other mathematical models for the medium's correlation function that can produce similar cusped scattered profiles, the FMG function offers unprecedented tractability, being a weighted superposition of Gaussian functions. Our results provide useful applications to energy counter problems and particle manipulation by weakly scattered fields.

  7. Scatter of X-rays on polished surfaces

    NASA Technical Reports Server (NTRS)

    Hasinger, G.

    1981-01-01

    In investigating the dispersion properties of telescope mirrors used in X-ray astronomy, the weak scattering of X-rays by statistically rough surfaces was examined. The mathematics and geometry of scattering theory are described. The measurement test assembly is described, and results of measurements on samples of plane mirrors are given and evaluated. The direct beam, the convolution of the direct beam with the scattering halo, curve fitting by the method of least squares, various autocorrelation functions, results of the fitting procedure for small scattering, and deviations in the kernel of the scattering distribution are presented. A procedure for quality testing of mirror systems through diagnosis of rough surfaces is described.

  8. Sensitivity Kernels of Seismic Traveltimes and Amplitudes for Quality Factor and Boundary Topography

    NASA Astrophysics Data System (ADS)

    Hsieh, M.; Zhao, L.; Ma, K.

    2010-12-01

    The finite-frequency approach enables seismic tomography to fully utilize the spatial and temporal distributions of the seismic wavefield to improve resolution. In achieving this goal, one of the most important tasks is to compute efficiently and accurately the (Fréchet) sensitivity kernels of finite-frequency seismic observables, such as traveltime and amplitude, with respect to perturbations of model parameters. In the scattering-integral approach, the Fréchet kernels are expressed in terms of the strain Green tensors (SGTs), and a pre-established SGT database is necessary to achieve practical efficiency for a three-dimensional reference model in which the SGTs must be calculated numerically. Methods for computing Fréchet kernels for seismic velocities have long been established. In this study, we develop algorithms based on the finite-difference method for calculating Fréchet kernels for the quality factor Qμ and seismic boundary topography. Kernels for the quality factor can be obtained in a way similar to those for seismic velocities with the help of the Hilbert transform. The effects of seismic velocities and quality factor on either traveltime or amplitude are coupled. Kernels for boundary topography involve spatial gradients of the SGTs and also exhibit interesting finite-frequency characteristics. Examples of quality-factor and boundary-topography kernels are shown for a realistic model of the Taiwan region with three-dimensional velocity variations as well as surface and Moho discontinuity topography.

  9. Portal scatter to primary dose ratio of 4 to 18 MV photon spectra incident on heterogeneous phantoms

    NASA Astrophysics Data System (ADS)

    Ozard, Siobhan R.

    Electronic portal imagers designed and used to verify the positioning of a cancer patient undergoing radiation treatment can also be employed to measure the in vivo dose received by the patient. This thesis investigates the ratio of the dose from patient-scattered particles to the dose from primary (unscattered) photons at the imaging plane, called the scatter to primary dose ratio (SPR). The composition of the SPR according to the origin of scatter is analyzed more thoroughly than in previous studies. A new analytical method for calculating the SPR is developed and experimentally verified for heterogeneous phantoms. A novel technique that applies the analytical SPR method to in vivo dosimetry with a portal imager is evaluated. Monte Carlo simulation was used to determine the imager dose from patient-generated electrons and photons that scatter one or more times within the object. The database of SPRs reported here is new, since the contribution from patient-generated electrons was neglected by previous Monte Carlo studies; the SPR from patient-generated electrons was found to be as large as 0.03. The analytical SPR method relies on the established result that the scatter dose is uniform when the air gap between the patient and the imager is greater than 50 cm. The method also applies the hypothesis that first-order Compton scatter alone is sufficient for scatter estimation. A comparison of analytical and measured SPRs for neck, thorax, and pelvis phantoms showed that the maximum difference was within +/-0.03, and the mean difference was less than +/-0.01 for most cases. This accuracy is comparable to similar analytical approaches that are limited to homogeneous phantoms. The analytical SPR method could replace lookup tables of measured scatter doses, which can require significant time to measure. In vivo doses were calculated by combining our analytical SPR method and the convolution/superposition algorithm.
Our calculated in vivo doses agreed within +/-3% with the doses measured in the phantom. The present in vivo method was faster than other techniques that use convolution/superposition. Our method is a feasible and satisfactory approach that contributes to on-line patient dose monitoring.

  10. The importance of coherence in inverse problems in optics

    NASA Astrophysics Data System (ADS)

    Ferwerda, H. A.; Baltes, H. P.; Glass, A. S.; Steinle, B.

    1981-12-01

    Current inverse problems of statistical optics are presented with a guide to relevant literature. The inverse problems are categorized into four groups, and the Van Cittert-Zernike theorem and its generalization are discussed. The retrieval of structural information from the far-zone degree of coherence and the time-averaged intensity distribution of radiation scattered by a superposition of random and periodic scatterers are also discussed. In addition, formulas for the calculation of far-zone properties are derived within the framework of scalar optics, and results are applied to two examples.

  11. Limits on transverse momentum dependent evolution from semi-inclusive deep inelastic scattering at moderate Q

    NASA Astrophysics Data System (ADS)

    Aidala, C. A.; Field, B.; Gamberg, L. P.; Rogers, T. C.

    2014-05-01

    In the QCD evolution of transverse momentum dependent parton distribution and fragmentation functions, the Collins-Soper evolution kernel includes both a perturbative short-distance contribution and a large-distance nonperturbative, but strongly universal, contribution. In the past, global fits, based mainly on larger Q Drell-Yan-like processes, have found substantial contributions from nonperturbative regions in the Collins-Soper evolution kernel. In this article, we investigate semi-inclusive deep inelastic scattering measurements in the region of relatively small Q, of the order of a few GeV, where sensitivity to nonperturbative transverse momentum dependence may become more important or even dominate the evolution. Using recently available deep inelastic scattering data from the COMPASS experiment, we provide estimates of the regions of coordinate space that dominate in transverse momentum dependent (TMD) processes when the hard scale is of the order of only a few GeV. We find that distance scales that are much larger than those commonly probed in large Q measurements become important, suggesting that the details of nonperturbative effects in TMD evolution are especially significant in the region of intermediate Q. We highlight the strongly universal nature of the nonperturbative component of evolution and its potential to be tightly constrained by fits from a wide variety of observables that include both large and moderate Q. On this basis, we recommend detailed treatments of the nonperturbative component of the Collins-Soper evolution kernel for future TMD studies.

  12. A new concept of pencil beam dose calculation for 40-200 keV photons using analytical dose kernels.

    PubMed

    Bartzsch, Stefan; Oelfke, Uwe

    2013-11-01

    The advent of widespread kV cone-beam computed tomography in image-guided radiation therapy and special therapeutic applications of keV photons, e.g., in microbeam radiation therapy (MRT), require accurate and fast dose calculations for photon beams with energies between 40 and 200 keV. Multiple photon scattering originating from Compton scattering and the strong dependence of the photoelectric cross section on the atomic number of the interacting tissue render these dose calculations far more challenging than those established for corresponding MeV beams. This is why the analytical models of kV photon dose calculation developed so far fail to provide the required accuracy, and one has to rely on time-consuming Monte Carlo simulation techniques. In this paper, the authors introduce a novel analytical approach for kV photon dose calculations with an accuracy almost comparable to that of Monte Carlo simulations. First, analytical point-dose and pencil-beam kernels are derived for homogeneous media and compared to Monte Carlo simulations performed with the Geant4 toolkit. The dose contributions are systematically separated into contributions from the relevant orders of multiple photon scattering. Moreover, approximate scaling laws for the extension of the algorithm to inhomogeneous media are derived. The analytically derived dose kernels in water show excellent agreement with the Monte Carlo method: calculated values deviate by less than 5% from Monte Carlo-derived dose values for doses above 1% of the maximum dose. The analytical structure of the kernels allows adaptation to arbitrary materials and photon spectra in the given energy range of 40-200 keV. The presented analytical methods can be employed in a fast treatment planning system for MRT. In convolution-based algorithms, dose calculation times can be reduced to a few minutes.
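The convolution step at the heart of such kernel-based dose engines can be sketched as an FFT convolution of a fluence map with a point-spread dose kernel. The exponential kernel shape, grid, and field size below are toy placeholders, not the analytical keV kernels derived in the paper:

```python
import numpy as np

# Toy pencil-beam superposition: dose = fluence (*) kernel, via FFT.
n, dx = 128, 0.05                          # grid points, cm per voxel
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)

kernel = np.exp(-r / 0.1)                  # toy point-spread dose kernel
kernel /= kernel.sum()                     # normalize to unit deposited dose

# 2 x 2 cm open field of uniform fluence
fluence = ((np.abs(X) < 1.0) & (np.abs(Y) < 1.0)).astype(float)

# FFT convolution; ifftshift moves the centered kernel to the origin
dose = np.real(np.fft.ifft2(np.fft.fft2(fluence) *
                            np.fft.fft2(np.fft.ifftshift(kernel))))
```

Because the kernel is normalized, total dose equals total fluence, and the central-axis dose approaches 1 once the field is much wider than the kernel, which is the sanity check usually applied to such engines.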

  13. Position Accuracy of Gold Nanoparticles on DNA Origami Structures Studied with Small-Angle X-ray Scattering.

    PubMed

    Hartl, Caroline; Frank, Kilian; Amenitsch, Heinz; Fischer, Stefan; Liedl, Tim; Nickel, Bert

    2018-04-11

    DNA origami objects allow for accurate positioning of guest molecules in three dimensions. Validation and understanding of design strategies for particle attachment, as well as analysis of specific particle arrangements, are desirable. Small-angle X-ray scattering (SAXS) is suited to probe distances of nano-objects with subnanometer resolution at physiologically relevant conditions, including pH and salt, and at varying temperatures. Here, we show that the pair density distribution function (PDDF), obtained from an indirect Fourier transform of SAXS intensities in a model-free way, allows investigation of prototypical DNA origami-mediated gold nanoparticle (AuNP) assemblies. We analyze the structure of three AuNP dimers on a DNA origami block, an AuNP trimer constituted by those dimers, and a helical arrangement of nine AuNPs on a DNA origami cylinder. For the dimers, we compare the model-free PDDF with explicit modeling of the SAXS intensity data by superposition of the scattering intensities of the scattering objects. The PDDF of the trimer is verified to be a superposition of its dimeric contributions; that is, AuNP-DNA origami assemblies were used here as test boards, underlining the validity of the PDDF analysis beyond pairs of AuNPs. We obtain information about AuNP distances with an uncertainty margin of 1.2 nm. This readout accuracy in turn can be used for high-precision placement of AuNPs by careful design of the AuNP attachment sites on the DNA structure and by fine-tuning of the connector types.
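The PDDF concept can be illustrated on the one closed-form case, a solid sphere, by transforming its analytic SAXS intensity with a direct (not indirect) Fourier integral. The radius and q-range are illustrative; noisy experimental data would call for the regularized indirect transform used in the paper:

```python
import numpy as np

# PDDF of a solid sphere from its analytic form-factor intensity.
R = 5.0                                    # sphere radius, nm
q = np.linspace(1e-4, 3.0, 4000)           # scattering vector, 1/nm
qR = q * R
form = 3.0 * (np.sin(qR) - qR * np.cos(qR)) / qR ** 3
I = form ** 2                              # normalized sphere intensity

r = np.linspace(0.01, 3.0 * R, 300)
dq = q[1] - q[0]
# p(r) = r^2 / (2 pi^2) * Integral I(q) q^2 sin(qr)/(qr) dq
p = np.array([np.sum(I * q ** 2 * np.sin(q * ri) / (q * ri)) * dq
              for ri in r]) * r ** 2 / (2.0 * np.pi ** 2)

r_peak = r[np.argmax(p)]                   # for a sphere, ~1.05 R
```

The recovered p(r) peaks near 1.05 R and vanishes beyond the maximum intra-particle distance 2R, the same model-free readout (peak position, maximum dimension) used to measure AuNP separations.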

  14. Scattering of Gaussian Beams by Disordered Particulate Media

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.

    2016-01-01

    A frequently observed characteristic of electromagnetic scattering by a disordered particulate medium is the absence of pronounced speckles in angular patterns of the scattered light. It is known that such diffuse speckle-free scattering patterns can be caused by averaging over randomly changing particle positions and/or over a finite spectral range. To get further insight into the possible physical causes of the absence of speckles, we use the numerically exact superposition T-matrix solver of the Maxwell equations and analyze the scattering of plane-wave and Gaussian beams by representative multi-sphere groups. We show that phase and amplitude variations across an incident Gaussian beam do not serve to extinguish the pronounced speckle pattern typical of plane-wave illumination of a fixed multi-particle group. Averaging over random particle positions and/or over a finite spectral range is still required to generate the classical diffuse speckle-free regime.

  15. SU-E-T-439: An Improved Formula of Scatter-To-Primary Ratio for Photon Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, T

    2014-06-01

    Purpose: Scatter-to-primary ratio (SPR) is an important dosimetric quantity that describes the contribution from scattered photons in an external photon beam. The purpose of this study is to develop an improved analytical formula describing SPR as a function of circular field size (r) and depth (d) using Monte Carlo (MC) simulation. Methods: MC simulation was performed for Mohan photon spectra (Co-60, 4, 6, 10, 15, 23 MV) using the EGSNRC code. Point-spread scatter dose kernels in water were generated. The SPR was also calculated with MC simulation as a function of field size for a circular field with radius r and depth d. The doses from forward-scattered and backscattered photons are calculated using a convolution of the point-spread scatter dose kernel, accounting for scatter photons contributing to dose before (z' < d) or after (z' > d) reaching the depth of interest d, where z' is the location of the scatter photons. The depth dependence of the ratio of the forward-scatter and backscatter doses is determined as a function of depth and field size. Results: We were able to improve the existing three-parameter (a, w, d0) empirical formula for SPR by introducing a depth dependence for the parameter d0, which becomes 0 at deeper depths. The depth dependence of d0 can be directly calculated as the ratio of backscatter to forward-scatter dose for otherwise the same field and depth. With the improved empirical formula, we can fit SPR for all megavoltage photon beams to within 2%; the existing three-parameter formula cannot fit the SPR data for Co-60 to better than 3.1%. Conclusion: An improved empirical formula is developed that fits SPR for all megavoltage photon energies to within 2%.
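Fitting a few-parameter empirical SPR formula to Monte Carlo data points is a standard nonlinear least-squares task. The saturating-exponential form and all numbers below are hypothetical placeholders, not the paper's actual (a, w, d0) parametrization:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical 3-parameter SPR model versus circular field radius r.
def spr_model(r, a, w, r0):
    return a * (1.0 - np.exp(-(r - r0) / w))

rng = np.random.default_rng(1)
radii = np.linspace(1.0, 15.0, 30)                  # field radius, cm
truth = spr_model(radii, 0.45, 6.0, 0.2)
data = truth + rng.normal(0.0, 0.005, radii.size)   # noisy "MC" SPR values

popt, _ = curve_fit(spr_model, radii, data, p0=[0.5, 5.0, 0.0])
max_resid = np.abs(spr_model(radii, *popt) - data).max()
```

The fit-quality criterion in the abstract (within 2%) corresponds to bounding such residuals over all energies, field sizes, and depths.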

  16. A fast numerical solution of scattering by a cylinder: Spectral method for the boundary integral equations

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.

    1994-01-01

    It is known that the exact analytic solutions of wave scattering by a circular cylinder, when they exist, are not in closed form but in infinite series that converge slowly for high-frequency waves. In this paper, we present a fast numerical solution for the scattering problem in which the boundary integral equations, reformulated from the Helmholtz equation, are solved using a Fourier spectral method. It is shown that the special geometry considered here allows the implementation of the spectral method to be simple and very efficient. The present method differs from previous approaches in that the singularities of the integral kernels are removed and dealt with accurately. The proposed method preserves spectral accuracy and is shown to have an exponential rate of convergence. Aspects of efficient implementation using the FFT are discussed. Moreover, boundary integral equations of combined single- and double-layer representation are used in the present paper; this ensures the uniqueness of the numerical solution for the scattering problem at all frequencies. Although a strongly singular kernel is encountered for Neumann boundary conditions, we show that the hypersingularity can be handled easily in the spectral method. Numerical examples that demonstrate the validity of the method are also presented.

  17. The construction of a two-dimensional reproducing kernel function and its application in a biomedical model.

    PubMed

    Guo, Qi; Shen, Shu-Ting

    2016-04-29

    There are two major classes of cardiac tissue models: the ionic model and the FitzHugh-Nagumo model. During computer simulation, each model entails solving a system of complex ordinary differential equations and a partial differential equation with no-flux boundary conditions. The reproducing kernel method has significant applications in solving partial differential equations. The derivative of the reproducing kernel function is a wavelet function, which has local properties and sensitivity to singularities; therefore, studying the application of the reproducing kernel is advantageous. The aim is to apply new mathematical theory to the numerical solution of the ventricular muscle model so as to improve its precision in comparison with present methods. A two-dimensional reproducing kernel function in space is constructed and applied in computing the solution of the two-dimensional cardiac tissue model, using the difference method through time and the reproducing kernel method through space. Compared with other methods, this method holds several advantages, such as high accuracy in computing solutions, insensitivity to different time steps and a slow propagation speed of error. It is suitable for disorderly scattered node systems without meshing, and can arbitrarily change the location and density of the solution on different time layers. The reproducing kernel method has higher solution accuracy and stability in the solutions of the two-dimensional cardiac tissue model.

  18. Superposition of polarized waves at layered media: theoretical modeling and measurement

    NASA Astrophysics Data System (ADS)

    Finkele, Rolf; Wanielik, Gerd

    1997-12-01

    The detection of ice layers on road surfaces is a crucial requirement for a system designed to warn vehicle drivers of hazardous road conditions. In the millimeter-wave regime at 76 GHz, the dielectric constants of ice and conventional road surface materials (i.e., asphalt, concrete) are found to be nearly equal. Thus, if the layer of ice is very thin and therefore has the same roughness profile as the underlying road surface, it cannot be reliably detected using conventional algorithmic approaches. The method introduced in this paper extends and applies the theoretical work of Pancharatnam on the superposition of polarized waves. The projection of the Stokes vectors onto the Poincare sphere traces a circle as the thickness of the ice layer varies. The paper presents a method that utilizes the concept of wave superposition to detect this trace even when it is corrupted by stochastic variation due to rough-surface scattering. Measurement results taken under real traffic conditions prove the validity of the proposed algorithms. Classification results are presented and discussed.

  19. An efficient method to determine double Gaussian fluence parameters in the eclipse™ proton pencil beam model.

    PubMed

    Shen, Jiajian; Liu, Wei; Stoker, Joshua; Ding, Xiaoning; Anand, Aman; Hu, Yanle; Herman, Michael G; Bues, Martin

    2016-12-01

    The aim of this work was to find an efficient method to configure the proton fluence for a commercial proton pencil beam scanning (PBS) treatment planning system (TPS). An in-water dose kernel was developed to mimic the dose kernel of the pencil beam convolution superposition algorithm, which is part of the commercial proton beam therapy planning software eclipse™ (Varian Medical Systems, Palo Alto, CA). The field size factor (FSF) was calculated from the spot profile reconstructed by the in-house dose kernel. The workflow of using FSFs to find the desired proton fluence is presented. The in-house derived spot profile and FSF were validated by a direct comparison with those calculated by the eclipse TPS. The validation included 420 comparisons of FSFs from 14 proton energies, field sizes from 2 to 20 cm, and depths from 20% to 80% of the proton range. The relative in-water lateral profiles from the in-house calculation and the eclipse TPS agree very well, even at the 10^-4 level. The FSFs also agree well: the maximum deviation is within 0.5%, and the standard deviation is less than 0.1%. The authors' method significantly reduced the time needed to find the desired proton fluences for the clinical energies. The method is extensively validated and can be applied at any proton center using PBS and the eclipse TPS.
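For a radially symmetric double-Gaussian spot, the field size factor has a closed form: the encircled fraction of each Gaussian component. The weights and widths below are illustrative stand-ins, not commissioned eclipse™ beam data:

```python
import numpy as np

# Double-Gaussian lateral spot model: weight w on the narrow core,
# (1 - w) on the broad halo. FSF(R) = fraction of the spot's integral
# dose inside a circular field of radius R.
w, sigma1, sigma2 = 0.95, 0.4, 2.5        # weight and widths, cm

def fsf(R):
    """Analytic encircled fraction of a 2D double-Gaussian spot."""
    g1 = 1.0 - np.exp(-R ** 2 / (2 * sigma1 ** 2))
    g2 = 1.0 - np.exp(-R ** 2 / (2 * sigma2 ** 2))
    return w * g1 + (1.0 - w) * g2

radii = np.array([1.0, 2.5, 5.0, 10.0])   # half field sizes, cm
factors = fsf(radii)                       # approaches 1 for large fields
```

Matching such analytic FSFs to TPS-computed ones is exactly the kind of comparison the validation in the abstract performs, so the closed form makes the fluence search fast.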

  20. Three-Dimensional Sensitivity Kernels of Z/H Amplitude Ratios of Surface and Body Waves

    NASA Astrophysics Data System (ADS)

    Bao, X.; Shen, Y.

    2017-12-01

    The ellipticity of Rayleigh wave particle motion, or Z/H amplitude ratio, has received increasing attention in inversion for shallow Earth structures. Previous studies of the Z/H ratio assumed one-dimensional (1D) velocity structures beneath the receiver, ignoring the effects of three-dimensional (3D) heterogeneities on wave amplitudes. This simplification may introduce bias in the resulting models. Here we present 3D sensitivity kernels of the Z/H ratio to Vs, Vp, and density perturbations, based on finite-difference modeling of wave propagation in 3D structures and the scattering-integral method. Our full-wave approach overcomes two main issues in previous studies of Rayleigh wave ellipticity: (1) the finite-frequency effects of wave propagation in 3D Earth structures, and (2) isolation of the fundamental mode Rayleigh waves from Rayleigh wave overtones and converted Love waves. In contrast to the 1D depth sensitivity kernels in previous studies, our 3D sensitivity kernels exhibit patterns that vary with azimuths and distances to the receiver. The laterally-summed 3D sensitivity kernels and 1D depth sensitivity kernels, based on the same homogeneous reference model, are nearly identical with small differences that are attributable to the single period of the 1D kernels and a finite period range of the 3D kernels. We further verify the 3D sensitivity kernels by comparing the predictions from the kernels with the measurements from numerical simulations of wave propagation for models with various small-scale perturbations. We also calculate and verify the amplitude kernels for P waves. This study shows that both Rayleigh and body wave Z/H ratios provide vertical and lateral constraints on the structure near the receiver. With seismic arrays, the 3D kernels afford a powerful tool to use the Z/H ratios to obtain accurate and high-resolution Earth models.

  1. Electromagnetic Scattering by Fully Ordered and Quasi-Random Rigid Particulate Samples

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.

    2016-01-01

    In this paper we have analyzed circumstances under which a rigid particulate sample can behave optically as a true discrete random medium consisting of particles randomly moving relative to each other during measurement. To this end, we applied the numerically exact superposition T-matrix method to model far-field scattering characteristics of fully ordered and quasi-randomly arranged rigid multiparticle groups in fixed and random orientations. We have shown that, in and of itself, averaging optical observables over movements of a rigid sample as a whole is insufficient unless it is combined with a quasi-random arrangement of the constituent particles in the sample. Otherwise, certain scattering effects typical of discrete random media (including some manifestations of coherent backscattering) may not be accurately replicated.

  2. Scattering Properties of Heterogeneous Mineral Particles with Absorbing Inclusions

    NASA Technical Reports Server (NTRS)

    Dlugach, Janna M.; Mishchenko, Michael I.

    2015-01-01

    We analyze the results of numerically exact computer modeling of the scattering and absorption properties of randomly oriented polydisperse heterogeneous particles obtained by placing microscopic absorbing grains randomly on the surfaces of much larger spherical mineral hosts or by embedding them randomly inside the hosts. These computations are paralleled by those for heterogeneous particles obtained by fully encapsulating fractal-like absorbing clusters in the mineral hosts. All computations are performed using the superposition T-matrix method. In the case of randomly distributed inclusions, the results are compared with the outcome of Lorenz-Mie computations for an external mixture of the mineral hosts and absorbing grains. We conclude that internal aggregation can strongly affect both the integral radiometric and the differential scattering characteristics of the heterogeneous particle mixtures.

  3. Anatomical image-guided fluorescence molecular tomography reconstruction using kernel method

    NASA Astrophysics Data System (ADS)

    Baikejiang, Reheman; Zhao, Yue; Fite, Brett Z.; Ferrara, Katherine W.; Li, Changqing

    2017-05-01

    Fluorescence molecular tomography (FMT) is an important in vivo imaging modality to visualize physiological and pathological processes in small animals. However, FMT reconstruction is ill-posed and ill-conditioned due to strong optical scattering in deep tissues, which results in poor spatial resolution. It is well known that FMT image quality can be improved substantially by applying the structural guidance in the FMT reconstruction. An approach to introducing anatomical information into the FMT reconstruction is presented using the kernel method. In contrast to conventional methods that incorporate anatomical information with a Laplacian-type regularization matrix, the proposed method introduces the anatomical guidance into the projection model of FMT. The primary advantage of the proposed method is that it does not require segmentation of targets in the anatomical images. Numerical simulations and phantom experiments have been performed to demonstrate the proposed approach's feasibility. Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image. For the phantom experiments with two FMT targets, the kernel method has reconstructed both targets successfully, which further validates the proposed kernel method.
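The kernel method's guidance step can be sketched in one dimension: voxel values are represented as x = K α, where K is built from the similarity of anatomical-image features, so no segmentation of the anatomy is needed. The toy two-tissue "anatomy", Gaussian feature kernel, and all parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 1D anatomical image: two tissue types plus imaging noise.
anat = np.concatenate([np.zeros(50), np.ones(50)])
anat = anat + rng.normal(0.0, 0.05, anat.size)

# Kernel matrix from anatomical similarity (Gaussian feature kernel),
# row-normalized so each voxel averages over similar voxels only.
sigma = 0.2
K = np.exp(-(anat[:, None] - anat[None, :]) ** 2 / (2.0 * sigma ** 2))
K = K / K.sum(axis=1, keepdims=True)

# Applying K to a noisy fluorescence image suppresses noise within each
# tissue while preserving the boundary -- without segmenting the anatomy.
x_noisy = np.concatenate([np.zeros(50), np.ones(50)]) \
    + rng.normal(0.0, 0.2, 100)
x_guided = K @ x_noisy
```

In the actual reconstruction the kernel enters the forward projection model rather than as a post-filter, but the regularizing effect of K is the same idea.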

  4. Super-resolution fusion of complementary panoramic images based on cross-selection kernel regression interpolation.

    PubMed

    Chen, Lidong; Basu, Anup; Zhang, Maojun; Wang, Wei; Liu, Yu

    2014-03-20

    A complementary catadioptric imaging technique was proposed to solve the problem of low and nonuniform resolution in omnidirectional imaging. To enhance this research, our paper focuses on how to generate a high-resolution panoramic image from the captured omnidirectional image. To avoid the interference between the inner and outer images while fusing the two complementary views, a cross-selection kernel regression method is proposed. First, in view of the complementarity of sampling resolution in the tangential and radial directions between the inner and the outer images, respectively, the horizontal gradients in the expected panoramic image are estimated based on the scattered neighboring pixels mapped from the outer, while the vertical gradients are estimated using the inner image. Then, the size and shape of the regression kernel are adaptively steered based on the local gradients. Furthermore, the neighboring pixels in the next interpolation step of kernel regression are also selected based on the comparison between the horizontal and vertical gradients. In simulation and real-image experiments, the proposed method outperforms existing kernel regression methods and our previous wavelet-based fusion method in terms of both visual quality and objective evaluation.
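The base operation of such schemes, kernel regression over scattered samples, can be sketched with a plain Nadaraya-Watson estimator; the paper's method additionally steers the kernel by local gradients and cross-selects neighbors between the inner and outer images, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Scattered, noisy samples of an unknown signal (here sin x).
xs = rng.uniform(0.0, 2 * np.pi, 200)              # scattered sample sites
ys = np.sin(xs) + rng.normal(0.0, 0.05, xs.size)   # noisy sample values

def kernel_regress(x, h=0.3):
    """Nadaraya-Watson estimate at x with Gaussian bandwidth h."""
    w = np.exp(-0.5 * ((x - xs) / h) ** 2)
    return np.sum(w * ys) / np.sum(w)

grid = np.linspace(0.5, 2 * np.pi - 0.5, 50)       # away from the edges
est = np.array([kernel_regress(x) for x in grid])
err = np.abs(est - np.sin(grid)).max()
```

Steering replaces the scalar bandwidth h with a local covariance derived from image gradients, elongating the kernel along edges; the estimator's form is otherwise unchanged.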

  5. Plum pudding random medium model of biological tissue toward remote microscopy from spectroscopic light scattering

    PubMed Central

    Xu, Min

    2017-01-01

Biological tissue has a complex structure and exhibits rich spectroscopic behavior. There has been no tissue model until now that has been able to account for the observed spectroscopy of tissue light scattering and its anisotropy. Here we present, for the first time, a plum pudding random medium (PPRM) model for biological tissue which succinctly describes tissue as a superposition of distinctive scattering structures (plum) embedded inside a fractal continuous medium of background refractive index fluctuation (pudding). PPRM faithfully reproduces the wavelength dependence of tissue light scattering and attributes the “anomalous” trend in the anisotropy to the plum and the power-law dependence of the reduced scattering coefficient to the fractal scattering pudding. Most importantly, PPRM opens up a novel avenue for quantifying the tissue architecture and microscopic structures on average from macroscopic probing of the bulk with scattered light alone, without tissue excision. We demonstrate this potential by visualizing the fine microscopic structural alterations in breast tissue (adipose, glandular, fibrocystic, fibroadenoma, and ductal carcinoma) deduced from noncontact spectroscopic measurement. PMID:28663913

  6. Preliminary scattering kernels for ethane and triphenylmethane at cryogenic temperatures

    NASA Astrophysics Data System (ADS)

    Cantargi, F.; Granada, J. R.; Damián, J. I. Márquez

    2017-09-01

Two potential cold moderator materials were studied: ethane and triphenylmethane. The first, ethane (C2H6), is an organic compound that is very interesting from the neutronic point of view, in some respects better than liquid methane for producing subthermal neutrons, not only because it remains in the liquid phase over a wider temperature range (Tf = 90.4 K, Tb = 184.6 K), but also because of its high protonic density and a frequency spectrum with a low-lying rotational energy band. The second, triphenylmethane, is a hydrocarbon with formula C19H16 that has already been proposed as a good candidate for a cold moderator. Following one of the main research topics of the Neutron Physics Department of Centro Atómico Bariloche, we present here two ways to estimate the frequency spectrum needed to feed the NJOY nuclear data processing system in order to generate the scattering law of each material. For ethane, molecular dynamics simulations were performed, while for triphenylmethane existing experimental and calculated data were used to produce a new scattering kernel. With these models, cross-section libraries were generated and applied to neutron spectra calculations.

  7. Low-energy Auger electron diffraction: influence of multiple scattering and angular momentum

    NASA Astrophysics Data System (ADS)

    Chassé, A.; Niebergall, L.; Kucherenko, Yu.

    2002-04-01

The angular dependence of Auger electrons excited from single-crystal surfaces is treated theoretically within a multiple-scattering cluster model that takes into account the full Auger transition matrix elements. In particular, the model has been used to discuss the influence of multiple scattering and of the angular momentum of the Auger electron wave on Auger electron diffraction (AED) patterns in the low-kinetic-energy region. Theoretical AED patterns are shown and discussed in detail for Cu(0 0 1) and Ni(0 0 1) surfaces. Even though Cu and Ni are very similar in their electronic and scattering properties, strong differences have recently been found in AED patterns measured in the low-energy region. It is shown that these differences may be caused by the superposition of different electron diffraction effects in an energy-integrated experiment. Good agreement between available experimental and theoretical results has been achieved.

  8. Quantum scattering beyond the plane-wave approximation

    NASA Astrophysics Data System (ADS)

    Karlovets, Dmitry

    2017-12-01

While the plane-wave approximation in high-energy physics works well in the majority of practical cases, it becomes inapplicable for scattering of vortex particles carrying orbital angular momentum, of Airy beams, of the so-called Schrödinger cat states, and of their generalizations. Such quantum states of photons, electrons, and neutrons have been generated experimentally in recent years, opening up new perspectives in quantum optics, electron microscopy, particle physics, and so forth. Here we discuss the non-plane-wave effects in scattering brought about by the novel quantum numbers of these wave packets. For well-focused electrons of intermediate energies, already available in electron microscopes, the corresponding contribution can surpass that of the radiative corrections. Moreover, collisions of cat-like superpositions of such focused beams with atoms allow one to probe quantum interference effects, which have never before played any role in particle scattering.

  9. The loss rates of O+ in the inner magnetosphere caused by both magnetic field line curvature scattering and charge exchange reactions

    NASA Astrophysics Data System (ADS)

    Ji, Y.; Shen, C.

    2014-03-01

Taking into account both magnetic field line curvature (FLC) pitch angle scattering and charge exchange reactions, the loss rates of O+ (>300 keV) in the inner magnetosphere are investigated using an eigenfunction analysis. FLC scattering provides a mechanism for ring current O+ to enter the loss cone and influences the loss rates caused by charge exchange reactions. Assuming that the pitch angle change is small for each scattering event, a diffusion equation including a charge exchange term is constructed and solved, and the eigenvalues of the equation are identified. The resultant loss rates of O+ are approximately equal to the linear superposition of the loss rate without charge exchange reactions and the loss rate associated with charge exchange reactions alone. The loss time is consistent with observations from the early recovery phases of magnetic storms.
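
    The near-additivity of the two loss rates has a simple linear-algebra analogue: adding a uniform absorption term to a diffusion operator shifts every decay eigenvalue by exactly that rate. A toy numerical check (a 1-D stand-in, not the paper's magnetospheric model):

```python
import numpy as np

# Discrete 1-D diffusion operator on (0, 1) with absorbing (loss-cone-like)
# boundaries, plus a uniform charge-exchange loss rate lam_ce.
n = 100
h = 1.0 / (n + 1)
D = (np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1)) / h ** 2

lam_ce = 0.3
rates_diffusion = np.linalg.eigvalsh(D)                      # scattering-only rates
rates_combined = np.linalg.eigvalsh(D + lam_ce * np.eye(n))  # both processes
```

    With a uniform absorption term the superposition is exact; in the paper the charge-exchange rate varies over the distribution, which is why the superposition there is only approximate.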

  10. Surface Parameters of Titan Feature Classes From Cassini RADAR Backscatter Measurements

    NASA Astrophysics Data System (ADS)

    Wye, L. C.; Zebker, H. A.; Lopes, R. M.; Peckyno, R.; Le Gall, A.; Janssen, M. A.

    2008-12-01

Multimode microwave measurements collected by the Cassini RADAR instrument during the spacecraft's first four years of operation form a fairly comprehensive set of radar backscatter data over a variety of Titan surface features. We use the real-aperture scatterometry processor to analyze the entire collection of active data, creating a uniformly calibrated dataset that covers 93% of Titan's surface at a variety of viewing angles. Here, we examine how the measured backscatter response (radar reflectivity as a function of incidence angle) varies with surface feature type, such as dunes, cryovolcanic areas, and anomalous albedo terrain. We identify the feature classes using a combination of maps produced by the RADAR, ISS, and VIMS instruments. We then derive surface descriptors including roughness, dielectric constant, and degree of volume scatter. Radar backscatter on Titan is well-modeled as a superposition of large-scale surface scattering (quasispecular scattering) together with a combination of small-scale surface scattering and subsurface volume scattering (diffuse scattering). The viewing geometry determines which scattering mechanism is strongest. At low incidence angles, quasispecular scatter dominates the radar backscatter return. At higher incidence angles (angles greater than ~30°), diffuse scatter dominates the return. We use a composite model to separate the two scattering regimes; we model the quasispecular term with a combination of two traditional backscatter laws (we consider the Hagfors, Gaussian, and exponential models), following a technique developed by Sultan-Salem and Tyler [1], and we model the diffuse term, which encompasses both diffuse mechanisms, with a simple cosine power law. Using this total composite model, we analyze the backscatter curves of all feature classes on Titan for which we have adequate angular coverage.
In most cases, we find that the superposition of the Hagfors law with the exponential law best models the quasispecular response. A generalized geometric optics approach permits us to combine the best-fit parameters from each component of the composite model to yield a single value for the surface dielectric constant and RMS slope [1]. In this way, we map the relative variation of composition and wavelength-scale structure across the surface. We also map the variation of radar albedo across the analyzed features, as well as the relative prevalence of the different scattering mechanisms through the measured ratio of diffuse power to quasispecular power. These map products help to constrain how different geological processes might be interacting on a global scale. [1] A. K. Sultan-Salem, G. L. Tyler, JGR 112, 2007.
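
    The composite fit described above is a standard nonlinear least-squares problem. As an illustration, the following sketch fits a Hagfors quasispecular term plus a cosine power-law diffuse term to a synthetic backscatter curve; all parameter values are made up, and the authors' actual processing chain is certainly more involved.

```python
import numpy as np
from scipy.optimize import curve_fit

def composite_sigma0(theta, rho, C, A, n):
    """Total backscatter vs incidence angle theta (radians): a Hagfors
    quasispecular term plus a cosine power-law diffuse term."""
    quasi = 0.5 * rho * C * (np.cos(theta) ** 4 + C * np.sin(theta) ** 2) ** -1.5
    return quasi + A * np.cos(theta) ** n

# Synthetic "measured" backscatter curve with known parameters.
theta = np.radians(np.linspace(2.0, 60.0, 40))
true_params = (0.2, 80.0, 0.1, 1.5)      # rho, Hagfors C, diffuse A, exponent n
data = composite_sigma0(theta, *true_params)

# Fit all four parameters at once from a rough initial guess.
popt, _ = curve_fit(composite_sigma0, theta, data,
                    p0=[0.1, 50.0, 0.05, 1.0],
                    bounds=([0, 1, 0, 0], [1, 1000, 1, 5]))
```

    The quasispecular term dominates at small incidence angles and the diffuse term at large ones, which is what lets a single angular curve constrain both regimes.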

  11. Characterizing Intimate Mixtures of Materials in Hyperspectral Imagery with Albedo-based and Kernel-based Approaches

    DTIC Science & Technology

    2015-09-01

scattering albedo (SSA) according to Hapke theory assuming bidirectional scattering at nadir look angles and uses a constrained linear model on the computed...following Hapke (1993) and Mustard and Pieters (1987), assuming the reflectance spectra are bidirectional. SSA spectra were also generated...from AVIRIS data collected during a JPL/USGS campaign in response to the Deep Water Horizon (DWH) oil spill incident. Out of the numerous

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackowski, Daniel W.; Mishchenko, Michael I.

The conventional orientation-averaging procedure developed in the framework of the superposition T-matrix approach is generalized to include the case of illumination by a Gaussian beam (GB). The resulting computer code is parallelized and used to perform extensive numerically exact calculations of electromagnetic scattering by volumes of discrete random medium consisting of monodisperse spherical particles. The size parameters of the scattering volumes are 40, 50, and 60, while their packing density is fixed at 5%. We demonstrate that all scattering patterns observed in the far-field zone of a random multisphere target and their evolution with decreasing width of the incident GB can be interpreted in terms of idealized theoretical concepts such as forward-scattering interference, coherent backscattering (CB), and diffuse multiple scattering. It is shown that the increasing violation of electromagnetic reciprocity with decreasing GB width suppresses and eventually eradicates all observable manifestations of CB. This result supplements the previous demonstration of the effects of broken reciprocity in the case of magneto-optically active particles subjected to an external magnetic field.

  13. FastMag: Fast micromagnetic simulator for complex magnetic structures (invited)

    NASA Astrophysics Data System (ADS)

    Chang, R.; Li, S.; Lubarda, M. V.; Livshitz, B.; Lomakin, V.

    2011-04-01

A fast micromagnetic simulator (FastMag) for general problems is presented. FastMag solves the Landau-Lifshitz-Gilbert equation and can handle multiscale problems with a high computational efficiency. The simulator derives its high performance from efficient methods for evaluating the effective field and from implementations on massively parallel graphics processing unit (GPU) architectures. FastMag discretizes the computational domain into tetrahedral elements and therefore is highly flexible for general problems. The magnetostatic field is computed via the superposition principle for both volume and surface parts of the computational domain. This is accomplished by implementing efficient quadrature rules and analytical integration for overlapping elements in which the integral kernel is singular. The discretized superposition integrals are computed using a nonuniform grid interpolation method, which evaluates the field from N sources at N collocated observers in O(N) operations. This approach allows handling objects of arbitrary shape, allows easy calculation of the field outside the magnetized domains, does not require solving a linear system of equations, and requires little memory. FastMag is implemented on GPUs, with GPU to central processing unit speed-ups of two orders of magnitude. Simulations are shown of a large array of magnetic dots and of a recording head fully discretized down to the exchange length, with over a hundred million tetrahedral elements, on an inexpensive desktop computer.

  14. The Penrose photoproduction scenario for NGC 4151: A black hole gamma-ray emission mechanism for active galactic nuclei and Seyfert galaxies. [Compton scattering and pair production

    NASA Technical Reports Server (NTRS)

    Leiter, D.

    1979-01-01

A consistent theoretical interpretation is given for the suggestion that a steepening of the spectrum between X-ray and gamma-ray energies may be a general gamma-ray characteristic of Seyfert galaxies, if the diffuse gamma-ray spectrum is considered to be a superposition of unresolved contributions from one or more classes of extragalactic objects. In the case of NGC 4151, the dominant process is shown to be Penrose Compton scattering in the ergosphere of a Kerr black hole assumed to exist in the Seyfert's active galactic nucleus.

  15. Concentric layered Hermite scatterers

    NASA Astrophysics Data System (ADS)

    Astheimer, Jeffrey P.; Parker, Kevin J.

    2018-05-01

The long wavelength limit of scattering from spheres has a rich history in optics, electromagnetics, and acoustics. Recently it was shown that a common integral kernel pertains to formulations of weak spherical scatterers in both the acoustic and electromagnetic regimes. Furthermore, the relationship between backscattered amplitude and wavenumber k was shown to follow power laws higher than the Rayleigh scattering k2 power law when the inhomogeneity had a material composition conforming to a Gaussian-weighted Hermite polynomial. Although this class of scatterers, called Hermite scatterers, is plausible, it may be simpler to manufacture scatterers with a core surrounded by one or more layers, in which case the inhomogeneous material property conforms to a piecewise-constant function. We demonstrate that the necessary and sufficient conditions for supra-Rayleigh scattering power laws in this case can be stated simply by considering moments of the inhomogeneity function and its spatial transform. This development opens an additional path for the construction and use of scatterers with unique power-law behavior.
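
    In the Born approximation the moment condition can be checked numerically: for a piecewise-constant spherical contrast gamma(r), the backscatter form factor expands at long wavelength as the zeroth radial moment of gamma(r) r^2 minus a term of order q^2, so a core-shell design that nulls the zeroth moment raises the amplitude power law by two. A sketch with made-up radii and contrasts:

```python
import numpy as np

def shell_integral(q, R):
    """Exact value of the integral of r^2 * sin(qr)/(qr) from 0 to R."""
    return (np.sin(q * R) - q * R * np.cos(q * R)) / q ** 3

def form_factor(q, radii, contrasts):
    """Born form factor of a piecewise-constant spherical contrast profile:
    F(q) = 4*pi * sum_i gamma_i * (shell integral over shell i)."""
    F, prev = 0.0, 0.0
    for R, g in zip(radii, contrasts):
        F += g * (shell_integral(q, R) - shell_integral(q, prev))
        prev = R
    return 4.0 * np.pi * F

a1, a2, g1 = 1.0, 1.5, 1.0
# Shell contrast chosen to null the zeroth moment  int gamma(r) r^2 dr = 0.
g2 = -g1 * a1 ** 3 / (a2 ** 3 - a1 ** 3)

# Doubling q leaves a Rayleigh amplitude almost unchanged (ratio ~ 1) but
# scales the moment-nulled amplitude by 4 (supra-Rayleigh, amplitude ~ q^2).
plain_ratio = form_factor(0.02, [a1], [g1]) / form_factor(0.01, [a1], [g1])
nulled_ratio = (form_factor(0.02, [a1, a2], [g1, g2])
                / form_factor(0.01, [a1, a2], [g1, g2]))
```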

  16. Direct Simulation of Multiple Scattering by Discrete Random Media Illuminated by Gaussian Beams

    NASA Technical Reports Server (NTRS)

    Mackowski, Daniel W.; Mishchenko, Michael I.

    2011-01-01

    The conventional orientation-averaging procedure developed in the framework of the superposition T-matrix approach is generalized to include the case of illumination by a Gaussian beam (GB). The resulting computer code is parallelized and used to perform extensive numerically exact calculations of electromagnetic scattering by volumes of discrete random medium consisting of monodisperse spherical particles. The size parameters of the scattering volumes are 40, 50, and 60, while their packing density is fixed at 5%. We demonstrate that all scattering patterns observed in the far-field zone of a random multisphere target and their evolution with decreasing width of the incident GB can be interpreted in terms of idealized theoretical concepts such as forward-scattering interference, coherent backscattering (CB), and diffuse multiple scattering. It is shown that the increasing violation of electromagnetic reciprocity with decreasing GB width suppresses and eventually eradicates all observable manifestations of CB. This result supplements the previous demonstration of the effects of broken reciprocity in the case of magneto-optically active particles subjected to an external magnetic field.

  17. Rapid computation of the amplitude and phase of tightly focused optical fields distorted by scattering particles

    PubMed Central

    Ranasinghesagara, Janaka C.; Hayakawa, Carole K.; Davis, Mitchell A.; Dunn, Andrew K.; Potma, Eric O.; Venugopalan, Vasan

    2014-01-01

    We develop an efficient method for accurately calculating the electric field of tightly focused laser beams in the presence of specific configurations of microscopic scatterers. This Huygens–Fresnel wave-based electric field superposition (HF-WEFS) method computes the amplitude and phase of the scattered electric field in excellent agreement with finite difference time-domain (FDTD) solutions of Maxwell’s equations. Our HF-WEFS implementation is 2–4 orders of magnitude faster than the FDTD method and enables systematic investigations of the effects of scatterer size and configuration on the focal field. We demonstrate the power of the new HF-WEFS approach by mapping several metrics of focal field distortion as a function of scatterer position. This analysis shows that the maximum focal field distortion occurs for single scatterers placed below the focal plane with an offset from the optical axis. The HF-WEFS method represents an important first step toward the development of a computational model of laser-scanning microscopy of thick cellular/tissue specimens. PMID:25121440

  18. SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Tian, Z; Song, T

Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to the dose calculation accuracy and hence the plan quality. Yet due to the large number of beamlets, FSPB commissioning can be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MATLAB to facilitate the commissioning. Methods: An FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and a 2D array of scaling factors accounting for the longitudinal and off-axis corrections. The former were fitted to the penumbra of a reference broad beam's dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernels of the beamlets, calculated with the fitted profile parameters and scaled by the scaling factors, these factors can be determined by solving an optimization problem that minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned an FSPB algorithm for three linac photon beams (6MV, 15MV and 6MVFFF). Doses for four field sizes (6*6cm2, 10*10cm2, 15*15cm2 and 20*20cm2) were calculated and compared with the reference dose exported from the Eclipse TPS. For depth dose curves, the differences are less than 1% of maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of central dose in inner-beam regions. The differences in the output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms, providing sufficient accuracy of beamlet dose calculation for IMRT optimization.
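
    The two-stage commissioning described in Methods can be sketched in miniature: a Levenberg-Marquardt fit of a profile parameter to the penumbra, followed by a linear solve for the per-beamlet scaling factors. This 1-D toy uses a Gaussian stand-in kernel and invented numbers, not the authors' MATLAB tool:

```python
import numpy as np
from scipy.optimize import least_squares, nnls

x = np.linspace(-10.0, 10.0, 201)            # lateral positions (cm)
beamlet_centers = np.arange(-5.0, 5.5, 0.5)  # 21 beamlets across a 10 cm field

def kernel(x, center, sigma):
    """Toy FSPB lateral kernel: a Gaussian stand-in for the real profile."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

# "Reference" broad beam with a hidden profile width and off-axis scalings.
ref_sigma = 0.8
ref_scales = 1.0 + 0.02 * np.abs(beamlet_centers)
reference = sum(s * kernel(x, c, ref_sigma)
                for s, c in zip(ref_scales, beamlet_centers))

# Stage 1: Levenberg-Marquardt fit of the profile parameter to the penumbra.
def residual(p):
    model = sum(kernel(x, c, p[0]) for c in beamlet_centers)
    penumbra = np.abs(x) > 4.5               # outside the flat central region
    return (model - reference)[penumbra]

sigma_fit = least_squares(residual, x0=[1.5], method="lm").x[0]

# Stage 2: the broad beam is linear in the scaling factors, so they follow
# from a (non-negative) linear least-squares solve.
B = np.stack([kernel(x, c, sigma_fit) for c in beamlet_centers], axis=1)
scales, _ = nnls(B, reference)
```

    The key point is the split: the kernel shape is the only nonlinear unknown, so once it is fitted the scaling factors drop out of a linear problem.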

  19. Quantitative assessment of scatter correction techniques incorporated in next generation dual-source computed tomography

    NASA Astrophysics Data System (ADS)

    Mobberley, Sean David

Accurate, cross-scanner assessment of in-vivo air density used to quantitatively assess the amount and distribution of emphysema in COPD subjects has remained elusive. Hounsfield units (HU) within tracheal air can be considerably more positive than -1000 HU. With the advent of new dual-source scanners which employ dedicated scatter correction techniques, it is of interest to evaluate how quantitative measures of lung density compare between dual-source and single-source scan modes. This study has sought to characterize in-vivo and phantom-based air metrics using dual-energy computed tomography technology where the nature of the technology has required adjustments to scatter correction. Anesthetized ovine (N=6) and swine (N=13; more human-like rib cage shape), a lung phantom, and a thoracic phantom were studied using a dual-source MDCT scanner (Siemens Definition Flash). Multiple dual-source dual-energy (DSDE) and single-source (SS) scans taken at different energy levels and scan settings were acquired for direct quantitative comparison. Density histograms were evaluated for the lung, tracheal, water and blood segments. Image data were obtained at 80, 100, 120, and 140 kVp in the SS mode (B35f kernel) and at 80, 100, 140, and 140-Sn (tin filtered) kVp in the DSDE mode (B35f and D30f kernels), in addition to variations in dose, rotation time, and pitch. To minimize the effect of cross-scatter, the phantom scans in the DSDE mode were obtained by reducing the tube current of one of the tubes to its minimum (near zero) value. When using image data obtained in the DSDE mode, the median HU values in the tracheal regions of all animals and the phantom were consistently closer to -1000 HU regardless of reconstruction kernel (chapters 3 and 4). Similarly, HU values of water and blood were consistently closer to their nominal values of 0 HU and 55 HU respectively.
When using image data obtained in the SS mode, the air CT numbers demonstrated a consistent positive shift of up to 35 HU with respect to the nominal -1000 HU value. In vivo data demonstrated considerable variability in tracheal air HU values with SS-mode scanning, influenced by local anatomy, while tracheal air was more consistent with DSDE imaging. Scatter effects in the lung parenchyma differed from adjacent tracheal measures. In summary, the data suggest that enhanced scatter correction provides more accurate CT lung density measures needed to quantitatively assess the presence and distribution of emphysema in COPD subjects. The data further suggest that CT images acquired without adequate scatter correction cannot be corrected by linear algorithms, given the variability in tracheal air HU values and the independent scatter effects on lung parenchyma.

  20. [Crop geometry identification based on inversion of semiempirical BRDF models].

    PubMed

    Huang, Wen-jiang; Wang, Jin-di; Mu, Xi-han; Wang, Ji-hua; Liu, Liang-yun; Liu, Qiang; Niu, Zheng

    2007-10-01

Investigations have been made into identifying erect and horizontal crop varieties from bidirectional canopy reflected spectra and semi-empirical bidirectional reflectance distribution function (BRDF) models. The qualitative effect of leaf area index (LAI) and average leaf angle (ALA) on the crop canopy reflected spectrum was studied. The structure parameter sensitive index (SPEI), based on the weight of the volumetric kernel (fvol), the weight of the geometric kernel (fgeo), and the constant corresponding to isotropic reflectance (fiso), was defined in the present study for crop geometry identification. However, the weights associated with the kernels of semi-empirical BRDF models do not have a direct relationship with measurable biophysical parameters; efforts have therefore focused on finding the relation between these kernel weights and various vegetation structures. SPEI proved more sensitive for identifying crop geometry than the structural scattering index (SSI) and the normalized difference f-index (NDFI), and could be used to distinguish erect and horizontal geometry varieties. It is therefore feasible to identify horizontal and erect wheat varieties from bidirectional canopy reflected spectra.
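
    Because the kernel-driven BRDF model is linear in the three weights, retrieving (fiso, fvol, fgeo) from multi-angle reflectance reduces to linear least squares. A sketch with illustrative stand-in kernel shapes (not the operational Ross-Thick/Li-Sparse forms):

```python
import numpy as np

theta = np.radians(np.linspace(0.0, 60.0, 13))   # view zenith angles

# Stand-in kernel shapes; only the linear inversion is the point here.
k_vol = np.cos(2 * theta) - 0.2        # volumetric-like kernel
k_geo = -np.tan(theta)                 # geometric-like (shadowing) kernel

def forward(fiso, fvol, fgeo):
    """Kernel-driven BRDF model: R = fiso + fvol*Kvol + fgeo*Kgeo."""
    return fiso + fvol * k_vol + fgeo * k_geo

# Synthetic multi-angle reflectance for a known weight triplet.
true_w = (0.3, 0.1, 0.05)
refl = forward(*true_w)

# Linear least-squares retrieval of the kernel weights.
G = np.column_stack([np.ones_like(theta), k_vol, k_geo])
w_hat, *_ = np.linalg.lstsq(G, refl, rcond=None)
```

    Indices such as SPEI are then simple functions of the retrieved weights, which is what makes them cheap to compute over large areas.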

  1. Visualization of Oil Body Distribution in Jatropha curcas L. by Four-Wave Mixing Microscopy

    NASA Astrophysics Data System (ADS)

    Ishii, Makiko; Uchiyama, Susumu; Ozeki, Yasuyuki; Kajiyama, Sin'ichiro; Itoh, Kazuyoshi; Fukui, Kiichi

    2013-06-01

Jatropha curcas L. (jatropha) is a superior oil crop for biofuel production. To improve the oil yield of jatropha by breeding, the development of effective and reliable tools to evaluate oil production efficiency is essential. The characteristics of the jatropha kernel, which contains a large amount of oil, are not yet fully understood. Here, we demonstrate the application of four-wave mixing (FWM) microscopy to visualize the distribution of oil bodies in a jatropha kernel without staining. FWM microscopy enables us to visualize the size and morphology of oil bodies and to determine the oil content of the kernel to be 33.2%. The signal obtained from FWM microscopy comprises both stimulated parametric emission (SPE) and coherent anti-Stokes Raman scattering (CARS) signals. In the present situation, where a very short pump pulse is employed, the SPE signal is believed to dominate the FWM signal.

  2. Anatomical image-guided fluorescence molecular tomography reconstruction using kernel method

    PubMed Central

    Baikejiang, Reheman; Zhao, Yue; Fite, Brett Z.; Ferrara, Katherine W.; Li, Changqing

    2017-01-01

    Abstract. Fluorescence molecular tomography (FMT) is an important in vivo imaging modality to visualize physiological and pathological processes in small animals. However, FMT reconstruction is ill-posed and ill-conditioned due to strong optical scattering in deep tissues, which results in poor spatial resolution. It is well known that FMT image quality can be improved substantially by applying the structural guidance in the FMT reconstruction. An approach to introducing anatomical information into the FMT reconstruction is presented using the kernel method. In contrast to conventional methods that incorporate anatomical information with a Laplacian-type regularization matrix, the proposed method introduces the anatomical guidance into the projection model of FMT. The primary advantage of the proposed method is that it does not require segmentation of targets in the anatomical images. Numerical simulations and phantom experiments have been performed to demonstrate the proposed approach’s feasibility. Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image. For the phantom experiments with two FMT targets, the kernel method has reconstructed both targets successfully, which further validates the proposed kernel method. PMID:28464120
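
    The kernel method described here replaces the projection model y = A x with y = (A K) alpha and x = K alpha, where K is a similarity matrix built directly from anatomical-image intensities, so no target segmentation is needed. A minimal 1-D sketch under invented geometry and a Gaussian k-nearest-neighbor kernel (an assumption; the paper's kernel construction may differ in detail):

```python
import numpy as np

def anatomical_kernel(features, sigma=0.5, k_neighbors=5):
    """Gaussian kernel matrix built from anatomical-image intensities
    (one feature per voxel), truncated to each voxel's k most similar
    neighbors.  The matrix only encodes voxel-to-voxel similarity, so no
    segmentation of the targets is required."""
    d2 = (features[:, None] - features[None, :]) ** 2
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    for i in range(len(features)):
        K[i, np.argsort(K[i])[:-k_neighbors]] = 0.0   # keep top-k entries
    return K

rng = np.random.default_rng(0)
n_vox, n_det = 40, 20
# Toy 1-D "FMT" forward model: broad Gaussian sensitivity functions.
det = np.arange(n_det)[:, None]
vox = np.linspace(0.0, 19.0, n_vox)[None, :]
A = np.exp(-((det - vox) ** 2) / 8.0)

x_true = np.zeros(n_vox); x_true[10:14] = 1.0          # fluorescent target
anat = (x_true > 0).astype(float) + 0.05 * rng.standard_normal(n_vox)
y = A @ x_true                                         # noiseless measurements

# Kernelized projection model: y = (A K) alpha, image x = K alpha.
K = anatomical_kernel(anat)
alpha, *_ = np.linalg.lstsq(A @ K, y, rcond=None)
x_rec = K @ alpha
```

    The anatomical guidance enters only through K; a misleading (false-positive) anatomical feature adds kernel columns that the data fit is free to down-weight, which is the robustness property the abstract reports.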

  3. Fast inverse scattering solutions using the distorted Born iterative method and the multilevel fast multipole algorithm

    PubMed Central

    Hesford, Andrew J.; Chew, Weng C.

    2010-01-01

    The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brull, S., E-mail: Stephane.Brull@math.u-bordeaux.fr; Charrier, P., E-mail: Pierre.Charrier@math.u-bordeaux.fr; Mieussens, L., E-mail: Luc.Mieussens@math.u-bordeaux.fr

It is well known that the roughness of the wall has an effect on microscale gas flows. This effect can be shown for large Knudsen numbers by using a numerical solution of the Boltzmann equation. However, when the wall is rough at a nanometric scale, it is necessary to use a very small mesh size, which is much too expensive. An alternative approach is to incorporate the roughness effect in the scattering kernel of the boundary condition, such as the Maxwell-like kernel introduced by the authors in a previous paper. Here, we explain how this boundary condition can be implemented in a discrete velocity approximation of the Boltzmann equation. Moreover, the influence of the roughness is shown by computing the scattering pattern of mono-energetic beams of incident gas molecules. The effect of the angle of incidence of these molecules, of their mass, and of the morphology of the wall is investigated and discussed in a simplified two-dimensional configuration. The effect of the azimuthal angle of the incident beams is shown for a three-dimensional configuration. Finally, the case of non-elastic scattering is considered. All these results suggest that our approach is a promising way to incorporate enough physics of gas-surface interaction, at a reasonable computing cost, to improve kinetic simulations of micro- and nano-flows.
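
    For orientation, the classical Maxwell scattering kernel that the roughness-aware kernel generalizes can be sampled in a few lines: reflection is specular with probability 1 - alpha and fully accommodated (cosine-law re-emission at the wall temperature) with probability alpha. A 2-D sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(42)

def maxwell_reflect(v, alpha, v_th):
    """Sample a reflected velocity (vt, vn) at a wall in a 2-D configuration:
    specular with probability 1 - alpha, complete thermal accommodation
    (cosine-law re-emission at wall temperature, thermal speed v_th)
    with probability alpha."""
    vt, vn = v
    if rng.random() >= alpha:            # specular branch: flip normal component
        return vt, -vn
    # Diffuse branch: Gaussian tangential component, Rayleigh normal component.
    new_vt = v_th * rng.standard_normal()
    new_vn = v_th * np.sqrt(-2.0 * np.log(1.0 - rng.random()))
    return new_vt, new_vn

# A fully accommodating wall erases all memory of the incident beam.
samples = np.array([maxwell_reflect((300.0, -50.0), alpha=1.0, v_th=1.0)
                    for _ in range(20000)])
```

    In a discrete velocity method the same kernel appears not as a sampler but as a transition matrix between incoming and outgoing velocity nodes, which is the implementation route the paper describes.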

  5. Implementation of radiation shielding calculation methods. Volume 2: Seminar/Workshop notes

    NASA Technical Reports Server (NTRS)

    Capo, M. A.; Disney, R. K.

    1971-01-01

Detailed descriptions are presented of the input data for each of the MSFC computer codes applied to the analysis of a realistic nuclear propelled vehicle. The analytical techniques employed include cross-section data preparation, one- and two-dimensional discrete ordinates transport, point kernel, and single scatter methods.

  6. S-Wave Dispersion Relations: Exact Left Hand E-Plane Discontinuity from the Born Series

    NASA Technical Reports Server (NTRS)

    Bessis, D.; Temkin, A.

    1999-01-01

    We show, for a superposition of Yukawa potentials, that the left hand cut discontinuity in the complex E plane of the (S-wave) scattering amplitude is given exactly, in an interval depending on n, by the discontinuity of the Born series stopped at order n. This also establishes an inverse and unexpected correspondence of the Born series at positive high energies and negative low energies. We can thus construct a viable dispersion relation (DR) for the partial (S-) wave amplitude. The high numerical precision achievable by the DR is demonstrated for the exponential potential at zero scattering energy. We also briefly discuss the extension of our results to Field Theory.

  7. Demonstration of Numerical Equivalence of Ensemble and Spectral Averaging in Electromagnetic Scattering by Random Particulate Media

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Zakharova, Nadezhda T.

    2016-01-01

    The numerically exact superposition T-matrix method is used to model far-field electromagnetic scattering by two types of particulate object. Object 1 is a fixed configuration which consists of N identical spherical particles (with N = 200 or 400) quasi-randomly populating a spherical volume V having a median size parameter of 50. Object 2 is a true discrete random medium (DRM) comprising the same number N of particles randomly moving throughout V. The median particle size parameter is fixed at 4. We show that if Object 1 is illuminated by a quasi-monochromatic parallel beam then it generates a typical speckle pattern having no resemblance to the scattering pattern generated by Object 2. However, if Object 1 is illuminated by a parallel polychromatic beam with a 10% bandwidth then it generates a scattering pattern that is largely devoid of speckles and closely reproduces the quasi-monochromatic pattern generated by Object 2. This result serves to illustrate the capacity of the concept of electromagnetic scattering by a DRM to encompass fixed quasi-random particulate samples provided that they are illuminated by polychromatic light.
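    The speckle-averaging effect described in this record can be reproduced qualitatively with a toy one-dimensional phasor model: fixed random scatterer positions and a scalar far-field sum. All parameters are illustrative and unrelated to the T-matrix computations of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 200.0, 100)          # fixed quasi-random scatterer positions

    def intensity(k, q):
        """Far-field intensity of the fixed configuration: coherent phasor sum."""
        phase = np.exp(1j * k * q[:, None] * x[None, :])
        return np.abs(phase.sum(axis=1)) ** 2

    q = np.linspace(0.5, 1.5, 400)          # schematic detection directions
    mono = intensity(1.0, q)                # monochromatic: strong speckle
    ks = np.linspace(0.95, 1.05, 60)        # 10% spread of wavenumbers
    poly = np.mean([intensity(k, q) for k in ks], axis=0)

    # speckle contrast = std/mean of the angular pattern
    c_mono = mono.std() / mono.mean()
    c_poly = poly.std() / poly.mean()
    print(c_mono, c_poly)                   # polychromatic contrast is much lower
    ```

    Averaging over wavenumbers plays the same role here as averaging over particle configurations, which is the equivalence the record demonstrates rigorously.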

  8. Numerical techniques in radiative heat transfer for general, scattering, plane-parallel media

    NASA Technical Reports Server (NTRS)

    Sharma, A.; Cogley, A. C.

    1982-01-01

    The study of radiative heat transfer with scattering usually leads to the solution of singular Fredholm integral equations. The present paper presents an accurate and efficient numerical method to solve certain integral equations that govern radiative equilibrium problems in plane-parallel geometry for both grey and nongrey, anisotropically scattering media. In particular, the nongrey problem is represented by a spectral integral of a system of nonlinear integral equations in space, which has not been solved previously. The numerical technique is constructed to handle this unique nongrey governing equation as well as the difficulties caused by singular kernels. Example problems are solved and the method's accuracy and computational speed are analyzed.
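    A standard baseline for discretizing a (non-singular) Fredholm equation of the second kind, before the more delicate singular-kernel treatment discussed above, is the Nyström method. The sketch below uses a separable toy kernel with a known exact solution, not the paper's radiative-equilibrium kernels.

    ```python
    import numpy as np

    # Nyström method for a Fredholm equation of the second kind:
    #   u(x) = f(x) + Int_0^1 K(x,s) u(s) ds,  with K(x,s) = x*s, f(x) = 2x/3,
    # whose exact solution is u(x) = x (since Int_0^1 s*s ds = 1/3).
    n = 20
    t, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (t + 1.0)                     # map Gauss-Legendre nodes to [0, 1]
    w = 0.5 * w                             # ... and the weights

    K = t[:, None] * t[None, :]             # kernel sampled on the quadrature grid
    f = 2.0 * t / 3.0
    u = np.linalg.solve(np.eye(n) - K * w[None, :], f)

    print(np.max(np.abs(u - t)))            # recovers u(x) = x to machine precision
    ```

    For the singular kernels of the record, the quadrature rule would be replaced by one that integrates the singularity analytically, but the linear-system structure is the same.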

  9. Implementation of a small-angle scattering model in MCNPX for very cold neutron reflector studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grammer, Kyle B.; Gallmeier, Franz X.

    Current neutron moderator media do not sufficiently moderate neutrons below the cold neutron regime into the very cold neutron (VCN) regime that is desirable for some physics applications. Nesvizhevsky et al [1] have demonstrated that nanodiamond powder efficiently reflects VCN via small-angle scattering and suggest that this effect could be exploited to boost the neutron output of a VCN moderator. Nanoparticle reflectors are being investigated in simulation studies as part of the development of a VCN source option for the SNS second target station. We are pursuing an expansion of the MCNPX code by implementation of an analytical small-angle scattering function [2], which is adaptable by scattering particle sizes, distributions, and packing fractions, in order to supplement currently existing scattering kernels. The analytical model and preliminary studies using MCNPX will be discussed.

  10. Diffuse correlation tomography in the transport regime: A theoretical study of the sensitivity to Brownian motion.

    PubMed

    Tricoli, Ugo; Macdonald, Callum M; Durduran, Turgut; Da Silva, Anabela; Markel, Vadim A

    2018-02-01

    Diffuse correlation tomography (DCT) uses the electric-field temporal autocorrelation function to measure the mean-square displacement of light-scattering particles in a turbid medium over a given exposure time. The movement of blood particles is here estimated through a Brownian-motion-like model in contrast to ordered motion as in blood flow. The sensitivity kernel relating the measurable field correlation function to the mean-square displacement of the particles can be derived by applying a perturbative analysis to the correlation transport equation (CTE). We derive an analytical expression for the CTE sensitivity kernel in terms of the Green's function of the radiative transport equation, which describes the propagation of the intensity. We then evaluate the kernel numerically. The simulations demonstrate that, in the transport regime, the sensitivity kernel provides sharper spatial information about the medium as compared with the correlation diffusion approximation. Also, the use of the CTE allows one to explore some additional degrees of freedom in the data such as the collimation direction of sources and detectors. Our results can be used to improve the spatial resolution of DCT, in particular, with applications to blood flow imaging in regions where the Brownian motion is dominant.

  11. Diffuse correlation tomography in the transport regime: A theoretical study of the sensitivity to Brownian motion

    NASA Astrophysics Data System (ADS)

    Tricoli, Ugo; Macdonald, Callum M.; Durduran, Turgut; Da Silva, Anabela; Markel, Vadim A.

    2018-02-01

    Diffuse correlation tomography (DCT) uses the electric-field temporal autocorrelation function to measure the mean-square displacement of light-scattering particles in a turbid medium over a given exposure time. The movement of blood particles is here estimated through a Brownian-motion-like model in contrast to ordered motion as in blood flow. The sensitivity kernel relating the measurable field correlation function to the mean-square displacement of the particles can be derived by applying a perturbative analysis to the correlation transport equation (CTE). We derive an analytical expression for the CTE sensitivity kernel in terms of the Green's function of the radiative transport equation, which describes the propagation of the intensity. We then evaluate the kernel numerically. The simulations demonstrate that, in the transport regime, the sensitivity kernel provides sharper spatial information about the medium as compared with the correlation diffusion approximation. Also, the use of the CTE allows one to explore some additional degrees of freedom in the data such as the collimation direction of sources and detectors. Our results can be used to improve the spatial resolution of DCT, in particular, with applications to blood flow imaging in regions where the Brownian motion is dominant.

  12. Symmetry preserving truncations of the gap and Bethe-Salpeter equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Binosi, Daniele; Chang, Lei; Papavassiliou, Joannis

    2016-05-01

    Ward-Green-Takahashi (WGT) identities play a crucial role in hadron physics, e.g. imposing stringent relationships between the kernels of the one- and two-body problems, which must be preserved in any veracious treatment of mesons as bound states. In this connection, one may view the dressed gluon-quark vertex, Gamma(alpha)(mu), as fundamental. We use a novel representation of Gamma(alpha)(mu), in terms of the gluon-quark scattering matrix, to develop a method capable of elucidating the unique quark-antiquark Bethe-Salpeter kernel, K, that is symmetry-consistent with a given quark gap equation. A strength of the scheme is its ability to expose and capitalize on graphic symmetries within the kernels. This is displayed in an analysis that reveals the origin of H-diagrams in K, which are two-particle-irreducible contributions, generated as two-loop diagrams involving the three-gluon vertex, that cannot be absorbed as a dressing of Gamma(alpha)(mu) in a Bethe-Salpeter kernel nor expressed as a member of the class of crossed-box diagrams. Thus, there are no general circumstances under which the WGT identities essential for a valid description of mesons can be preserved by a Bethe-Salpeter kernel obtained simply by dressing both gluon-quark vertices in a ladderlike truncation; and, moreover, adding any number of similarly dressed crossed-box diagrams cannot improve the situation.

  13. A shock-capturing SPH scheme based on adaptive kernel estimation

    NASA Astrophysics Data System (ADS)

    Sigalotti, Leonardo Di G.; López, Hender; Donoso, Arnaldo; Sira, Eloy; Klapp, Jaime

    2006-02-01

    Here we report a method that converts standard smoothed particle hydrodynamics (SPH) into a working shock-capturing scheme without relying on solutions to the Riemann problem. Unlike existing adaptive SPH simulations, the present scheme is based on an adaptive kernel estimation of the density, which combines intrinsic features of both the kernel and nearest neighbor approaches in a way that the amount of smoothing required in low-density regions is effectively controlled. Symmetrized SPH representations of the gas dynamic equations along with the usual kernel summation for the density are used to guarantee variational consistency. Implementation of the adaptive kernel estimation involves a very simple procedure and allows for a unique scheme that handles strong shocks and rarefactions the same way. Since it represents a general improvement of the integral interpolation on scattered data, it is also applicable to other fluid-dynamic models. When the method is applied to supersonic compressible flows with sharp discontinuities, as in the classical one-dimensional shock-tube problem and its variants, the accuracy of the results is comparable, and in most cases superior, to that obtained from high quality Godunov-type methods and SPH formulations based on Riemann solutions. The extension of the method to two- and three-space dimensions is straightforward. In particular, for the two-dimensional cylindrical Noh's shock implosion and Sedov point explosion problems the present scheme produces much better results than those obtained with conventional SPH codes.
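    The core idea of an adaptive kernel density estimate, with a per-particle smoothing length tied to the local neighbor distance, can be sketched in 1D. The Gaussian kernel and the k-th-neighbor rule below are illustrative choices, not the paper's scheme.

    ```python
    import numpy as np

    def sph_density_adaptive(x, m, k=8):
        """1D SPH-style density estimate with a per-particle smoothing length
        set by the distance to the k-th nearest neighbour (normalized Gaussian
        kernel; a sketch of the adaptive idea, not the paper's scheme)."""
        x = np.asarray(x)
        d = np.abs(x[:, None] - x[None, :])
        h = np.sort(d, axis=1)[:, k]               # adaptive smoothing lengths
        W = np.exp(-(d / h[:, None]) ** 2) / (np.sqrt(np.pi) * h[:, None])
        return (m[None, :] * W).sum(axis=1), h

    rng = np.random.default_rng(2)
    # density contrast: many particles on [0,1), few on [1,2)
    x = np.concatenate([rng.uniform(0, 1, 400), rng.uniform(1, 2, 50)])
    m = np.full(x.size, 1.0 / x.size)              # equal-mass particles
    rho, h = sph_density_adaptive(x, m)

    # smoothing lengths grow where particles are sparse, as the record describes
    print(h[x > 1.2].mean(), h[x < 0.8].mean())
    ```

    Controlling the smoothing length through the local neighbor distance is exactly the mechanism that keeps low-density regions from being over-smoothed.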

  14. Common spatial pattern combined with kernel linear discriminate and generalized radial basis function for motor imagery-based brain computer interface applications

    NASA Astrophysics Data System (ADS)

    Hekmatmanesh, Amin; Jamaloo, Fatemeh; Wu, Huapeng; Handroos, Heikki; Kilpeläinen, Asko

    2018-04-01

    Brain-computer interfaces (BCIs) are key to the development of robotic, prosthetic and human-controlled systems. This work focuses on the implementation of a common spatial pattern (CSP) based algorithm to detect event-related desynchronization patterns. Building on well-known previous work in this area, features are extracted with the filter bank common spatial pattern (FBCSP) method and then weighted by a sensitive learning vector quantization (SLVQ) algorithm. In the current work, applying the radial basis function (RBF) as the mapping kernel of kernel linear discriminant analysis (KLDA) to the weighted features transfers the data into a higher dimension, where the RBF kernel yields more discriminative data scattering. Afterwards, a support vector machine (SVM) with a generalized radial basis function (GRBF) kernel is employed to improve the efficiency and robustness of the classification. On average, 89.60% accuracy and 74.19% robustness are achieved. The BCI Competition III data set IVa is used to evaluate the algorithm for detecting right-hand and foot imagery movement patterns. Results show that combining KLDA with the SVM-GRBF classifier yields improvements of 8.9% in accuracy and 14.19% in robustness. For all subjects, it is concluded that mapping the CSP features into a higher dimension by RBF and using GRBF as the SVM kernel improve the accuracy and reliability of the proposed method.
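    The CSP step at the heart of this pipeline reduces to whitening the composite class covariance and diagonalizing one whitened class covariance. The following is textbook CSP on synthetic data, not the paper's full FBCSP/SLVQ/KLDA/SVM-GRBF pipeline; channel counts and variance scales are invented.

    ```python
    import numpy as np

    def csp_filters(C1, C2):
        """Common-spatial-pattern filters from two class covariance matrices
        via whitening plus eigendecomposition (textbook CSP sketch)."""
        d, U = np.linalg.eigh(C1 + C2)
        P = U / np.sqrt(d)                        # whitens the composite covariance
        vals, V = np.linalg.eigh(P.T @ C1 @ P)    # eigenvalues in [0, 1]
        return vals, P @ V                        # filters are the columns

    rng = np.random.default_rng(3)

    def mean_cov(scale, n_trials=30, n_samp=200):
        trials = [np.diag(scale) @ rng.normal(size=(4, n_samp)) for _ in range(n_trials)]
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)

    # class 1 has high variance on channel 0, class 2 on channel 3
    C1, C2 = mean_cov([3, 1, 1, 1]), mean_cov([1, 1, 1, 3])
    vals, W = csp_filters(C1, C2)
    print(vals[0], vals[-1])   # extreme eigenvalues -> most discriminative filters
    ```

    The filters at the two ends of the eigenvalue spectrum maximize the variance ratio between classes; their log-variances are the features passed on to the classifier stage.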

  15. ASKI: A modular toolbox for scattering-integral-based seismic full waveform inversion and sensitivity analysis utilizing external forward codes

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang

    Due to increasing computational resources, the development of new numerically demanding methods and software for imaging Earth's interior remains of high interest in Earth sciences. Here, we give a description, from a user's and programmer's perspective, of the highly modular, flexible and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are handled by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost, applying different kinds of model regularization or re-selecting/weighting the inverted dataset, without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows the user to compose customized workflows in a consistent computational environment. ASKI is written in modern Fortran and Python, is well documented and is freely available under the terms of the GNU General Public License (http://www.rub.de/aski).

  16. Modeling of Electromagnetic Scattering by Discrete and Discretely Heterogeneous Random Media by Using Numerically Exact Solutions of the Maxwell Equations

    NASA Technical Reports Server (NTRS)

    Dlugach, Janna M.; Mishchenko, Michael I.

    2017-01-01

    In this paper, we discuss some aspects of numerical modeling of electromagnetic scattering by a discrete random medium by using numerically exact solutions of the macroscopic Maxwell equations. Typical examples of such media are clouds of interstellar dust, clouds of interplanetary dust in the Solar system, dusty atmospheres of comets, particulate planetary rings, clouds in planetary atmospheres, aerosol particles with numerous inclusions and so on. Our study is based on the results of extensive computations of different characteristics of electromagnetic scattering obtained by using the superposition T-matrix method, which represents a direct computer solver of the macroscopic Maxwell equations for an arbitrary multisphere configuration. As a result, in particular, we clarify the range of applicability of the low-density theories of radiative transfer and coherent backscattering as well as of widely used effective-medium approximations.

  17. Optics of Water Cloud Droplets Mixed with Black-Carbon Aerosols

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Liu, Li; Cairns, Brian; Mackowski, Daniel W.

    2014-01-01

    We use the recently extended superposition T-matrix method to calculate scattering and absorption properties of micrometer-sized water droplets contaminated by black carbon. Our numerically exact results reveal that, depending on the mode of soot-water mixing, the soot specific absorption can vary by a factor exceeding 6.5. The specific absorption is maximized when the soot material is quasi-uniformly distributed throughout the droplet interior in the form of numerous small monomers. The range of mixing scenarios captured by our computations implies a wide range of remote sensing and radiation budget implications of the presence of black carbon in liquid-water clouds. We show that the popular Maxwell-Garnett effective-medium approximation can be used to calculate the optical cross sections, single-scattering albedo, and asymmetry parameter for the quasi-uniform mixing scenario, but is likely to fail in application to other mixing scenarios and in computations of the elements of the scattering matrix.
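    The Maxwell-Garnett approximation mentioned in this record has a closed form that is easy to evaluate. In the sketch below the permittivity values and soot volume fraction are illustrative assumptions, not the paper's inputs.

    ```python
    import numpy as np

    def maxwell_garnett(eps_m, eps_i, f):
        """Maxwell-Garnett effective permittivity for spherical inclusions of
        permittivity eps_i at volume fraction f in a matrix of permittivity
        eps_m.  Reduces to eps_m at f = 0 and to eps_i at f = 1."""
        num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
        den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
        return eps_m * num / den

    # water-like matrix with a small volume fraction of absorbing (soot-like)
    # inclusions; both permittivities are illustrative, not from the paper
    eps_eff = maxwell_garnett(1.77 + 0.0j, 3.6 + 1.2j, 0.01)
    m_eff = np.sqrt(eps_eff)                  # effective complex refractive index
    print(m_eff)
    ```

    As the record notes, this mixing rule is expected to be adequate only for the quasi-uniform mixing scenario (inclusions much smaller than the wavelength, distributed throughout the host).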

  18. TH-A-18C-09: Ultra-Fast Monte Carlo Simulation for Cone Beam CT Imaging of Brain Trauma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sisniega, A; Zbijewski, W; Stayman, J

    Purpose: Application of cone-beam CT (CBCT) to low-contrast soft tissue imaging, such as in detection of traumatic brain injury, is challenged by high levels of scatter. A fast, accurate scatter correction method based on Monte Carlo (MC) estimation is developed for application in high-quality CBCT imaging of acute brain injury. Methods: The correction involves MC scatter estimation executed on an NVIDIA GTX 780 GPU (MC-GPU), with baseline simulation speed of ~1e7 photons/sec. MC-GPU is accelerated by a novel, GPU-optimized implementation of variance reduction (VR) techniques (forced detection and photon splitting). The number of simulated tracks and projections is reduced for additional speed-up. Residual noise is removed and the missing scatter projections are estimated via kernel smoothing (KS) in the projection plane and across gantry angles. The method is assessed using CBCT images of a head phantom presenting a realistic simulation of fresh intracranial hemorrhage (100 kVp, 180 mAs, 720 projections, source-detector distance 700 mm, source-axis distance 480 mm). Results: For a fixed run-time of ~1 sec/projection, GPU-optimized VR reduces the noise in MC-GPU scatter estimates by a factor of 4. For scatter correction, MC-GPU with VR is executed with 4-fold angular downsampling and 1e5 photons/projection, yielding a 3.5 minute run-time per scan, and de-noised with optimized KS. Corrected CBCT images demonstrate a uniformity improvement of 18 HU and a contrast improvement of 26 HU compared to no correction, and a 52% increase in contrast-to-noise ratio in simulated hemorrhage compared to “oracle” constant-fraction correction. Conclusion: Acceleration of MC-GPU achieved through GPU-optimized variance reduction and kernel smoothing yields an efficient (<5 min/scan) and accurate scatter correction that does not rely on additional hardware or simplifying assumptions about the scatter distribution. The method is undergoing implementation in a novel CBCT system dedicated to brain trauma imaging at the point of care in sports and military applications. Research grant from Carestream Health. JY is an employee of Carestream Health.
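    The projection-domain kernel-smoothing step (denoising a noisy, angularly downsampled MC scatter estimate) can be illustrated in 1D. The scatter profile, noise level and kernel width below are all invented for the demonstration.

    ```python
    import numpy as np

    # Simulate a smooth "true" scatter-vs-angle profile, sample it at every
    # 4th projection with MC-like noise, then recover all 720 angles with a
    # Gaussian kernel smoother (a 1D sketch of the KS step; parameters are
    # illustrative, not the paper's).
    rng = np.random.default_rng(4)
    angles = np.arange(720)                        # projection index
    true = 100 + 30 * np.sin(2 * np.pi * angles / 720)
    sparse = angles[::4]                           # 4-fold angular downsampling
    noisy = true[sparse] + rng.normal(0, 10, sparse.size)

    sigma = 12.0                                   # kernel width in projections
    w = np.exp(-0.5 * ((angles[:, None] - sparse[None, :]) / sigma) ** 2)
    smooth = (w * noisy[None, :]).sum(axis=1) / w.sum(axis=1)

    # smoothing both interpolates the missing angles and suppresses MC noise
    print(np.abs(smooth - true).mean(), np.abs(noisy - true[sparse]).mean())
    ```

    Because the scatter distribution varies slowly with gantry angle, heavy smoothing removes MC noise with little bias, which is what makes the aggressive photon-count reduction affordable.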

  19. On the heat trace of Schroedinger operators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banuelos, R.; Sa Barreto, A.

    1995-12-31

    Trace formulae for heat kernels of Schroedinger operators have been widely studied in connection with spectral and scattering theory. They have been used to obtain information about a potential from its spectrum, or from its scattering data, and vice versa. Using elementary Fourier transform methods we obtain a formula for the general coefficient in the asymptotic expansion of the trace of the heat kernel of the Schroedinger operator -Δ + V, as t ↓ 0, with V ∈ S(R^n), the class of functions with rapid decay at infinity. In dimension n = 1 a recurrent formula for the general coefficient in the expansion is obtained in [6]. However, the KdV methods used there do not seem to generalize to higher dimension. Using the formula of [6] and the symmetry of some integrals, Y. Colin de Verdiere has computed the first four coefficients for potentials in three space dimensions. Also, in [1] a different method is used to compute heat coefficients for differential operators on manifolds. 14 refs.
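    The leading term of the 1D heat-trace expansion can be checked numerically with a finite-difference discretization. The interval length, grid size and Gaussian test potential below are arbitrary illustrative choices, not taken from the record.

    ```python
    import numpy as np

    # Check the leading heat-trace coefficient in one dimension:
    #   Tr(e^{-tH} - e^{-tH0}) ~ -(4*pi*t)^(-1/2) * t * Int(V)   as t -> 0,
    # where H = -d^2/dx^2 + V and H0 = -d^2/dx^2, discretized on [-15, 15].
    n = 1500
    x = np.linspace(-15.0, 15.0, n)
    h = x[1] - x[0]
    V = np.exp(-x**2)                                  # rapidly decaying potential

    lap = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / h**2
    E0 = np.linalg.eigvalsh(-lap)                      # free operator
    E1 = np.linalg.eigvalsh(-lap + np.diag(V))         # with potential

    t = 0.05
    trace_diff = np.sum(np.exp(-t * E1) - np.exp(-t * E0))
    leading = -t * (V.sum() * h) / np.sqrt(4 * np.pi * t)
    print(trace_diff, leading)                         # agree to a few per cent
    ```

    The residual discrepancy is dominated by the next coefficient in the expansion, of order t^2 times the integral of V squared.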

  20. GRAYSKY-A new gamma-ray skyshine code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witts, D.J.; Twardowski, T.; Watmough, M.H.

    1993-01-01

    This paper describes a new prototype gamma-ray skyshine code GRAYSKY (Gamma-RAY SKYshine) that has been developed at BNFL, as part of an industrially based master of science course, to overcome the problems encountered with SKYSHINEII and RANKERN. GRAYSKY is a point kernel code based on the use of a skyshine response function. The scattering within source or shield materials is accounted for by the use of buildup factors. This is an approximate method of solution, but one that has been shown to produce results that are acceptable for dose rate predictions on operating plants. The novel features of GRAYSKY are as follows: 1. The code is fully integrated with a semianalytical point kernel shielding code, currently under development at BNFL, which offers powerful solid-body modeling capabilities. 2. The geometry modeling also allows the skyshine response function to be used in a manner that accounts for the shielding of air-scattered radiation. 3. Skyshine buildup factors calculated using the skyshine response function have been used as well as dose buildup factors.
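    A point-kernel estimate with a buildup factor, the basic construct GRAYSKY builds on, looks like this in miniature. The linear buildup factor below is a deliberately crude stand-in for the tabulated (e.g. Taylor or geometric-progression fitted) factors a real shielding code would use; source strength and attenuation coefficient are invented.

    ```python
    import numpy as np

    def point_kernel_flux(S, mu, r, buildup=lambda mur: 1.0 + mur):
        """Point-kernel photon flux from an isotropic point source:
            phi = S * B(mu*r) * exp(-mu*r) / (4*pi*r^2),
        where B accounts for in-shield scattering (buildup).  The default
        linear B = 1 + mu*r is purely illustrative."""
        mur = mu * r
        return S * buildup(mur) * np.exp(-mur) / (4 * np.pi * r**2)

    # illustrative source, attenuation coefficient 0.1 /cm, detector at 50 cm
    phi_b = point_kernel_flux(1e6, 0.1, 50.0)
    phi_0 = point_kernel_flux(1e6, 0.1, 50.0, buildup=lambda mur: 1.0)
    print(phi_b / phi_0)   # buildup multiplies the uncollided flux by B(5) = 6
    ```

    In a skyshine calculation the same kernel is folded with a response function for air-scattered radiation rather than evaluated along a direct line of sight alone.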

  1. Java application for the superposition T-matrix code to study the optical properties of cosmic dust aggregates

    NASA Astrophysics Data System (ADS)

    Halder, P.; Chakraborty, A.; Deb Roy, P.; Das, H. S.

    2014-09-01

    In this paper, we report the development of a Java application for the superposition T-matrix code, JaSTA (Java Superposition T-matrix App), to study the light scattering properties of aggregate structures. It has been developed using Netbeans 7.1.2, a Java integrated development environment (IDE). JaSTA uses the double precision superposition codes for multi-sphere clusters in random orientation developed by Mackowski and Mishchenko (1996). It consists of a graphical user interface (GUI) at the front end and a database of related data at the back end. Both the interactive GUI and the database package enable a user to set the respective input parameters (namely, wavelength, complex refractive indices, grain size, etc.) and study the related optical properties of cosmic dust (namely, extinction, polarization, etc.) instantly, i.e., with zero computational time. This increases the efficiency of the user. The database of JaSTA is currently created for a few sets of input parameters, with a plan to create a large database in future. This application also has an option where users can compile and run the scattering code directly for aggregates in the GUI environment. JaSTA aims to provide convenient and quicker data analysis of the optical properties which can be used in different fields like planetary science, atmospheric science, nano science, etc. The current version of this software is developed for the Linux and Windows platforms to study the light scattering properties of small aggregates, and will be extended to larger aggregates using parallel codes in future. Catalogue identifier: AETB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 571570 No. of bytes in distributed program, including test data, etc.: 120226886 Distribution format: tar.gz Programming language: Java, Fortran95. Computer: Any Windows or Linux system capable of hosting a Java runtime environment, Java3D and a Fortran95 compiler; developed on a 2.40 GHz Intel Core i3. Operating system: Any Windows or Linux system capable of hosting a Java runtime environment, Java3D and a Fortran95 compiler. RAM: Ranging from a few Mbytes to several Gbytes, depending on the input parameters. Classification: 1.3. External routines: jfreechart-1.0.14 [1] (free plotting library for Java), j3d-jre-1.5.2 [2] (3D visualization). Nature of problem: Optical properties of cosmic dust aggregates. Solution method: Java application based on Mackowski and Mishchenko's superposition T-matrix code. Restrictions: The program is designed for single processor systems. Additional comments: The distribution file for this program is over 120 Mbytes and therefore is not delivered directly when Download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. Running time: Ranging from a few minutes to several hours, depending on the input parameters. References: [1] http://www.jfree.org/index.html [2] https://java3d.java.net/

  2. A locally adaptive kernel regression method for facies delineation

    NASA Astrophysics Data System (ADS)

    Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.

    2015-12-01

    Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology that uses kernel regression methods as an effective tool for facies delineation. The method uses both the spatial positions and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run on a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method improves significantly when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough curve performance.
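    A simplified version of kernel-based facies classification, with a single shared anisotropic Gaussian kernel instead of the paper's locally adaptive steering kernels, can be written as follows. The synthetic facies geometry, kernel covariance and query points are all invented for illustration.

    ```python
    import numpy as np

    def nw_classify(X, y, Xq, C):
        """Kernel (Nadaraya-Watson style) facies classifier with one shared
        anisotropic Gaussian kernel of covariance C.  In the paper's method C
        would instead be estimated locally for each hard data point."""
        Ci = np.linalg.inv(C)
        d = X[None, :, :] - Xq[:, None, :]              # (n_query, n_data, 2)
        md = np.einsum('qni,ij,qnj->qn', d, Ci, d)      # squared Mahalanobis distances
        w = np.exp(-0.5 * md)
        classes = np.unique(y)
        score = np.array([w[:, y == c].sum(axis=1) for c in classes])
        return classes[score.argmax(axis=0)]

    rng = np.random.default_rng(5)
    n = 200
    X = np.c_[rng.uniform(0, 10, n), rng.uniform(0, 2, n)]   # hard data locations
    y = (X[:, 1] > 1.0).astype(int)                          # facies boundary at y = 1
    C = np.array([[4.0, 0.0], [0.0, 0.25]])                  # anisotropic: elongated in x
    Xq = np.array([[5.0, 0.5], [5.0, 1.5]])                  # queries below/above boundary
    print(nw_classify(X, y, Xq, C))
    ```

    Aligning the kernel's long axis with the direction of highest spatial correlation, and doing so locally per datum, is the refinement that distinguishes the steering-kernel method from this fixed-kernel baseline.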

  3. SU-E-T-22: A Deterministic Solver of the Boltzmann-Fokker-Planck Equation for Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Paganetti, H

    2015-06-15

    Purpose: The Boltzmann-Fokker-Planck equation (BFPE) accurately models the migration of photons/charged particles in tissues. While the Monte Carlo (MC) method is popular for solving the BFPE in a statistical manner, we aim to develop a deterministic BFPE solver based on various state-of-the-art numerical acceleration techniques for rapid and accurate dose calculation. Methods: Our BFPE solver is based on a structured grid that is maximally parallelizable, with discretization in energy, angle and space, and its cross section coefficients are derived or directly imported from the Geant4 database. The physical processes that are taken into account are Compton scattering, the photoelectric effect and pair production for photons, and elastic scattering, ionization and bremsstrahlung for charged particles. While the spatial discretization is based on the diamond scheme, the angular discretization combines the finite element method (FEM) and spherical harmonics (SH). Thus, SH is used to globally expand the scattering kernel and FEM is used to locally discretize the angular sphere. As a result, this hybrid method (FEM-SH) is both accurate in dealing with forward-peaked scattering via FEM, and efficient for multi-energy-group computation via SH. In addition, FEM-SH enables the analytical integration in the energy variable of the delta scattering kernel for elastic scattering, with reduced truncation error compared to the numerical integration of the classic SH-based multi-energy-group method. Results: The accuracy of the proposed BFPE solver was benchmarked against Geant4 for photon dose calculation. In particular, FEM-SH had improved accuracy compared to FEM alone, while both were within 2% of the results obtained with Geant4. Conclusion: A deterministic solver of the Boltzmann-Fokker-Planck equation is developed for dose calculation, and benchmarked against Geant4. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).

  4. Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate

    NASA Astrophysics Data System (ADS)

    Li, Jun; Altschuler, Martin D.; Hahn, Stephen M.; Zhu, Timothy C.

    2008-08-01

    The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. 
For this reason, we compare the results from heterogeneous optical data with those obtained from average homogeneous optical properties. The optimized treatment plans are also compared with the reference clinical plan, defined as the plan with sources of equal strength, distributed regularly in space, which delivers a mean value of prescribed fluence at detector locations within the treatment region. The study suggests that comprehensive optimization of source parameters (i.e. strengths, lengths and locations) is feasible, thus allowing acceptable dose coverage in a heterogeneous prostate PDT within the time constraints of the PDT procedure.
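    The Cimmino feasibility algorithm used above for the source-strength optimization is a simultaneous-projection iteration: the current iterate is projected onto every dose-constraint hyperplane and the projections are averaged. The sketch below solves a consistent synthetic "dose kernel" system with a nonnegativity clip; all dimensions, data and parameters are invented, and the clinical version works with inequality (dose-bound) constraints rather than equalities.

    ```python
    import numpy as np

    def cimmino(A, b, iters=5000, relax=1.5, nonneg=True):
        """Cimmino's simultaneous-projection method for A x = b: average the
        projections of x onto every hyperplane a_i . x = b_i, optionally
        clipping to x >= 0 (source strengths cannot be negative).  A generic
        sketch of the feasibility algorithm, not the clinical planning code."""
        m, n = A.shape
        x = np.zeros(n)
        row_norm2 = (A ** 2).sum(axis=1)
        for _ in range(iters):
            resid = b - A @ x
            x = x + (relax / m) * (A.T @ (resid / row_norm2))
            if nonneg:
                x = np.maximum(x, 0.0)
        return x

    rng = np.random.default_rng(6)
    A = rng.uniform(0.1, 1.0, (40, 8))       # dose kernel: detectors x sources
    w_true = rng.uniform(0.5, 2.0, 8)        # "prescribed" source strengths
    b = A @ w_true                           # consistent dose prescription
    w = cimmino(A, b)
    print(np.abs(A @ w - b).max())           # residual shrinks toward zero
    ```

    Because every constraint is handled independently and then averaged, the iteration parallelizes naturally and tolerates inconsistent constraint sets, which is why it suits feasibility-style dose planning.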

  5. Effect of scattering on coherent anti-Stokes Raman scattering (CARS) signals.

    PubMed

    Ranasinghesagara, Janaka C; De Vito, Giuseppe; Piazza, Vincenzo; Potma, Eric O; Venugopalan, Vasan

    2017-04-17

    We develop a computational framework to examine the factors responsible for scattering-induced distortions of coherent anti-Stokes Raman scattering (CARS) signals in turbid samples. We apply the Huygens-Fresnel wave-based electric field superposition (HF-WEFS) method combined with the radiating dipole approximation to compute the effects of scattering-induced distortions of focal excitation fields on the far-field CARS signal. We analyze the effect of spherical scatterers, placed in the vicinity of the focal volume, on the CARS signal emitted by different objects (a 2μm diameter solid sphere, a 2μm diameter myelin cylinder and a 2μm diameter myelin tube). We find that distortions in the CARS signals arise not only from attenuation of the focal field but also from scattering-induced changes in the spatial phase that modify the angular distribution of the CARS emission. Our simulations further show that CARS signal attenuation can be minimized by using a high numerical aperture condenser. Moreover, unlike the CARS intensity image, CARS images formed by taking the ratio of CARS signals obtained using x- and y-polarized input fields are relatively insensitive to the effects of spherical scatterers. Our computational framework provides a mechanistic approach to characterizing scattering-induced distortions in coherent imaging of turbid media and may inspire bottom-up approaches for adaptive optical methods for image correction.

  6. Effect of scattering on coherent anti-Stokes Raman scattering (CARS) signals

    PubMed Central

    Ranasinghesagara, Janaka C.; De Vito, Giuseppe; Piazza, Vincenzo; Potma, Eric O.; Venugopalan, Vasan

    2017-01-01

    We develop a computational framework to examine the factors responsible for scattering-induced distortions of coherent anti-Stokes Raman scattering (CARS) signals in turbid samples. We apply the Huygens-Fresnel wave-based electric field superposition (HF-WEFS) method combined with the radiating dipole approximation to compute the effects of scattering-induced distortions of focal excitation fields on the far-field CARS signal. We analyze the effect of spherical scatterers, placed in the vicinity of the focal volume, on the CARS signal emitted by different objects (2 μm diameter solid sphere, 2 μm diameter myelin cylinder and 2 μm diameter myelin tube). We find that distortions in the CARS signals arise not only from attenuation of the focal field but also from scattering-induced changes in the spatial phase that modify the angular distribution of the CARS emission. Our simulations further show that CARS signal attenuation can be minimized by using a high numerical aperture condenser. Moreover, unlike the CARS intensity image, CARS images formed by taking the ratio of CARS signals obtained using x- and y-polarized input fields are relatively insensitive to the effects of spherical scatterers. Our computational framework provides a mechanistic approach to characterizing scattering-induced distortions in coherent imaging of turbid media and may inspire bottom-up approaches for adaptive optical methods for image correction. PMID:28437941
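
    The radiating-dipole picture used in the abstract above can be illustrated with a minimal numpy sketch (not the HF-WEFS implementation): a single dipole radiates with a sin²θ far-field intensity pattern, and fields from multiple emitters add coherently before squaring. The wavelength and separation below are illustrative assumptions.

```python
import numpy as np

def dipole_intensity(theta):
    """Far-field intensity of a single radiating dipole: I proportional to sin^2(theta)."""
    return np.sin(theta) ** 2

def coherent_pair_intensity(theta, k, dz):
    """Two identical dipoles separated by dz along the axis: the fields add
    coherently, with relative phase k * dz * cos(theta), before squaring."""
    field = np.sin(theta) * (1.0 + np.exp(1j * k * dz * np.cos(theta)))
    return np.abs(field) ** 2

theta = np.linspace(0.0, np.pi, 181)  # 1-degree steps from 0 to 180 degrees
I_single = dipole_intensity(theta)
# Illustrative values: 0.8 um wavelength, 0.4 um axial separation
I_pair = coherent_pair_intensity(theta, k=2 * np.pi / 0.8, dz=0.4)
```

    At θ = π/2 the relative phase vanishes, so the pair intensity is four times the single-dipole intensity (coherent addition), illustrating why small phase changes introduced by nearby scatterers redistribute the angular emission.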

  7. Frequency-domain full-waveform inversion with non-linear descent directions

    NASA Astrophysics Data System (ADS)

    Geng, Yu; Pan, Wenyong; Innanen, Kristopher A.

    2018-05-01

    Full-waveform inversion (FWI) is a highly non-linear inverse problem, normally solved iteratively, with each iteration involving an update constructed through linear operations on the residuals. Incorporating a flexible degree of non-linearity within each update may have important consequences for convergence rates, determination of low model wavenumbers and discrimination of parameters. We examine one approach for doing so, wherein higher order scattering terms are included within the sensitivity kernel during the construction of the descent direction, adjusting it away from that of the standard Gauss-Newton approach. These scattering terms are naturally admitted when we construct the sensitivity kernel by varying not the current but the to-be-updated model at each iteration. Linear and/or non-linear inverse scattering methodologies allow these additional sensitivity contributions to be computed from the current data residuals within any given update. We show that in the presence of pre-critical reflection data, the error in a second-order non-linear update to a background of s0 is, in our scheme, proportional to at most (Δs/s0)³ in the actual parameter jump Δs causing the reflection. In contrast, the error in a standard Gauss-Newton FWI update is proportional to (Δs/s0)². For numerical implementation of more complex cases, we introduce a non-linear frequency-domain scheme, with an inner and an outer loop. A perturbation is determined from the data residuals within the inner loop, and a descent direction based on the resulting non-linear sensitivity kernel is computed in the outer loop. We examine the response of this non-linear FWI using acoustic single-parameter synthetics derived from the Marmousi model.
The inverted results vary depending on data frequency ranges and initial models, but we conclude that the non-linear FWI has the capability to generate high-resolution model estimates in both shallow and deep regions, and to converge rapidly, relative to a benchmark FWI approach involving the standard gradient.
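
    The iteratively linearized updating that the abstract contrasts against can be seen in a one-parameter toy inversion (a sketch under assumed models, not the authors' frequency-domain scheme): each iteration solves the linearized problem J·Δm = residual for a scalar model. The forward model g(m) = 1/m is an illustrative stand-in for a traveltime-like relation.

```python
def forward(m):
    """Toy forward model: a traveltime-like observable for a scalar model parameter m."""
    return 1.0 / m

def gauss_newton_invert(d_obs, m0, n_iter=20):
    """Iteratively linearized inversion: each update solves the linearized
    problem J * dm = residual, with J = d(forward)/dm = -1/m^2."""
    m = m0
    history = [m]
    for _ in range(n_iter):
        r = d_obs - forward(m)   # data residual
        J = -1.0 / m ** 2        # sensitivity (Jacobian) at the current model
        m = m + r / J            # linearized update
        history.append(m)
    return m, history

m_est, hist = gauss_newton_invert(d_obs=0.5, m0=1.0)  # true model is 1/0.5 = 2
```

    Each update is exact only to first order in the residual, which is the structural point of the abstract: including higher-order scattering terms changes how fast the per-update error shrinks with the size of the parameter jump.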

  8. Vertical Photon Transport in Cloud Remote Sensing Problems

    NASA Technical Reports Server (NTRS)

    Platnick, S.

    1999-01-01

    Photon transport in plane-parallel, vertically inhomogeneous clouds is investigated and applied to cloud remote sensing techniques that use solar reflectance or transmittance measurements for retrieving droplet effective radius. Transport is couched in terms of weighting functions which approximate the relative contribution of individual layers to the overall retrieval. Two vertical weightings are investigated, including one based on the average number of scatterings encountered by reflected and transmitted photons in any given layer. A simpler vertical weighting based on the maximum penetration of reflected photons proves useful for solar reflectance measurements. These weighting functions are highly dependent on droplet absorption and solar/viewing geometry. A superposition technique, using adding/doubling radiative transfer procedures, is derived to accurately determine both weightings, avoiding time consuming Monte Carlo methods. Superposition calculations are made for a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Effective radius retrievals from modeled vertically inhomogeneous liquid water clouds are then made using the standard near-infrared bands, and compared with size estimates based on the proposed weighting functions. Agreement between the two methods is generally within several tenths of a micrometer, much better than expected retrieval accuracy. Though the emphasis is on photon transport in clouds, the derived weightings can be applied to any multiple scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers.

  9. Anisotropic scattering of discrete particle arrays.

    PubMed

    Paul, Joseph S; Fu, Wai Chong; Dokos, Socrates; Box, Michael

    2010-05-01

    Far-field intensities of light scattered from a linear centro-symmetric array illuminated by a plane wave of incident light are estimated at a series of detector angles. The intensities are computed from the superposition of E-fields scattered by the individual array elements. An average scattering phase function is used to model the scattered fields of individual array elements. The nature of scattering from the array is investigated using an image (theta-phi plot) of the far-field intensities computed at a series of locations obtained by rotating the detector angle from 0 degrees to 360 degrees, corresponding to each angle of incidence in the interval [0 degrees, 360 degrees]. The diffraction patterns observed from the theta-phi plot are compared with those for isotropic scattering. In the absence of prior information on the array geometry, the intensities corresponding to theta-phi pairs satisfying the Bragg condition are used to estimate the phase function. An algorithmic procedure is presented for this purpose and tested using synthetic data. The relative error between estimated and theoretical values of the phase function is shown to be determined by the mean spacing factor, the number of elements, and the far-field distance. An empirical relationship is presented to calculate the optimal far-field distance for a given specification of the percentage error.
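
    For a linear array, the E-field superposition described above reduces to a phased sum over the element fields. A minimal numpy sketch (element positions, wavenumber and the placeholder phase function are all illustrative assumptions, not the paper's model):

```python
import numpy as np

def farfield_intensity(positions, k, theta_inc, theta_det, phase_fn):
    """Far-field intensity from a linear array: coherent superposition of the
    fields scattered by each element, including path-length phase differences."""
    # Component of the scattering wavevector along the array axis
    q = k * (np.sin(theta_inc) - np.sin(theta_det))
    fields = phase_fn(theta_det - theta_inc) * np.exp(1j * q * positions)
    return np.abs(fields.sum()) ** 2

positions = np.arange(8) * 0.5        # 8 elements, spacing 0.5 (arbitrary units)
k = 2 * np.pi / 0.6                   # illustrative wavenumber
iso = lambda dtheta: 1.0              # isotropic placeholder phase function
I_forward = farfield_intensity(positions, k, 0.0, 0.0, iso)
```

    In the forward (specular) direction the element phases cancel and the intensity scales as N² times the single-element intensity; away from it, the phases produce the diffraction structure seen in the theta-phi plot.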

  10. A comparison of skyshine computational methods.

    PubMed

    Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J

    2005-01-01

    A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.
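
    A single-interaction point-kernel estimate of the kind underlying codes such as QADMOD-GP can be sketched as follows. The linear buildup factor used here is a deliberate simplification for illustration, not the buildup form used by any of the cited codes:

```python
import math

def point_kernel_dose(S, mu, r):
    """Point-kernel estimate: uncollided flux from an isotropic point source of
    strength S through thickness r of material with attenuation coefficient mu,
    multiplied by a crude linear buildup factor B = 1 + mu*r (an assumption)."""
    uncollided = S * math.exp(-mu * r) / (4.0 * math.pi * r ** 2)
    buildup = 1.0 + mu * r
    return uncollided * buildup

# With no attenuating material the estimate reduces to the inverse-square law
dose_vacuum = point_kernel_dose(S=1e9, mu=0.0, r=100.0)
```

    Transport codes such as MCNP or COHORT replace this analytic kernel with sampled collision physics, which is why point-kernel and Monte Carlo results are compared in benchmarks like the silo problem above.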

  11. A flexible, extendable, modular and computationally efficient approach to scattering-integral-based seismic full waveform inversion

    NASA Astrophysics Data System (ADS)

    Schumacher, F.; Friederich, W.; Lamara, S.

    2016-02-01

    We present a new conceptual approach to scattering-integral-based seismic full waveform inversion (FWI) that allows a flexible, extendable, modular and both computationally and storage-efficient numerical implementation. To achieve maximum modularity and extendability, interactions between the three fundamental steps carried out sequentially in each iteration of the inversion procedure, namely, solving the forward problem, computing waveform sensitivity kernels and deriving a model update, are kept at an absolute minimum and are implemented by dedicated interfaces. To realize storage efficiency and maximum flexibility, the spatial discretization of the inverted earth model is allowed to be completely independent of the spatial discretization employed by the forward solver. For computational efficiency reasons, the inversion is done in the frequency domain. The benefits of our approach are as follows: (1) Each of the three stages of an iteration is realized by a stand-alone software program. In this way, we avoid the monolithic, inflexible and hard-to-modify codes that have often been written for solving inverse problems. (2) The solution of the forward problem, required for kernel computation, can be obtained by any wave propagation modelling code giving users maximum flexibility in choosing the forward modelling method. Both time-domain and frequency-domain approaches can be used. (3) Forward solvers typically demand spatial discretizations that are significantly denser than actually desired for the inverted model. Exploiting this fact by pre-integrating the kernels allows a dramatic reduction of disk space and makes kernel storage feasible. No assumptions are made on the spatial discretization scheme employed by the forward solver. (4) In addition, working in the frequency domain effectively reduces the amount of data, the number of kernels to be computed and the number of equations to be solved.
    (5) Updating the model by solving a large equation system can be done using different mathematical approaches. Since kernels are stored on disk, it can be repeated many times for different regularization parameters without the need to solve the forward problem, making the approach accessible to Occam's method. Changes of choice of misfit functional, weighting of data and selection of data subsets are still possible at this stage. We have coded our approach to FWI into a program package called ASKI (Analysis of Sensitivity and Kernel Inversion) which can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. It is written in modern FORTRAN language using object-oriented concepts that reflect the modular structure of the inversion procedure. We validate our FWI method by a small-scale synthetic study and present first results of its application to high-quality seismological data acquired in the southern Aegean.

  12. Data mining graphene: Correlative analysis of structure and electronic degrees of freedom in graphenic monolayers with defects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziatdinov, Maxim A.; Fujii, Shintaro; Kiguchi, Manabu

    The link between changes in the material crystal structure and its mechanical, electronic, magnetic, and optical functionalities, known as the structure-property relationship, is the cornerstone of contemporary materials science research. The recent advances in scanning transmission electron and scanning probe microscopies (STEM and SPM) have opened an unprecedented path towards examining the materials structure-property relationships on the single-impurity and atomic-configuration levels. Lacking, however, are the statistics-based approaches for cross-correlation of structure and property variables obtained in different information channels of the STEM and SPM experiments. Here we have designed an approach based on a combination of sliding window Fast Fourier Transform, Pearson correlation matrix, and linear and kernel canonical correlation, to study a relationship between lattice distortions and electron scattering from the SPM data on graphene with defects. Our analysis revealed that the strength of coupling to strain is altered between different scattering channels, which can explain the coexistence of several quasiparticle interference patterns in the nanoscale regions of interest. In addition, the application of the kernel functions allowed us to extract a non-linear component of the relationship between the lattice strain and scattering intensity in graphene. Lastly, the outlined approach can be further utilized to analyze correlations in various multi-modal imaging techniques where the information of interest is spatially distributed and usually has a complex multidimensional nature.

  13. Data mining graphene: Correlative analysis of structure and electronic degrees of freedom in graphenic monolayers with defects

    DOE PAGES

    Ziatdinov, Maxim A.; Fujii, Shintaro; Kiguchi, Manabu; ...

    2016-11-09

    The link between changes in the material crystal structure and its mechanical, electronic, magnetic, and optical functionalities, known as the structure-property relationship, is the cornerstone of contemporary materials science research. The recent advances in scanning transmission electron and scanning probe microscopies (STEM and SPM) have opened an unprecedented path towards examining the materials structure-property relationships on the single-impurity and atomic-configuration levels. Lacking, however, are the statistics-based approaches for cross-correlation of structure and property variables obtained in different information channels of the STEM and SPM experiments. Here we have designed an approach based on a combination of sliding window Fast Fourier Transform, Pearson correlation matrix, and linear and kernel canonical correlation, to study a relationship between lattice distortions and electron scattering from the SPM data on graphene with defects. Our analysis revealed that the strength of coupling to strain is altered between different scattering channels, which can explain the coexistence of several quasiparticle interference patterns in the nanoscale regions of interest. In addition, the application of the kernel functions allowed us to extract a non-linear component of the relationship between the lattice strain and scattering intensity in graphene. Lastly, the outlined approach can be further utilized to analyze correlations in various multi-modal imaging techniques where the information of interest is spatially distributed and usually has a complex multidimensional nature.
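
    The first two ingredients of this correlative pipeline, sliding-window FFT features and a Pearson correlation between channels, can be sketched in a few lines of numpy. Window size and test image are illustrative; this is not the authors' code:

```python
import numpy as np

def window_fft_features(img, w):
    """Slide a non-overlapping w x w window over an image and return the FFT
    magnitude spectrum of each window as a flattened feature vector."""
    h, wid = img.shape
    feats = []
    for i in range(0, h - w + 1, w):
        for j in range(0, wid - w + 1, w):
            patch = img[i:i + w, j:j + w]
            feats.append(np.abs(np.fft.fft2(patch)).ravel())
    return np.array(feats)

def pearson(a, b):
    """Pearson correlation coefficient between two flattened feature arrays."""
    a, b = a.ravel(), b.ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(0)
topo = rng.normal(size=(32, 32))  # stand-in "structure" channel
r_same = pearson(window_fft_features(topo, 8), window_fft_features(topo, 8))
```

    A channel correlated with itself gives r = 1 by construction; in the paper's setting, the interesting quantity is the correlation between spectral features of different information channels (e.g. topography vs. scattering intensity), with kernel canonical correlation capturing the non-linear part.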

  14. Scattering and Absorption Properties of Polydisperse Wavelength-sized Particles Covered with Much Smaller Grains

    NASA Technical Reports Server (NTRS)

    Dlugach, Jana M.; Mishchenko, Michael I.; Mackowski, Daniel W.

    2012-01-01

    Using the results of direct, numerically exact computer solutions of the Maxwell equations, we analyze scattering and absorption characteristics of polydisperse compound particles in the form of wavelength-sized spheres covered with a large number of much smaller spherical grains. The results pertain to the complex refractive indices 1.55 + i0.0003, 1.55 + i0.3, and 3 + i0.1. We show that the optical effects of dusting wavelength-sized hosts by microscopic grains can vary depending on the number and size of the grains as well as on the complex refractive index. Our computations also demonstrate the high efficiency of the new superposition T-matrix code developed for use on distributed memory computer clusters.

  15. On nonsingular potentials of Cox-Thompson inversion scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmai, Tamas; Apagyi, Barnabas

    2010-02-15

    We establish a condition for obtaining nonsingular potentials using the Cox-Thompson inverse scattering method with one phase shift. The anomalous singularities of the potentials are avoided by maintaining unique solutions of the underlying Regge-Newton integral equation for the transformation kernel. As a by-product, new inequality sequences of zeros of Bessel functions are discovered.
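
    The kind of inequality sequence referred to above, such as the classical interlacing of zeros of Bessel functions of adjacent order, can be checked numerically with scipy (a verification sketch, not the Cox-Thompson method itself):

```python
import numpy as np
from scipy.special import jn_zeros

# First few positive zeros of J0 and J1; classically they interlace:
# j_{0,1} < j_{1,1} < j_{0,2} < j_{1,2} < ...
z0 = jn_zeros(0, 5)
z1 = jn_zeros(1, 5)
interlaced = all(z0[i] < z1[i] < z0[i + 1] for i in range(4))
```

    Numerical checks of this kind make it easy to probe candidate inequality sequences before attempting an analytical proof.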

  16. Dependent scattering and absorption by densely packed discrete spherical particles: Effects of complex refractive index

    NASA Astrophysics Data System (ADS)

    Ma, L. X.; Tan, J. Y.; Zhao, J. M.; Wang, F. Q.; Wang, C. A.; Wang, Y. Y.

    2017-07-01

    Due to the dependent scattering and absorption effects, the radiative transfer equation (RTE) may not be suitable for dealing with radiative transfer in dense discrete random media. This paper continues previous research on multiple and dependent scattering in densely packed discrete particle systems, and puts emphasis on the effects of the particle complex refractive index. The Mueller matrix elements of the scattering system with different complex refractive indices are obtained by both the electromagnetic method and the radiative transfer method. The Maxwell equations are directly solved based on the superposition T-matrix method, while the RTE is solved by the Monte Carlo method combined with the hard sphere model in the Percus-Yevick approximation (HSPYA) to consider the dependent scattering effects. The results show that for densely packed discrete random media composed of particles of medium size parameter (6.964 in this study), the demarcation line between independent and dependent scattering has remarkable connections with the particle complex refractive index. As the particle volume fraction increases to a certain value, densely packed discrete particles with higher refractive index contrasts between the particles and host medium and higher particle absorption indices are more likely to show stronger dependent characteristics. Due to the failure of the extended Rayleigh-Debye scattering condition, the HSPYA has a weak effect on the dependent scattering correction at large phase shift parameters.
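
    The hard sphere model in the Percus-Yevick (PY) approximation used above has a closed-form direct correlation function, from which the static structure factor follows by Fourier transform. A numpy sketch (packing fraction and integration grid are illustrative), checked against the exact PY compressibility limit S(q→0) = (1−η)⁴/(1+2η)²:

```python
import numpy as np

def py_structure_factor(q, eta, sigma=1.0):
    """Hard-sphere static structure factor in the Percus-Yevick approximation:
    S(q) = 1 / (1 - rho * c_hat(q)), with the closed-form PY direct
    correlation function c(r) for r < sigma integrated numerically."""
    lam1 = (1 + 2 * eta) ** 2 / (1 - eta) ** 4
    lam2 = -(1 + eta / 2) ** 2 / (1 - eta) ** 4
    r = np.linspace(0.0, sigma, 4001)
    x = r / sigma
    c = -(lam1 + 6 * eta * lam2 * x + 0.5 * eta * lam1 * x ** 3)
    # c_hat(q) = 4*pi * int r^2 c(r) sin(qr)/(qr) dr ; np.sinc(z/pi) = sin(z)/z
    integrand = r ** 2 * c * np.sinc(q * r / np.pi)
    c_hat = 4 * np.pi * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
    rho = 6 * eta / (np.pi * sigma ** 3)  # number density at packing fraction eta
    return 1.0 / (1.0 - rho * c_hat)

eta = 0.3
S0 = py_structure_factor(q=1e-8, eta=eta)       # long-wavelength limit
S0_exact = (1 - eta) ** 4 / (1 + 2 * eta) ** 2  # PY compressibility result
```

    The strong suppression of S(q→0) at high packing fraction is the structural-correlation effect that makes independent-scattering treatments fail in densely packed media.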

  17. Bridging Three Orders of Magnitude: Multiple Scattered Waves Sense Fractal Microscopic Structures via Dispersion

    NASA Astrophysics Data System (ADS)

    Lambert, Simon A.; Näsholm, Sven Peter; Nordsletten, David; Michler, Christian; Juge, Lauriane; Serfaty, Jean-Michel; Bilston, Lynne; Guzina, Bojan; Holm, Sverre; Sinkus, Ralph

    2015-08-01

    Wave scattering provides profound insight into the structure of matter. Typically, the ability to sense microstructure is determined by the ratio of scatterer size to probing wavelength. Here, we address the question of whether macroscopic waves can report back the presence and distribution of microscopic scatterers despite several orders of magnitude difference in scale between wavelength and scatterer size. In our analysis, monosized hard scatterers 5 μ m in radius are immersed in lossless gelatin phantoms to investigate the effect of multiple reflections on the propagation of shear waves with millimeter wavelength. Steady-state monochromatic waves are imaged in situ via magnetic resonance imaging, enabling quantification of the phase velocity at a voxel size big enough to contain thousands of individual scatterers, but small enough to resolve the wavelength. We show in theory, experiments, and simulations that the resulting coherent superposition of multiple reflections gives rise to power-law dispersion at the macroscopic scale if the scatterer distribution exhibits apparent fractality over an effective length scale that is comparable to the probing wavelength. Since apparent fractality is naturally present in any random medium, microstructure can thereby leave its fingerprint on the macroscopically quantifiable power-law exponent. Our results are generic to wave phenomena and carry great potential for sensing microstructure that exhibits intrinsic fractality, such as, for instance, vasculature.

  18. Nonclassical-light generation in a photonic-band-gap nonlinear planar waveguide

    NASA Astrophysics Data System (ADS)

    Peřina, Jan, Jr.; Sibilia, Concita; Tricca, Daniela; Bertolotti, Mario

    2004-10-01

    The optical parametric process occurring in a photonic-band-gap planar waveguide is studied from the point of view of nonclassical-light generation. The nonlinearly interacting optical fields are described by the generalized superposition of coherent signals and noise using the method of operator linear corrections to a classical strong solution. Scattered backward-propagating fields are taken into account. Squeezed light as well as light with sub-Poissonian statistics can be obtained in two-mode fields under the specified conditions.

  19. Spatial frequency performance limitations of radiation dose optimization and beam positioning

    NASA Astrophysics Data System (ADS)

    Stewart, James M. P.; Stapleton, Shawn; Chaudary, Naz; Lindsay, Patricia E.; Jaffray, David A.

    2018-06-01

    The flexibility and sophistication of modern radiotherapy treatment planning and delivery methods have advanced techniques to improve the therapeutic ratio. Contemporary dose optimization and calculation algorithms facilitate radiotherapy plans which closely conform the three-dimensional dose distribution to the target, with beam shaping devices and image guided field targeting ensuring the fidelity and accuracy of treatment delivery. Ultimately, dose distribution conformity is limited by the maximum deliverable dose gradient; shallow dose gradients challenge techniques to deliver a tumoricidal radiation dose while minimizing dose to surrounding tissue. In this work, this ‘dose delivery resolution’ observation is rigorously formalized for a general dose delivery model based on the superposition of dose kernel primitives. It is proven that the spatial resolution of a delivered dose is bounded by the spatial frequency content of the underlying dose kernel, which in turn defines a lower bound in the minimization of a dose optimization objective function. In addition, it is shown that this optimization is penalized by a dose deposition strategy which enforces a constant relative phase (or constant spacing) between individual radiation beams. These results are further refined to provide a direct, analytic method to estimate the dose distribution arising from the minimization of such an optimization function. The efficacy of the overall framework is demonstrated on an image guided small animal microirradiator for a set of two-dimensional hypoxia guided dose prescriptions.
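
    The frequency-domain bound described above can be demonstrated directly: if the delivered dose is a superposition (convolution) of kernel primitives, its spectrum is the fluence spectrum shaped by the kernel spectrum, so spatial frequencies the kernel suppresses cannot be delivered. A numpy sketch with an assumed Gaussian pencil-beam kernel (not the paper's kernel model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
x = np.arange(n)

# A smooth dose kernel primitive: Gaussian pencil-beam profile (an assumption)
kernel = np.exp(-0.5 * ((x - n // 2) / 4.0) ** 2)
kernel /= kernel.sum()
fluence = rng.random(n)  # arbitrary beam-weight pattern

# Delivered dose = circular convolution of the kernel with the fluence pattern
dose = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(fluence)))

# Spectra: the dose spectrum is the fluence spectrum multiplied by the
# kernel spectrum, which decays rapidly at high spatial frequency
K = np.abs(np.fft.fft(kernel))
D = np.abs(np.fft.fft(dose))
F = np.abs(np.fft.fft(fluence))
```

    However sharp the prescribed fluence pattern is, its high-frequency content is multiplied by the near-zero tail of K, which is the "dose delivery resolution" limit formalized in the abstract.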

  20. Certification of windshear performance with RTCA class D radomes

    NASA Technical Reports Server (NTRS)

    Mathews, Bruce D.; Miller, Fran; Rittenhouse, Kirk; Barnett, Lee; Rowe, William

    1994-01-01

    Superposition testing of detection range performance forms a digital signal for input into a simulation of signal and data processing equipment and algorithms to be employed in a sensor system for advanced warning of hazardous windshear. For suitable pulse-Doppler radar, recording of the digital data at the input to the digital signal processor furnishes a realistic operational scenario and environmentally responsive clutter signal including all sidelobe clutter, ground moving target indications (GMTI), and large spurious signals due to mainbeam clutter and/or RFI, representative of the urban airport clutter and aircraft scenarios (approach and landing antenna pointing). For linear radar system processes, a signal at the same point in the process from a hazard phenomenon may be calculated from models of the scattering phenomena, for example, as represented in fine 3-dimensional reflectivity and velocity grid structures. Superposition testing furnishes a competing signal environment for detection and warning time performance confirmation of phenomena that are uncontrollable in a natural environment.

  1. Radiation force of an arbitrary acoustic beam on an elastic sphere in a fluid

    PubMed Central

    Sapozhnikov, Oleg A.; Bailey, Michael R.

    2013-01-01

    A theoretical approach is developed to calculate the radiation force of an arbitrary acoustic beam on an elastic sphere in a liquid or gas medium. First, the incident beam is described as a sum of plane waves by employing conventional angular spectrum decomposition. Then, the classical solution for the scattering of a plane wave from an elastic sphere is applied for each plane-wave component of the incident field. The net scattered field is expressed as a superposition of the scattered fields from all angular spectrum components of the incident beam. With this formulation, the incident and scattered waves are superposed in the far field to derive expressions for components of the radiation stress tensor. These expressions are then integrated over a spherical surface to analytically describe the radiation force on an elastic sphere. Limiting cases for particular types of incident beams are presented and are shown to agree with known results. Finally, the analytical expressions are used to calculate radiation forces associated with two specific focusing transducers. PMID:23363086
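
    The angular spectrum decomposition in the first step above can be sketched with FFTs: decompose a field into plane waves, advance each by its axial phase, and resum. This 1-D scalar version is a simplification of the full vector treatment, and evanescent components are simply clamped here (an assumption):

```python
import numpy as np

def angular_spectrum_propagate(field, dx, wavelength, z):
    """Decompose a 1-D field into plane waves (FFT), advance each component by
    its axial phase exp(i*kz*z), and resum (inverse FFT)."""
    n = field.size
    k = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    # Axial wavenumber of each plane-wave component; evanescent part clamped to 0
    kz = np.sqrt(np.maximum(k ** 2 - kx ** 2, 0.0))
    spectrum = np.fft.fft(field)
    return np.fft.ifft(spectrum * np.exp(1j * kz * z))

n = 512
x = (np.arange(n) - n // 2) * 0.1
beam = np.exp(-x ** 2 / 4.0).astype(complex)  # illustrative Gaussian source field
forward = angular_spectrum_propagate(beam, 0.1, wavelength=0.5, z=10.0)
back = angular_spectrum_propagate(forward, 0.1, wavelength=0.5, z=-10.0)
```

    Propagation by +z followed by −z recovers the source field, and the propagating-wave power is conserved, which is what makes the decomposition a convenient basis for applying the known plane-wave scattering solution component by component.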

  2. Dynamic experiment design regularization approach to adaptive imaging with array radar/SAR sensor systems.

    PubMed

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics of such a solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.

  3. High-speed mixture fraction and temperature imaging of pulsed, turbulent fuel jets auto-igniting in high-temperature, vitiated co-flows

    NASA Astrophysics Data System (ADS)

    Papageorge, Michael J.; Arndt, Christoph; Fuest, Frederik; Meier, Wolfgang; Sutton, Jeffrey A.

    2014-07-01

    In this manuscript, we describe an experimental approach to simultaneously measure high-speed image sequences of the mixture fraction and temperature fields during pulsed, turbulent fuel injection into a high-temperature, co-flowing, and vitiated oxidizer stream. The quantitative mixture fraction and temperature measurements are determined from 10-kHz-rate planar Rayleigh scattering and a robust data processing methodology which is accurate from fuel injection to the onset of auto-ignition. In addition, the data processing is shown to yield accurate temperature measurements following ignition to observe the initial evolution of the "burning" temperature field. High-speed OH* chemiluminescence (CL) was used to determine the spatial location of the initial auto-ignition kernel. In order to ensure that the ignition kernel formed inside of the Rayleigh scattering laser light sheet, OH* CL was observed in two viewing planes, one near-parallel to the laser sheet and one perpendicular to the laser sheet. The high-speed laser measurements are enabled through the use of the unique high-energy pulse burst laser system which generates long-duration bursts of ultra-high pulse energies at 532 nm (>1 J) suitable for planar Rayleigh scattering imaging. A particular focus of this study was to characterize the fidelity of the measurements in terms of both precision and accuracy, including facility operating and boundary conditions and measurement of the signal-to-noise ratio (SNR). The mixture fraction and temperature fields deduced from the high-speed planar Rayleigh scattering measurements exhibited SNR values greater than 100 at temperatures exceeding 1,300 K. The accuracy of the measurements was determined by comparing the current mixture fraction results to that of "cold", isothermal, non-reacting jets. All profiles, when properly normalized, exhibited self-similarity and collapsed upon one another.
Finally, example mixture fraction, temperature, and OH* emission sequences are presented for a variety of fuel and vitiated oxidizer combinations. For all cases considered, auto-ignition occurred at the periphery of the fuel jet, under very "lean" conditions, where the local mixture fraction was less than the stoichiometric mixture fraction (ξ < ξ_s). Furthermore, the ignition kernel formed in regions of low scalar dissipation rate, which agrees with previous results from direct numerical simulations.
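
    Quantitative Rayleigh thermometry of this kind rests on the signal being proportional to number density times an effective scattering cross section; at constant pressure an ideal-gas sketch reduces to a signal ratio. The `sigma_ratio` parameter below is a hypothetical stand-in for the local mixture's cross-section correction, not part of the authors' processing methodology:

```python
def rayleigh_temperature(S, S_ref, T_ref, sigma_ratio=1.0):
    """Ideal-gas, constant-pressure Rayleigh thermometry: the signal S is
    proportional to number density (~ 1/T) times an effective cross section,
    so T = T_ref * (S_ref / S) * sigma_ratio."""
    return T_ref * (S_ref / S) * sigma_ratio

T_cold = rayleigh_temperature(S=1.0, S_ref=1.0, T_ref=300.0)  # reference condition
T_hot = rayleigh_temperature(S=0.2, S_ref=1.0, T_ref=300.0)   # 5x weaker signal
```

    A five-fold drop in Rayleigh signal corresponds to a five-fold temperature rise under these assumptions; in mixing flows the cross-section correction must be supplied from the (simultaneously measured) mixture fraction, which is why the two fields are processed together.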

  4. Born scattering and inversion sensitivities in viscoelastic transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Moradi, Shahpoor; Innanen, Kristopher A.

    2017-11-01

    We analyse the scattering of seismic waves from anisotropic-viscoelastic inclusions using the Born approximation. We consider the specific case of Vertical Transverse Isotropic (VTI) media with low-loss attenuation and weak anisotropy such that second- and higher-order contributions from quality factors and Thomsen parameters are negligible. To accommodate the volume scattering approach, the viscoelastic VTI media is broken into a homogeneous viscoelastic reference medium with distributed inclusions in both viscoelastic and anisotropic properties. In viscoelastic reference media in which all propagations take place, wave modes are of P-wave type, SI-wave type and SII-wave type, all with complex slowness and polarization vectors. We generate expressions for P-to-P, P-to-SI, SI-to-SI and SII-to-SII scattering potentials, and demonstrate that they reduce to previously derived isotropic results. These scattering potential expressions are sensitivity kernels related to the Fréchet derivatives which provide the weights for multiparameter full waveform inversion updates.

  5. Regional teleseismic body-wave tomography with component-differential finite-frequency sensitivity kernels

    NASA Astrophysics Data System (ADS)

    Yu, Y.; Shen, Y.; Chen, Y. J.

    2015-12-01

    By using ray theory in conjunction with the Born approximation, Dahlen et al. [2000] computed 3-D sensitivity kernels for finite-frequency seismic traveltimes. A series of studies have been conducted based on this theory to model the mantle velocity structure [e.g., Hung et al., 2004; Montelli et al., 2004; Ren and Shen, 2008; Yang et al., 2009; Liang et al., 2011; Tang et al., 2014]. One of the simplifications in the calculation of the kernels is the paraxial assumption, which may not be strictly valid near the receiver, the region of interest in regional teleseismic tomography. In this study, we improve the accuracy of traveltime sensitivity kernels of the first P arrival by eliminating the paraxial approximation. For calculation efficiency, the traveltime table built by the Fast Marching Method (FMM) is used to calculate both the wave vector and the geometrical spreading at every grid point in the whole volume. The improved kernels maintain the sign, but with different amplitudes at different locations. We also find that when the directivity of the scattered wave is taken into consideration, the differential sensitivity kernel of traveltimes measured at the vertical and radial components of the same receiver concentrates beneath the receiver, which can be used to invert for the structure inside the Earth. Compared with conventional teleseismic tomography, which uses the differential traveltimes between two stations in an array, this method is not affected by instrument response and timing errors, and reduces the uncertainty caused by the finite dimension of the model in regional tomography. In addition, the cross-dependence of P traveltimes on S-wave velocity anomalies is significant and sensitive to the structure beneath the receiver. Thus, with the component-differential finite-frequency sensitivity kernel, anomalies in both P-wave and S-wave velocity, as well as the Vp/Vs ratio, can be recovered at the same time.

  6. Including Delbrück scattering in GEANT4

    NASA Astrophysics Data System (ADS)

    Omer, Mohamed; Hajima, Ryoichi

    2017-08-01

Elastic scattering is a significant interaction of γ-rays with matter. Therefore, the planning of experiments involving measurements of γ-rays with Monte Carlo simulations usually includes elastic scattering. However, current simulation tools do not provide a complete picture of elastic scattering: most assume that Rayleigh scattering is the primary contributor and neglect other elastic processes, such as nuclear Thomson and Delbrück scattering. Here, we develop a tabulation-based method to simulate elastic scattering in one of the most common open-source Monte Carlo simulation toolkits, GEANT4. We collectively include three processes: Rayleigh scattering, nuclear Thomson scattering, and Delbrück scattering. Our simulation uses differential cross sections based on the second-order scattering matrix rather than the current data, which are based on the form-factor approximation. Moreover, the superposition of these processes is carefully taken into account, emphasizing the complex nature of the scattering amplitudes. The simulation covers an energy range of 0.01 MeV ≤ E ≤ 3 MeV and all elements with atomic numbers 1 ≤ Z ≤ 99. In addition, we validated our simulation by comparing differential cross sections measured in earlier experiments with those extracted from the simulations. The simulations are in good agreement with the experimental measurements: differences are 21% for uranium, 24% for lead, 3% for tantalum, and 8% for cerium at 2.754 MeV. Coulomb corrections to the Delbrück amplitudes may account for the relatively large differences that appear at higher Z values.
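A tabulation-based simulation of the kind described above ultimately samples scattering angles from stored differential cross sections. A generic sketch of that step is shown below; the angular table is hypothetical (arbitrary units, not real Delbrück or Rayleigh data), and this is not the GEANT4 implementation itself.

```python
import bisect
import math
import random

def build_cdf(angles_deg, dsigma):
    """Cumulative table from a tabulated differential cross section,
    weighted by sin(theta) for the solid-angle element (trapezoid rule)."""
    w = [ds * math.sin(math.radians(a)) for a, ds in zip(angles_deg, dsigma)]
    cdf = [0.0]
    for i in range(1, len(angles_deg)):
        step = 0.5 * (w[i] + w[i - 1]) * (angles_deg[i] - angles_deg[i - 1])
        cdf.append(cdf[-1] + step)
    total = cdf[-1]
    return [c / total for c in cdf]

def sample_angle(angles_deg, cdf, u):
    """Invert the tabulated CDF by linear interpolation, u in [0, 1)."""
    i = bisect.bisect_left(cdf, u)
    i = min(max(i, 1), len(cdf) - 1)
    f = (u - cdf[i - 1]) / (cdf[i] - cdf[i - 1])
    return angles_deg[i - 1] + f * (angles_deg[i] - angles_deg[i - 1])

# Hypothetical forward-peaked table (illustrative values only)
angles = [10, 30, 60, 90, 120, 150, 170]
dsig = [50.0, 12.0, 3.0, 1.0, 0.6, 0.4, 0.35]
cdf = build_cdf(angles, dsig)
rng = random.Random(7)
draws = [sample_angle(angles, cdf, rng.random()) for _ in range(2000)]
```

Because the table is forward-peaked, most sampled angles fall well below 90°; a coherent superposition of amplitudes, as the paper emphasizes, would be applied before tabulating the summed cross section.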

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haugen, Carl C.; Forget, Benoit; Smith, Kord S.

Most high-performance computing systems being deployed currently, and those envisioned for the future, rely on heavy parallelism across many computational nodes and many concurrent cores. Such systems often have relatively little memory per core but large amounts of computing capability, which places a significant constraint on how data storage is handled in many Monte Carlo codes. The constraint is even tighter in fully coupled multiphysics simulations, which require that simulations of many physical phenomena be carried out concurrently on individual processing nodes, further reducing the memory available for storage of Monte Carlo data. As such, there has been a move toward on-the-fly nuclear data generation to reduce the memory requirements associated with interpolation between pre-generated large nuclear data tables for a selection of system temperatures. Methods have previously been developed and implemented in MIT's OpenMC Monte Carlo code for both the resolved and unresolved resonance regimes, but are currently absent for the thermal energy regime. While there are many components involved in generating a thermal neutron scattering cross section on the fly, this work focuses on a proposed method for determining the energy and direction of a neutron after a thermal incoherent inelastic scattering event. It proposes a rejection-sampling-based method using the thermal scattering kernel to determine the correct outgoing energy and angle. The goal of this project is to treat the full S(α, β) kernel for graphite, to assist in high-fidelity simulations of the TREAT reactor at Idaho National Laboratory. The method is, however, sufficiently general to be applicable to other thermal scattering materials, and can be initially validated against the continuous analytic free-gas model.
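The rejection-sampling step at the heart of the proposed method can be sketched generically. The kernel below is an illustrative Maxwellian-flux stand-in for an outgoing-energy distribution, not the actual tabulated S(α, β) kernel for graphite; the envelope height and energy cutoff are likewise assumptions for the sketch.

```python
import math
import random

def rejection_sample(pdf, x_min, x_max, pdf_max, rng):
    """Draw one sample from an (unnormalized) pdf on [x_min, x_max]
    by rejection sampling under a flat envelope of height pdf_max."""
    while True:
        x = x_min + (x_max - x_min) * rng.random()
        if rng.random() * pdf_max <= pdf(x):
            return x

# Illustrative stand-in for a thermal scattering kernel: a Maxwellian-like
# outgoing-energy distribution (NOT the real S(alpha, beta) tables).
kT = 0.0253  # thermal energy in eV at ~293 K
pdf = lambda e: (e / kT) * math.exp(-e / kT)  # peaks at e = kT with value 1/e

rng = random.Random(42)
samples = [rejection_sample(pdf, 0.0, 12 * kT, 0.37, rng) for _ in range(5000)]
mean_e = sum(samples) / len(samples)  # this pdf has mean near 2 * kT
```

The envelope must bound the pdf everywhere (here 0.37 > 1/e ≈ 0.368); a tighter envelope, or one shaped like the kernel itself, raises the acceptance rate without changing the sampled distribution.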

  8. Accurate palm vein recognition based on wavelet scattering and spectral regression kernel discriminant analysis

    NASA Astrophysics Data System (ADS)

    Elnasir, Selma; Shamsuddin, Siti Mariyam; Farokhi, Sajad

    2015-01-01

Palm vein recognition (PVR) is a promising new biometric that has been applied successfully as a method of access control by many organizations and has even further potential in the field of forensics. The palm vein pattern has highly discriminative features that are difficult to forge because of its subcutaneous position in the palm. Despite considerable progress, providing accurate palm vein readings remains an open issue in biometrics. We propose a robust and more accurate PVR method based on the combination of wavelet scattering (WS) with spectral regression kernel discriminant analysis (SRKDA). Because the dimension of the WS-generated features is quite large, SRKDA is required to reduce the extracted features and enhance discrimination. Results on two public databases, the PolyU Hyperspectral Palmprint database and the PolyU Multispectral Palmprint database, show the high performance of the proposed scheme in comparison with state-of-the-art methods. The proposed approach scored a 99.44% identification rate and a 99.90% verification rate [equal error rate (EER) = 0.1%] on the hyperspectral database, and a 99.97% identification rate and a 99.98% verification rate (EER = 0.019%) on the multispectral database.

  9. Retrieval of the aerosol size distribution in the complex anomalous diffraction approximation

    NASA Astrophysics Data System (ADS)

    Franssens, Ghislain R.

This contribution reports some recently achieved results in aerosol size distribution retrieval in the complex anomalous diffraction approximation (ADA) to Mie scattering theory. This approximation is valid for spherical particles that are large compared to the wavelength and have a refractive index close to 1. The ADA kernel is compared with the exact Mie kernel. Despite being a simple approximation, the ADA appears to have practical value for retrieving the larger modes of tropospheric and lower-stratospheric aerosols. The ADA has the advantage over Mie theory that an analytic inversion of the associated Fredholm integral equation becomes possible. In addition, spectral inversion in the ADA can be formulated as a well-posed problem. In this way, a new inverse formula was obtained that allows direct computation of the size distribution as an integral over the spectral extinction function. The formula is valid for particles that both scatter and absorb light, and it also takes the spectral dispersion of the refractive index into account. Some details of the numerical implementation of the inverse formula are illustrated using a modified gamma test distribution. Special attention is given to the integration of spectrally truncated discrete extinction data with errors.
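For a non-absorbing sphere, the ADA kernel has van de Hulst's closed form Q_ext(ρ) = 2 − (4/ρ) sin ρ + (4/ρ²)(1 − cos ρ), with phase-shift parameter ρ = 2x(m − 1). A minimal sketch (real refractive index only; the complex-index generalization used for absorbing particles in the paper is not shown):

```python
import math

def q_ext_ada(x, m):
    """van de Hulst's anomalous-diffraction extinction efficiency for a
    non-absorbing sphere: size parameter x, real refractive index m near 1."""
    rho = 2.0 * x * (m - 1.0)  # phase shift of the central ray
    if rho == 0.0:
        return 0.0
    return 2.0 - (4.0 / rho) * math.sin(rho) \
               + (4.0 / rho ** 2) * (1.0 - math.cos(rho))

# Small-rho limit is rho**2 / 2; at large rho, Q oscillates about 2
q_small = q_ext_ada(x=0.5, m=1.1)    # rho = 0.1
q_large = q_ext_ada(x=250.0, m=1.1)  # rho = 50
```

The smooth, analytic dependence of this kernel on ρ is what makes the analytic inversion of the Fredholm integral equation tractable, in contrast to the oscillatory exact Mie kernel.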

  10. Using a laser source to measure the refractive index of glass beads and Debye theory analysis.

    PubMed

    Li, Shui-Yan; Qin, Shuang; Li, Da-Hai; Wang, Qiong-Hua

    2015-11-20

When a monochromatic laser beam illuminates a homogeneous glass bead, rainbows appear around it. This paper concentrates on the scattering intensity distribution and on a method of measuring the refractive index of glass beads based on the Debye theory. It is found that the first rainbow, due to the superposition of backward-scattered light from low-refractive-index glass beads, can be explained approximately by diffraction, external reflection, and one internal reflection, while the second rainbow of high-refractive-index glass beads is due to the contributions of diffraction, external reflection, direct transmission, and two internal reflections. The scattering intensity distribution is affected by the refractive index, the radius of the glass bead, and the incident beam width. The effects of the refractive index and the glass bead size on the positions of the first and second minimum deviation angles are analyzed. The measured results agree very well with the specifications.
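The minimum deviation angles discussed above can be located, to geometric-optics accuracy, with the classical Descartes construction: for a ray undergoing p internal reflections, the deviation is minimized where cos²i = (n² − 1)/(p(p + 2)). This is a far simpler model than the Debye-series analysis in the paper, but it predicts the rainbow positions; the example values are for a water-like bead, not the glass beads measured.

```python
import math

def min_deviation(n, p):
    """Descartes minimum-deviation angle (degrees) of the ray with p
    internal reflections inside a sphere of refractive index n
    (geometric optics only; no diffraction or Debye-series terms)."""
    cos_i = math.sqrt((n * n - 1.0) / (p * (p + 2.0)))
    i = math.acos(cos_i)                  # incidence angle at minimum deviation
    r = math.asin(math.sin(i) / n)        # refraction angle (Snell's law)
    d = 2.0 * i - 2.0 * (p + 1.0) * r + p * math.pi  # total deviation
    return math.degrees(d) % 360.0

d1 = min_deviation(1.33, 1)  # primary rainbow, deviation near 138 degrees
d2 = min_deviation(1.33, 2)  # secondary rainbow, deviation near 230 degrees
```

Measuring the angular position of the intensity minimum-deviation peak and inverting this relation is one simple route from rainbow angle back to refractive index.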

  11. Optics of Water Microdroplets with Soot Inclusions: Exact Versus Approximate Results

    NASA Technical Reports Server (NTRS)

    Liu, Li; Mishchenko, Michael I.

    2016-01-01

    We use the recently generalized version of the multi-sphere superposition T-matrix method (STMM) to compute the scattering and absorption properties of microscopic water droplets contaminated by black carbon. The soot material is assumed to be randomly distributed throughout the droplet interior in the form of numerous small spherical inclusions. Our numerically-exact STMM results are compared with approximate ones obtained using the Maxwell-Garnett effective-medium approximation (MGA) and the Monte Carlo ray-tracing approximation (MCRTA). We show that the popular MGA can be used to calculate the droplet optical cross sections, single-scattering albedo, and asymmetry parameter provided that the soot inclusions are quasi-uniformly distributed throughout the droplet interior, but can fail in computations of the elements of the scattering matrix depending on the volume fraction of soot inclusions. The integral radiative characteristics computed with the MCRTA can deviate more significantly from their exact STMM counterparts, while accurate MCRTA computations of the phase function require droplet size parameters substantially exceeding 60.

  12. Use of the reciprocity theorem for a closed form solution of scattering of the lowest axially symmetric torsional wave mode by a defect in a pipe.

    PubMed

    Lee, Jaesun; Achenbach, Jan D; Cho, Younho

    2018-03-01

Guided waves can be used effectively for the inspection of large-scale structures. Surface corrosion is often found as a major defect type in large-scale structures such as pipelines, and guided wave interaction with surface corrosion can provide useful information for sizing and classification. In this paper, the elastodynamic reciprocity theorem is used to formulate and solve complicated scattering problems in a simple manner. The approach has already been applied to the scattering of Rayleigh and Lamb waves by defects to produce closed-form solutions for the amplitudes of scattered waves. Here, the scattering of the lowest axially symmetric torsional mode, which is widely used in commercial applications, is analyzed by the reciprocity theorem. The theorem is used to determine the scattering of the lowest torsional mode by a tapered defect that was earlier considered experimentally and numerically by the finite element method. It is shown that the presented method makes it simple to obtain the ratio of amplitudes of scattered torsional modes for a tapered notch. The results show good agreement with earlier numerical results. The wave-field superposition technique, in conjunction with the reciprocity theorem, simplifies the solution of the scattering problem to yield a closed-form solution, which can play a significant role in quantitative signal interpretation. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Modified kernel-based nonlinear feature extraction.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, J.; Perkins, S. J.; Theiler, J. P.

    2002-01-01

Feature extraction (FE) techniques are widely used in many applications to pre-process data in order to reduce the complexity of subsequent processing. A group of kernel-based nonlinear FE (KFE) algorithms has attracted much attention due to their high performance. However, a serious limitation inherent in these algorithms dramatically degrades their flexibility: the maximal number of features they can extract is limited by the number of classes involved. Here we propose a modified version of these KFE algorithms (MKFE). The algorithm is developed from a special form of scatter matrix whose rank is not determined by the number of classes involved, and thus breaks the inherent limitation of the KFE algorithms. Experimental results suggest that the MKFE algorithm is especially useful when the training set is small.
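The class-count limitation comes from the rank of the between-class scatter matrix, S_b = Σ_c n_c (μ_c − μ)(μ_c − μ)ᵀ, which is at most C − 1 because the weighted class-mean deviations sum to zero. A small pure-Python check (the data are synthetic and illustrative; this is the linear scatter matrix, not the kernelized form used by KFE):

```python
import random

def between_class_scatter(classes):
    """S_b = sum_c n_c (mu_c - mu)(mu_c - mu)^T for a list of classes,
    each class a list of d-dimensional sample vectors."""
    d = len(classes[0][0])
    n = sum(len(c) for c in classes)
    mu = [sum(x[j] for c in classes for x in c) / n for j in range(d)]
    sb = [[0.0] * d for _ in range(d)]
    for c in classes:
        mc = [sum(x[j] for x in c) / len(c) for j in range(d)]
        dv = [mc[j] - mu[j] for j in range(d)]
        for a in range(d):
            for b in range(d):
                sb[a][b] += len(c) * dv[a] * dv[b]
    return sb

def matrix_rank(m, tol=1e-9):
    """Rank via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in m]
    rank, rows, cols = 0, len(m), len(m[0])
    for col in range(cols):
        piv = max(range(rank, rows), key=lambda r: abs(m[r][col]), default=None)
        if piv is None or abs(m[piv][col]) < tol:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        for r in range(rank + 1, rows):
            f = m[r][col] / m[rank][col]
            m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# Three classes in 5 dimensions: rank of S_b is at most C - 1 = 2,
# bounding the number of features a between-class criterion can extract.
rng = random.Random(0)
classes = [[[rng.gauss(c, 1.0) for _ in range(5)] for _ in range(20)]
           for c in (0.0, 3.0, 6.0)]
rank = matrix_rank(between_class_scatter(classes))
```

The MKFE idea is precisely to replace this scatter form with one whose rank is not tied to the number of classes.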

  14. Multi-PSF fusion in image restoration of range-gated systems

    NASA Astrophysics Data System (ADS)

    Wang, Canjin; Sun, Tao; Wang, Tingfeng; Miao, Xikui; Wang, Rui

    2018-07-01

For the task of image restoration, an accurate estimate of the degrading PSF/kernel is the premise of recovering a visually superior image. The imaging process of a range-gated system in the atmosphere involves many factors, such as backscattering, background radiation, the diffraction limit, and the vibration of the platform. On the one hand, because it is difficult to construct models for all factors, kernels from physical-model-based methods are not strictly accurate or practical. On the other hand, there are few strong edges in the images, which introduces significant errors into most image-feature-based methods. Since different methods focus on different formation factors of the kernel, their results often complement each other. Therefore, we propose an approach that combines the physical model with image features. Using a fusion strategy based on the GCRF (Gaussian Conditional Random Fields) framework, we obtain a final kernel that is closer to the actual one. To address the difficulty of obtaining ground-truth images, we then propose a semi-data-driven fusion method in which different data sets are used to train the fusion parameters. Finally, a semi-blind restoration strategy based on the EM (Expectation Maximization) and RL (Richardson-Lucy) algorithms is proposed. Our method not only models how the laser propagates in the atmosphere and forms an image on the ICCD (Intensified CCD) plane, but also quantifies other unknown degrading factors using image-based methods, revealing how multiple kernel elements interact with each other. The experimental results demonstrate that our method achieves better performance than state-of-the-art restoration approaches.

  15. Quark model with chiral-symmetry breaking and confinement in the Covariant Spectator Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Biernat, Elmer P.; Pena, Maria Teresa; Ribeiro, José Emílio F.

    2016-03-01

We propose a model for the quark-antiquark interaction in Minkowski space using the Covariant Spectator Theory. We show that with an equal-weighted scalar-pseudoscalar structure for the confining part of our interaction kernel, the axial-vector Ward-Takahashi identity is preserved, and our model complies with the Adler-zero constraint for π-π scattering imposed by chiral symmetry.

  16. Dynamic Experiment Design Regularization Approach to Adaptive Imaging with Array Radar/SAR Sensor Systems

    PubMed Central

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem: nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with reproducing kernel structures adapted to the metrics of that solution space. Next, the “model-free” variational analysis (VA)-based image enhancement approach and the “model-based” descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of the MVDR, APES, and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed, and their effectiveness is illustrated via numerical simulations. PMID:22163859

  17. Multiple Scattering in Planetary Regoliths Using Incoherent Interactions

    NASA Astrophysics Data System (ADS)

    Muinonen, K.; Markkanen, J.; Vaisanen, T.; Penttilä, A.

    2017-12-01

We consider scattering of light by a planetary regolith using novel numerical methods for discrete random media of particles. Understanding the scattering process is of key importance for spectroscopic, photometric, and polarimetric modeling of airless planetary objects, including radar studies. In our modeling, the size of the spherical random medium can range from microscopic to macroscopic, whereas the particles are assumed to be of the order of the wavelength in size. We extend the radiative transfer and coherent backscattering method (RT-CB) to the case of dense packing of particles by adopting the ensemble-averaged first-order incoherent extinction, scattering, and absorption characteristics of a volume element of particles as input. In the radiative transfer part, at each absorption and scattering process, we account for absorption with the help of the single-scattering albedo and peel off the Stokes parameters of radiation emerging from the medium in predefined scattering angles. We then generate a new scattering direction using the joint probability density for the local polar and azimuthal scattering angles. In the coherent backscattering part, we utilize amplitude scattering matrices along the radiative-transfer path and the reciprocal path. Furthermore, we replace the far-field interactions of the RT-CB method with rigorous interactions facilitated by the Superposition T-matrix method (STMM). This gives rise to a new RT-RT method, radiative transfer with reciprocal interactions. For microscopic random media, we then compare the new results to asymptotically exact results computed using the STMM, succeeding in the numerical validation of the new methods. Acknowledgments: Research supported by the European Research Council through Advanced Grant No. 320773 SAEMPL, Scattering and Absorption of ElectroMagnetic waves in ParticuLate media. Computational resources provided by CSC - IT Centre for Science Ltd, Finland.

  18. The single scattering properties of the aerosol particles as aggregated spheres

    NASA Astrophysics Data System (ADS)

    Wu, Y.; Gu, X.; Cheng, T.; Xie, D.; Yu, T.; Chen, H.; Guo, J.

    2012-08-01

The light scattering and absorption properties of anthropogenic aerosol particles such as soot aggregates are complicated in their temporal and spatial distribution, which introduces uncertainty into estimates of radiative forcing on global climate change. To study the single scattering properties of anthropogenic aerosol particles, the structures of these aerosols, such as soot particles and soot-containing mixtures with sulfate or organic matter, are simulated using the parallel diffusion-limited aggregation (DLA) algorithm based on transmission electron microscope (TEM) images. The single scattering properties of randomly oriented aerosols, such as the scattering matrix, single scattering albedo (SSA), and asymmetry parameter (AP), are then computed using the superposition T-matrix method. Comparisons of the single scattering properties of these specific types of clusters under different morphological and chemical factors, such as fractal parameters, aspect ratio, monomer radius, mixture mode, and refractive index, indicate that each of these factors can significantly influence the single scattering properties of these aerosols. The results show that the aspect ratio of the circumscribed shape has a relatively small effect on the single scattering properties: the differences in both SSA and AP are less than 0.1. However, the mixture modes of soot clusters with larger sulfate particles have remarkably important effects on the scattering and absorption properties of the aggregated spheres, and the SSA of the soot-containing mixtures increases in proportion to the ratio of larger, weakly absorbing attachments. Therefore, these complex aerosols from man-made pollution cannot be neglected in aerosol retrievals. The study of the single scattering properties of these kinds of aggregated spheres is important and helpful for remote sensing observations and atmospheric radiation balance computations.

  19. Harmonic effects on ion-bulk waves and simulation of stimulated ion-bulk-wave scattering in CH plasmas

    NASA Astrophysics Data System (ADS)

    Feng, Q. S.; Zheng, C. Y.; Liu, Z. J.; Cao, L. H.; Xiao, C. Z.; Wang, Q.; Zhang, H. C.; He, X. T.

    2017-08-01

The ion-bulk (IBk) wave, a novel branch with a phase velocity close to the ion thermal velocity, discovered by Valentini et al (2011 Plasma Phys. Control. Fusion 53 105017), has recently been considered an important electrostatic activity in the solar wind, and is thus of great interest to space physics and to inertial confinement fusion. The harmonic effects on IBk waves have been investigated by Vlasov simulation for the first time, and the condition for excitation of large-amplitude IBk waves is given. For k < k_lor/2 (where k_lor is the wave number at the loss-of-resonance point), nonlinear IBk waves are undamped Bernstein-Greene-Kruskal-like waves with harmonic superposition. Only when the wave number k of the IBk waves satisfies k_lor/2 ≲ k ≤ k_lor can a large-amplitude, mono-frequency IBk wave be excited. A novel stimulated scattering from IBk modes, called stimulated ion-bulk-wave scattering (SIBS) or stimulated Feng scattering (SFS), is proposed and verified with a Vlasov-Maxwell code. In CH plasmas, in addition to stimulated Brillouin scattering from multiple ion-acoustic waves, SIBS exists simultaneously. This research gives an insight into SIBS in the field of laser-plasma interaction.

  20. MO-FG-CAMPUS-TeP1-05: Rapid and Efficient 3D Dosimetry for End-To-End Patient-Specific QA of Rotational SBRT Deliveries Using a High-Resolution EPID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Han, B; Xing, L

    2016-06-15

Purpose: EPID-based patient-specific quality assurance provides verification of the planning setup and delivery process that phantomless QA and log-file-based virtual dosimetry methods cannot achieve. We present a method for EPID-based QA utilizing spatially variant EPID response kernels that allows direct calculation of the entrance fluence and 3D phantom dose. Methods: An EPID dosimetry system was utilized for 3D dose reconstruction in a cylindrical phantom for the purposes of end-to-end QA. Monte Carlo (MC) methods were used to generate pixel-specific point-spread functions (PSFs) characterizing the spatially non-uniform EPID portal response in the presence of phantom scatter. The spatially variant PSFs were decomposed into spatially invariant basis PSFs, with the symmetric central-axis kernel as the primary basis kernel and off-axis kernels representing orthogonal perturbations in pixel space. This compact and accurate characterization enables the use of a modified Richardson-Lucy deconvolution algorithm to reconstruct the entrance fluence directly from EPID images without iterative scatter subtraction. High-resolution phantom dose kernels were co-generated in MC with the PSFs, enabling direct recalculation of the resulting phantom dose by rapid forward convolution once the entrance fluence was calculated. A Delta4 QA phantom was used to validate the dose reconstructed in this approach. Results: The spatially invariant representation of the EPID response reproduced the entrance fluence with >99.5% fidelity with a simultaneous reduction of >60% in computational overhead. 3D dose for 10^6 voxels was reconstructed for the entire phantom geometry. A 3D global gamma analysis demonstrated a >95% pass rate at 3%/3mm. Conclusion: Our approach demonstrates the capabilities of an EPID-based end-to-end QA methodology that is more efficient than traditional EPID dosimetry methods. Displacing the point of measurement external to the QA phantom reduces the necessary complexity of the phantom itself while offering a method that is highly scalable and inherently generalizable to rotational and trajectory-based deliveries. This research was partially supported by Varian.
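The Richardson-Lucy deconvolution at the core of the fluence reconstruction can be sketched in its unmodified 1D form with a circular convolution. This is illustrative only: the paper's modified algorithm uses spatially variant EPID kernels, whereas the PSF and signal below are hypothetical and shift-invariant.

```python
def cconv(x, h):
    """Circular convolution of x with a centered, odd-length kernel h."""
    n, m = len(x), len(h) // 2
    return [sum(h[k + m] * x[(i - k) % n] for k in range(-m, m + 1))
            for i in range(n)]

def richardson_lucy(data, psf, iters=50):
    """Classic multiplicative Richardson-Lucy iteration:
    est <- est * ( psf_mirror (*) (data / (psf (*) est)) )."""
    est = [sum(data) / len(data)] * len(data)  # flat positive start
    psf_mirror = psf[::-1]                     # adjoint of circular conv
    for _ in range(iters):
        blur = cconv(est, psf)
        ratio = [d / max(b, 1e-12) for d, b in zip(data, blur)]
        corr = cconv(ratio, psf_mirror)
        est = [e * c for e, c in zip(est, corr)]
    return est

# Noiseless demo: a single spike blurred by a normalized 3-tap PSF
true = [0.0] * 16
true[8] = 1.0
psf = [0.25, 0.5, 0.25]
data = cconv(true, psf)
est = richardson_lucy(data, psf, iters=50)
```

Because the PSF is normalized and the mirrored kernel is the exact adjoint of the circular convolution, each iteration conserves the total signal, a property the EPID method exploits when converting fluence to dose.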

  1. A deterministic partial differential equation model for dose calculation in electron radiotherapy.

    PubMed

    Duclous, R; Dubroca, B; Frank, M

    2010-07-07

High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation.
If more detailed effects such as coupled electron-photon transport, bremsstrahlung, Compton scattering and the production of delta electrons are added to our model, the computation time will only slightly increase. Its margin of error, on the other hand, will decrease and should be within a few per cent of the actual dose. Therefore, the new model has the potential to become useful for dose calculations in clinical practice.
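The Henyey-Greenstein scattering kernel used in the academic test case admits an exact inverse-CDF sampler, which is handy for Monte Carlo cross-checks of deterministic solvers like the one above. A minimal sketch (the asymmetry parameter value is an arbitrary example, not one from the paper):

```python
import random

def sample_hg_costheta(g, u):
    """Sample cos(theta) from the Henyey-Greenstein phase function with
    asymmetry parameter g by exact inversion of its CDF (u in [0, 1))."""
    if abs(g) < 1e-6:
        return 2.0 * u - 1.0  # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - s * s) / (2.0 * g)

rng = random.Random(1)
g = 0.85  # strongly forward-peaked, as in electron scattering
mu = [sample_hg_costheta(g, rng.random()) for _ in range(20000)]
mean_mu = sum(mu) / len(mu)  # the mean of cos(theta) equals g
```

The inversion maps u = 0 to cos θ = −1 and u → 1 to cos θ = +1 exactly, so no rejection step is needed.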

  2. A deterministic partial differential equation model for dose calculation in electron radiotherapy

    NASA Astrophysics Data System (ADS)

    Duclous, R.; Dubroca, B.; Frank, M.

    2010-07-01

    High-energy ionizing radiation is a prominent modality for the treatment of many cancers. The approaches to electron dose calculation can be categorized into semi-empirical models (e.g. Fermi-Eyges, convolution-superposition) and probabilistic methods (e.g. Monte Carlo). A third approach to dose calculation has only recently attracted attention in the medical physics community. This approach is based on the deterministic kinetic equations of radiative transfer. We derive a macroscopic partial differential equation model for electron transport in tissue. This model involves an angular closure in the phase space. It is exact for the free streaming and the isotropic regime. We solve it numerically by a newly developed HLLC scheme based on Berthon et al (2007 J. Sci. Comput. 31 347-89) that exactly preserves the key properties of the analytical solution on the discrete level. We discuss several test cases taken from the medical physics literature. A test case with an academic Henyey-Greenstein scattering kernel is considered. We compare our model to a benchmark discrete ordinate solution. A simplified model of electron interactions with tissue is employed to compute the dose of an electron beam in a water phantom, and a case of irradiation of the vertebral column. Here our model is compared to the PENELOPE Monte Carlo code. In the academic example, the fluences computed with the new model and a benchmark result differ by less than 1%. The depths at half maximum differ by less than 0.6%. In the two comparisons with Monte Carlo, our model gives qualitatively reasonable dose distributions. Due to the crude interaction model, these so far do not have the accuracy needed in clinical practice. However, the new model has a computational cost that is less than one-tenth of the cost of a Monte Carlo simulation. In addition, simulations can be set up in a similar way as a Monte Carlo simulation. 
If more detailed effects such as coupled electron-photon transport, bremsstrahlung, Compton scattering and the production of δ electrons are added to our model, the computation time will only slightly increase. Its margin of error, on the other hand, will decrease and should be within a few per cent of the actual dose. Therefore, the new model has the potential to become useful for dose calculations in clinical practice.

  3. A novel hybrid scattering order-dependent variance reduction method for Monte Carlo simulations of radiative transfer in cloudy atmosphere

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo

    2017-03-01

    We present a novel hybrid scattering order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase function. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in scattering-order-dependent integral equation, but also generalizes the variance reduction formalism in a wide range of simulation scenarios. In previous studies, variance reduction is achieved either by using the scattering phase function forward truncation technique or the target directional importance sampling technique. Our method combines both of them. A novel feature of our method is that all the tuning parameters used for phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by the scattering order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm by remodeling integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.
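One classical instance of the forward-peak truncation idea, though not the paper's adaptive, order-by-order scheme, is the delta-Eddington scaling of Joseph, Wiscombe and Weinman (1976): a fraction f = g² of the phase function is treated as unscattered forward radiation, and the optical parameters are rescaled accordingly. A minimal sketch with illustrative cloud-like inputs:

```python
def delta_eddington(tau, omega, g):
    """Classical delta-Eddington similarity scaling: a fraction f = g**2
    of the scattering is absorbed into a forward delta, and the optical
    depth, single-scattering albedo, and asymmetry parameter are rescaled
    (Joseph, Wiscombe & Weinman, 1976)."""
    f = g * g
    tau_s = (1.0 - omega * f) * tau          # scaled optical depth
    omega_s = (1.0 - f) * omega / (1.0 - omega * f)  # scaled albedo
    g_s = (g - f) / (1.0 - f)                # scaled asymmetry: g / (1 + g)
    return tau_s, omega_s, g_s

# A strongly forward-peaked layer becomes a milder equivalent problem
tau_s, omega_s, g_s = delta_eddington(tau=10.0, omega=0.99, g=0.9)
```

The truncated problem has a much smaller effective optical depth and a gentler phase function, which is exactly why such scalings accelerate Monte Carlo convergence for forward-peaked media.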

  4. Radiative-Transfer Modeling of Spectra of Densely Packed Particulate Media

    NASA Astrophysics Data System (ADS)

    Ito, G.; Mishchenko, M. I.; Glotch, T. D.

    2017-12-01

    Remote sensing measurements over a wide range of wavelengths from both ground- and space-based platforms have provided a wealth of data regarding the surfaces and atmospheres of various solar system bodies. With proper interpretations, important properties, such as composition and particle size, can be inferred. However, proper interpretation of such datasets can often be difficult, especially for densely packed particulate media with particle sizes on the order of the wavelength of light being used for remote sensing. Radiative transfer theory has often been applied to the study of densely packed particulate media like planetary regoliths and snow, but with difficulty, and here we continue to investigate radiative transfer modeling of spectra of densely packed media. We use the superposition T-matrix method to compute scattering properties of clusters of particles and capture the near-field effects important for dense packing. Then, the scattering parameters from the T-matrix computations are modified with the static structure factor correction, accounting for the dense packing of the clusters themselves. Using these corrected scattering parameters, reflectance (or emissivity via Kirchhoff's Law) is computed with the invariant imbedding solution to the radiative transfer equation. For this work we modeled the emissivity spectrum of the 3.3 µm particle size fraction of enstatite, representing some common mineralogical and particle size components of regoliths, at mid-infrared wavelengths (5-50 µm). The modeled spectrum from the T-matrix method with the static structure factor correction, using moderate packing densities (filling factors of 0.1-0.2), produced better fits to the corresponding laboratory spectrum than the spectrum modeled by the equivalent method without the static structure factor correction.
Future work will test the combination of the superposition T-matrix method and the static structure factor correction for larger particle sizes and polydispersed clusters in search of the most effective modeling of spectra of densely packed particulate media.
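The dense-packing step described above has a concrete closed form in the simplest case: for monodisperse hard spheres, the static structure factor S(q) is given by the Percus-Yevick solution in the Ashcroft-Lekner form. The sketch below is a minimal numpy illustration of that standard formula, not the code used in this work; the diameter, filling factor, and the way S(q) would weight the scattering parameters are assumptions for illustration.

```python
import numpy as np

def py_structure_factor(q, diameter, eta):
    """Percus-Yevick static structure factor S(q) for monodisperse hard
    spheres (Ashcroft-Lekner closed form).  `q` may be an array and
    `eta` is the volume filling factor of the spheres."""
    x = np.atleast_1d(q) * diameter
    a = (1.0 + 2.0 * eta) ** 2 / (1.0 - eta) ** 4
    b = -6.0 * eta * (1.0 + 0.5 * eta) ** 2 / (1.0 - eta) ** 4
    c = 0.5 * eta * a
    sx, cx = np.sin(x), np.cos(x)
    term1 = a * (sx - x * cx) / x ** 3
    term2 = b * (2.0 * x * sx + (2.0 - x ** 2) * cx - 2.0) / x ** 4
    term3 = c * (-x ** 4 * cx
                 + 4.0 * ((3.0 * x ** 2 - 6.0) * cx
                          + (x ** 3 - 6.0 * x) * sx + 6.0)) / x ** 6
    n_c_q = -24.0 * eta * (term1 + term2 + term3)   # rho * c(q)
    return 1.0 / (1.0 - n_c_q)
```

In a correction of this kind, the incoherent scattering would be weighted by S(q) evaluated at the momentum transfer q = 2k sin(θ/2); at the moderate filling factors quoted above (0.1-0.2), S(q) < 1 at small q, suppressing near-forward scattering relative to the independent-particle result.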

  5. Scattering and radiative properties of complex soot and soot-containing particles

    NASA Astrophysics Data System (ADS)

    Liu, L.; Mishchenko, M. I.; Mackowski, D. W.; Dlugach, J.

    2012-12-01

    Tropospheric soot and soot-containing aerosols often exhibit nonspherical overall shapes and complex morphologies. They can mix externally, semi-externally, and internally with other aerosol species. This poses a tremendous challenge in particle characterization, remote sensing, and global climate modeling studies. To address these challenges, we used the new numerically exact public-domain Fortran-90 code based on the superposition T-matrix method (STMM), along with other theoretical models, to analyze the potential effects of aggregation and heterogeneity on light scattering and absorption by morphologically complex soot-containing particles. The computed parameters include the full set of scattering matrix elements, linear depolarization ratios, optical cross sections, asymmetry parameters, and single-scattering albedos. It is shown that the optical characteristics of soot and soot-containing aerosols depend strongly on particle size, composition, and overall shape. The soot particle configurations and heterogeneities can have a substantial effect, resulting in a significant enhancement of extinction and absorption relative to the values computed from the Lorenz-Mie theory. Moreover, the modeled information, combined with in situ and remotely sensed data, can be used to constrain soot particle shapes and sizes, which are much needed in climate models.

  6. Using Monte Carlo Ray tracing to Understand the Vibrational Response of UN as Measured by Neutron Spectroscopy

    NASA Astrophysics Data System (ADS)

    Lin, J. Y. Y.; Aczel, A. A.; Abernathy, D. L.; Nagler, S. E.; Buyers, W. J. L.; Granroth, G. E.

    2014-03-01

    Recently, neutron spectroscopy measurements using the ARCS and SEQUOIA time-of-flight chopper spectrometers observed an extended series of equally spaced modes in UN that are well described by quantum harmonic oscillator behavior of the N atoms. Additional contributions to the scattering are also observed. Monte Carlo ray tracing simulations with various sample kernels have allowed us to distinguish between the response from the N oscillator scattering, contributions that arise from the U partial phonon density of states (PDOS), and all forms of multiple scattering. These simulations confirm that multiple scattering contributes an approximately Q-independent background to the spectrum at the oscillator mode positions. All three of the aforementioned contributions are necessary to accurately model the experimental data. These simulations were also used to compare the T dependence of the oscillator modes in SEQUOIA data to that predicted by the binary solid model. This work was sponsored by the Scientific User Facilities Division, Office of Basic Energy Sciences, U.S. Department of Energy.

  7. Tracking diffusion of conditioning water in single wheat kernels of different hardnesses by near infrared hyperspectral imaging.

    PubMed

    Manley, Marena; du Toit, Gerida; Geladi, Paul

    2011-02-07

    The combination of near infrared (NIR) hyperspectral imaging and chemometrics was used to follow the diffusion of conditioning water over time in wheat kernels of different hardnesses. Conditioning was attempted with deionised water (dH(2)O) and deuterium oxide (D(2)O). The images were recorded at different conditioning times (0-36 h) from 1000 to 2498 nm with a line scan imaging system. After multivariate cleaning and spectral pre-processing (either multiplicative scatter correction, or standard normal variate and Savitzky-Golay smoothing), six principal components (PCs) were calculated. These were studied visually and interactively as score images and score plots. As no clear clusters were present in the score plots, changes in the score plots were investigated by means of classification gradients made within the respective PCs. Classes were selected in the direction of a PC (from positive to negative or negative to positive score values) in almost equal segments. Subsequently, loading line plots were used to provide a spectroscopic explanation of the classification gradients. It was shown that the first PC explained kernel curvature. PC3 was shown to be related to a moisture-starch contrast and could explain the progress of water uptake. The positive influence of protein was also observed. The behaviour of soft, hard and very hard kernels was different in this respect, with the uptake of water observed much earlier in the soft kernels than in the harder ones. The harder kernels also showed a stronger influence of protein in the loading line plots. Difference spectra showed interpretable changes over time for water but not for D(2)O, whose signal was too weak in the wavelength range used. NIR hyperspectral imaging together with exploratory chemometrics, as detailed in this paper, may have wider applications than merely conditioning studies. Copyright © 2010 Elsevier B.V. All rights reserved.
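The two scatter-correction pre-treatments named in the abstract are simple enough to sketch. The numpy version below is a generic illustration, not the paper's code; the function names and the choice of the mean spectrum as the MSC reference are assumptions.

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum (row)."""
    s = np.asarray(spectra, dtype=float)
    return (s - s.mean(axis=1, keepdims=True)) / s.std(axis=1, keepdims=True)

def msc(spectra, reference=None):
    """Multiplicative scatter correction: regress each spectrum on a
    reference (default: the mean spectrum) and remove the fitted
    additive offset a and multiplicative slope b."""
    s = np.asarray(spectra, dtype=float)
    ref = s.mean(axis=0) if reference is None else reference
    out = np.empty_like(s)
    for i, row in enumerate(s):
        b, a = np.polyfit(ref, row, 1)      # row ≈ a + b * ref
        out[i] = (row - a) / b
    return out
```

After either pre-treatment (optionally followed by Savitzky-Golay smoothing), the rows can be fed to a PCA to produce score images and loading line plots of the kind discussed above.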

  8. New numerical method for radiation heat transfer in nonhomogeneous participating media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howell, J.R.; Tan, Zhiqiang

    A new numerical method, which solves the exact integral equations of distance-angular integration form for radiation transfer, is introduced in this paper. By constructing and prestoring the numerical integration formulas of the distance integral for appropriate kernel functions, this method eliminates the time-consuming evaluations of the kernels of the space integrals in the formal computations. In addition, when the number of elements in the system is large, the resulting coefficient matrix is quite sparse. Thus, either considerable time or much storage can be saved. A weakness of the method is discussed, and some remedies are suggested. As illustrations, some one-dimensional and two-dimensional problems in both homogeneous and inhomogeneous emitting, absorbing, and linear anisotropic scattering media are studied. Some results are compared with available data. 13 refs.

  9. Superposition of nonparaxial vectorial complex-source spherically focused beams: Axial Poynting singularity and reverse propagation

    NASA Astrophysics Data System (ADS)

    Mitri, F. G.

    2016-08-01

    In this work, counterintuitive effects such as the generation of an axial (i.e., along the direction of wave motion) zero-energy flux density (i.e., an axial Poynting singularity) and reverse (i.e., negative) propagation of nonparaxial quasi-Gaussian electromagnetic (EM) beams are examined. Generalized analytical expressions for the EM field components of a coherent superposition of two high-order quasi-Gaussian vortex beams of opposite handedness and different amplitudes are derived based on the complex-source-point method, stemming from Maxwell's vector equations and the Lorenz gauge condition. The general solutions exhibiting these unusual effects satisfy the Helmholtz and Maxwell's equations. The EM beam components are characterized by nonzero integer degree and order (n, m), respectively, an arbitrary waist w0, a diffraction convergence length known as the Rayleigh range zR, and a weighting (real) factor 0 ≤ α ≤ 1 that describes the transition of the beam from a purely vortex (α = 0) to a nonvortex (α = 1) type. An attractive feature of this superposition is the description of strongly focused (or strongly divergent) wave fields. Computations of the EM power density as well as the linear and angular momentum density fluxes illustrate the analysis, with particular emphasis on the polarization states of the vector potentials forming the beams and the weight of the coherent beam superposition causing the transition from the vortex to the nonvortex type. Should certain conditions determined by the polarization state of the vector potentials and the beam parameters be met, an axial zero-energy flux density is predicted in addition to a negative retrograde propagation effect. Moreover, rotation reversal of the angular momentum flux density with respect to the beam handedness is anticipated, suggesting the possible generation of negative (left-handed) torques.
The results are particularly useful in applications involving the design of strongly focused optical laser tweezers, tractor beams, optical spanners, and arbitrary scattering, radiation force, angular momentum, and torque in particle manipulation, among other related topics.

  10. Using Monte Carlo ray tracing simulations to model the quantum harmonic oscillator modes observed in uranium nitride

    NASA Astrophysics Data System (ADS)

    Lin, J. Y. Y.; Aczel, A. A.; Abernathy, D. L.; Nagler, S. E.; Buyers, W. J. L.; Granroth, G. E.

    2014-04-01

    Recently, an extended series of equally spaced vibrational modes was observed in uranium nitride (UN) by performing neutron spectroscopy measurements using the ARCS and SEQUOIA time-of-flight chopper spectrometers [A. A. Aczel et al., Nat. Commun. 3, 1124 (2012), 10.1038/ncomms2117]. These modes are well described by three-dimensional isotropic quantum harmonic oscillator (QHO) behavior of the nitrogen atoms, but there are additional contributions to the scattering that complicate the measured response. In an effort to better characterize the observed neutron scattering spectrum of UN, we have performed Monte Carlo ray tracing simulations of the ARCS and SEQUOIA experiments with various sample kernels, accounting for nitrogen QHO scattering, contributions that arise from the acoustic portion of the partial phonon density of states, and multiple scattering. These simulations demonstrate that the U and N motions can be treated independently, and show that multiple scattering contributes an approximately Q-independent background to the spectrum at the oscillator mode positions. Temperature-dependent studies of the lowest few oscillator modes have also been made with SEQUOIA, and our simulations indicate that the T dependence of the scattering from these modes is strongly influenced by the uranium lattice.

  11. A semi-analytical method for near-trapped mode and fictitious frequencies of multiple scattering by an array of elliptical cylinders in water waves

    NASA Astrophysics Data System (ADS)

    Chen, Jeng-Tzong; Lee, Jia-Wei

    2013-09-01

    In this paper, we focus on water wave scattering by an array of four elliptical cylinders. The null-field boundary integral equation method (BIEM) is used in conjunction with degenerate kernels and eigenfunction expansions. The closed-form fundamental solution is expressed in terms of the degenerate kernel containing the Mathieu and the modified Mathieu functions in elliptical coordinates. Boundary densities are represented by using the eigenfunction expansion. By introducing the adaptive observer system, the present approach can solve the water wave problem containing multiple elliptical cylinders in a semi-analytical manner without using the addition theorem to translate the Mathieu functions. In water wave problems, the numerical instability of fictitious frequencies may appear when the BIEM/boundary element method (BEM) is used. In addition, the near-trapped mode for an array of four identical elliptical cylinders is observed in a special layout. Both physical (near-trapped mode) and mathematical (fictitious frequency) resonances simultaneously appear in the present water wave problem for an array of four identical elliptical cylinders. Two regularization techniques, the combined Helmholtz interior integral equation formulation (CHIEF) method and the Burton and Miller approach, are adopted to alleviate the numerical resonance due to fictitious frequencies.

  12. Development, survival and fitness performance of Helicoverpa zea (Lepidoptera: Noctuidae) in MON810 Bt field corn.

    PubMed

    Horner, T A; Dively, G P; Herbert, D A

    2003-06-01

    Helicoverpa zea (Boddie) development, survival, and feeding injury in MON810 transgenic ears of field corn (Zea mays L.) expressing Bacillus thuringiensis variety kurstaki (Bt) Cry1Ab endotoxins were compared with non-Bt ears at four geographic locations over two growing seasons. Expression of Cry1Ab endotoxin resulted in overall reductions in the percentage of damaged ears by 33% and in the amount of kernels consumed by 60%. Bt-induced effects varied significantly among locations, partly because of the overall level and timing of H. zea infestations, condition of silk tissue at the time of egg hatch, and the possible effects of plant stress. Larvae feeding on Bt ears produced scattered, discontinuous patches of partially consumed kernels, which were arranged more linearly than the compact feeding patterns in non-Bt ears. The feeding patterns suggest that larvae in Bt ears are moving about sampling kernels more frequently than larvae in non-Bt ears. Because not all kernels express the same level of endotoxin, the spatial heterogeneity of toxin distribution within Bt ears may provide an opportunity for development of behavioral responses in H. zea to avoid toxin. MON810 corn suppressed the establishment and development of H. zea to late instars by at least 75%. This level of control is considered a moderate dose, which may increase the risk of resistance development in areas where MON810 corn is widely adopted and H. zea overwinters successfully. Sublethal effects of MON810 corn resulted in prolonged larval and prepupal development, smaller pupae, and reduced fecundity of H. zea. The moderate dose effects and the spatial heterogeneity of toxin distribution among kernels could increase the additive genetic variance for both physiological and behavioral resistance in H. zea populations. Implications of localized population suppression are discussed.

  13. A particle swarm optimized kernel-based clustering method for crop mapping from multi-temporal polarimetric L-band SAR observations

    NASA Astrophysics Data System (ADS)

    Tamiminia, Haifa; Homayouni, Saeid; McNairn, Heather; Safari, Abdoreza

    2017-06-01

    Polarimetric Synthetic Aperture Radar (PolSAR) data, thanks to their specific characteristics such as high resolution and weather and daylight independence, have become a valuable source of information for environment monitoring and management. The discrimination capability of observations acquired by these sensors can be used for land cover classification and mapping. The aim of this paper is to propose an optimized kernel-based C-means clustering algorithm for agricultural crop mapping from multi-temporal PolSAR data. Firstly, several polarimetric features are extracted from preprocessed data. These features are linear polarization intensities and several statistical and physical decompositions such as the Cloude-Pottier, Freeman-Durden and Yamaguchi techniques. Then, the kernelized versions of the hard and fuzzy C-means clustering algorithms are applied to these polarimetric features in order to identify crop types. Unlike conventional partitioning clustering algorithms, the kernel function maps nonspherical and nonlinearly separable data structures so that they can be clustered easily. In addition, in order to enhance the results, the Particle Swarm Optimization (PSO) algorithm is used to tune the kernel parameters and cluster centers and to optimize feature selection. The efficiency of this method was evaluated using multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Manitoba, Canada, during June and July 2012. The results demonstrate more accurate crop maps using the proposed method when compared to the classical approaches (e.g., a 12% improvement in general). In addition, when the optimization technique is used, greater improvement is observed in crop classification, e.g., 5% overall. Furthermore, a strong relationship is observed between the Freeman-Durden volume scattering component, which is related to canopy structure, and phenological growth stages.
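The kernelized hard C-means step can be sketched compactly: the cluster centres live in the feature space induced by the kernel and are never formed explicitly, because squared distances to them can be written entirely in terms of kernel evaluations. The sketch below is a generic illustration with an RBF kernel and a deterministic initialisation; the PSO tuning of kernel parameters and feature selection described in the paper is omitted.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_kmeans(K, n_clusters, n_iter=100):
    """Hard kernelized C-means on a precomputed kernel matrix K.
    Distances to the implicit feature-space centres are evaluated
    through K only; the constant K[i, i] term is dropped since it does
    not affect the argmin over clusters."""
    n = K.shape[0]
    labels = np.arange(n) % n_clusters        # deterministic initialisation
    for _ in range(n_iter):
        dist = np.empty((n, n_clusters))
        for c in range(n_clusters):
            mask = labels == c
            m = mask.sum()
            if m == 0:
                dist[:, c] = np.inf
                continue
            dist[:, c] = (-2.0 * K[:, mask].sum(1) / m
                          + K[np.ix_(mask, mask)].sum() / m ** 2)
        new = dist.argmin(1)
        if (new == labels).all():
            break
        labels = new
    return labels
```

In practice the rows of X would hold the stacked multi-temporal polarimetric features per pixel, and an outer optimizer (PSO in the paper) would search over gamma and the feature subset.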

  14. Validation of Born Traveltime Kernels

    NASA Astrophysics Data System (ADS)

    Baig, A. M.; Dahlen, F. A.; Hung, S.

    2001-12-01

    Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool to use when velocity anomalies have scale lengths less than the width of the Fresnel zone. In the presence of these structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first-order dependence of the cross-correlated traveltimes on heterogeneity (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source, introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory does as good a job of predicting the cross-correlated traveltime as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective, even though the kernels remain relatively accurate provided the heterogeneity is weak. The study of wave propagation in random media is of more general interest, and we also show how our measurements of the velocity shift and the variance of traveltime compare with various theoretical predictions in a given regime.

  15. Approaches to reducing photon dose calculation errors near metal implants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Jessie Y.; Followill, David S.; Howell, Reb

    Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). 
Of the commercial CT artifact reduction methods investigated, the authors found that O-MAR was the most consistent method, resulting in either improved dose calculation accuracy (dental case) or little impact on calculation accuracy (spine case). GSI was unsuccessful at reducing the severe artifacts caused by dental fillings and had very little impact on calculation accuracy. GSI with MARS on the other hand gave mixed results, sometimes introducing metal distortion and increasing calculation errors (titanium rectangular implant and titanium spinal hardware) but other times very successfully reducing artifacts (Cerrobend rectangular implant and dental fillings). Conclusions: Though successful at improving dose calculation accuracy upstream of metal implants, metal kernels were not found to substantially improve accuracy for clinical cases. Of the commercial artifact reduction methods investigated, O-MAR was found to be the most consistent candidate for all-purpose CT simulation imaging. The MARS algorithm for GSI should be used with caution for titanium implants, larger implants, and implants located near heterogeneities as it can distort the size and shape of implants and increase calculation errors.

  16. Small-scale modification to the lensing kernel

    NASA Astrophysics Data System (ADS)

    Hadzhiyska, Boryana; Spergel, David; Dunkley, Joanna

    2018-02-01

    Calculations of the cosmic microwave background (CMB) lensing power implemented into the standard cosmological codes such as camb and class usually treat the surface of last scatter as an infinitely thin screen. However, since the CMB anisotropies are smoothed out on scales smaller than the diffusion length due to the effect of Silk damping, the photons which carry information about the small-scale density distribution come from slightly earlier times than the standard recombination time. The dominant effect is the scale dependence of the mean redshift associated with the fluctuations during recombination. We find that fluctuations at k = 0.01 Mpc-1 come from a characteristic redshift of z ≈ 1090, while fluctuations at k = 0.3 Mpc-1 come from a characteristic redshift of z ≈ 1130. We then estimate the corrections to the lensing kernel and the related power spectra due to this effect. We conclude that neglecting it would result in a deviation from the true value of the lensing kernel at the half-percent level at small CMB scales. For an all-sky, noise-free experiment, this corresponds to a ~0.1σ shift in the observed temperature power spectrum on small scales (2500 ≲ l ≲ 4000).

  17. A dose assessment method for arbitrary geometries with virtual reality in the nuclear facilities decommissioning

    NASA Astrophysics Data System (ADS)

    Chao, Nan; Liu, Yong-kuo; Xia, Hong; Ayodeji, Abiodun; Bai, Lu

    2018-03-01

    During the decommissioning of nuclear facilities, a large number of cutting and demolition activities are performed, which result in frequent structural changes and produce many irregular objects. In order to assess dose rates during the cutting and demolition process, a flexible dose assessment method for arbitrary geometries and radiation sources was proposed based on virtual reality technology and the Point-Kernel method. The initial geometry is designed with three-dimensional computer-aided design tools. An approximate model is built automatically in the process of geometric modeling via three procedures, namely space division, rough modeling of the body and fine modeling of the surface, in combination with the collision detection of virtual reality technology. Then point kernels are generated by sampling within the approximate model, and once the material and radiometric attributes are specified, dose rates can be calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the geometric-progression fitting formula. The effectiveness and accuracy of the proposed method were verified by means of simulations using different geometries, and the dose rate results were compared with those derived from the CIDEC code, the MCNP code and experimental measurements.
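For each sampled point kernel, the Point-Kernel step above reduces to an attenuated inverse-square flux multiplied by a buildup factor; the geometric-progression (GP) fitting form is the one standardized in ANSI/ANS-6.4.3. The sketch below is a minimal illustration of that formula, not the proposed code, and the five GP coefficients shown in the test are placeholder values rather than tabulated data.

```python
import math

def gp_buildup(mfp, b, c, a, xk, d):
    """Geometric-progression (GP) buildup factor B(x) at x = mfp mean
    free paths, using the ANSI/ANS-6.4.3 fitting form.  The five
    coefficients (b, c, a, xk, d) are material- and energy-dependent
    tabulated values."""
    K = c * mfp ** a + d * (math.tanh(mfp / xk - 2.0) - math.tanh(-2.0)) \
        / (1.0 - math.tanh(-2.0))
    if abs(K - 1.0) < 1e-9:
        return 1.0 + (b - 1.0) * mfp
    return 1.0 + (b - 1.0) * (K ** mfp - 1.0) / (K - 1.0)

def point_kernel_rate(source, mu, r, gp_coeffs):
    """Uncollided flux times buildup for an isotropic point source:
    phi = S * B(mu*r) * exp(-mu*r) / (4*pi*r^2)."""
    mfp = mu * r
    return (source * gp_buildup(mfp, *gp_coeffs)
            * math.exp(-mfp) / (4.0 * math.pi * r ** 2))
```

The total rate at a detector point is then the sum of such contributions over all sampled point kernels, with the attenuation coefficient mu and the GP coefficients chosen per material and photon energy.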

  18. Encoding Dissimilarity Data for Statistical Model Building.

    PubMed

    Wahba, Grace

    2010-12-01

    We summarize, review and comment upon three papers which discuss the use of discrete, noisy, incomplete, scattered pairwise dissimilarity data in statistical model building. Convex cone optimization codes are used to embed the objects into a Euclidean space which respects the dissimilarity information while controlling the dimension of the space. A "newbie" algorithm is provided for embedding new objects into this space. This allows the dissimilarity information to be incorporated into a Smoothing Spline ANOVA penalized likelihood model, a Support Vector Machine, or any model that will admit Reproducing Kernel Hilbert Space components, for nonparametric regression, supervised learning, or semi-supervised learning. Future work and open questions are discussed. The papers are: F. Lu, S. Keles, S. Wright and G. Wahba (2005), A framework for kernel regularization with application to protein clustering, Proceedings of the National Academy of Sciences 102, 12332-1233; G. Corrada Bravo, G. Wahba, K. Lee, B. Klein, R. Klein and S. Iyengar (2009), Examining the relative influence of familial, genetic and environmental covariate information in flexible risk models, Proceedings of the National Academy of Sciences 106, 8128-8133; and F. Lu, Y. Lin and G. Wahba, Robust manifold unfolding with kernel regularization, TR 1008, Department of Statistics, University of Wisconsin-Madison.
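The papers summarized above use convex cone optimization (regularized kernel estimation) to build the embedding; classical multidimensional scaling is the simpler, unpenalized analogue of that step and conveys the core idea of turning pairwise dissimilarities into Euclidean coordinates whose inner products form a valid kernel. A minimal sketch of that analogue, not the authors' method:

```python
import numpy as np

def cmds_embed(D, dim=2):
    """Classical multidimensional scaling: embed n objects in R^dim
    from an n x n dissimilarity matrix D so that Euclidean distances
    between the returned rows approximate D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J          # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]      # keep the largest eigenvalues
    w_pos = np.clip(w[idx], 0.0, None)   # drop any negative eigenvalues
    return V[:, idx] * np.sqrt(w_pos)
```

The centred Gram matrix B built here plays the role of the kernel that an RKHS-based model would consume; the convex-cone formulation in the papers additionally handles noisy and incomplete dissimilarities and penalizes the trace to control dimension.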

  19. Body-wave traveltime and amplitude shifts from asymptotic travelling wave coupling

    USGS Publications Warehouse

    Pollitz, F.

    2006-01-01

    We explore the sensitivity of finite-frequency body-wave traveltimes and amplitudes to perturbations in 3-D seismic velocity structure relative to a spherically symmetric model. Using the approach of coupled travelling wave theory, we consider the effect of a structural perturbation on an isolated portion of the seismogram. By convolving the spectrum of the differential seismogram with the spectrum of a narrow window taper, and using a Taylor series expansion for wavenumber as a function of frequency on a mode dispersion branch, we derive semi-analytic expressions for the sensitivity kernels. Far-field effects of wave interactions with the free surface or internal discontinuities are implicitly included, as are wave conversions upon scattering. The kernels may be computed rapidly for the purpose of structural inversions. We give examples of traveltime sensitivity kernels for regional wave propagation at 1 Hz. For the direct SV wave in a simple crustal velocity model, they are generally complicated because of interfering waves generated by interactions with the free surface and the Mohorovičić discontinuity. A large part of the interference effects may be eliminated by restricting the travelling wave basis set to those waves within a certain range of horizontal phase velocity. © Journal compilation 2006 RAS.

  20. Effect of tropospheric aerosols upon atmospheric infrared cooling rates

    NASA Technical Reports Server (NTRS)

    Harshvardhan, MR.; Cess, R. D.

    1978-01-01

    The effect of tropospheric aerosols on atmospheric infrared cooling rates is investigated by the use of recent models of infrared gaseous absorption. A radiative model of the atmosphere that incorporates dust as an absorber and scatterer of infrared radiation is constructed by employing the exponential kernel approximation to the radiative transfer equation. Scattering effects are represented in terms of a single scattering albedo and an asymmetry factor. The model is applied to estimate the effect of an aerosol layer made of spherical quartz particles on the infrared cooling rate. Calculations performed for a reference wavelength of 0.55 microns show an increased greenhouse effect, where the net upward flux at the surface is reduced by 10% owing to the strongly enhanced downward emission. There is a substantial increase in the cooling rate near the surface, but the mean cooling rate throughout the lower troposphere changed by only 10%.

  1. Transverse momentum in double parton scattering: factorisation, evolution and matching

    NASA Astrophysics Data System (ADS)

    Buffing, Maarten G. A.; Diehl, Markus; Kasemets, Tomas

    2018-01-01

    We give a description of double parton scattering with measured transverse momenta in the final state, extending the formalism for factorisation and resummation developed by Collins, Soper and Sterman for the production of colourless particles. After a detailed analysis of their colour structure, we derive and solve evolution equations in rapidity and renormalisation scale for the relevant soft factors and double parton distributions. We show how in the perturbative regime, transverse momentum dependent double parton distributions can be expressed in terms of simpler nonperturbative quantities and compute several of the corresponding perturbative kernels at one-loop accuracy. We then show how the coherent sum of single and double parton scattering can be simplified for perturbatively large transverse momenta, and we discuss to which order resummation can be performed with presently available results. As an auxiliary result, we derive a simple form for the square root factor in the Collins construction of transverse momentum dependent parton distributions.

  2. Evaluation of neutron thermalization parameters and benchmark reactor calculations using a synthetic scattering function for molecular gases

    NASA Astrophysics Data System (ADS)

    Gillette, V. H.; Patiño, N. E.; Granada, J. R.; Mayer, R. E.

    1989-08-01

    Using a synthetic incoherent scattering function which describes the interaction of neutrons with molecular gases, we provide analytical expressions for the zero- and first-order scattering kernels, σ0(E0 → E) and σ1(E0 → E), and the total cross section σ0(E0). Based on these quantities, we have performed calculations of thermalization parameters and transport coefficients for H2O, D2O, C6H6 and (CH2)n at room temperature. Comparison of these values with available experimental data and other calculations is satisfactory. We also generated nuclear data libraries for H2O with 47 thermal groups at 300 K and performed some benchmark calculations (235U, 239Pu, PWR cell and typical APWR cell); the resulting reactivities are compared with experimental data and ENDF/B-IV calculations.

  3. Reciprocity principle for scattered fields from discontinuities in waveguides.

    PubMed

    Pau, Annamaria; Capecchi, Danilo; Vestroni, Fabrizio

    2015-01-01

    This study investigates the scattering of guided waves from a discontinuity exploiting the principle of reciprocity in elastodynamics, written in a form that applies to waveguides. The coefficients of reflection and transmission for an arbitrary mode can be derived as long as the principle of reciprocity is satisfied at the discontinuity. Two elastodynamic states are related by the reciprocity. One is the response of the waveguide in the presence of the discontinuity, with the scattered fields expressed as a superposition of wave modes. The other state is the response of the waveguide in the absence of the discontinuity oscillating according to an arbitrary mode. The semi-analytical finite element method is applied to derive the needed dispersion relation and wave mode shapes. An application to a solid cylinder with a symmetric double change of cross-section is presented. This model is assumed to be representative of a damaged rod. The coefficients of reflection and transmission of longitudinal waves are investigated for selected values of notch length and varying depth. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. The single-scattering properties of black carbon aggregates determined from the geometric-optics surface-wave approach and the T-matrix method

    NASA Astrophysics Data System (ADS)

    Takano, Y.; Liou, K. N.; Kahnert, M.; Yang, P.

    2013-08-01

    The single-scattering properties of eight black carbon (BC, soot) fractal aggregates, composed of 7 to 600 primary spheres, computed by the geometric-optics surface-wave (GOS) approach coupled with the Rayleigh-Gans-Debye (RGD) adjustment for size parameters smaller than approximately 2, are compared with those determined from the superposition T-matrix method. We show that under the condition of random orientation, the results from GOS/RGD are in general agreement with those from the T-matrix method in terms of the extinction and absorption cross-sections, the single-scattering co-albedo, and the asymmetry factor. When compared with the specific absorption (m²/g) measured in the laboratory, we illustrate that using the observed radii of primary spheres, ranging from 3.3 to 25 nm, the theoretical values determined from GOS/RGD for 100-600 primary spheres are within the range of measured values. In terms of computational effort, the GOS approach can be effectively applied to aggregates composed of a large number of primary spheres (e.g., >6000) and large size parameters (≫2).

  5. Multiple scattering in planetary regoliths using first-order incoherent interactions

    NASA Astrophysics Data System (ADS)

    Muinonen, Karri; Markkanen, Johannes; Väisänen, Timo; Penttilä, Antti

    2017-10-01

    We consider scattering of light by a planetary regolith modeled using discrete random media of spherical particles. The size of the random medium can range from microscopic sizes of a few wavelengths to macroscopic sizes approaching infinity. The size of the particles is assumed to be of the order of the wavelength. We extend the numerical Monte Carlo method of radiative transfer and coherent backscattering (RT-CB) to the case of dense packing of particles. We adopt the ensemble-averaged first-order incoherent extinction, scattering, and absorption characteristics of a volume element of particles as input for the RT-CB. The volume element must be larger than the wavelength but smaller than the mean free path length of incoherent extinction. In the radiative-transfer part, at each absorption and scattering process, we account for absorption with the help of the single-scattering albedo and peel off the Stokes parameters of radiation emerging from the medium in predefined scattering angles. We then generate a new scattering direction using the joint probability density for the local polar and azimuthal scattering angles. In the coherent-backscattering part, we utilize amplitude scattering matrices along the radiative-transfer path and the reciprocal path, and exploit the reciprocity of electromagnetic waves to verify the computation. We illustrate the incoherent volume-element scattering characteristics and compare the dense-medium RT-CB to asymptotically exact results computed using the superposition T-matrix method (STMM). We show that the dense-medium RT-CB compares favorably with the STMM results for the sparse and dense discrete random media studied. The novel method can be applied in modeling light scattering by the surfaces of asteroids and other airless solar system objects, including UV-Vis-NIR spectroscopy, photometry, polarimetry, and radar scattering problems.
    Acknowledgments: Research supported by the European Research Council with Advanced Grant No. 320773 SAEMPL, Scattering and Absorption of ElectroMagnetic waves in ParticuLate media. Computational resources were provided by CSC - IT Centre for Science Ltd, Finland.
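
    The radiative-transfer part described above — weighting by the single-scattering albedo at each collision and peeling off radiation toward predefined exit angles — can be sketched for a scalar, isotropically scattering plane-parallel toy medium (a simplification, not the authors' polarized dense-medium code):

```python
import numpy as np

def mc_peel_off(albedo=0.8, tau_max=5.0, mu_det=1.0, n_phot=5000, seed=1):
    """Scalar Monte Carlo walk in a plane-parallel slab with isotropic
    scattering.  At each collision the packet weight is multiplied by the
    single-scattering albedo, and a peel-off contribution toward a detector
    above the slab (direction cosine mu_det) is scored."""
    rng = np.random.default_rng(seed)
    score = 0.0
    for _ in range(n_phot):
        tau, mu, w = 0.0, 1.0, 1.0               # enter the top, heading down
        for _scat in range(200):                 # hard cap on scattering order
            tau += mu * -np.log(rng.random())    # free flight to next collision
            if tau < 0.0 or tau > tau_max:       # escaped the slab
                break
            w *= albedo                          # absorb part of the weight
            # peel off: isotropic phase function, attenuated on the way out
            score += w / (4.0 * np.pi) * np.exp(-tau / mu_det)
            mu = rng.uniform(-1.0, 1.0)          # new isotropic direction
            if w < 1e-6:                         # negligible weight left
                break
    return score / n_phot
```

    Raising the single-scattering albedo raises the peeled-off (escaping) intensity, as expected for a less absorbing medium.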

  6. Characterization of Compton-scatter imaging with an analytical simulation method

    PubMed Central

    Jones, Kevin C; Redler, Gage; Templeton, Alistair; Bernard, Damian; Turian, Julius V; Chu, James C H

    2018-01-01

    By collimating the photons scattered when a megavoltage therapy beam interacts with the patient, a Compton-scatter image may be formed without the delivery of an extra dose. To characterize and assess the potential of the technique, an analytical model for simulating scatter images was developed and validated against Monte Carlo (MC). For three phantoms, the scatter images collected during irradiation with a 6 MV flattening-filter-free therapy beam were simulated. Images, profiles, and spectra were compared for different phantoms and different irradiation angles. The proposed analytical method simulates accurate scatter images up to 1000 times faster than MC. Minor differences between MC and analytically simulated images are attributed to limitations in the isotropic superposition/convolution algorithm used to analytically model multiple-order scattering. For a detector placed at 90° relative to the treatment beam, the simulated scattered-photon energy spectrum peaks at 140–220 keV, and 40–50% of the photons are the result of multiple scattering. The high-energy photons originate at the beam entrance. Increasing the angle between source and detector increases the average energy of the collected photons and decreases the relative contribution of multiply scattered photons. Multiply scattered photons cause blurring in the image. For an ideal 5 mm diameter pinhole collimator placed 18.5 cm from the isocenter, 10 cGy of deposited dose (a 2 Hz imaging rate for 1200 MU min⁻¹ treatment delivery) is expected to generate an average of 1000 photons per mm² at the detector. For the considered lung tumor CT phantom, the contrast is high enough to clearly identify the lung tumor in the scatter image. Increasing the treatment beam size perpendicular to the detector plane decreases the contrast, although the scatter subject contrast is expected to be greater than the megavoltage transmission image contrast. With the analytical method, real-time tumor tracking may be possible through comparison of simulated and acquired patient images. PMID:29243663
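
    The quoted dose-per-frame figure follows from simple arithmetic, assuming the common calibration of roughly 1 cGy per MU at the reference point (an assumption, not stated in the abstract):

```python
# At 1200 MU/min and a 2 Hz frame rate, each frame accumulates
# 1200 / 60 / 2 = 10 MU, i.e. about 10 cGy at ~1 cGy/MU (assumed calibration).
mu_per_min = 1200
frame_rate_hz = 2
mu_per_frame = mu_per_min / 60 / frame_rate_hz   # MU delivered per frame
cgy_per_frame = mu_per_frame * 1.0               # assumed 1 cGy per MU
```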

  8. Three-axis digital holographic microscopy for high speed volumetric imaging.

    PubMed

    Saglimbeni, F; Bianchi, S; Lepore, A; Di Leonardo, R

    2014-06-02

    Digital holographic microscopy makes it possible to numerically retrieve the three-dimensional information encoded in a single 2D snapshot of the coherent superposition of a reference and a scattered beam. Since no mechanical scans are involved, holographic techniques achieve superior performance in terms of achievable frame rates. Unfortunately, numerical reconstruction of the scattered field by back-propagation leads to a poor axial resolution. Here we show that overlapping the three numerical reconstructions obtained by tilted red, green and blue beams results in a great improvement in the axial resolution and sectioning capabilities of holographic microscopy. A strong reduction in the coherent background noise is also observed when combining the volumetric reconstructions of the light fields at the three different wavelengths. We discuss the performance of our technique with two test objects: an array of four glass beads stacked along the optical axis and a freely diffusing rod-shaped E. coli bacterium.
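
    The numerical back-propagation step mentioned above is commonly implemented with the angular-spectrum method; a minimal single-wavelength sketch (illustrative parameters, not the authors' code) is:

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex field by distance z (metres) using the
    angular-spectrum method: FFT, multiply by the free-space transfer
    function, inverse FFT.  Negative z back-propagates."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)             # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # clip evanescent part
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))
```

    Overlapping three such reconstructions computed for tilted red, green and blue beams is what yields the reported axial-resolution gain.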

  9. Adaptive wavefront shaping for controlling nonlinear multimode interactions in optical fibres

    NASA Astrophysics Data System (ADS)

    Tzang, Omer; Caravaca-Aguirre, Antonio M.; Wagner, Kelvin; Piestun, Rafael

    2018-06-01

    Recent progress in wavefront shaping has enabled control of light propagation inside linear media to focus and image through scattering objects. In particular, light propagation in multimode fibres comprises complex intermodal interactions and rich spatiotemporal dynamics. Control of physical phenomena in multimode fibres and its applications are in their infancy, opening opportunities to take advantage of complex nonlinear modal dynamics. Here, we demonstrate a wavefront shaping approach for controlling nonlinear phenomena in multimode fibres. Using a spatial light modulator at the fibre input, real-time spectral feedback and a genetic algorithm optimization, we control a highly nonlinear multimode stimulated Raman scattering cascade and its interplay with four-wave mixing via a flexible implicit control on the superposition of modes coupled into the fibre. We show versatile spectrum manipulations including shifts, suppression, and enhancement of Stokes and anti-Stokes peaks. These demonstrations illustrate the power of wavefront shaping to control and optimize nonlinear wave propagation.
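
    The optimization loop described above — spectral feedback driving a genetic algorithm over the SLM phase pattern — can be caricatured with a toy complex transmission row standing in for the real fibre and spectrometer; everything below is a hypothetical stand-in, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_modes, pop_size, n_gen = 32, 30, 60
# Toy "fibre": one complex transmission row mapping the input-mode phases
# to the monitored spectral component (stand-in for measured feedback).
T = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)

def fitness(phases):
    # Intensity of the monitored output for a given SLM phase pattern.
    return np.abs(np.sum(T * np.exp(1j * phases))) ** 2

pop = rng.uniform(0.0, 2.0 * np.pi, size=(pop_size, n_modes))
gen0_best = max(fitness(p) for p in pop)
for _ in range(n_gen):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # elitism
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(n_modes) < 0.5, a, b)     # crossover
        mut = rng.random(n_modes) < 0.1                       # mutation
        child = np.where(mut, rng.uniform(0, 2 * np.pi, n_modes), child)
        children.append(child)
    pop = np.vstack([parents, np.array(children)])
best = max(fitness(p) for p in pop)
```

    With elitism the best fitness is non-decreasing, and it is bounded by the coherent optimum (all mode contributions in phase).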

  10. Ion blocking dip shape analysis around a LaAlO3/SrTiO3 interface

    NASA Astrophysics Data System (ADS)

    Jalabert, D.; Zaid, H.; Berger, M. H.; Fongkaew, I.; Lambrecht, W. R. L.; Sehirlioglu, A.

    2018-05-01

    We present an analysis of the widths of the blocking dips obtained in MEIS ion-blocking experiments on two LaAlO3/SrTiO3 heterostructures differing in their LaAlO3 layer thicknesses. In the LaAlO3 layers, the observed blocking dips are broader than expected. This broadening is the result of the superposition of individual dips at slightly different angular positions, revealing a local disorder in the atomic alignment, i.e., layer buckling. By contrast, in the SrTiO3 substrate, just below the interface, the obtained blocking dips are narrower than expected. This narrowing indicates that the blocking atoms stand at a larger distance from the scattering center than expected. It is attributed to an accumulation of Sr vacancies at the layer/substrate interface, which induces lattice distortions shifting the atoms off the scattering plane.

  11. Transmission Magnitude and Phase Control for Polarization-Preserving Reflectionless Metasurfaces

    NASA Astrophysics Data System (ADS)

    Kwon, Do-Hoon; Ptitcyn, Grigorii; Díaz-Rubio, Ana; Tretyakov, Sergei A.

    2018-03-01

    For transmissive applications of electromagnetic metasurfaces, an array of subwavelength Huygens' meta-atoms is typically used to eliminate reflection and achieve a high transmission power efficiency together with a wide transmission phase coverage. We show that the underlying principle of low reflection and full control over transmission is asymmetric scattering into the specular reflection and transmission directions, which results from a superposition of symmetric and antisymmetric scattering components, with Huygens' meta-atoms being one example configuration. Available for oblique illumination in TM polarization, a meta-atom configuration comprising normal and tangential electric polarizations is presented, which is capable of reflectionless, full-power transmission with a 2π transmission phase coverage, as well as of full absorption. For lossy metasurfaces, we show that a complete phase coverage is still available in reflectionless designs for any value of absorptance. Numerical examples in the microwave and optical regimes are provided.

  12. Optimization method of superpixel analysis for multi-contrast Jones matrix tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa K.; Miura, Masahiro; Yasuno, Yoshiaki

    2017-02-01

    Local statistics are widely utilized for quantification and image processing in OCT. For example, the local mean is used to reduce speckle, and the local variation of the polarization state (degree of polarization uniformity, DOPU) is used to visualize melanin. Conventionally, these statistics are calculated in a rectangular kernel whose size is uniform over the image. However, the fixed size and shape of the kernel result in a tradeoff between image sharpness and statistical accuracy. A superpixel is a cluster of pixels generated by grouping image pixels based on spatial proximity and similarity of signal values. Superpixels have varying sizes and flexible shapes which preserve the tissue structure. Here we demonstrate a new superpixel method tailored for multifunctional Jones matrix OCT (JM-OCT). This method forms the superpixels by clustering image pixels in a six-dimensional (6-D) feature space: two spatial dimensions and four dimensions of optical features. All image pixels are clustered based on their spatial proximity and optical-feature similarity; the optical features are scattering, OCT-A, birefringence and DOPU. The method is applied to retinal OCT. The generated superpixels preserve tissue structures such as retinal layers, sclera, vessels, and the retinal pigment epithelium. Hence, a superpixel can be utilized as a local-statistics kernel that is more suitable than a uniform rectangular kernel. The superpixelized image can also be used for further image processing and analysis; since it reduces the number of pixels to be analyzed, it reduces the computational cost of such processing.
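
    A simplified version of the clustering step can be sketched with plain k-means over joint (x, y, feature) vectors. The SLIC-style refinements of the actual method (seeding, search windows) are omitted, and the feature channels here are synthetic stand-ins for scattering, OCT-A, birefringence and DOPU:

```python
import numpy as np

def superpixels(features, n_seg=16, w_spatial=1.0, n_iter=10, seed=0):
    """Cluster image pixels in a joint (x, y, feature...) space with plain
    k-means -- a simplified stand-in for 6-D superpixel clustering.
    `features` is an (h, w, c) array; returns an (h, w) label map."""
    h, w = features.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    X = np.column_stack([w_spatial * xx.ravel(), w_spatial * yy.ravel(),
                         features.reshape(h * w, -1)])
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_seg, replace=False)]  # random seeds
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)  # squared dists
        labels = d.argmin(1)                                # assign pixels
        for k in range(n_seg):                              # update centers
            pts = X[labels == k]
            if len(pts):
                centers[k] = pts.mean(0)
    return labels.reshape(h, w)
```

    The weight `w_spatial` plays the role of the proximity-versus-similarity trade-off: larger values give more compact superpixels.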

  13. The neutronic design and performance of the Indiana University Cyclotron Facility (IUCF) Low Energy Neutron Source (LENS)

    NASA Astrophysics Data System (ADS)

    Lavelle, Christopher M.

    Neutron scattering research is performed primarily at large-scale facilities. However, history has shown that smaller-scale neutron scattering facilities can play a useful role in education and innovation while performing valuable materials research. This dissertation details the design and experimental validation of the LENS TMR as an example of a small-scale accelerator-driven neutron source. LENS achieves competitive long-wavelength neutron intensities by employing a novel long-pulse mode of operation, in which the neutron production target is irradiated on a time scale comparable to the emission time of neutrons from the system. Monte Carlo methods have been employed to develop a design for optimal production of long-wavelength neutrons from the ⁹Be(p,n) reaction at proton energies ranging from 7 to 13 MeV. The neutron spectrum was measured using time of flight, where it is found that the impact of the long-pulse mode on energy resolution can be eliminated at sub-eV neutron energies if the emission-time distribution of neutrons from the system is known. The emission-time distribution from the TMR system is measured using a time-focused crystal analyzer. The emission time of the fundamental cold-neutron mode is found to be consistent with Monte Carlo results. The measured thermal-neutron spectrum from the water reflector is found to be in agreement with Monte Carlo predictions when the scattering kernels employed are well established. It was found that the scattering kernels currently employed for cryogenic methane are inadequate for accurate prediction of the cold-neutron intensity from the system. The TMR and neutronic modeling have been well characterized and the source design is flexible, such that it is possible for LENS to serve as an effective test bed for future work in neutronic development. Suggestions for design improvements that would allow increased neutron flux into the instruments are provided.

  14. Momentum and energy dependent resolution function of the ARCS neutron chopper spectrometer at high momentum transfer: Comparing simulation and experiment

    NASA Astrophysics Data System (ADS)

    Diallo, S. O.; Lin, J. Y. Y.; Abernathy, D. L.; Azuah, R. T.

    2016-11-01

    Inelastic neutron scattering at high momentum transfers (Q ≥ 20 Å⁻¹), commonly known as deep inelastic neutron scattering (DINS), provides direct observation of the momentum distribution of light atoms, making it a powerful probe for studying single-particle motions in liquids and solids. The quantitative analysis of DINS data requires an accurate knowledge of the instrument resolution function Ri(Q, E) at each momentum Q and energy transfer E, where the label i indicates whether the resolution was experimentally observed (i = obs) or simulated (i = sim). Here, we describe two independent methods for determining the total resolution function Ri(Q, E) of the ARCS neutron instrument at the Spallation Neutron Source, Oak Ridge National Laboratory. The first method uses experimental data from an archetypical system (liquid ⁴He) studied with DINS, which are then numerically deconvolved using its previously determined intrinsic scattering function to yield Robs(Q, E). The second approach uses accurate Monte Carlo simulations of the ARCS spectrometer, which account for all instrument contributions, coupled to a representative scattering kernel to reproduce the experimentally observed response S(Q, E). Using a delta function as the scattering kernel, the simulation yields a resolution function Rsim(Q, E) with comparable lineshape and features to Robs(Q, E), but somewhat narrower due to the ideal nature of the model. Using each of the two Ri(Q, E) separately, we extract characteristic parameters of liquid ⁴He such as the intrinsic linewidth α₂ (which sets the atomic kinetic energy ⟨K⟩ ∼ α₂) in the normal liquid and the Bose-Einstein condensate parameter n₀ in the superfluid phase. The extracted α₂ values agree well with previous measurements at saturated vapor pressure (SVP) as well as at elevated pressure (24 bar) within experimental precision, independent of which Ri(Q, E) is used to analyze the data. The observed n₀ values at each Q vary little with the model Ri(Q, E), and the effective Q-averaged n₀ values are consistent with each other and with previously reported values.
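
    The role of the resolution function can be illustrated with the convolution relation S_obs = S_int ⊗ R: for Gaussian lineshapes the widths add in quadrature, which is the logic behind extracting an intrinsic linewidth once the resolution is known (illustrative widths, not ARCS values):

```python
import numpy as np

# Toy check that the observed lineshape is the intrinsic lineshape
# broadened by the resolution: for Gaussians, variances add.
E = np.linspace(-10.0, 10.0, 4001)
dE = E[1] - E[0]

def gauss(x, s):
    return np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

s_int, s_res = 1.2, 0.9                      # intrinsic and resolution widths
S_obs = np.convolve(gauss(E, s_int), gauss(E, s_res), mode="same") * dE
# Second moment (variance) of the observed lineshape:
var = np.sum(E**2 * S_obs) / np.sum(S_obs)   # expect s_int**2 + s_res**2
```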

  15. Role of initial coherence in the generation of harmonics and sidebands from a strongly driven two-level atom

    NASA Astrophysics Data System (ADS)

    Gauthey, F. I.; Keitel, C. H.; Knight, P. L.; Maquet, A.

    1995-07-01

    We investigate the coherent and incoherent contributions to the scattering spectrum of strongly driven two-level atoms as a function of the initial preparation of the atomic system. The initial "phasing" of the coherent superposition of the excited and ground states is shown to strongly influence the generation of both harmonics and hyper-Raman lines. In particular, we point out conditions under which harmonic generation can be inhibited at the expense of the hyper-Raman lines. Our numerical findings are supported by approximate analytical evaluation in the dressed-state picture.

  16. Narrowing of the balance function with centrality in Au+Au collisions at √sNN = 130 GeV.

    PubMed

    Adams, J; Adler, C; Ahammed, Z; Allgower, C; Amonett, J; Anderson, B D; Anderson, M; Averichev, G S; Balewski, J; Barannikova, O; Barnby, L S; Baudot, J; Bekele, S; Belaga, V V; Bellwied, R; Berger, J; Bichsel, H; Billmeier, A; Bland, L C; Blyth, C O; Bonner, B E; Boucham, A; Brandin, A; Bravar, A; Cadman, R V; Caines, H; Calderónde la Barca Sánchez, M; Cardenas, A; Carroll, J; Castillo, J; Castro, M; Cebra, D; Chaloupka, P; Chattopadhyay, S; Chen, Y; Chernenko, S P; Cherney, M; Chikanian, A; Choi, B; Christie, W; Coffin, J P; Cormier, T M; Corral, M M; Cramer, J G; Crawford, H J; Derevschikov, A A; Didenko, L; Dietel, T; Draper, J E; Dunin, V B; Dunlop, J C; Eckardt, V; Efimov, L G; Emelianov, V; Engelage, J; Eppley, G; Erazmus, B; Fachini, P; Faine, V; Faivre, J; Fatemi, R; Filimonov, K; Finch, E; Fisyak, Y; Flierl, D; Foley, K J; Fu, J; Gagliardi, C A; Gagunashvili, N; Gans, J; Gaudichet, L; Germain, M; Geurts, F; Ghazikhanian, V; Grachov, O; Grigoriev, V; Guedon, M; Guertin, S M; Gushin, E; Hallman, T J; Hardtke, D; Harris, J W; Heinz, M; Henry, T W; Heppelmann, S; Herston, T; Hippolyte, B; Hirsch, A; Hjort, E; Hoffmann, G W; Horsley, M; Huang, H Z; Humanic, T J; Igo, G; Ishihara, A; Ivanshin, Yu I; Jacobs, P; Jacobs, W W; Janik, M; Johnson, I; Jones, P G; Judd, E G; Kaneta, M; Kaplan, M; Keane, D; Kiryluk, J; Kisiel, A; Klay, J; Klein, S R; Klyachko, A; Kollegger, T; Konstantinov, A S; Kopytine, M; Kotchenda, L; Kovalenko, A D; Kramer, M; Kravtsov, P; Krueger, K; Kuhn, C; Kulikov, A I; Kunde, G J; Kunz, C L; Kutuev, R Kh; Kuznetsov, A A; Lamont, M A C; Landgraf, J M; Lange, S; Lansdell, C P; Lasiuk, B; Laue, F; Lauret, J; Lebedev, A; Lednický, R; Leontiev, V M; LeVine, M J; Li, Q; Lindenbaum, S J; Lisa, M A; Liu, F; Liu, L; Liu, Z; Liu, Q J; Ljubicic, T; Llope, W J; Long, H; Longacre, R S; Lopez-Noriega, M; Love, W A; Ludlam, T; Lynn, D; Ma, J; Magestro, D; Majka, R; Margetis, S; Markert, C; Martin, L; Marx, J; Matis, H S; Matulenko, Yu A; McShane, T S; 
Meissner, F; Melnick, Yu; Meschanin, A; Messer, M; Miller, M L; Milosevich, Z; Minaev, N G; Mitchell, J; Moore, C F; Morozov, V; de Moura, M M; Munhoz, M G; Nelson, J M; Nevski, P; Nikitin, V A; Nogach, L V; Norman, B; Nurushev, S B; Odyniec, G; Ogawa, A; Okorokov, V; Oldenburg, M; Olson, D; Paic, G; Pandey, S U; Panebratsev, Y; Panitkin, S Y; Pavlinov, A I; Pawlak, T; Perevoztchikov, V; Peryt, W; Petrov, V A; Planinic, M; Pluta, J; Porile, N; Porter, J; Poskanzer, A M; Potrebenikova, E; Prindle, D; Pruneau, C; Putschke, J; Rai, G; Rakness, G; Ravel, O; Ray, R L; Razin, S V; Reichhold, D; Reid, J G; Renault, G; Retiere, F; Ridiger, A; Ritter, H G; Roberts, J B; Rogachevski, O V; Romero, J L; Rose, A; Roy, C; Rykov, V; Sakrejda, I; Salur, S; Sandweiss, J; Savin, I; Schambach, J; Scharenberg, R P; Schmitz, N; Schroeder, L S; Schüttauf, A; Schweda, K; Seger, J; Seliverstov, D; Seyboth, P; Shahaliev, E; Shestermanov, K E; Shimanskii, S S; Simon, F; Skoro, G; Smirnov, N; Snellings, R; Sorensen, P; Sowinski, J; Spinka, H M; Srivastava, B; Stephenson, E J; Stock, R; Stolpovsky, A; Strikhanov, M; Stringfellow, B; Struck, C; Suaide, A A P; Sugarbaker, E; Suire, C; Sumbera, M; Surrow, B; Symons, T J M; de Toledo, A Szanto; Szarwas, P; Tai, A; Takahashi, J; Tang, A H; Thein, D; Thomas, J H; Thompson, M; Tikhomirov, V; Tokarev, M; Tonjes, M B; Trainor, T A; Trentalange, S; Tribble, R E; Trofimov, V; Tsai, O; Ullrich, T; Underwood, D G; Van Buren, G; Vander Molen, A M; Vasilevski, I M; Vasiliev, A N; Vigdor, S E; Voloshin, S A; Wang, F; Ward, H; Watson, J W; Wells, R; Westfall, G D; Whitten, C; Wieman, H; Willson, R; Wissink, S W; Witt, R; Wood, J; Xu, N; Xu, Z; Yakutin, A E; Yamamoto, E; Yang, J; Yepes, P; Yurevich, V I; Zanevski, Y V; Zborovský, I; Zhang, H; Zhang, W M; Zoulkarneev, R; Zubarev, A N

    2003-05-02

    The balance function is a new observable based on the principle that charge is locally conserved when particles are pair produced. Balance functions have been measured for charged-particle pairs and identified charged-pion pairs in Au+Au collisions at √sNN = 130 GeV at the Relativistic Heavy Ion Collider using STAR. Balance functions for peripheral collisions have widths consistent with model predictions based on a superposition of nucleon-nucleon scattering. Widths in central collisions are smaller, consistent with trends predicted by models incorporating late hadronization.

  17. Complex-valued derivative propagation method with approximate Bohmian trajectories: Application to electronic nonadiabatic dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Chou, Chia-Chun

    2018-05-01

    The coupled complex quantum Hamilton-Jacobi equations for electronic nonadiabatic transitions are approximately solved by propagating individual quantum trajectories in real space. Equations of motion are derived through use of the derivative propagation method for the complex actions and their spatial derivatives for wave packets moving on each of the coupled electronic potential surfaces. These equations for two surfaces are converted into the moving frame with the same grid point velocities. Excellent wave functions can be obtained by making use of the superposition principle even when nodes develop in wave packet scattering.

  18. Statistics of primordial density perturbations from discrete seed masses

    NASA Technical Reports Server (NTRS)

    Scherrer, Robert J.; Bertschinger, Edmund

    1991-01-01

    The statistics of density perturbations for general distributions of seed masses with arbitrary matter accretion is examined. Formal expressions for the power spectrum, the N-point correlation functions, and the density distribution function are derived. These results are applied to the case of uncorrelated seed masses, and power spectra are derived for accretion of both hot and cold dark matter plus baryons. The reduced moments (cumulants) of the density distribution are computed and used to obtain a series expansion for the density distribution function. Analytic results are obtained for the density distribution function in the case of a distribution of seed masses with a spherical top-hat accretion pattern. More generally, the formalism makes it possible to give a complete characterization of the statistical properties of any random field generated from a discrete linear superposition of kernels. In particular, the results can be applied to density fields derived by smoothing a discrete set of points with a window function.

  19. Time-domain Brillouin scattering assisted by diffraction gratings

    NASA Astrophysics Data System (ADS)

    Matsuda, Osamu; Pezeril, Thomas; Chaban, Ievgeniia; Fujita, Kentaro; Gusev, Vitalyi

    2018-02-01

    Absorption of ultrashort laser pulses in a metallic grating deposited on a transparent sample launches coherent compression/dilatation acoustic pulses in the directions of the different orders of acoustic diffraction. Their propagation is detected by delayed laser pulses, which are also diffracted by the metallic grating, through the measurement of the transient intensity change of the first-order diffracted light. The obtained data contain multiple frequency components, which are interpreted by considering all possible angles for the Brillouin scattering of light achieved through multiplexing of the propagation directions of light and coherent sound by the metallic grating. The emitted acoustic field can be equivalently presented as a superposition of plane inhomogeneous acoustic waves, which constitute an acoustic diffraction grating for the probe light. Thus the obtained results can also be interpreted as a consequence of probe-light diffraction by both the metallic and the acoustic gratings. The realized scheme of time-domain Brillouin scattering with metallic gratings operating in reflection mode provides access to a wide range of acoustic frequencies, from the minimum to the maximum possible values, in a single experimental optical configuration for the directions of probe-light incidence and scattered-light detection. This is achieved by monitoring the backward and forward Brillouin scattering processes in parallel. Potential applications include measurements of the acoustic dispersion, simultaneous determination of the sound velocity and optical refractive index, and evaluation of samples with a single direction of possible optical access.

  20. Fast GPU-based Monte Carlo code for SPECT/CT reconstructions generates improved 177Lu images.

    PubMed

    Rydén, T; Heydorn Lagerlöf, J; Hemmingsson, J; Marin, I; Svensson, J; Båth, M; Gjertsson, P; Bernhardt, P

    2018-01-04

    Full Monte Carlo (MC)-based SPECT reconstructions have a strong potential for correcting for image-degrading factors, but the reconstruction times are long. The objective of this study was to develop a highly parallel Monte Carlo code for fast, ordered-subset expectation maximization (OSEM) reconstructions of SPECT/CT images. The MC code was written in the Compute Unified Device Architecture language for a computer with four graphics processing units (GPUs) (GeForce GTX Titan X, Nvidia, USA). This enabled simulations of parallel photon emissions from the voxel matrix (128³ or 256³). Each computed tomography (CT) number was converted to attenuation coefficients for photoabsorption, coherent scattering, and incoherent scattering. For photon scattering, the deflection angle was determined by the differential scattering cross-sections. An angular response function was developed and used to model the accepted angles for photon interaction with the crystal, and a detector scattering kernel was used for modeling the photon scattering in the detector. Predefined energy and spatial resolution kernels for the crystal were used. The MC code was implemented in the OSEM reconstruction of clinical and phantom ¹⁷⁷Lu SPECT/CT images. The Jaszczak image quality phantom was used to evaluate the performance of the MC reconstruction in comparison with attenuation-corrected (AC) OSEM reconstructions and AC OSEM reconstructions with resolution recovery correction (RRC). The performance of the MC code was 3200 million photons/s. The required number of photons emitted per voxel to obtain a sufficiently low noise level in the simulated image was 200 for a 128³ voxel matrix. With this number of emitted photons per voxel, the MC-based OSEM reconstruction with ten subsets was performed within 20 s/iteration. The images converged after around six iterations, so the reconstruction time was around 3 min. The activity recovery for the spheres in the Jaszczak phantom was clearly improved with MC-based OSEM reconstruction; e.g., the activity recovery was 88% for the largest sphere, while it was 66% for AC-OSEM and 79% for RRC-OSEM. The GPU-based MC code generated an MC-based SPECT/CT reconstruction within a few minutes, and reconstructed patient images of ¹⁷⁷Lu-DOTATATE treatments revealed clearly improved resolution and contrast.
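
    The OSEM update itself is independent of how the forward projection is computed; a dense-matrix structural sketch follows (in the paper the forward model is the GPU Monte Carlo simulation, so the system matrix `A` here is only a stand-in):

```python
import numpy as np

def osem(sino, A, n_subsets=4, n_iter=6):
    """Ordered-subset EM for a linear emission model sino = A @ x,
    with A a dense (n_bins, n_voxels) system matrix.  Each subset update
    is a multiplicative EM step over a subset of projection bins."""
    n_bins, n_vox = A.shape
    x = np.ones(n_vox)                       # nonnegative initial estimate
    subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            fwd = A[idx] @ x                 # forward projection
            ratio = np.divide(sino[idx], fwd,
                              out=np.zeros_like(fwd), where=fwd > 0)
            sens = A[idx].sum(0)             # subset sensitivity image
            x *= np.divide(A[idx].T @ ratio, sens,
                           out=np.ones_like(x), where=sens > 0)
    return x
```

    The multiplicative update preserves nonnegativity, and on consistent noiseless data the projection residual shrinks with each pass.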

  1. Handling Density Conversion in TPS.

    PubMed

    Isobe, Tomonori; Mori, Yutaro; Takei, Hideyuki; Sato, Eisuke; Tadano, Kiichi; Kobayashi, Daisuke; Tomita, Tetsuya; Sakae, Takeji

    2016-01-01

    Conversion from CT value to density is essential to a radiation treatment planning system. In photon therapy, the CT value is generally converted to electron density. In the energy range of therapeutic photons, interactions between photons and materials are dominated by Compton scattering, whose cross-section depends on the electron density. The dose distribution is obtained by calculating TERMA and the kernel using the electron density, where TERMA is the energy transferred from the primary photons and the kernel describes the volume over which the liberated electrons spread that energy. Recently, a new method was introduced which uses the physical density; it is expected to be faster and more accurate than the method using the electron density. As for particle therapy, the dose can be calculated with a CT-to-stopping-power conversion, since the stopping power depends on the electron density. The CT-to-stopping-power conversion table is also called the CT-to-water-equivalent-range table and is an essential concept for particle therapy.
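
    In practice the CT-to-density conversion is a piecewise-linear lookup; a minimal sketch with made-up calibration points follows (a real table is obtained by scanning a density phantom on the clinical CT):

```python
import numpy as np

# Hypothetical calibration points: CT number (HU) -> relative electron
# density.  The values below are illustrative only.
hu_pts  = np.array([-1000.0, 0.0, 1000.0, 3000.0])   # CT numbers
red_pts = np.array([0.0, 1.0, 1.6, 2.6])             # relative e- density

def hu_to_red(hu):
    """Piecewise-linear interpolation of the calibration table."""
    return np.interp(hu, hu_pts, red_pts)
```

    The same table structure serves particle therapy, with the right-hand column replaced by stopping-power ratios (water-equivalent range).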

  2. Scattering on a rectangular potential barrier in nodal-line Weyl semimetals

    NASA Astrophysics Data System (ADS)

    Khokhlov, D. A.; Rakhmanov, A. L.; Rozhkov, A. V.

    2018-06-01

    We investigate single-particle ballistic scattering on a rectangular barrier in nodal-line Weyl semimetals. Since the system under study has a crystallographic anisotropy, the scattering properties depend on the mutual orientation of the crystalline axes and the barrier. To account for the anisotropy, we examine two different barrier orientations. It is demonstrated that, for certain angles of incidence, the incoming particle passes through the barrier with unit probability. This is a manifestation of the Klein tunneling, a familiar phenomenon in the context of graphene and semimetals with Weyl points. However, the Klein tunneling in the Weyl-ring systems is observed when the angle of incidence differs from 90°, unlike the cases of graphene and Weyl-point semimetals. The reflectionless transmission also occurs at the so-called "magic angles." The values of the magic angles are determined by geometrical resonances between the barrier width and the de Broglie length of the scattered particle. In addition, we show that under certain conditions the wave function of the transmitted and reflected particles may be a superposition of two plane waves with unequal momenta. Such a feature is a consequence of the nontrivial structure of the isoenergy surfaces of the nodal-line semimetals. The conductance of the barrier is briefly discussed.

  3. Compliant energy and momentum conservation in NEGF simulation of electron-phonon scattering in semiconductor nano-wire transistors

    NASA Astrophysics Data System (ADS)

    Barker, J. R.; Martinez, A.; Aldegunde, M.

    2012-05-01

    The modelling of spatially inhomogeneous silicon nanowire field-effect transistors has benefited from powerful simulation tools built around the Keldysh formulation of non-equilibrium Green function (NEGF) theory. The methodology is highly efficient for situations where the self-energies are diagonal (local) in space coordinates. It has thus been common practice to adopt diagonality (locality) approximations. We demonstrate here that the scattering kernel that controls the self-energies for electron-phonon interactions is generally non-local on the scale of at least a few lattice spacings (and thus within the spatial scale of features in extreme nano-transistors) and for polar optical phonon-electron interactions may be very much longer. It is shown that the diagonality approximation strongly underestimates the scattering rates for scattering on polar optical phonons. This is an unexpected problem in silicon devices but occurs due to strong polar SO phonon-electron interactions extending into a narrow silicon channel surrounded by a high-κ dielectric in wrap-round gate devices. Since dissipative inelastic scattering is already a serious problem for highly confined devices, it is concluded that new algorithms need to be forthcoming to provide appropriate and efficient NEGF tools.

  4. Using Monte Carlo ray tracing simulations to model the quantum harmonic oscillator modes observed in uranium nitride

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, J. Y. Y.; Aczel, Adam A; Abernathy, Douglas L

    2014-01-01

    Recently an extended series of equally spaced vibrational modes was observed in uranium nitride (UN) by performing neutron spectroscopy measurements using the ARCS and SEQUOIA time-of-flight chopper spectrometers [A.A. Aczel et al, Nature Communications 3, 1124 (2012)]. These modes are well described by 3D isotropic quantum harmonic oscillator (QHO) behavior of the nitrogen atoms, but there are additional contributions to the scattering that complicate the measured response. In an effort to better characterize the observed neutron scattering spectrum of UN, we have performed Monte Carlo ray tracing simulations of the ARCS and SEQUOIA experiments with various sample kernels, accounting for the nitrogen QHO scattering, contributions that arise from the acoustic portion of the partial phonon density of states (PDOS), and multiple scattering. These simulations demonstrate that the U and N motions can be treated independently, and show that multiple scattering contributes an approximately Q-independent background to the spectrum at the oscillator mode positions. Temperature dependent studies of the lowest few oscillator modes have also been made with SEQUOIA, and our simulations indicate that the T-dependence of the scattering from these modes is strongly influenced by the uranium lattice.

  5. Space-time domain solutions of the wave equation by a non-singular boundary integral method and Fourier transform.

    PubMed

    Klaseboer, Evert; Sepehrirahnama, Shahrokh; Chan, Derek Y C

    2017-08-01

    The general space-time evolution of the scattering of an incident acoustic plane wave pulse by an arbitrary configuration of targets is treated by employing a recently developed non-singular boundary integral method to solve the Helmholtz equation in the frequency domain from which the space-time solution of the wave equation is obtained using the fast Fourier transform. The non-singular boundary integral solution can enforce the radiation boundary condition at infinity exactly and can account for multiple scattering effects at all spacings between scatterers without adverse effects on the numerical precision. More generally, the absence of singular kernels in the non-singular integral equation confers high numerical stability and precision for smaller numbers of degrees of freedom. The use of fast Fourier transform to obtain the time dependence is not constrained to discrete time steps and is particularly efficient for studying the response to different incident pulses by the same configuration of scatterers. The precision that can be attained using a smaller number of Fourier components is also quantified.
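
    The frequency-to-time synthesis described in this record can be illustrated compactly: solve the scattering problem at each frequency to get a transfer function, multiply by the spectrum of the incident pulse, and inverse-FFT to recover the time-domain response. In the toy sketch below the "scatterer" is just a pure propagation delay standing in for the boundary-integral solution; all parameter values are illustrative assumptions.

    ```python
    import numpy as np

    n, dt = 512, 1e-3
    t = np.arange(n) * dt
    pulse = np.exp(-((t - 0.05) / 0.01) ** 2)   # incident Gaussian pulse
    f = np.fft.rfftfreq(n, dt)                  # frequencies of the rFFT bins

    delay = 0.1                                 # hypothetical propagation delay (s)
    H = np.exp(-2j * np.pi * f * delay)         # frequency response of the "scatterer"

    # Time-domain response: spectrum of the pulse times the transfer
    # function, then inverse FFT back to the time domain.
    response = np.fft.irfft(np.fft.rfft(pulse) * H, n)
    ```

    Because the incident pulse enters only through its spectrum, the same set of frequency-domain solutions can be reused for any incident waveform, which is the efficiency the abstract points out.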

  6. Final Aperture Superposition Technique applied to fast calculation of electron output factors and depth dose curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faddegon, B.A.; Villarreal-Barajas, J.E.; Mt. Diablo Regional Cancer Center, 2450 East Street, Concord, California

    2005-11-15

    The Final Aperture Superposition Technique (FAST) is described and applied to accurate, near instantaneous calculation of the relative output factor (ROF) and central axis percentage depth dose curve (PDD) for clinical electron beams used in radiotherapy. FAST is based on precalculation of dose at select points for the two extreme situations of a fully open final aperture and a final aperture with no opening (fully shielded). This technique differs from conventional superposition of dose deposition kernels: the precalculated dose is differential in position of the electron or photon at the downstream surface of the insert. The calculation for a particular aperture (x-ray jaws or MLC, insert in electron applicator) is done with superposition of the precalculated dose data, using the open field data over the open part of the aperture and the fully shielded data over the remainder. The calculation takes explicit account of all interactions in the shielded region of the aperture except the collimator effect: particles that pass from the open part into the shielded part, or vice versa. For the clinical demonstration, FAST was compared to full Monte Carlo simulation of 10×10, 2.5×2.5, and 2×8 cm² inserts. Dose was calculated to 0.5% precision in 0.4×0.4×0.2 cm³ voxels, spaced at 0.2 cm depth intervals along the central axis, using detailed Monte Carlo simulation of the treatment head of a commercial linear accelerator for six different electron beams with energies of 6-21 MeV. Each simulation took several hours on a personal computer with a 1.7 GHz processor. The calculation for the individual inserts, done with superposition, was completed in under a second on the same PC. Since simulations for the precalculation are only performed once, higher precision and resolution can be obtained without increasing the calculation time for individual inserts. 
Fully shielded contributions were largest for small fields and high beam energy, at the surface, reaching a maximum of 5.6% at 21 MeV. Contributions from the collimator effect were largest for the large field size, high beam energy, and shallow depths, reaching a maximum of 4.7% at 21 MeV. Both shielding contributions and the collimator effect need to be taken into account to achieve an accuracy of 2%. FAST takes explicit account of the shielding contributions. With the collimator effect set to that of the largest field in the FAST calculation, the difference in dose on the central axis (product of ROF and PDD) between FAST and full simulation was generally under 2%. The maximum difference of 2.5% exceeded the statistical precision of the calculation by four standard deviations. This occurred at 18 MeV for the 2.5×2.5 cm² field. The differences are due to the method used to account for the collimator effect.

  7. Ultrasound scatter in heterogeneous 3D microstructures: Parameters affecting multiple scattering

    NASA Astrophysics Data System (ADS)

    Engle, B. J.; Roberts, R. A.; Grandin, R. J.

    2018-04-01

    This paper reports on a computational study of ultrasound propagation in heterogeneous metal microstructures. Random spatial fluctuations in elastic properties over a range of length scales relative to ultrasound wavelength can give rise to scatter-induced attenuation, backscatter noise, and phase front aberration. It is of interest to quantify the dependence of these phenomena on the microstructure parameters, for the purpose of quantifying deleterious consequences on flaw detectability, and for the purpose of material characterization. Valuable tools for estimation of microstructure parameters (e.g. grain size) through analysis of ultrasound backscatter have been developed based on approximate weak-scattering models. While useful, it is understood that these tools display inherent inaccuracy when multiple scattering phenomena significantly contribute to the measurement. It is the goal of this work to supplement weak scattering model predictions with corrections derived through application of an exact computational scattering model to explicitly prescribed microstructures. The scattering problem is formulated as a volume integral equation (VIE) displaying a convolutional Green-function-derived kernel. The VIE is solved iteratively employing FFT-based convolution. Realizations of random microstructures are specified on the micron scale using statistical property descriptions (e.g. grain size and orientation distributions), which are then spatially filtered to provide rigorously equivalent scattering media on a length scale relevant to ultrasound propagation. Scattering responses from ensembles of media representations are averaged to obtain mean and variance of quantities such as attenuation and backscatter noise levels, as a function of microstructure descriptors. The computational approach will be summarized, and examples of application will be presented.
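
    The iterative VIE solution with FFT-based convolution mentioned in this record can be sketched in one dimension. The fixed-point (Born-series) form below, u = u_inc + G * (χ·u), is a generic stand-in for the authors' solver, convergent only for weak contrast χ; the kernel, contrast, and iteration count are illustrative assumptions.

    ```python
    import numpy as np

    def solve_vie(u_inc, green, chi, n_iter=50):
        """Fixed-point iteration for the volume integral equation
        u = u_inc + G * (chi * u), with the convolution against the
        Green-function kernel done by zero-padded FFT (linear, not
        circular, convolution). 1-D illustrative sketch only."""
        n = len(u_inc)
        G = np.fft.fft(green, 2 * n)      # zero-padded kernel spectrum
        u = u_inc.astype(complex).copy()
        for _ in range(n_iter):
            conv = np.fft.ifft(np.fft.fft(chi * u, 2 * n) * G)[:n]
            u = u_inc + conv
        return u
    ```

    Each iteration costs O(n log n) instead of the O(n²) of direct kernel summation, which is what makes ensemble averaging over many microstructure realizations feasible.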

  8. Xyloglucans from flaxseed kernel cell wall: Structural and conformational characterisation.

    PubMed

    Ding, Huihuang H; Cui, Steve W; Goff, H Douglas; Chen, Jie; Guo, Qingbin; Wang, Qi

    2016-10-20

    The structure of the ethanol-precipitated fraction of 1 M KOH-extracted flaxseed kernel polysaccharides (KPI-EPF) was studied for a better understanding of the molecular structures of flaxseed kernel cell wall polysaccharides. Based on methylation/GC-MS, NMR spectroscopy, and MALDI-TOF-MS analysis, the dominant sugar residues of the KPI-EPF fraction comprised (1,4,6)-linked-β-d-glucopyranose (24.1 mol%), terminal α-d-xylopyranose (16.2 mol%), (1,2)-α-d-linked-xylopyranose (10.7 mol%), (1,4)-β-d-linked-glucopyranose (10.7 mol%), and terminal β-d-galactopyranose (8.5 mol%). KPI-EPF was proposed to consist of xyloglucans: the substitution rate of the backbone is 69.3%; R1 could be T-α-d-Xylp-(1→, or none; R2 could be T-α-d-Xylp-(1→, T-β-d-Galp-(1→2)-α-d-Xylp-(1→, or T-α-l-Araf-(1→2)-α-d-Xylp-(1→; R3 could be T-α-d-Xylp-(1→, T-β-d-Galp-(1→2)-α-d-Xylp-(1→, T-α-l-Fucp-(1→2)-β-d-Galp-(1→2)-α-d-Xylp-(1→, or none. The Mw of KPI-EPF was calculated to be 1506 kDa by static light scattering (SLS). The structure-sensitive parameter (ρ) of KPI-EPF was calculated as 1.44, which confirmed the highly branched structure of the extracted xyloglucans. These new findings on flaxseed kernel xyloglucans will be helpful for understanding its fermentation properties and potential applications. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  9. Effects of velocity-changing collisions on two-photon and stepwise-absorption spectroscopic line shapes

    NASA Astrophysics Data System (ADS)

    Liao, P. F.; Bjorkholm, J. E.; Berman, P. R.

    1980-06-01

    We report the results of an experimental study of the effects of velocity-changing collisions on two-photon and stepwise-absorption line shapes. Excitation spectra for the 3S1/2 → 3P1/2 → 4D3/2 transitions of sodium atoms undergoing collisions with foreign-gas perturbers are obtained. These spectra are obtained with two cw dye lasers. One laser, the pump laser, is tuned 1.6 GHz below the 3S1/2 → 3P1/2 transition frequency and excites a nonthermal longitudinal velocity distribution of excited 3P1/2 atoms in the vapor. Absorption of the second (probe) laser is used to monitor the steady-state excited-state distribution which results from collisions with rare-gas atoms. The spectra are obtained for various pressures of He, Ne, and Kr gases and are fit to a theoretical model which utilizes either the phenomenological Keilson-Störer or the classical hard-sphere collision kernel. The theoretical model includes the effects of collisionally aided excitation of the 3P1/2 state as well as effects due to fine-structure state-changing collisions. Although both kernels are found to predict line shapes which are in reasonable agreement with the experimental results, the hard-sphere kernel is found superior as it gives a better description of the effects of large-angle scattering for heavy perturbers. Neither kernel provides a fully adequate description over the entire line profile. The experimental data are used to extract effective hard-sphere collision cross sections for collisions between sodium 3P1/2 atoms and helium, neon, and krypton perturbers.

  10. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks

    PubMed Central

    Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong

    2016-01-01

    In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparsity of the model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of model sparsity and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298
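
    The ℓ1-regularized kernel machine at the heart of this record can be illustrated with a centralized stand-in: proximal-gradient (ISTA) minimization of a kernel least-squares objective with an ℓ1 penalty, whose soft-thresholding step is what produces the sparse model that the distributed algorithm exchanges between neighbors. This sketch is not the paper's distributed KMSE algorithm; all names and parameters are assumptions.

    ```python
    import numpy as np

    def ista_kernel_l1(K, y, lam=0.1, n_iter=500):
        """ISTA sketch for min_a 0.5*||y - K a||^2 + lam*||a||_1,
        where K is a kernel (Gram) matrix. The soft-threshold step
        zeroes small coefficients, yielding a sparse model a."""
        a = np.zeros(K.shape[1])
        L = np.linalg.norm(K, 2) ** 2        # Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = K.T @ (K @ a - y)
            z = a - grad / L                 # gradient step
            a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox of lam*||.||_1
        return a
    ```

    In the distributed setting only the nonzero coefficients of `a` need to be communicated, which is why model sparsity translates directly into communication savings.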

  11. Fine Structure of the Low-Frequency Raman Phonon Bands of Single-Wall Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Iliev, M. N.; Litvinchuk, A. P.; Arepalli, S.; Nikolaev, P.; Scott, C. D.

    1999-01-01

    The Raman spectra of single-wall carbon nanotubes (SWNT) produced by the laser and arc processes were studied between 5 and 500 K. The line width vs. temperature dependence of the low-frequency Raman bands between 150 and 200 cm⁻¹ deviates from that expected for phonon decay through the phonon-phonon scattering mechanism. The experimental results and their analysis provided convincing evidence that each of the low-frequency Raman lines is a superposition of several narrower Raman lines corresponding to tubes of nearly the same diameter. The application of Raman spectroscopy to probe the distribution of SWNT by both diameter and chirality is discussed.

  12. Coherent Backscattering by Polydisperse Discrete Random Media: Exact T-Matrix Results

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Mackowski, Daniel W.

    2011-01-01

    The numerically exact superposition T-matrix method is used to compute, for the first time to our knowledge, electromagnetic scattering by finite spherical volumes composed of polydisperse mixtures of spherical particles with different size parameters or different refractive indices. The backscattering patterns calculated in the far-field zone of the polydisperse multiparticle volumes reveal unequivocally the classical manifestations of the effect of weak localization of electromagnetic waves in discrete random media, thereby corroborating the universal interference nature of coherent backscattering. The polarization opposition effect is shown to be the least robust manifestation of weak localization fading away with increasing particle size parameter.

  13. Morphology-Dependent Resonances of Spherical Droplets with Numerous Microscopic Inclusions

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Liu, Li; Mackowski, Daniel W.

    2014-01-01

    We use the recently extended superposition T-matrix method to study the behavior of a sharp Lorenz-Mie resonance upon filling a spherical micrometer-sized droplet with tens and hundreds of randomly positioned microscopic inclusions. We show that as the number of inclusions increases, the extinction cross-section peak and the sharp asymmetry-parameter minimum become suppressed, widen, and move toward smaller droplet size parameters, while ratios of diagonal elements of the scattering matrix exhibit sharp angular features indicative of a distinctly nonspherical particle. Our results highlight the limitedness of the concept of an effective refractive index of an inhomogeneous spherical particle.

  14. Coherent response of a semiconductor microcavity in the strong coupling regime

    NASA Astrophysics Data System (ADS)

    Cassabois, G.; Triques, A. L. C.; Ferreira, R.; Delalande, C.; Roussignol, Ph; Bogani, F.

    2000-05-01

    We have studied the coherent dynamics of a semiconductor microcavity by means of interferometric correlation measurements with subpicosecond time resolution in a backscattering geometry. Evidence is brought of the resolution of a homogeneous polariton line in an inhomogeneously broadened exciton system. Surprisingly, photon-like polaritons exhibit an inhomogeneous dephasing. Moreover, we observe an unexpected stationary coherence up to 8 ps for the lower polariton branch close to resonance. All these experimental results are well reproduced within the framework of a linear dispersion theory assuming a coherent superposition of the reflectivity and resonant Rayleigh scattering signals with a well-defined relative phase.

  15. Reducing disk storage of full-3D seismic waveform tomography (F3DT) through lossy online compression

    NASA Astrophysics Data System (ADS)

    Lindstrom, Peter; Chen, Po; Lee, En-Jui

    2016-08-01

    Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback in the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error, and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and compute the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may potentially open up the possibilities of the wide adoption of F3DT-SI in routine seismic tomography practices in the near future.
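
    The compress-on-write / decompress-on-read pattern described above can be sketched with a simple error-bounded scheme: uniform quantization followed by entropy coding. This is only a stand-in for the dedicated lossy floating-point compressor used in the paper; the quantization rule and tolerance handling here are assumptions for illustration.

    ```python
    import zlib
    import numpy as np

    def compress_field(field, tol):
        """Lossy compression sketch: round each value to the nearest
        multiple of 2*tol (so pointwise error <= tol), then deflate
        the integer codes with zlib."""
        q = np.round(field / (2 * tol)).astype(np.int32)
        return zlib.compress(q.tobytes()), q.shape

    def decompress_field(blob, shape, tol):
        """Inverse of compress_field: inflate and rescale the codes."""
        q = np.frombuffer(zlib.decompress(blob), dtype=np.int32).reshape(shape)
        return q * (2 * tol)
    ```

    The user-specified tolerance bounds the reconstruction error of the strain fields, exactly the property the kernel calculation relies on when it reads the compressed wavefields back from disk.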

  16. The Modularized Software Package ASKI - Full Waveform Inversion Based on Waveform Sensitivity Kernels Utilizing External Seismic Wave Propagation Codes

    NASA Astrophysics Data System (ADS)

    Schumacher, F.; Friederich, W.

    2015-12-01

    We present the modularized software package ASKI, a flexible and extendable toolbox for seismic full waveform inversion (FWI) as well as sensitivity or resolution analysis operating on the sensitivity matrix. It utilizes established wave propagation codes for solving the forward problem and offers an alternative to the monolithic, inflexible and hard-to-modify codes that have typically been written for solving inverse problems. It is available under the GPL at www.rub.de/aski. The Gauss-Newton FWI method for 3D-heterogeneous elastic earth models is based on waveform sensitivity kernels and can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. The kernels are derived in the frequency domain from Born scattering theory as the Fréchet derivatives of linearized full waveform data functionals, quantifying the influence of elastic earth model parameters on particular waveform data values. As an important innovation, we keep two independent spatial descriptions of the earth model: one for solving the forward problem and one representing the inverted model updates. Thereby we account for the independent needs of spatial model resolution of the forward and inverse problems, respectively. Due to pre-integration of the kernels over the (in general much coarser) inversion grid, storage requirements for the sensitivity kernels are dramatically reduced. ASKI can be flexibly extended to other forward codes by providing it with specific interface routines that contain knowledge about forward-code-specific file formats and auxiliary information provided by the new forward code. In order to sustain flexibility, the ASKI tools communicate via file output/input, so large storage capacities need to be accessible in a convenient way. 
Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion.

  17. Reducing Disk Storage of Full-3D Seismic Waveform Tomography (F3DT) Through Lossy Online Compression

    DOE PAGES

    Lindstrom, Peter; Chen, Po; Lee, En-Jui

    2016-05-05

    Full-3D seismic waveform tomography (F3DT) is the latest seismic tomography technique that can assimilate broadband, multi-component seismic waveform observations into high-resolution 3D subsurface seismic structure models. The main drawback in the current F3DT implementation, in particular the scattering-integral implementation (F3DT-SI), is the high disk storage cost and the associated I/O overhead of archiving the 4D space-time wavefields of the receiver- or source-side strain tensors. The strain tensor fields are needed for computing the data sensitivity kernels, which are used for constructing the Jacobian matrix in the Gauss-Newton optimization algorithm. In this study, we have successfully integrated a lossy compression algorithm into our F3DT-SI workflow to significantly reduce the disk space for storing the strain tensor fields. The compressor supports a user-specified tolerance for bounding the error, and can be integrated into our finite-difference wave-propagation simulation code used for computing the strain fields. The decompressor can be integrated into the kernel calculation code that reads the strain fields from the disk and compute the data sensitivity kernels. During the wave-propagation simulations, we compress the strain fields before writing them to the disk. To compute the data sensitivity kernels, we read the compressed strain fields from the disk and decompress them before using them in kernel calculations. Experiments using a realistic dataset in our California statewide F3DT project have shown that we can reduce the strain-field disk storage by at least an order of magnitude with acceptable loss, and also improve the overall I/O performance of the entire F3DT-SI workflow significantly. The integration of the lossy online compressor may potentially open up the possibilities of the wide adoption of F3DT-SI in routine seismic tomography practices in the near future.

  18. An analytical dose-averaged LET calculation algorithm considering the off-axis LET enhancement by secondary protons for spot-scanning proton therapy.

    PubMed

    Hirayama, Shusuke; Matsuura, Taeko; Ueda, Hideaki; Fujii, Yusuke; Fujii, Takaaki; Takao, Seishin; Miyamoto, Naoki; Shimizu, Shinichi; Fujimoto, Rintaro; Umegaki, Kikuo; Shirato, Hiroki

    2018-05-22

    To evaluate the biological effects of proton beams as part of the daily clinical routine, fast and accurate calculation of dose-averaged linear energy transfer (LETd) is required. In this study, we have developed an analytical LETd calculation method based on the pencil-beam algorithm (PBA) considering the off-axis enhancement by secondary protons. This algorithm (PBA-dLET) was then validated using Monte Carlo simulation (MCS) results. In PBA-dLET, LET values were assigned separately for each individual dose kernel based on the PBA. For the dose kernel, we employed a triple-Gaussian model which consists of the primary component (protons that undergo multiple Coulomb scattering) and the halo component (protons that undergo inelastic, nonelastic and elastic nuclear reactions); the primary and halo components were represented by a single Gaussian and the sum of two Gaussian distributions, respectively. Although previous analytical approaches assumed a constant LETd value for the lateral distribution of a pencil beam, the actual LETd increases away from the beam axis, because there are more scattered and therefore lower-energy protons with higher stopping powers. To reflect this LETd behavior, we have assumed that the LETs of the primary and halo components can take different values (LETp and LEThalo), which vary only along the depth direction. The values of the dual-LET kernels were determined such that the PBA-dLET reproduced the MCS-generated LETd distribution in both small and large fields. These values were generated at intervals of 1 mm in depth for 96 energies from 70.2 to 220 MeV and collected in a look-up table. Finally, we compared the LETd distributions and mean LETd (LETd,mean) values of targets and organs at risk between PBA-dLET and MCS. Both homogeneous phantom and patient geometries (prostate, liver, and lung cases) were used to validate the present method. 
In the homogeneous phantom, the LETd profiles obtained by the dual-LET kernels agree well with the MCS results except for the low-dose region in the lateral penumbra, where the actual dose was below 10% of the maximum dose. In the patient geometry, the LETd profiles calculated with the developed method reproduce the MCS with similar accuracy as in the homogeneous phantom. The maximum differences in LETd,mean for each structure between the PBA-dLET and the MCS were 0.06 keV/μm in homogeneous phantoms and 0.08 keV/μm in patient geometries under all tested conditions. We confirmed that the dual-LET-kernel model well reproduced the MCS, not only in the homogeneous phantom but also in complex patient geometries. The accuracy of the LETd was largely improved over the single-LET-kernel model, especially at the lateral penumbra. The model is expected to be useful, especially for proper recognition of the risk of side effects when the target is next to critical organs. © 2018 American Association of Physicists in Medicine.
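
    The dual-LET-kernel idea above amounts to a dose-weighted mean of two per-component LET values across the lateral profile. A minimal sketch, with a single-Gaussian primary and a two-Gaussian halo; all widths, weights, and LET values below are illustrative assumptions, not the paper's fitted look-up-table entries:

    ```python
    import numpy as np

    def let_d_profile(x, sigma_p, sigma_h1, sigma_h2, w1, w2, let_p, let_halo):
        """Lateral dose-averaged LET at one depth: primary dose is a
        single Gaussian, halo dose the sum of two Gaussians, and LETd
        is the dose-weighted mean of the two component LET values."""
        g = lambda s: np.exp(-x**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)
        d_p = g(sigma_p)                       # primary dose component
        d_h = w1 * g(sigma_h1) + w2 * g(sigma_h2)  # halo dose component
        return (d_p * let_p + d_h * let_halo) / (d_p + d_h)
    ```

    On the beam axis the narrow primary Gaussian dominates and LETd is close to LETp; far off-axis only the wide halo contributes, so LETd rises toward LEThalo, reproducing the off-axis enhancement the record describes.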

  19. Imaging Through Random Discrete-Scatterer Dispersive Media

    DTIC Science & Technology

    2015-08-27

    …to that of a conventional, continuous, linear-frequency-modulated chirped signal [3]. Chirped train signals are a particular realization of a class of … continuous chirp signals, characterized by linear frequency modulation [3]; we assume the time instances tn to be given by tn = τg(1 − βg n/(2Ng)) … kernel Dn(z) [9] by sincN(z) = (N + 1)⁻¹ DN/2(2πz/N). We use the elementary identity …

  20. Java Radar Analysis Tool

    NASA Technical Reports Server (NTRS)

    Zaczek, Mariusz P.

    2005-01-01

    Java Radar Analysis Tool (JRAT) is a computer program for analyzing two-dimensional (2D) scatter plots derived from radar returns showing pieces of the disintegrating Space Shuttle Columbia. JRAT can also be applied to similar plots representing radar returns showing aviation accidents, and to scatter plots in general. The 2D scatter plots include overhead map views and side altitude views. The superposition of points in these views makes searching difficult. JRAT enables three-dimensional (3D) viewing: by use of a mouse and keyboard, the user can rotate to any desired viewing angle. The 3D view can include overlaid trajectories and search footprints to enhance situational awareness in searching for pieces. JRAT also enables playback: time-tagged radar-return data can be displayed in time order and an animated 3D model can be moved through the scene to show the locations of the Columbia (or other vehicle) at the times of the corresponding radar events. The combination of overlays and playback enables the user to correlate a radar return with a position of the vehicle to determine whether the return is valid. JRAT can optionally filter single radar returns, enabling the user to selectively hide or highlight a desired radar return.

  1. Refinement of the crystal structure of the high-temperature phase G0 in (NH4)2WO2F4 (powder, x-ray, and neutron scattering)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novak, D. M.; Smirnov, Lev S; Kolesnikov, Alexander I

    2013-01-01

    The (NH4)2WO2F4 compound undergoes a series of phase transitions: G0 -> 201 K -> G1 -> 160 K -> G2, with a significant entropy change (ΔS1 ~ R ln 10 at the G0 -> G1 transition), which indicates substantial orientational disordering in the G0 phase and an order-disorder type of phase transition. X-ray diffraction is used to identify the crystal structure of the G0 phase as orthorhombic (sp. gr. Cmcm, Z = 4), determine the lattice parameters and the positions of all atoms (except hydrogen), and show that [WO2F4]2- ions can form a superposition of dynamic and static orientational disorders in the anionic sublattice. A determination of the orientational position of [NH4]+ ions calls for the combined method of elastic and inelastic neutron scattering. Inelastic neutron scattering is used to determine the state of hindered rotation of ammonium ions in the G0 phase. Powder neutron diffraction shows that the orientational disorder of NH4 ions can adequately be described within the free-rotation approximation.

  2. Surface-enhanced Raman scattering (SERS) of riboflavin on nanostructured Ag surfaces: The role of excitation wavelength, plasmon resonance and molecular resonance

    NASA Astrophysics Data System (ADS)

    Šubr, Martin; Kuzminova, Anna; Kylián, Ondřej; Procházka, Marek

    2018-05-01

    Optimization of surface-enhanced Raman scattering (SERS)-based sensors for (bio)analytical applications has received much attention in recent years. For optimum sensitivity, both the nanostructure fabrication process and the choice of the excitation wavelength with respect to the specific analyte studied are of crucial importance. In this contribution, detailed SERS intensity profiles were measured using gradient nanostructures with the localized surface-plasmon resonance (LSPR) condition varying across the sample length and using riboflavin as the model biomolecule. Three different excitation wavelengths (633 nm, 515 nm and 488 nm), corresponding to non-resonance, pre-resonance and resonance excitation with respect to the studied molecule, respectively, were tested. Results were interpreted in terms of a superposition of the enhancement provided by the electromagnetic mechanism and the intrinsic properties of the SERS probe molecule. The first effect was dictated mainly by the degree of spectral overlap between the LSPR band and the excitation wavelength, along with the scattering cross-section of the nanostructures, while the latter was influenced by the position of the molecular resonance with respect to the excitation wavelength. Our experimental findings contribute to a better understanding of the SERS enhancement mechanism.

  3. Three-body spectrum in a finite volume: The role of cubic symmetry

    DOE PAGES

    Doring, M.; Hammer, H. -W.; Mai, M.; ...

    2018-06-15

    The three-particle quantization condition is partially diagonalized in the center-of-mass frame by using cubic symmetry on the lattice. To this end, instead of spherical harmonics, the kernel of the Bethe-Salpeter equation for particle-dimer scattering is expanded in the basis functions of different irreducible representations of the octahedral group. Such a projection is of particular importance for the three-body problem in the finite volume due to the occurrence of three-body singularities above breakup. Additionally, we study the numerical solution and properties of such a projected quantization condition in a simple model. It is shown that, for large volumes, these solutions allow for an instructive interpretation of the energy eigenvalues in terms of bound and scattering states.

  5. Photon-efficient super-resolution laser radar

    NASA Astrophysics Data System (ADS)

    Shin, Dongeek; Shapiro, Jeffrey H.; Goyal, Vivek K.

    2017-08-01

    The resolution achieved in photon-efficient active optical range imaging systems can be low due to non-idealities such as propagation through a diffuse scattering medium. We propose a constrained optimization-based framework to address extremes in scarcity of photons and blurring by a forward imaging kernel. We provide two algorithms for the resulting inverse problem: a greedy algorithm inspired by sparse pursuit algorithms, and a convex optimization heuristic that incorporates image total-variation regularization. We demonstrate that our framework outperforms existing deconvolution imaging techniques in terms of peak signal-to-noise ratio. Since our proposed method is able to super-resolve depth features using small numbers of photon counts, it can be useful for observing fine-scale phenomena in remote sensing through a scattering medium and in through-the-skin biomedical imaging applications.

  6. Response Matrix Monte Carlo for electron transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballinger, C.T.; Nielsen, D.E. Jr.; Rathkopf, J.A.

    1990-11-01

    A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low-energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. The combined effect of many collisions is modeled, like condensed history, except it is precalculated via an analog Monte Carlo simulation. This avoids the scattering kernel assumptions associated with condensed history methods. Results show good agreement between the RMMC method and analog Monte Carlo. 11 refs., 7 figs., 1 tab.

  7. Some Theoretical Studies and Applications of Light Scattering by Small Particles

    NASA Astrophysics Data System (ADS)

    Zhan, Jiyu

    1992-01-01

    A theoretical study of the interference structure of the Mie extinction cross section Q_ext is presented. For real refractive indices m < 2.5 the dominant term of Q_ext has an x dependence of the form sin^2((m - 1)x), leading to the periodicity Δx = π/(m - 1). For m > 2.5 the Q_ext curve does not have a simple periodic structure. Analytical expressions for the absorption and scattering coefficients of polydispersions of hexagonal plates, which are used to model fluffy snowflakes, are derived in the anomalous diffraction approximation (ADA). The results are within 12% accuracy when compared to calculations by the superposition-of-dipoles method. A method for measuring the real part of the refractive index of phytoplankton, bacteria, or other particulate material suspended in seawater is developed based on the ADA. The accuracy in determining the real part of the refractive index is around 0.005.
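
    A quick numerical illustration of the periodicity claimed above: in the anomalous diffraction approximation (van de Hulst), the extinction efficiency of a sphere is Q_ext(ρ) = 2 − (4/ρ) sin ρ + (4/ρ²)(1 − cos ρ) with ρ = 2x(m − 1), so successive maxima are spaced by Δx ≈ π/(m − 1). This sketch is not from the thesis; the function names and scan range are illustrative.

```python
import math

def q_ext_ada(x, m):
    """Extinction efficiency of a sphere in the anomalous diffraction
    approximation (van de Hulst); x: size parameter, m: real index."""
    rho = 2.0 * x * (m - 1.0)
    return 2.0 - (4.0 / rho) * math.sin(rho) + (4.0 / rho ** 2) * (1.0 - math.cos(rho))

def maxima_spacing(m, x_lo=5.0, x_hi=60.0, dx=1e-3):
    """Average spacing between successive local maxima of Q_ext(x)."""
    n = int((x_hi - x_lo) / dx)
    xs = [x_lo + i * dx for i in range(n)]
    qs = [q_ext_ada(x, m) for x in xs]
    peaks = [xs[i] for i in range(1, n - 1) if qs[i - 1] < qs[i] > qs[i + 1]]
    gaps = [b - a for a, b in zip(peaks, peaks[1:])]
    return sum(gaps) / len(gaps)

m = 1.33  # water-like real refractive index
print(maxima_spacing(m), math.pi / (m - 1.0))  # both ~9.5: Delta-x = pi/(m-1)
```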

  8. Topological spin-hedgehog crystals of a chiral magnet as engineered with magnetic anisotropy

    NASA Astrophysics Data System (ADS)

    Kanazawa, N.; White, J. S.; Rønnow, H. M.; Dewhurst, C. D.; Morikawa, D.; Shibata, K.; Arima, T.; Kagawa, F.; Tsukazaki, A.; Kozuka, Y.; Ichikawa, M.; Kawasaki, M.; Tokura, Y.

    2017-12-01

    We report the engineering of spin-hedgehog crystals in thin films of the chiral magnet MnGe by tailoring the magnetic anisotropy. As evidenced by neutron scattering on films with different thicknesses and by varying a magnetic field, we can realize continuously deformable spin-hedgehog crystals, each of which is described as a superposition state of a different set of three spin spirals (a triple-q state). The directions of the three propagation vectors q vary systematically, gathering from the three orthogonal 〈100 〉 directions towards the film normal as the strength of the uniaxial magnetic anisotropy and/or the magnetic field applied along the film normal increases. The formation of triple-q states coincides with the onset of topological Hall signals, which are ascribed to skew scattering by an emergent magnetic field originating in the nontrivial topology of spin hedgehogs. These findings highlight how nanoengineering of chiral magnets makes possible the rational design of unique topological spin textures.

  9. An IBEM solution to the scattering of plane SH-waves by a lined tunnel in elastic wedge space

    NASA Astrophysics Data System (ADS)

    Liu, Zhongxian; Liu, Lei

    2015-02-01

    The indirect boundary element method (IBEM) is developed to solve the scattering of plane SH-waves by a lined tunnel in elastic wedge space. According to the theory of single-layer potential, the scattered-wave field can be constructed by applying virtual uniform loads on the surface of the lined tunnel and the nearby wedge surface. The densities of the virtual loads can be solved by establishing equations through the continuity conditions on the interface and zero-traction conditions on the free surfaces. The total wave field is obtained by superposition of the free field and the scattered-wave field in elastic wedge space. Numerical results indicate that the IBEM can solve the diffraction of elastic waves in elastic wedge space accurately and efficiently. The wave motion strongly depends on the wedge angle, the angle of incidence, the incident frequency, the location of the lined tunnel, and the material parameters. The wave interference and amplification effects around the tunnel are more significant in wedge space than in half-space, raising the dynamic stress concentration factor on a rigid tunnel and the displacement amplitude of a flexible tunnel up to 50.0 and 17.0, respectively, more than double the corresponding half-space values. Hence, considerable attention should be paid to the seismic-resistant or anti-explosion design of tunnels built on a slope or hillside.

  10. Efficient Strategies for Estimating the Spatial Coherence of Backscatter

    PubMed Central

    Hyun, Dongwoon; Crowley, Anna Lisa C.; Dahl, Jeremy J.

    2017-01-01

    The spatial coherence of ultrasound backscatter has been proposed to reduce clutter in medical imaging, to measure the anisotropy of the scattering source, and to improve the detection of blood flow. These techniques rely on correlation estimates that are obtained using computationally expensive strategies. In this study, we assess existing spatial coherence estimation methods and propose three computationally efficient modifications: a reduced kernel, a downsampled receive aperture, and the use of an ensemble correlation coefficient. The proposed methods are implemented in simulation and in vivo studies. Reducing the kernel to a single sample improved computational throughput and improved axial resolution. Downsampling the receive aperture was found to have negligible effect on estimator variance, and improved computational throughput by an order of magnitude for a downsample factor of 4. The ensemble correlation estimator demonstrated lower variance than the currently used average correlation. Combining the three methods, the throughput was improved 105-fold in simulation with a downsample factor of 4 and 20-fold in vivo with a downsample factor of 2. PMID:27913342
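
    The ensemble correlation estimator mentioned above pools the correlation numerator and denominator across all channel pairs before dividing, rather than averaging per-pair Pearson coefficients; pooling is what lowers the estimator variance. A minimal sketch of the two estimators for lag-one spatial coherence (the simulated channel data and noise level are illustrative, not from the paper):

```python
import math
import random

def pearson(a, b):
    """Per-pair Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

def avg_correlation(channels):
    """Average of per-pair coefficients over adjacent (lag-one) channel pairs."""
    rs = [pearson(a, b) for a, b in zip(channels, channels[1:])]
    return sum(rs) / len(rs)

def ensemble_correlation(channels):
    """Ensemble estimator: pool numerator and denominator sums over all
    lag-one pairs, then divide once (lower variance than averaging r's)."""
    num = den_a = den_b = 0.0
    for a, b in zip(channels, channels[1:]):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        num += sum((x - ma) * (y - mb) for x, y in zip(a, b))
        den_a += sum((x - ma) ** 2 for x in a)
        den_b += sum((y - mb) ** 2 for y in b)
    return num / math.sqrt(den_a * den_b)

random.seed(0)
base = [random.gauss(0.0, 1.0) for _ in range(64)]   # common (coherent) signal
chans = [[s + 0.3 * random.gauss(0.0, 1.0) for s in base] for _ in range(16)]
print(avg_correlation(chans), ensemble_correlation(chans))
```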

  11. Ford Motor Company NDE facility shielding design.

    PubMed

    Metzger, Robert L; Van Riper, Kenneth A; Jones, Martin H

    2005-01-01

    Ford Motor Company proposed the construction of a large non-destructive evaluation laboratory for radiography of automotive power train components. The authors were commissioned to design the shielding and to survey the completed facility for compliance with radiation dose limits for occupationally and non-occupationally exposed personnel. The two X-ray sources are Varian Linatron 3000 accelerators operating at 9-11 MV. One performs computed tomography of automotive transmissions, while the other does real-time radiography of operating engines and transmissions. The shield thicknesses for the primary barrier and all secondary barriers were determined by point-kernel techniques. Point-kernel techniques did not work well for skyshine calculations and locations where multiple sources (e.g. tube head leakage and various scatter fields) impacted doses. Shielding for these areas was determined using transport calculations. A number of MCNP [Briesmeister, J. F., MCNP - A General Monte Carlo N-Particle Transport Code, Version 4B. Los Alamos National Laboratory manual (1997)] calculations focused on skyshine estimates and the office areas. Measurements on the operational facility confirmed the shielding calculations.
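
    The point-kernel technique referred to above models the dose behind a shield as the unshielded inverse-square dose, attenuated exponentially along the ray and multiplied by a buildup factor that accounts for in-shield scatter. A minimal sketch with made-up numbers and a made-up linear buildup model (real designs use tabulated buildup factors, e.g. geometric-progression fits):

```python
import math

def point_kernel_dose(source, mu, thickness, distance, buildup=None):
    """Point-kernel dose estimate:  D = S * B(mu*t) * exp(-mu*t) / (4*pi*r^2).
    source: emission rate times flux-to-dose factor (arbitrary units)
    mu: linear attenuation coefficient of the shield (1/cm)
    thickness: shield thickness along the ray (cm); distance: r (cm)."""
    mfp = mu * thickness                    # shield thickness in mean free paths
    b = buildup(mfp) if buildup else 1.0    # scatter buildup (B = 1: uncollided)
    return source * b * math.exp(-mfp) / (4.0 * math.pi * distance ** 2)

# Illustrative linear buildup model; the 0.8 coefficient is invented.
linear_buildup = lambda mfp: 1.0 + 0.8 * mfp

d_bare = point_kernel_dose(1e9, 0.5, 0.0, 300.0)                       # no shield
d_shielded = point_kernel_dose(1e9, 0.5, 40.0, 300.0, linear_buildup)  # 40 cm wall
print(d_bare, d_shielded, d_bare / d_shielded)
```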

  12. A spectral boundary integral equation method for the 2-D Helmholtz equation

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.

    1994-01-01

    In this paper, we present a new numerical formulation of solving the boundary integral equations reformulated from the Helmholtz equation. The boundaries of the problems are assumed to be smooth closed contours. The solution on the boundary is treated as a periodic function, which is in turn approximated by a truncated Fourier series. A Fourier collocation method is followed in which the boundary integral equation is transformed into a system of algebraic equations. It is shown that in order to achieve spectral accuracy for the numerical formulation, the nonsmoothness of the integral kernels, associated with the Helmholtz equation, must be carefully removed. The emphasis of the paper is on investigating the essential elements of removing the nonsmoothness of the integral kernels in the spectral implementation. The present method is robust for a general boundary contour. Aspects of efficient implementation of the method using FFT are also discussed. A numerical example of wave scattering is given in which the exponential accuracy of the present numerical method is demonstrated.
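
    The spectral accuracy discussed above rests on a classical fact: once the kernel nonsmoothness is removed, the quadrature underlying Fourier collocation on a closed contour is the composite trapezoidal rule, which converges faster than any power of 1/N for smooth periodic integrands. A small demonstration on a model integrand (not one of the paper's kernels):

```python
import math

def trapezoid_periodic(f, n):
    """Composite trapezoidal rule on [0, 2*pi) for a 2*pi-periodic f:
    n equally weighted nodes -- spectrally accurate for smooth periodic f."""
    h = 2.0 * math.pi / n
    return h * sum(f(i * h) for i in range(n))

# Smooth periodic test integrand with known integral:
#   integral over one period of exp(cos t) dt = 2*pi*I_0(1)
f = lambda t: math.exp(math.cos(t))
exact = 7.954926521012845  # 2*pi*I_0(1)
errors = [abs(trapezoid_periodic(f, n) - exact) for n in (4, 8, 16)]
print(errors)  # collapses far faster than the O(1/n^2) of nonperiodic trapezoid
```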

  13. Consistent P1 Analysis of Aqueous Uranium-235 Critical Assemblies

    NASA Technical Reports Server (NTRS)

    Fieno, Daniel

    1961-01-01

    The lethargy-dependent equations of the consistent P1 approximation to the Boltzmann transport equation for slowing down neutrons have been used as the basis of an IBM 704 computer program. Some of the effects included are (1) linearly anisotropic center-of-mass elastic scattering, (2) heavy-element inelastic scattering based on the evaporation model of the nucleus, and (3) optional variation of the buckling with lethargy. The microscopic cross-section data developed for this program covered 473 lethargy points from lethargy u = 0 (10 MeV) to u = 19.8 (0.025 eV). The value of the fission neutron age in water calculated here is 26.5 square centimeters; this value is to be compared with the recent experimental value given as 27.86 square centimeters. The Fourier transform of the slowing-down kernel for water to indium resonance energy calculated here compared well with the Fourier transform of the kernel for water as measured by Hill, Roberts, and Fitch. This method of calculation has been applied to uranyl fluoride - water solution critical assemblies. Theoretical results established for both unreflected and fully reflected critical assemblies have been compared with available experimental data. The theoretical buckling curve derived as a function of the hydrogen to uranium-235 atom concentration ratio for an energy-independent extrapolation distance was successful in predicting the critical heights of various unreflected cylindrical assemblies. The critical dimensions of fully water-reflected cylindrical assemblies were reasonably well predicted using the theoretical buckling curve and reflector savings for equivalent spherical assemblies.

  14. [Spectral scatter correction of coal samples based on quasi-linear local weighted method].

    PubMed

    Lei, Meng; Li, Ming; Ma, Xiao-Ping; Miao, Yan-Zi; Wang, Jian-Sheng

    2014-07-01

    The present paper puts forth a new spectral correction method based on quasi-linear expressions and a local weighted function. The first stage of the method is to search for 3 quasi-linear expressions (quadratic, cubic, and growth-curve) to replace the original linear expression of the MSC method. The local weighted function is then constructed by introducing one of 4 kernel functions (Gaussian, Epanechnikov, Biweight, and Triweight). After adding this function to the basic estimation equation, the dependency between the original and ideal spectra is described more accurately at each wavelength point. Furthermore, two analytical models were established, based respectively on PLS and a PCA-BP neural network, to estimate the accuracy of the corrected spectra. Finally, the optimal correction mode was determined from the analytical results for different combinations of quasi-linear expression and local weighted function. Spectra of the same coal sample have different noise ratios when the sample is prepared at different particle sizes. To validate the effectiveness of the method, the experiment analyzed the correction results of 3 spectral data sets with particle sizes of 0.2, 1 and 3 mm. The results show that the proposed method can eliminate the scattering influence and enhance the information in spectral peaks. The method significantly enhances the correlation between corrected spectra and coal qualities, and substantially improves the accuracy and stability of the analytical model.
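
    The baseline that the quasi-linear method generalizes is multiplicative scatter correction (MSC): each spectrum is regressed against a reference (mean) spectrum and corrected by the fitted offset and slope. A minimal sketch of plain MSC only, with synthetic spectra; the paper's quadratic/cubic/growth-curve fits and kernel weights are omitted:

```python
def msc(spectra):
    """Multiplicative scatter correction: regress each spectrum y against the
    mean (reference) spectrum r as y ~ a + b*r, then return (y - a) / b."""
    npts = len(spectra[0])
    ref = [sum(s[i] for s in spectra) / len(spectra) for i in range(npts)]
    mr = sum(ref) / npts
    corrected = []
    for y in spectra:
        my = sum(y) / npts
        b = (sum((r - mr) * (v - my) for r, v in zip(ref, y))
             / sum((r - mr) ** 2 for r in ref))
        a = my - b * mr
        corrected.append([(v - a) / b for v in y])
    return corrected

# Two synthetic "spectra": the same band shape distorted by an additive offset
# and a multiplicative scatter factor -- MSC should collapse them together.
base = [0.1 * i * (20 - i) for i in range(21)]
spectra = [list(base), [1.8 * v + 2.5 for v in base]]
out = msc(spectra)
print(max(abs(x - y) for x, y in zip(out[0], out[1])))  # ~0: scatter removed
```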

  15. Rapid detection of kernel rots and mycotoxins in maize by near-infrared reflectance spectroscopy.

    PubMed

    Berardo, Nicola; Pisacane, Vincenza; Battilani, Paola; Scandolara, Andrea; Pietri, Amedeo; Marocco, Adriano

    2005-10-19

    Near-infrared (NIR) spectroscopy is a practical spectroscopic procedure for the detection of organic compounds in matter. It is particularly useful because of its nondestructiveness, accuracy, rapid response, and easy operation. This work assesses the applicability of NIR for the rapid identification of mycotoxigenic fungi and their toxic metabolites produced in naturally and artificially contaminated products. Two hundred and eighty maize samples were collected both from naturally contaminated maize crops grown in 16 areas in north-central Italy and from ears artificially inoculated with Fusarium verticillioides. All samples were analyzed for fungal infection, ergosterol, and fumonisin B1 content. The results obtained indicated that NIR could accurately predict the incidence of kernels infected by fungi, and by F. verticillioides in particular, as well as the quantity of ergosterol and fumonisin B1 in the meal. The statistics of the calibration and of the cross-validation for mold infection and for ergosterol and fumonisin B1 contents were significant. The best predictive ability for the percentage of global fungal infection and of F. verticillioides infection was obtained using calibration models built on maize kernels (r2 = 0.75 and SECV = 7.43) and maize meals (r2 = 0.79 and SECV = 10.95), respectively. This predictive performance was confirmed by the scatter plot of measured F. verticillioides infection versus NIR-predicted values in maize kernel samples (r2 = 0.80). The NIR methodology can be applied for monitoring mold contamination in postharvest maize, in particular the presence of F. verticillioides and fumonisins, to distinguish contaminated lots from clean ones and to avoid cross-contamination with other material during storage, and may become a powerful tool for monitoring the safety of the food supply.
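
    The calibration statistics quoted above can be reproduced from measured-versus-predicted pairs; a minimal sketch with made-up numbers, assuming the simple RMSE convention for SECV (published conventions vary, e.g. n versus n - 1 in the denominator):

```python
import math

def r_squared(y, yhat):
    """Coefficient of determination between measured y and predicted yhat."""
    my = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - my) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

def secv(y, yhat):
    """Standard error of cross-validation; plain RMSE form used here
    (some NIR software divides by n - 1 or subtracts the bias first)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

# Made-up measured vs cross-validated-predicted infection percentages:
measured  = [5.0, 12.0, 20.0, 33.0, 41.0, 55.0, 60.0, 72.0]
predicted = [7.0, 10.0, 24.0, 30.0, 45.0, 52.0, 63.0, 70.0]
print(r_squared(measured, predicted), secv(measured, predicted))
```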

  16. Volume integral equation for electromagnetic scattering: Rigorous derivation and analysis for a set of multilayered particles with piecewise-smooth boundaries in a passive host medium

    NASA Astrophysics Data System (ADS)

    Yurkin, Maxim A.; Mishchenko, Michael I.

    2018-04-01

    We present a general derivation of the frequency-domain volume integral equation (VIE) for the electric field inside a nonmagnetic scattering object from the differential Maxwell equations, transmission boundary conditions, radiation condition at infinity, and locally-finite-energy condition. The derivation applies to an arbitrary spatially finite group of particles made of isotropic materials and embedded in a passive host medium, including those with edges, corners, and intersecting internal interfaces. This is a substantially more general type of scatterer than in all previous derivations. We explicitly treat the strong singularity of the integral kernel, but keep the entire discussion accessible to the applied scattering community. We also consider the known results on the existence and uniqueness of VIE solution and conjecture a general sufficient condition for that. Finally, we discuss an alternative way of deriving the VIE for an arbitrary object by means of a continuous transformation of the everywhere smooth refractive-index function into a discontinuous one. Overall, the paper examines and pushes forward the state-of-the-art understanding of various analytical aspects of the VIE.

  17. Resource Theory of Superposition

    NASA Astrophysics Data System (ADS)

    Theurer, T.; Killoran, N.; Egloff, D.; Plenio, M. B.

    2017-12-01

    The superposition principle lies at the heart of many nonclassical properties of quantum mechanics. Motivated by this, we introduce a rigorous resource theory framework for the quantification of superposition of a finite number of linearly independent states. This theory is a generalization of resource theories of coherence. We determine the general structure of operations which do not create superposition, find a fundamental connection to unambiguous state discrimination, and propose several quantitative superposition measures. Using this theory, we show that trace-decreasing operations can be completed for free which, when specialized to the theory of coherence, resolves an outstanding open question and is used to address the free probabilistic transformation between pure states. Finally, we prove that linearly independent superposition is a necessary and sufficient condition for the faithful creation of entanglement in discrete settings, establishing a strong structural connection between our theory of superposition and entanglement theory.

  18. Pollen flow in the wildservice tree, Sorbus torminalis (L.) Crantz. II. Pollen dispersal and heterogeneity in mating success inferred from parent-offspring analysis.

    PubMed

    Oddou-Muratorio, Sylvie; Klein, Etienne K; Austerlitz, Frédéric

    2005-12-01

    Knowing the extent of gene movements from parents to offspring is essential to understand the potential of a species to adapt rapidly to a changing environment, and to design appropriate conservation strategies. In this study, we develop a nonlinear statistical model to jointly estimate the pollen dispersal kernel and the heterogeneity in fecundity among phenotypically or environmentally defined groups of males. This model uses genotype data from a sample of fruiting plants, a sample of seeds harvested on each of these plants, and all males within a circumscribed area. We apply this model to a scattered, entomophilous woody species, Sorbus torminalis (L.) Crantz, within a natural population covering more than 470 ha. We estimate a high heterogeneity in male fecundity among ecological groups, due both to phenotype (size of the trees and flowering intensity) and to landscape factors (stand density within the neighbourhood). We also show that fat-tailed kernels are the most appropriate to describe the substantial long-distance pollen dispersal of this species. Finally, our results reveal that the spatial position of a male with respect to females affects its mating success as much as the ecological determinants of male fecundity do. Our study thus underscores the importance of accounting for the dispersal kernel when estimating heterogeneity in male fecundity, and vice versa.
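
    A fat-tailed dispersal kernel in this context is typically an exponential-power kernel p(r) ∝ exp(-(r/a)^b) with shape b < 1, whose tail carries far more long-distance pollen than the exponential case b = 1. A small sketch comparing tail mass for a 2-D kernel; the scale and distances are illustrative, not the paper's fitted values:

```python
import math

def frange(start, stop, step):
    """Simple float range generator."""
    r = start
    while r < stop:
        yield r
        r += step

def tail_mass(a, b, r0, r_max=20000.0, dr=0.25):
    """Fraction of dispersal distances beyond r0 for a 2-D exponential-power
    kernel: radial density proportional to r * exp(-(r/a)**b), where the
    factor r is the 2-D radial weight (rectangle-rule integration)."""
    w = lambda r: r * math.exp(-((r / a) ** b))
    total = sum(w(r) * dr for r in frange(dr, r_max, dr))
    tail = sum(w(r) * dr for r in frange(r0, r_max, dr))
    return tail / total

# Same scale a, different shapes: b = 1 (exponential) vs b = 0.5 (fat-tailed)
print(tail_mass(50.0, 1.0, 300.0), tail_mass(50.0, 0.5, 300.0))
```

With the same scale parameter, the b = 0.5 kernel places far more probability beyond 300 length units; in practice shape and scale are fit jointly to parent-offspring data, so this comparison only illustrates the qualitative tail behaviour.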

  19. Huygens-Fresnel picture for electron-molecule elastic scattering★

    NASA Astrophysics Data System (ADS)

    Baltenkov, Arkadiy S.; Msezane, Alfred Z.

    2017-11-01

    The elastic scattering cross sections for a slow electron by C2 and H2 molecules have been calculated within the framework of the non-overlapping atomic potential model. For the amplitudes of the multiple electron scattering by a target the wave function of the molecular continuum is represented as a combination of a plane wave and two spherical waves generated by the centers of atomic spheres. This wave function obeys the Huygens-Fresnel principle according to which the electron wave scattering by a system of two centers is accompanied by generation of two spherical waves; their interaction creates a diffraction pattern far from the target. Each of the Huygens waves, in turn, is a superposition of the partial spherical waves with different orbital angular momenta l and their projections m. The amplitudes of these partial waves are defined by the corresponding phases of electron elastic scattering by an isolated atomic potential. In numerical calculations the s- and p-phase shifts are taken into account. So the number of interfering electron waves is equal to eight: two of which are the s-type waves and the remaining six waves are of the p-type with different m values. The calculation of the scattering amplitudes in closed form (rather than in the form of S-matrix expansion) is reduced to solving a system of eight inhomogeneous algebraic equations. The differential and total cross sections of electron scattering by fixed-in-space molecules and randomly oriented ones have been calculated as well. We conclude by discussing the special features of the S-matrix method for the case of arbitrary non-spherical potentials. Contribution to the Topical Issue "Low energy positron and electron interactions", edited by James Sullivan, Ron White, Michael Bromley, Ilya Fabrikant, and David Cassidy.

  20. A Unified Treatment of the Acoustic and Elastic Scattered Waves from Fluid-Elastic Media

    NASA Astrophysics Data System (ADS)

    Denis, Max Fernand

    In this thesis, contributions are made to the numerical modeling of the scattered fields from fluid-filled poroelastic materials. Of particular interest are highly porous materials that demonstrate strong contrast to the saturating fluid. A Biot analysis of the porous medium serves as the starting point for the elastic-solid and pore-fluid governing equations of motion. The longitudinal scattered waves of the elastic-solid mode and the pore-fluid mode are modeled by the Kirchhoff-Helmholtz integral equation. The integral equation is evaluated using a series approximation describing successive perturbations of the material contrasts. To extend the series' validity into larger domains, rational-fraction extrapolation methods are employed. The local Padé approximant procedure is a technique that allows one to extrapolate from a scattered field of small contrast to larger contrast values using Padé approximants. To ensure the accuracy of the numerical model, comparisons are made with the exact solution for scattering from a fluid sphere. Mean absolute error analyses yield convergent and accurate results. In addition, the numerical model correctly predicts the Bragg peaks for a periodic lattice of fluid spheres. In the case of trabecular bones, the far-field scattered pressure attenuation is a superposition of the elastic-solid-mode and pore-fluid-mode generated waves from the surrounding fluid and poroelastic boundaries. The attenuation is linearly dependent on frequency between 0.2 and 0.6 MHz. The slope of the attenuation is nonlinear in porosity and does not reflect the mechanical properties of the trabecular bone. The attenuation shows the anisotropic effects of the trabecular structure. Thus, ultrasound can possibly be employed to non-invasively predict the principal structural orientation of trabecular bones.

  1. Use of speckle for determining the response characteristics of Doppler imaging radars

    NASA Technical Reports Server (NTRS)

    Tilley, D. G.

    1986-01-01

    An optical model is developed for Doppler imaging radars such as the SAR on Seasat and the Shuttle Imaging Radar (SIR-B) by analyzing the Doppler shift of individual speckles in the image. The signal received at the spacecraft is treated in terms of a Fresnel-Kirchhoff integration over all backscattered radiation within a Huygens aperture at the earth. Account is taken of the movement of the spacecraft along the orbital path between emission and reception. The individual points are described by integration of the point-source amplitude with a Green's function scattering kernel. Doppler data at each point furnish the coordinates for visual representations. A Rayleigh-Poisson model of the surface scattering characteristics is used with Monte Carlo methods to generate simulations of Doppler radar speckle that compare well with Seasat SAR and SIR-B data.

  2. A geometry-based approach to determining time-temperature superposition shifts in aging experiments

    DOE PAGES

    Maiti, Amitesh

    2015-12-21

    A powerful way to expand the time and frequency range of material properties is through a method called time-temperature superposition (TTS). Traditionally, TTS has been applied to the dynamical mechanical and flow properties of thermo-rheologically simple materials, where a well-defined master curve can be objectively and accurately obtained by appropriate shifts of curves at different temperatures. However, TTS analysis can also be useful in many other situations where there is scatter in the data and where the principle holds only approximately. In such cases, shifting curves can become a subjective exercise and can often lead to significant errors in the long-term prediction. This mandates the need for an objective method of determining TTS shifts. Here, we adopt a method based on minimizing the "arc length" of the master curve, which is designed to work in situations where there is overlapping data at successive temperatures. We examine the accuracy of the method as a function of increasing noise in the data, and explore the effectiveness of data smoothing prior to TTS shifting. Finally, we validate the method using existing experimental data on the creep strain of an aramid fiber and the powder coarsening of an energetic material.
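
    The arc-length criterion described above can be sketched on synthetic data: curves measured at two temperatures are horizontal shifts of one master curve in log-time, and the shift is recovered by scanning candidates and keeping the one whose merged curve is shortest. A minimal grid-search sketch (not the paper's implementation; the tanh master curve and the shift grid are stand-ins):

```python
import math

def arc_length(points):
    """Polyline arc length of (log-time, property) points, sorted by x."""
    pts = sorted(points)
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

def best_shift(ref, moving, shifts):
    """Horizontal (log-time) shift of 'moving' that minimizes the arc length
    of the merged curve, i.e. the shift giving the smoothest overlap."""
    return min(shifts, key=lambda s: arc_length(ref + [(x + s, y) for x, y in moving]))

# Synthetic master curve y = tanh(x), sampled in two overlapping windows;
# the second window's log-time origin is offset by a true shift of 1.5.
master = math.tanh
ref = [(x / 10.0, master(x / 10.0)) for x in range(-20, 11)]         # x in [-2, 1]
moving = [(x / 10.0 - 1.5, master(x / 10.0)) for x in range(0, 31)]  # shifted data
candidates = [s / 100.0 for s in range(0, 301)]                      # 0.00 .. 3.00
print(best_shift(ref, moving, candidates))  # recovers ~1.5
```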

  3. Macroscopicity of quantum superpositions on a one-parameter unitary path in Hilbert space

    NASA Astrophysics Data System (ADS)

    Volkoff, T. J.; Whaley, K. B.

    2014-12-01

    We analyze quantum states formed as superpositions of an initial pure product state and its image under local unitary evolution, using two measurement-based measures of superposition size: one based on the optimal quantum binary distinguishability of the branches of the superposition and another based on the ratio of the maximal quantum Fisher information of the superposition to that of its branches, i.e., the relative metrological usefulness of the superposition. A general formula for the effective sizes of these states according to the branch-distinguishability measure is obtained and applied to superposition states of N quantum harmonic oscillators composed of Gaussian branches. Considering optimal distinguishability of pure states on a time-evolution path leads naturally to a notion of distinguishability time that generalizes the well-known orthogonalization times of Mandelstam and Tamm and Margolus and Levitin. We further show that the distinguishability time provides a compact operational expression for the superposition size measure based on the relative quantum Fisher information. By restricting the maximization procedure in the definition of this measure to an appropriate algebra of observables, we show that the superposition size of, e.g., NOON states and hierarchical cat states, can scale linearly with the number of elementary particles comprising the superposition state, implying precision scaling inversely with the total number of photons when these states are employed as probes in quantum parameter estimation of a 1-local Hamiltonian in this algebra.

  4. Proposed entanglement of X-ray nuclear polaritons as a potential method for probing matter at the subatomic scale.

    PubMed

    Liao, Wen-Te; Pálffy, Adriana

    2014-02-07

    A setup for generating the special superposition of a simultaneously forward- and backward-propagating collective excitation in a nuclear sample is studied. We show that by actively manipulating the scattering channels of single x-ray quanta with the help of a normal incidence x-ray mirror, a nuclear polariton which propagates in two opposite directions can be generated. The two counterpropagating polariton branches are entangled by a single x-ray photon. The quantum nature of the nuclear excitation entanglement gives rise to a subangstrom-wavelength standing wave excitation pattern that can be used as a flexible tool to probe matter dynamically on the subatomic scale.

  5. Radiometry rocks

    NASA Astrophysics Data System (ADS)

    Harvey, James E.

    2012-10-01

    Professor Bill Wolfe was an exceptional mentor for his graduate students, and he made a major contribution to the field of optical engineering by teaching the (largely ignored) principles of radiometry for over forty years. This paper describes an extension of Bill's work on surface scatter behavior and the application of the BRDF to practical optical engineering problems. Most currently-available image analysis codes require the BRDF data as input in order to calculate the image degradation from residual optical fabrication errors. This BRDF data is difficult to measure and rarely available for short EUV wavelengths of interest. Due to a smooth-surface approximation, the classical Rayleigh-Rice surface scatter theory cannot be used to calculate BRDFs from surface metrology data for even slightly rough surfaces. The classical Beckmann-Kirchhoff theory has a paraxial limitation and only provides a closed-form solution for Gaussian surfaces. Recognizing that surface scatter is a diffraction process, and by utilizing sound radiometric principles, we first developed a linear systems theory of non-paraxial scalar diffraction in which diffracted radiance is shift-invariant in direction cosine space. Since random rough surfaces are merely a superposition of sinusoidal phase gratings, it was a straightforward extension of this non-paraxial scalar diffraction theory to develop a unified surface scatter theory that is valid for moderately rough surfaces at arbitrary incident and scattered angles. Finally, the above two steps are combined to yield a linear systems approach to modeling image quality for systems suffering from a variety of image degradation mechanisms. A comparison of image quality predictions with experimental results taken from on-orbit Solar X-ray Imager (SXI) data is presented.

  6. Communication: Two measures of isochronal superposition

    NASA Astrophysics Data System (ADS)

    Roed, Lisa Anita; Gundermann, Ditte; Dyre, Jeppe C.; Niss, Kristine

    2013-09-01

    A liquid obeys isochronal superposition if its dynamics is invariant along the isochrones in the thermodynamic phase diagram (the curves of constant relaxation time). This paper introduces two quantitative measures of isochronal superposition. The measures are used to test the following six liquids for isochronal superposition: 1,2,6 hexanetriol, glycerol, polyphenyl ether, diethyl phthalate, tetramethyl tetraphenyl trisiloxane, and dibutyl phthalate. The latter four van der Waals liquids obey isochronal superposition to a higher degree than the two hydrogen-bonded liquids. This is a prediction of the isomorph theory, and it confirms findings by other groups.

  7. Communication: Two measures of isochronal superposition.

    PubMed

    Roed, Lisa Anita; Gundermann, Ditte; Dyre, Jeppe C; Niss, Kristine

    2013-09-14

    A liquid obeys isochronal superposition if its dynamics is invariant along the isochrones in the thermodynamic phase diagram (the curves of constant relaxation time). This paper introduces two quantitative measures of isochronal superposition. The measures are used to test the following six liquids for isochronal superposition: 1,2,6 hexanetriol, glycerol, polyphenyl ether, diethyl phthalate, tetramethyl tetraphenyl trisiloxane, and dibutyl phthalate. The latter four van der Waals liquids obey isochronal superposition to a higher degree than the two hydrogen-bonded liquids. This is a prediction of the isomorph theory, and it confirms findings by other groups.

  8. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures

    PubMed Central

    Theobald, Douglas L.; Wuttke, Deborah S.

    2008-01-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. PMID:16777907

  9. Predicting surface scatter using a linear systems formulation of non-paraxial scalar diffraction

    NASA Astrophysics Data System (ADS)

    Krywonos, Andrey

    Scattering effects from rough surfaces are non-paraxial diffraction phenomena resulting from random phase variations in the reflected wavefront. The ability to predict these effects is important in a variety of applications including x-ray and EUV imaging, the design of stray light rejection systems, and reflection modeling for rendering realistic scenes and animations of physical objects in computer graphics. Rayleigh-Rice (small perturbation method) and Beckmann-Kirchhoff (Kirchhoff approximation) theories are commonly used to predict surface scatter effects. In addition, Harvey and Shack developed a linear systems formulation of surface scatter phenomena in which the scattering behavior is characterized by a surface transfer function. This treatment provided insight and understanding not readily gleaned from the two previous theories, and has been incorporated into a variety of computer software packages (ASAP, Zemax, Tracepro). However, smooth surface and paraxial approximations have severely limited the range of applicability of each of the above theoretical treatments. In this dissertation, a linear systems formulation of non-paraxial scalar diffraction theory is first developed and then applied to sinusoidal phase gratings, resulting in diffraction efficiency predictions far more accurate than those provided by classical scalar theories. The application of the theory to these gratings was motivated by the fact that rough surfaces are frequently modeled as a superposition of sinusoidal surfaces of different amplitudes, periods, and orientations. The application of the non-paraxial scalar diffraction theory to surface scatter phenomena resulted first in a modified Beckmann-Kirchhoff surface scattering model, then a generalized Harvey-Shack theory, both of which produce accurate results for rougher surfaces than the Rayleigh-Rice theory and for larger incident and scattering angles than the classical Beckmann-Kirchhoff theory.
These new developments enable the analysis and simplify the understanding of wide-angle scattering behavior from rough surfaces illuminated at large incident angles. In addition, they provide an improved BRDF (Bidirectional Reflectance Distribution Function) model, particularly for the smooth surface inverse scattering problem of determining surface power spectral density (PSD) curves from BRDF measurements.

  10. Spontaneous periodic ordering on the surface and in the bulk of dielectrics irradiated by ultrafast laser: a shared electromagnetic origin.

    PubMed

    Rudenko, Anton; Colombier, Jean-Philippe; Höhm, Sandra; Rosenfeld, Arkadi; Krüger, Jörg; Bonse, Jörn; Itina, Tatiana E

    2017-09-26

    Periodic self-organization of matter beyond the diffraction limit is a puzzling phenomenon, typical both for surface and bulk ultrashort laser processing. Here we compare the mechanisms of periodic nanostructure formation on the surface and in the bulk of fused silica. We show that volume nanogratings and surface nanoripples having subwavelength periodicity and oriented perpendicular to the laser polarization share the same electromagnetic origin. The nanostructure orientation is defined by the near-field local enhancement in the vicinity of the inhomogeneous scattering centers. The periodicity is attributed to the coherent superposition of the waves scattered at inhomogeneities. Numerical calculations also support the multipulse accumulation nature of nanogratings formation on the surface and inside fused silica. Laser surface processing by multiple laser pulses promotes the transition from the high spatial frequency perpendicularly oriented nanoripples to the low spatial frequency ripples, parallel or perpendicular to the laser polarization. The latter structures also share the electromagnetic origin, but are related to the incident field interference with the scattered far-field of rough non-metallic or transiently metallic surfaces. The characteristic ripple appearances are predicted by combined electromagnetic and thermo-mechanical approaches and supported by SEM images of the final surface morphology and by time-resolved pump-probe diffraction measurements.

  11. Surface-enhanced Raman scattering (SERS) of riboflavin on nanostructured Ag surfaces: The role of excitation wavelength, plasmon resonance and molecular resonance.

    PubMed

    Šubr, Martin; Kuzminova, Anna; Kylián, Ondřej; Procházka, Marek

    2018-05-15

    Optimization of surface-enhanced Raman scattering (SERS)-based sensors for (bio)analytical applications has received much attention in recent years. For optimum sensitivity, both the nanostructure fabrication process and the choice of the excitation wavelength with respect to the specific analyte studied are of crucial importance. In this contribution, detailed SERS intensity profiles were measured using gradient nanostructures with the localized surface-plasmon resonance (LSPR) condition varying across the sample length and using riboflavin as the model biomolecule. Three different excitation wavelengths (633 nm, 515 nm and 488 nm), corresponding to non-resonance, pre-resonance and resonance excitation with respect to the studied molecule, respectively, were tested. Results were interpreted in terms of a superposition of the enhancement provided by the electromagnetic mechanism and the intrinsic properties of the SERS probe molecule. The first effect was dictated mainly by the degree of spectral overlap between the LSPR band and the excitation wavelength, together with the scattering cross-section of the nanostructures, while the latter was influenced by the position of the molecular resonance with respect to the excitation wavelength. Our experimental findings contribute to a better understanding of the SERS enhancement mechanism. Copyright © 2018. Published by Elsevier B.V.

  12. Quasi-four-particle first-order Faddeev-Watson-Lovelace terms in proton-helium scattering

    NASA Astrophysics Data System (ADS)

    Safarzade, Zohre; Akbarabadi, Farideh Shojaei; Fathi, Reza; Brunger, Michael J.; Bolorizadeh, Mohammad A.

    2017-06-01

    The Faddeev-Watson-Lovelace equations, which are typically used for solving three-particle scattering problems, are based on the assumption of the target having one active electron while the other electrons remain passive during the collision process. So, in the case of protons scattering from helium or helium-like targets, in which there are two bound-state electrons, the passive electron has a static role in the collision channel to be studied. In this work, we intend to assign a dynamic role to all the target electrons, as they are physically active in the collision. By including an active role for the second electron in proton-helium-like collisions, a new form of the Faddeev-Watson-Lovelace integral equations is needed, in which there is no disconnected kernel. We consider the operators and the wave functions associated with the electrons to obey the Pauli exclusion principle, as the electrons are indistinguishable. In addition, a quasi-three-particle collision is assumed in the initial channel, where the electronic cloud is represented as a single entity in the collision.

  13. A Modified Monte Carlo Method for Carrier Transport in Germanium, Free of Isotropic Rates

    NASA Astrophysics Data System (ADS)

    Sundqvist, Kyle

    2010-03-01

    We present a new method for carrier transport simulation, relevant for high-purity germanium ⟨100⟩ at a temperature of 40 mK. In this system, the scattering of electrons and holes is dominated by spontaneous phonon emission. Free carriers are always out of equilibrium with the lattice. We must also properly account for directional effects due to band structure, but there are many cautions in the literature about treating germanium in particular. These objections arise because the germanium electron system is anisotropic to an extreme degree, while standard Monte Carlo algorithms maintain a reliance on isotropic, integrated rates. We re-examine Fermi's Golden Rule to produce a Monte Carlo method free of isotropic rates. Traditional Monte Carlo codes implement particle scattering based on an isotropically averaged rate, followed by a separate selection of the particle's final state via a momentum-dependent probability. In our method, the kernel of Fermi's Golden Rule produces analytical, bivariate rates which allow for the simultaneous choice of scatter and final state selection. Energy and momentum are automatically conserved. We compare our results to experimental data.

  14. A boundary integral equation method using auxiliary interior surface approach for acoustic radiation and scattering in two dimensions.

    PubMed

    Yang, S A

    2002-10-01

    This paper presents an effective solution method for predicting acoustic radiation and scattering fields in two dimensions. The difficulty of the fictitious characteristic frequency is overcome by incorporating an auxiliary interior surface that satisfies certain boundary condition into the body surface. This process gives rise to a set of uniquely solvable boundary integral equations. Distributing monopoles with unknown strengths over the body and interior surfaces yields the simple source formulation. The modified boundary integral equations are further transformed to ordinary ones that contain nonsingular kernels only. This implementation allows direct application of standard quadrature formulas over the entire integration domain; that is, the collocation points are exactly the positions at which the integration points are located. Selecting the interior surface is an easy task. Moreover, only a few corresponding interior nodal points are sufficient for the computation. Numerical calculations consist of the acoustic radiation and scattering by acoustically hard elliptic and rectangular cylinders. Comparisons with analytical solutions are made. Numerical results demonstrate the efficiency and accuracy of the current solution method.

  15. Roy-Steiner equations for pion-nucleon scattering

    NASA Astrophysics Data System (ADS)

    Ditsche, C.; Hoferichter, M.; Kubis, B.; Meißner, U.-G.

    2012-06-01

    Starting from hyperbolic dispersion relations, we derive a closed system of Roy-Steiner equations for pion-nucleon scattering that respects analyticity, unitarity, and crossing symmetry. We work out analytically all kernel functions and unitarity relations required for the lowest partial waves. In order to suppress the dependence on the high energy regime we also consider once- and twice-subtracted versions of the equations, where we identify the subtraction constants with subthreshold parameters. Assuming Mandelstam analyticity we determine the maximal range of validity of these equations. As a first step towards the solution of the full system we cast the equations for the ππ → N̄N partial waves into the form of a Muskhelishvili-Omnès problem with finite matching point, which we solve numerically in the single-channel approximation. We investigate in detail the role of individual contributions to our solutions and discuss some consequences for the spectral functions of the nucleon electromagnetic form factors.

  16. Quantum crystallography: A perspective.

    PubMed

    Massa, Lou; Matta, Chérif F

    2018-06-30

    Extraction of the complete quantum mechanics from X-ray scattering data is the ultimate goal of quantum crystallography. This article delivers a perspective for that possibility. It is desirable to have a method for the conversion of X-ray diffraction data into an electron density that reflects the antisymmetry of an N-electron wave function. A formalism for this was developed early on for the determination of a constrained idempotent one-body density matrix. The formalism ensures pure-state N-representability in the single determinant sense. Applications to crystals show that quantum mechanical density matrices of large molecules can be extracted from X-ray scattering data by implementing a fragmentation method termed the kernel energy method (KEM). It is shown how KEM can be used within the context of quantum crystallography to derive quantum mechanical properties of biological molecules (with low data-to-parameters ratio). © 2017 Wiley Periodicals, Inc.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ikeda, Y.; Sato, T.

    Three-body resonances in the KNN system have been studied within a framework of the KNN-πYN coupled-channel Faddeev equation. By solving the three-body equation, the energy dependence of the resonant KN amplitude is fully taken into account. The S-matrix pole has been investigated from the eigenvalue of the kernel with the analytic continuation of the scattering amplitude on the unphysical Riemann sheet. The KN interaction is constructed from the leading order term of the chiral Lagrangian using relativistic kinematics. The Λ(1405) resonance is dynamically generated in this model, where the KN interaction parameters are fitted to the data of scattering length. As a result we find a three-body resonance of the strange dibaryon system with binding energy B ≈ 79 MeV and width Γ ≈ 74 MeV. The energy of the three-body resonance is found to be sensitive to the model of the I=0 KN interaction.

  18. Homogeneous partial differential equations for superpositions of indeterminate functions of several variables

    NASA Astrophysics Data System (ADS)

    Asai, Kazuto

    2009-02-01

    We determine essentially all partial differential equations satisfied by superpositions of tree type and of a further special type. These equations represent necessary and sufficient conditions for an analytic function to be locally expressible as an analytic superposition of the type indicated. The representability of a real analytic function by a superposition of this type is independent of whether that superposition involves real-analytic functions or C^ρ-functions, where the constant ρ is determined by the structure of the superposition. We also prove that the function u defined by u^n = xu^a + yu^b + zu^c + 1 is generally non-representable in any real (resp. complex) domain as f(g(x,y), h(y,z)) with twice differentiable f and differentiable g, h (resp. analytic f, g, h).

  19. Optimal simultaneous superpositioning of multiple structures with missing data.

    PubMed

    Theobald, Douglas L; Steindel, Phillip A

    2012-08-01

    Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually 'missing' from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation-maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. Contact: dtheobald@brandeis.edu. Supplementary data are available at Bioinformatics online.
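
    The least-squares special case mentioned above (complete data, one-to-one correspondence) is classically solved with the Kabsch SVD algorithm. A minimal sketch assuming numpy and complete point sets follows; the EM machinery for missing data is beyond a short example, and this is not the THESEUS code itself:

```python
import numpy as np

def kabsch_superpose(mobile, target):
    """Least-squares rigid-body superposition of `mobile` onto `target`,
    two (N, 3) coordinate arrays in one-to-one correspondence, via the
    Kabsch SVD algorithm. Returns the moved coordinates and the RMSD."""
    mc, tc = mobile.mean(axis=0), target.mean(axis=0)
    P, Q = mobile - mc, target - tc           # centre both point sets
    U, S, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against a reflection
    R = U @ np.diag([1.0, 1.0, d]) @ Vt       # optimal proper rotation
    moved = P @ R + tc
    rmsd = float(np.sqrt(np.mean(np.sum((moved - target) ** 2, axis=1))))
    return moved, rmsd

# Sanity check: a rotated-and-translated copy superposes almost exactly.
rng = np.random.default_rng(0)
target = rng.normal(size=(10, 3))
c, s = np.cos(0.7), np.sin(0.7)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
mobile = target @ Rz + np.array([5.0, -2.0, 1.0])
moved, rmsd = kabsch_superpose(mobile, target)
print(rmsd)
```

    When points are missing, this per-pair solution no longer applies directly; the paper's EM approach iterates between imputing missing coordinates and re-solving superpositions of this kind.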

  20. SU-F-T-672: A Novel Kernel-Based Dose Engine for KeV Photon Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reinhart, M; Fast, M F; Nill, S

    2016-06-15

    Purpose: Mimicking state-of-the-art patient radiotherapy with high precision irradiators for small animals allows advanced dose-effect studies and radiobiological investigations. One example is the implementation of pre-clinical IMRT-like irradiations, which requires the development of inverse planning for keV photon beams. As a first step, we present a novel kernel-based dose calculation engine for keV x-rays with explicit consideration of energy and material dependencies. Methods: We follow a superposition-convolution approach adapted to keV x-rays, based on previously published work on micro-beam therapy. In small animal radiotherapy, we assume local energy deposition at the photon interaction point, since the electron ranges in tissue are of the same order of magnitude as the voxel size. This allows us to use photon-only kernel sets generated by MC simulations, which are pre-calculated for six energy windows and ten base materials. We validate our stand-alone dose engine against Geant4 MC simulations for various beam configurations in water, slab phantoms with bone and lung inserts, and on a mouse CT with (0.275 mm)³ voxels. Results: We observe good agreement for all cases. For field sizes of 1 mm² to 1 cm² in water, the depth dose curves agree within 1% (mean), with the largest deviations in the first voxel (4%) and at depths >5 cm (<2.5%). The out-of-field doses at 1 cm depth agree within 8% (mean) for all but the smallest field size. In slab geometries, the mean agreement was within 3%, with maximum deviations of 8% at water-bone interfaces. The γ-index (1mm/1%) passing rate for a single-field mouse irradiation is 71%. Conclusion: The presented dose engine yields an accurate representation of keV-photon doses suitable for inverse treatment planning for IMRT. It has the potential to become a significantly faster yet sufficiently accurate alternative to full MC simulations. Further investigations will focus on energy sampling as well as calculation times. Research at ICR is also supported by Cancer Research UK under Programme C33589/A19727 and NHS funding to the NIHR Biomedical Research Centre at RMH and ICR. MFF is supported by Cancer Research UK under Programme C33589/A19908.
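
    The superposition-convolution idea can be illustrated in one dimension: every voxel's primary fluence deposits dose around itself according to a pre-computed kernel, and the contributions superpose. This is a toy sketch with an invented five-point kernel, not the abstract's engine (which uses energy- and material-resolved 3D kernels):

```python
def superpose_convolve(fluence, kernel):
    """1-D sketch of kernel superposition: each voxel's fluence deposits
    dose in its neighbours according to a pre-computed (e.g. Monte Carlo
    generated) point kernel; total dose is the sum over all voxels."""
    half = len(kernel) // 2
    dose = [0.0] * len(fluence)
    for i, f in enumerate(fluence):
        for k, w in enumerate(kernel):
            j = i + k - half                  # neighbour receiving dose
            if 0 <= j < len(dose):
                dose[j] += f * w
    return dose

# Toy pencil beam: all fluence in one voxel spreads out per the kernel.
kernel = [0.05, 0.2, 0.5, 0.2, 0.05]          # hypothetical normalized kernel
dose = superpose_convolve([0.0, 0.0, 1.0, 0.0, 0.0], kernel)
print(dose)
```

    The fASKS-style speedup mentioned elsewhere in this listing comes from doing exactly this superposition as a multiplication in Fourier space when the kernel is (approximately) spatially invariant.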

  1. Rapid automated superposition of shapes and macromolecular models using spherical harmonics.

    PubMed

    Konarev, Petr V; Petoukhov, Maxim V; Svergun, Dmitri I

    2016-06-01

    A rapid algorithm to superimpose macromolecular models in Fourier space is proposed and implemented ( SUPALM ). The method uses a normalized integrated cross-term of the scattering amplitudes as a proximity measure between two three-dimensional objects. The reciprocal-space algorithm allows for direct matching of heterogeneous objects including high- and low-resolution models represented by atomic coordinates, beads or dummy residue chains as well as electron microscopy density maps and inhomogeneous multi-phase models ( e.g. of protein-nucleic acid complexes). Using spherical harmonics for the computation of the amplitudes, the method is up to an order of magnitude faster than the real-space algorithm implemented in SUPCOMB by Kozin & Svergun [ J. Appl. Cryst. (2001 ▸), 34 , 33-41]. The utility of the new method is demonstrated in a number of test cases and compared with the results of SUPCOMB . The spherical harmonics algorithm is best suited for low-resolution shape models, e.g . those provided by solution scattering experiments, but also facilitates a rapid cross-validation against structural models obtained by other methods.

  2. Total Ambient Dose Equivalent Buildup Factor Determination for Nbs04 Concrete.

    PubMed

    Duckic, Paulina; Hayes, Robert B

    2018-06-01

    Buildup factors are dimensionless multiplicative factors required by the point kernel method to account for scattered radiation through a shielding material. The accuracy of the point kernel method is strongly affected by the correspondence of analyzed parameters to experimental configurations, which we attempt to simplify here. The point kernel method has not found widespread practical use for neutron shielding calculations due to the complex neutron transport behavior through shielding materials (i.e. the variety of interaction mechanisms that neutrons may undergo while traversing the shield) as well as the non-linear energy dependence of the neutron total cross section. In this work, total ambient dose buildup factors for NBS04 concrete are calculated in terms of neutron and secondary gamma ray transmission factors. The neutron and secondary gamma ray transmission factors are calculated using the MCNP6™ code with updated cross sections. Both transmission factors and buildup factors are given in a tabulated form. Practical use of neutron transmission and buildup factors warrants rigorously calculated results with all associated uncertainties. In this work, a sensitivity analysis of neutron transmission factors and total buildup factors with varying water content has been conducted. The analysis showed a significant impact of varying water content in concrete on both neutron transmission factors and total buildup factors. Finally, support vector regression, a machine learning technique, has been employed to build a model from the calculated data for predicting the buildup factors. The developed model can predict most of the data with 20% relative error.
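
    The point kernel method itself reduces to a short formula: inverse-square geometric spreading times exponential attenuation, scaled by the buildup factor B to restore the scattered component the exponential alone would discard. A sketch with illustrative, non-physical numbers (the function name and values are invented for this example):

```python
import math

def point_kernel_dose(source_strength, r_cm, mu_cm1, buildup):
    """Point-kernel dose estimate behind a shield: inverse-square
    geometric spreading, exponential attenuation exp(-mu*r), and a
    multiplicative buildup factor B for scattered radiation that
    still reaches the detector."""
    uncollided = source_strength * math.exp(-mu_cm1 * r_cm) / (4.0 * math.pi * r_cm ** 2)
    return buildup * uncollided

# Illustrative numbers: 5 mean free paths of shielding (mu * r = 5).
mu, r = 0.2, 25.0                      # attenuation coeff (1/cm), depth (cm)
no_buildup = point_kernel_dose(1.0, r, mu, buildup=1.0)
with_buildup = point_kernel_dose(1.0, r, mu, buildup=3.5)  # hypothetical B
print(with_buildup / no_buildup)       # buildup scales the result directly
```

    The hard part, and the subject of the abstract, is obtaining B itself; for neutrons it must be built from transport calculations (here, MCNP6 transmission factors) rather than from the simple analytic fits that exist for gamma rays.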

  3. Noninvasive prostate cancer screening based on serum surface-enhanced Raman spectroscopy and support vector machine

    NASA Astrophysics Data System (ADS)

    Li, Shaoxin; Zhang, Yanjiao; Xu, Junfa; Li, Linfang; Zeng, Qiuyao; Lin, Lin; Guo, Zhouyi; Liu, Zhiming; Xiong, Honglian; Liu, Songhao

    2014-09-01

    This study aims to present a noninvasive prostate cancer screening method using serum surface-enhanced Raman scattering (SERS) and support vector machine (SVM) techniques on peripheral blood samples. SERS measurements are performed on serum samples from 93 prostate cancer patients and 68 healthy volunteers using silver nanoparticles. Three types of kernel functions, linear, polynomial, and Gaussian radial basis function (RBF), are employed to build SVM diagnostic models for classifying the measured SERS spectra. To comparably evaluate the performance of the SVM classification models, the standard multivariate statistical method of principal component analysis (PCA) is also applied to classify the same datasets. The results show that the RBF kernel SVM diagnostic model achieves a diagnostic accuracy of 98.1%, which is superior to the 91.3% obtained with the PCA method. The receiver operating characteristic curves of the diagnostic models further confirm these results. This study demonstrates that label-free serum SERS analysis combined with an SVM diagnostic algorithm has great potential for noninvasive prostate cancer screening.
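
    The RBF kernel at the heart of the best-performing model is simple to state: k(a, b) = exp(-γ‖a − b‖²). A minimal sketch of building the kernel (Gram) matrix an SVM solver would consume, using toy "spectra" rather than real SERS data (the γ value and inputs are invented for illustration):

```python
import math

def rbf_kernel(a, b, gamma):
    """Gaussian RBF kernel between two spectra (lists of intensities):
    k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq_dist = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-gamma * sq_dist)

def gram_matrix(spectra, gamma):
    """Symmetric kernel (Gram) matrix an SVM solver trains on."""
    return [[rbf_kernel(s, t, gamma) for t in spectra] for s in spectra]

# Toy spectra: identical inputs give k = 1; distant inputs decay toward 0.
spectra = [[0.1, 0.5, 0.9], [0.1, 0.5, 0.9], [2.0, 2.0, 2.0]]
K = gram_matrix(spectra, gamma=0.5)
print(K[0][1], K[0][2])
```

    In practice one would hand the spectra and kernel choice to an off-the-shelf SVM implementation rather than solving the optimization by hand; the point here is only how the RBF kernel turns spectral distance into a similarity score.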

  4. High-energy photon-hadron scattering in holographic QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishio, Ryoichi; Institute for the Physics and Mathematics of the Universe, University of Tokyo, Kashiwano-ha 5-1-5, 277-8583; Watari, Taizan

    2011-10-01

    This article provides an in-depth look at hadron high-energy scattering by using gravity dual descriptions of strongly coupled gauge theories. Just like deeply inelastic scattering (DIS) and deeply virtual Compton scattering (DVCS) serve as clean experimental probes into nonperturbative internal structure of hadrons, elastic scattering amplitude of a hadron and a (virtual) photon in gravity dual can be exploited as a theoretical probe. Since the scattering amplitude at sufficiently high energy (small Bjorken x) is dominated by parton contributions (=Pomeron contributions) even in strong coupling regime, there is a chance to learn a lesson for generalized parton distribution (GPD) by using gravity dual models. We begin with refining derivation of the Brower-Polchinski-Strassler-Tan (BPST) Pomeron kernel in gravity dual, paying particular attention to the role played by the complex spin variable j. The BPST Pomeron on warped spacetime consists of a Kaluza-Klein tower of 4D Pomerons with nonlinear trajectories, and we clarify the relation between Pomeron couplings and the Pomeron form factor. We emphasize that the saddle-point value j* of the scattering amplitude in the complex j-plane representation is a very important concept in understanding qualitative behavior of the scattering amplitude. The total Pomeron contribution to the scattering is decomposed into the saddle-point contribution and at most a finite number of pole contributions, and when the pole contributions are absent (which we call the saddle-point phase), the kinematical variable (q,x,t)-dependence of the ln(1/q) and ln(1/x) evolution parameters γ_eff and λ_eff in DIS and the t-slope parameter B of DVCS in the HERA experiment are all reproduced qualitatively in gravity dual. All of these observations shed a new light on modeling of GPD. Straightforward application of those results to other hadron high-energy scattering is also discussed.

  5. SU-E-T-378: Evaluation of An Analytical Model for the Inter-Seed Attenuation Effect in 103-Pd Multi-Seed Implant Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Safigholi, H; Soliman, A; Song, W

    Purpose: Brachytherapy treatment planning systems based on the TG-43 protocol calculate the dose in water and neglect the heterogeneity effect of seeds in multi-seed implant brachytherapy. In this research, the accuracy of a novel analytical model that we propose for the inter-seed attenuation (ISA) effect for the 103-Pd seed model is evaluated. Methods: In the analytical model, the dose perturbation due to the ISA effect for each seed in an LDR multi-seed implant for 103-Pd is calculated by assuming that the seed of interest is active and the other surrounding seeds are inactive. The cumulative dosimetric effect of all seeds is then summed using the superposition principle. The model is based on pre-calculated Monte Carlo (MC) simulated 3D kernels of the dose perturbations caused by the ISA effect. The cumulative ISA effect due to multiple surrounding seeds is obtained by a simple multiplication of the individual ISA effect of each seed, which is determined by the distance from the seed of interest. This novel algorithm is then compared with full MC water-based simulations (FMCW). Results: The results show that the dose perturbation model we propose is in excellent agreement with the FMCW values for a case with three seeds separated by 1 cm. The average difference between the model and the FMCW simulations was less than 8%±2%. Conclusion: Using the proposed novel analytical ISA effect model, one could expedite the corrections due to the ISA dose perturbation effects during permanent seed 103-Pd brachytherapy planning with minimal increase in time, since the model is based on multiplications and superposition. This model can be applied, in principle, to any other brachytherapy seeds. Further work is necessary to validate this model on more complicated geometries.
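
    The multiplicative superposition step of the model can be sketched directly: the TG-43 water dose at a point is scaled by a product of distance-dependent single-seed perturbation factors, one per inactive seed. The perturbation kernel below is purely hypothetical, a stand-in for the abstract's MC-derived 3D kernels:

```python
import math

def isa_dose(base_dose, seed_distances, perturbation):
    """Multiplicative inter-seed-attenuation model: the TG-43 water dose
    at a point is scaled by the product of single-seed perturbation
    factors, each a function of an inactive seed's distance."""
    factor = 1.0
    for d in seed_distances:
        factor *= perturbation(d)
    return base_dose * factor

# Hypothetical perturbation kernel: a few percent attenuation close to a
# seed, decaying to no effect (factor -> 1) with distance in cm.
perturb = lambda d_cm: 1.0 - 0.05 * math.exp(-d_cm)

d_close = isa_dose(100.0, [0.5, 1.0], perturb)    # two nearby inactive seeds
d_far = isa_dose(100.0, [10.0, 12.0], perturb)    # distant seeds barely matter
print(d_close, d_far)
```

    Because each factor is a simple multiplication, correcting a whole implant costs one lookup and one multiply per seed, which is why the abstract describes the approach as a fast alternative to full Monte Carlo.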

  6. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    PubMed

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.

  7. Scrambled coherent superposition for enhanced optical fiber communication in the nonlinear transmission regime.

    PubMed

    Liu, Xiang; Chandrasekhar, S; Winzer, P J; Chraplyvy, A R; Tkach, R W; Zhu, B; Taunay, T F; Fishteyn, M; DiGiovanni, D J

    2012-08-13

    Coherent superposition of light waves has long been used in various fields of science, and recent advances in digital coherent detection and space-division multiplexing have enabled the coherent superposition of information-carrying optical signals to achieve better communication fidelity on amplified-spontaneous-noise-limited communication links. However, fiber nonlinearity introduces highly correlated distortions on identical signals and diminishes the benefit of coherent superposition in the nonlinear transmission regime. Here we experimentally demonstrate that through coordinated scrambling of signal constellations at the transmitter, together with appropriate unscrambling at the receiver, the full benefit of coherent superposition is retained in the nonlinear transmission regime of a space-diversity fiber link based on an innovatively engineered multi-core fiber. This scrambled coherent superposition may provide the flexibility of trading communication capacity for performance in future optical fiber networks, and may open new possibilities in high-performance and secure optical communications.

  8. Semiclassical theory of electronically nonadiabatic transitions in molecular collision processes

    NASA Technical Reports Server (NTRS)

    Lam, K. S.; George, T. F.

    1979-01-01

    An introductory account of the semiclassical theory of the S-matrix for molecular collision processes is presented, with special emphasis on electronically nonadiabatic transitions. This theory is based on the incorporation of classical mechanics with quantum superposition, and in practice makes use of the analytic continuation of classical mechanics into the complex time domain. The relevant concepts of molecular scattering theory and related dynamical models are described, and the formalism is developed and illustrated with simple examples, such as the collinear collision of the A + BC type. The theory is then extended to include the effects of laser-induced nonadiabatic transitions. Two bound-continuum processes, collisional ionization and collision-induced emission, which are also amenable to the same general semiclassical treatment, are discussed.

  9. Impact of absorptivity and wavelength on the optical properties of aggregates with sintering necks

    NASA Astrophysics Data System (ADS)

    Bao, Yujia; Huang, Yong; He, Beichen

    2018-04-01

    In this paper, we constructed sintered aggregates based on the particle superposition model and applied the ball-necking factor η to characterize the sintering degree. The impacts of the absorptivity, characterized by the complex refractive index m, and of the wavelength of the incident light λ on the optical properties of aggregates with different η were compared and investigated. The results indicate that for different m and λ, the light scattering characteristics exhibit regular changes in their values, peak locations and size trends. Further, the deviation of 1 - S22/S11 caused by varying η is considerable, so that it can be used as a sensitive probe parameter in detecting the configuration of sintered aggregates.

  10. Simulation of complex magnesium alloy texture using the axial component fit method with central normal distributions

    NASA Astrophysics Data System (ADS)

    Ivanova, T. M.; Serebryany, V. N.

    2017-12-01

    The component fit method in quantitative texture analysis assumes that the texture of a polycrystalline sample can be represented by a superposition of weighted standard distributions that are characterized by their position in orientation space and by the shape and sharpness of the scattering. Components of peak and axial shapes are usually used. It is known that an axial texture develops in materials subjected to direct pressing. In this paper we consider the possibility of modelling the texture of a magnesium sample subjected to equal-channel angular pressing (ECAP) with axial components only. The results obtained make it possible to conclude that ECAP is also a process leading to the appearance of an axial texture in magnesium alloys.

  11. Invited article: Broadband highly-efficient dielectric metadevices for polarization control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kruk, Sergey; Hopkins, Ben; Kravchenko, Ivan I.

    Metadevices based on dielectric nanostructured surfaces with both electric and magnetic Mie-type resonances have resulted in the best efficiency to date for functional flat optics with only one disadvantage: a narrow operational bandwidth. Here we experimentally demonstrate broadband transparent all-dielectric metasurfaces for highly efficient polarization manipulation. We utilize the generalized Huygens principle, with a superposition of the scattering contributions from several electric and magnetic multipolar modes of the constituent meta-atoms, to achieve destructive interference in reflection over a large spectral bandwidth. Furthermore, by employing this novel concept, we demonstrate reflectionless (~90% transmission) half-wave plates, quarter-wave plates, and vector beam q-plates that can operate across multiple telecom bands with ~99% polarization conversion efficiency.

  12. Invited article: Broadband highly-efficient dielectric metadevices for polarization control

    DOE PAGES

    Kruk, Sergey; Hopkins, Ben; Kravchenko, Ivan I.; ...

    2016-06-06

    Metadevices based on dielectric nanostructured surfaces with both electric and magnetic Mie-type resonances have resulted in the best efficiency to date for functional flat optics with only one disadvantage: a narrow operational bandwidth. Here we experimentally demonstrate broadband transparent all-dielectric metasurfaces for highly efficient polarization manipulation. We utilize the generalized Huygens principle, with a superposition of the scattering contributions from several electric and magnetic multipolar modes of the constituent meta-atoms, to achieve destructive interference in reflection over a large spectral bandwidth. Furthermore, by employing this novel concept, we demonstrate reflectionless (~90% transmission) half-wave plates, quarter-wave plates, and vector beam q-plates that can operate across multiple telecom bands with ~99% polarization conversion efficiency.

  13. Complementary Speckle Patterns: Deterministic Interchange of Intrinsic Vortices and Maxima through Scattering Media.

    PubMed

    Gateau, Jérôme; Rigneault, Hervé; Guillon, Marc

    2017-01-27

    Intensity maxima and zeros of speckle patterns obtained behind a diffuser are experimentally interchanged by applying a spiral phase delay of charge ±1 to the impinging coherent beam. This transformation arises because tightly focused beams, which have a planar wave front around the focus, are thereby changed into vortex beams, and vice versa. The statistics of the extrema locations and the intensity distribution of the so-generated "complementary" patterns are characterized by numerical simulations. It is demonstrated experimentally that the incoherent superposition of the three "complementary speckle patterns" yields a synthetic speckle grain size enlarged by a factor of √3. A cyclic permutation of optical vortices and intensity maxima is unexpectedly observed and discussed.
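
    The key step, applying a spiral phase of charge 0 or ±1 to the beam before a random diffuser and propagating to the far field, can be sketched with a simple FFT model. The Gaussian beam waist, grid size and uniform-random phase diffuser below are assumptions for illustration, not the paper's experimental parameters.

```python
import numpy as np

def speckle(charge, n=256, seed=1):
    """Far-field speckle intensity for a Gaussian beam sent through a
    random phase diffuser, with a spiral phase of topological charge
    `charge` applied to the impinging beam (illustrative FFT model)."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r2, phi = x**2 + y**2, np.arctan2(y, x)
    beam = np.exp(-r2 / 0.2) * np.exp(1j * charge * phi)    # (vortex) beam
    diffuser = np.exp(1j * 2 * np.pi * rng.random((n, n)))  # random phase
    field = np.fft.fftshift(np.fft.fft2(beam * diffuser))
    return np.abs(field) ** 2

I0 = speckle(0)   # ordinary speckle pattern
I1 = speckle(1)   # "complementary" pattern with charge +1
```

    With the same diffuser, the two patterns carry the same total power but differ point by point, which is the premise for interchanging maxima and zeros.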

  14. Comparison of modal superposition methods for the analytical solution to moving load problems.

    DOT National Transportation Integrated Search

    1994-01-01

    The response of bridge structures to moving loads is investigated using modal superposition methods. Two distinct modal superposition methods are available: the mode-displacement method and the mode-acceleration method. While the mode-displacement met...
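
    The mode-displacement method can be illustrated with the classical series solution for a simply supported beam crossed by a constant force moving at constant speed; each mode responds like a driven oscillator and the modal shapes are superposed. The parameter values below are arbitrary illustrative choices, not values from the report.

```python
import numpy as np

def beam_deflection(x, t, P=1.0, v=10.0, L=20.0, EI=1e7, mu=100.0, n_modes=10):
    """Mode-displacement solution for an undamped simply supported beam
    traversed by a constant force P moving at speed v, with zero initial
    conditions (classical series solution; SI units, illustrative values)."""
    w = 0.0
    for n in range(1, n_modes + 1):
        wn = (n * np.pi / L) ** 2 * np.sqrt(EI / mu)   # nth natural frequency
        Om = n * np.pi * v / L                         # nth forcing frequency
        qn = (2 * P / (mu * L)) * (np.sin(Om * t)
             - (Om / wn) * np.sin(wn * t)) / (wn**2 - Om**2)
        w += qn * np.sin(n * np.pi * x / L)            # superpose mode shapes
    return w
```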

  15. The origin of non-classical effects in a one-dimensional superposition of coherent states

    NASA Technical Reports Server (NTRS)

    Buzek, V.; Knight, P. L.; Barranco, A. Vidiella

    1992-01-01

    We investigate the nature of the quantum fluctuations in a light field created by the superposition of coherent fields. We give a physical explanation (in terms of Wigner functions and phase-space interference) why the 1-D superposition of coherent states in the direction of the x-quadrature leads to the squeezing of fluctuations in the y-direction, and show that such a superposition can generate the squeezed vacuum and squeezed coherent states.
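
    The squeezing described above can be checked numerically in a truncated Fock basis: the even superposition of |α⟩ and |−α⟩ (superposed along the x-quadrature for real α) has a y-quadrature variance below the vacuum value of 1/4. This is a generic textbook construction sketched for illustration, not the authors' calculation.

```python
import numpy as np
from math import factorial

def even_cat(alpha, nmax=40):
    """Fock-basis amplitudes of the normalized even superposition
    (|alpha> + |-alpha>)/N of two coherent states (numerical sketch)."""
    c = np.array([alpha**k / np.sqrt(float(factorial(k))) for k in range(nmax)])
    c[1::2] = 0.0                     # odd Fock terms cancel in the sum
    return c / np.linalg.norm(c)

def quad_variance_y(psi):
    """Variance of the y-quadrature (a - a†)/2i in the truncated Fock basis."""
    nmax = len(psi)
    a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)   # annihilation operator
    y = (a - a.conj().T) / 2j
    ey = psi.conj() @ y @ psi
    ey2 = psi.conj() @ (y @ y) @ psi
    return (ey2 - ey**2).real
```

    For α = 1 the variance is about 0.13, i.e. below the vacuum level of 0.25, in line with the abstract's statement that a 1-D superposition along x squeezes the y-quadrature.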

  16. Optimal simultaneous superpositioning of multiple structures with missing data

    PubMed Central

    Theobald, Douglas L.; Steindel, Phillip A.

    2012-01-01

    Motivation: Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually ‘missing’ from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Results: Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation–maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. Availability and implementation: The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. Contact: dtheobald@brandeis.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22543369

  17. LANDSAT-D investigations in snow hydrology

    NASA Technical Reports Server (NTRS)

    Dozier, J. (Principal Investigator)

    1984-01-01

    Two-stream methods provide rapid approximate calculations of radiative transfer in scattering and absorbing media. Although they provide information on fluxes only, and not on intensities, their speed makes them attractive alternatives to more precise methods. A comprehensive, unified review of the methods is provided for a homogeneous layer, and the equations for the reflectance and transmittance of a homogeneous layer over a non-reflecting surface are solved. Any of the basic kernels for a single layer can be extended to a vertically inhomogeneous medium over a surface whose reflectance properties vary with illumination angle, as long as the medium can be subdivided into homogeneous layers.

  18. Implementation of radiation shielding calculation methods. Volume 1: Synopsis of methods and summary of results

    NASA Technical Reports Server (NTRS)

    Capo, M. A.; Disney, R. K.

    1971-01-01

    The work performed in the following areas is summarized: (1) A realistic nuclear-propelled vehicle was analyzed using the Marshall Space Flight Center (MSFC) computer code package. This code package includes one- and two-dimensional discrete ordinates transport, point kernel, and single-scatter techniques, as well as cross-section preparation and data-processing codes. (2) Techniques were developed to improve the automated data transfer in the coupled computation method of the computer code package and to improve the utilization of this code package on the Univac 1108 computer system. (3) The MSFC master data libraries were updated.

  19. Integrated ray tracing simulation of annual variation of spectral bio-signatures from cloud free 3D optical Earth model

    NASA Astrophysics Data System (ADS)

    Ryu, Dongok; Kim, Sug-Whan; Kim, Dae Wook; Lee, Jae-Min; Lee, Hanshin; Park, Won Hyun; Seong, Sehyun; Ham, Sun-Jeong

    2010-09-01

    Understanding the Earth's spectral bio-signatures provides an important reference datum for accurate de-convolution of collapsed spectral signals from potential Earth-like planets of other star systems. This study presents a new ray tracing computation method including an improved 3D optical Earth model constructed with the coastal line and vegetation distribution data from the Global Ecological Zone (GEZ) map. Using non-Lambertian bidirectional scattering distribution function (BSDF) models, the input Earth surface model is characterized with three different scattering properties and their annual variations depending on monthly changes in vegetation distribution, sea ice coverage and illumination angle. The input atmosphere model consists of a single layer with a Rayleigh scattering model from sea level to 100 km in altitude, and its radiative transfer characteristics are computed for four seasons using the SMART codes. The ocean scattering model is a combination of sun-glint scattering and Lambertian scattering models. The land surface scattering is defined with the semi-empirical parametric kernel method used for the MODIS and POLDER missions. These three component models were integrated into the final Earth model, which was then incorporated into the in-house-built integrated ray tracing (IRT) model capable of computing both the spectral imaging and the radiative transfer performance of a hypothetical space instrument as it observes the Earth from its designated orbit. The IRT model simulation inputs include variations in Earth orientation, illuminated phases, and seasonal sea ice and vegetation distribution. The trial simulation runs result in annual variations in phase-dependent disk-averaged spectra (DAS) and associated bio-signatures such as NDVI. The full computational details are presented together with the resulting annual variation in DAS and its associated bio-signatures.

  20. Magnetic field influences on the lateral dose response functions of photon-beam detectors: MC study of wall-less water-filled detectors with various densities.

    PubMed

    Looe, Hui Khee; Delfs, Björn; Poppinga, Daniela; Harder, Dietrich; Poppe, Björn

    2017-06-21

    The distortion of detector reading profiles across photon beams in the presence of magnetic fields is a developing subject of clinical photon-beam dosimetry. The underlying modification by the Lorentz force of a detector's lateral dose response function (the convolution kernel transforming the true cross-beam dose profile in water into the detector reading profile) is studied here for the first time. The three basic convolution kernels, the photon fluence response function, the dose deposition kernel, and the lateral dose response function, of wall-less cylindrical detectors filled with water of low, normal and enhanced density are shown by Monte Carlo simulation to be distorted in the prevailing direction of the Lorentz force. The asymmetric shape changes of these convolution kernels in a water medium and in magnetic fields of up to 1.5 T are confined to the lower millimetre range, and they depend on the photon beam quality, the magnetic flux density and the detector's density. The impact of this distortion on detector reading profiles is demonstrated using a narrow photon beam profile. For clinical applications it appears favourable that the magnetic-flux-density-dependent distortion of the lateral dose response function, as far as secondary electron transport is concerned, vanishes in the case of water-equivalent detectors of normal water density. By means of secondary electron history backtracing, the spatial distribution of the photon interactions giving rise either directly to secondary electrons or to scattered photons that further downstream produce secondary electrons contributing to the detector's signal, and their lateral shift due to the Lorentz force, are elucidated. Electron history backtracing also serves to illustrate the correct treatment of the influences of the Lorentz force in the EGSnrc Monte Carlo code applied in this study.
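
    The role of the lateral dose response function as a convolution kernel can be sketched in one dimension: the detector reading profile is the true dose profile convolved with the (normalized) response function. The box-shaped beam and Gaussian kernel below are illustrative stand-ins, not the Monte Carlo kernels of the paper.

```python
import numpy as np

def reading_profile(dose, kernel):
    """Detector reading profile as the convolution of the true cross-beam
    dose profile with the detector's lateral dose response function,
    normalized so that an ideal point detector reproduces the dose."""
    kernel = np.asarray(kernel, dtype=float)
    kernel = kernel / kernel.sum()
    return np.convolve(dose, kernel, mode="same")

# A narrow 5 mm beam profile blurred by a symmetric mm-range kernel:
x = np.linspace(-10, 10, 201)                  # lateral position in mm
dose = np.where(np.abs(x) < 2.5, 1.0, 0.0)     # true cross-beam dose profile
kernel = np.exp(-(x / 1.0) ** 2)               # detector response, ~1 mm wide
reading = reading_profile(dose, kernel)
```

    An asymmetric kernel (as produced by the Lorentz force) would shift and skew `reading` in the same way, which is how the distorted kernels translate into distorted reading profiles.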

  1. Evaluation of Class II treatment by cephalometric regional superpositions versus conventional measurements.

    PubMed

    Efstratiadis, Stella; Baumrind, Sheldon; Shofer, Frances; Jacobsson-Hunt, Ulla; Laster, Larry; Ghafari, Joseph

    2005-11-01

    The aims of this study were (1) to evaluate cephalometric changes in subjects with Class II Division 1 malocclusion who were treated with headgear (HG) or Fränkel function regulator (FR) and (2) to compare findings from regional superpositions of cephalometric structures with those from conventional cephalometric measurements. Cephalographs were taken at baseline, after 1 year, and after 2 years of 65 children enrolled in a prospective randomized clinical trial. The spatial location of the landmarks derived from regional superpositions was evaluated in a coordinate system oriented on natural head position. The superpositions included the best anatomic fit of the anterior cranial base, maxillary base, and mandibular structures. Both the HG and the FR were effective in correcting the distoclusion, and they generated enhanced differential growth between the jaws. Differences between cranial and maxillary superpositions regarding mandibular displacement (Point B, pogonion, gnathion, menton) were noted: the HG had a more horizontal vector on maxillary superposition that was also greater (.0001 < P < .05) than the horizontal displacement observed with the FR. This discrepancy appeared to be related to (1) the clockwise (backward) rotation of the palatal and mandibular planes observed with the HG; the palatal plane's rotation, which was transferred through the occlusion to the mandibular plane, was factored out on maxillary superposition; and (2) the interaction between the inclination of the maxillary incisors and the forward movement of the mandible during growth. Findings from superpositions agreed with conventional angular and linear measurements regarding the basic conclusions for the primary effects of HG and FR. However, the results suggest that inferences of mandibular displacement are more reliable from maxillary than cranial superposition when evaluating occlusal changes during treatment.

  2. fast_protein_cluster: parallel and optimized clustering of large-scale protein modeling data.

    PubMed

    Hung, Ling-Hong; Samudrala, Ram

    2014-06-15

    fast_protein_cluster is a fast, parallel and memory efficient package used to cluster 60 000 sets of protein models (with up to 550 000 models per set) generated by the Nutritious Rice for the World project. fast_protein_cluster is an optimized and extensible toolkit that supports Root Mean Square Deviation after optimal superposition (RMSD) and Template Modeling score (TM-score) as metrics. RMSD calculations using a laptop CPU are 60× faster than qcprot and 3× faster than current graphics processing unit (GPU) implementations. New GPU code further increases the speed of RMSD and TM-score calculations. fast_protein_cluster provides novel k-means and hierarchical clustering methods that are up to 250× and 2000× faster, respectively, than Clusco, and identify significantly more accurate models than Spicker and Clusco. fast_protein_cluster is written in C++ using OpenMP for multi-threading support. Custom streaming Single Instruction Multiple Data (SIMD) extensions and advanced vector extension intrinsics code accelerate CPU calculations, and OpenCL kernels support AMD and Nvidia GPUs. fast_protein_cluster is available under the M.I.T. license. (http://software.compbio.washington.edu/fast_protein_cluster) © The Author 2014. Published by Oxford University Press.
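
    The RMSD-after-optimal-superposition metric named above can be sketched with the standard Kabsch algorithm. This is a plain NumPy illustration of the metric, not the package's SIMD- or GPU-optimized code.

```python
import numpy as np

def rmsd_after_superposition(X, Y):
    """RMSD between two (N, 3) coordinate sets after optimal rigid
    superposition (Kabsch algorithm): center both sets, find the optimal
    rotation via SVD of the covariance matrix, then compute the RMSD."""
    X0 = X - X.mean(axis=0)
    Y0 = Y - Y.mean(axis=0)
    H = Y0.T @ X0                               # 3x3 covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T     # optimal rotation
    return np.sqrt(np.mean(np.sum((X0 - Y0 @ R.T) ** 2, axis=1)))
```

    A rigidly rotated and translated copy of a structure gives an RMSD of essentially zero, while perturbed coordinates give a positive value.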

  3. New evaluation of thermal neutron scattering libraries for light and heavy water

    NASA Astrophysics Data System (ADS)

    Marquez Damian, Jose Ignacio; Granada, Jose Rolando; Cantargi, Florencia; Roubtsov, Danila

    2017-09-01

    In order to improve the design and safety of thermal nuclear reactors and to verify criticality safety conditions in systems with significant amounts of fissile materials and water, it is necessary to perform high-precision neutron transport calculations and estimate the uncertainties of the results. These calculations are based on neutron interaction data distributed in evaluated nuclear data libraries. To improve the evaluations of thermal scattering sub-libraries, we developed a set of thermal neutron scattering cross sections (scattering kernels) for hydrogen bound in light water, and for deuterium and oxygen bound in heavy water, in the ENDF-6 format, from room temperature up to the critical temperatures of the molecular liquids. The new evaluations are processable with NJOY99, with NJOY-2012 with minor modifications (updates), and with the new version, NJOY-2016. The new TSL libraries are based on molecular dynamics simulations with GROMACS and recent experimental data, and result in an improvement of the calculation of single neutron scattering quantities. In this work, we discuss the importance of taking into account self-diffusion in liquids to accurately describe neutron scattering at low neutron energies (the quasi-elastic peak problem). To improve the modelling of heavy water, it is important to take into account temperature-dependent static structure factors and to apply the Sköld approximation to the coherent inelastic components of the scattering matrix. The use of the new set of scattering matrices and cross sections improves the calculation of thermal critical systems moderated and/or reflected with light/heavy water obtained from the International Criticality Safety Benchmark Evaluation Project (ICSBEP) handbook. For example, the use of the new thermal scattering library for heavy water, combined with the ROSFOND-2010 evaluation of the cross sections for deuterium, results in an improvement of the C/E ratio in 48 out of 65 international benchmark cases calculated with the Monte Carlo code MCNP5, in comparison with the existing library based on the ENDF/B-VII.0 evaluation.

  4. Applicability of the Effective-Medium Approximation to Heterogeneous Aerosol Particles.

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Dlugach, Janna M.; Liu, Li

    2016-01-01

    The effective-medium approximation (EMA) is based on the assumption that a heterogeneous particle can have a homogeneous counterpart possessing similar scattering and absorption properties. We analyze the numerical accuracy of the EMA by comparing superposition T-matrix computations for spherical aerosol particles filled with numerous randomly distributed small inclusions and Lorenz-Mie computations based on the Maxwell-Garnett mixing rule. We verify numerically that the EMA can indeed be realized for inclusion size parameters smaller than a threshold value. The threshold size parameter depends on the refractive-index contrast between the host and inclusion materials and quite often does not exceed several tenths, especially in calculations of the scattering matrix and the absorption cross section. As the inclusion size parameter approaches the threshold value, the scattering-matrix errors of the EMA start to grow with increasing host size parameter and/or number of inclusions. We confirm, in particular, the existence of the effective-medium regime in the important case of dust aerosols with hematite or air-bubble inclusions, but then the large refractive-index contrast necessitates inclusion size parameters of the order of a few tenths. Irrespective of the highly restricted conditions of applicability of the EMA, our results provide further evidence that the effective-medium regime must be a direct corollary of the macroscopic Maxwell equations under specific assumptions.
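
    The Maxwell-Garnett mixing rule used in the Lorenz-Mie comparison has a simple closed form for a host containing spherical inclusions of volume fraction f. The refractive indices used below are illustrative, not the paper's hematite/air-bubble values.

```python
def maxwell_garnett(m_host, m_incl, f):
    """Maxwell-Garnett effective refractive index for inclusions of complex
    refractive index m_incl occupying volume fraction f of a host m_host."""
    e_h, e_i = m_host**2, m_incl**2            # complex permittivities
    num = e_i + 2 * e_h + 2 * f * (e_i - e_h)
    den = e_i + 2 * e_h - f * (e_i - e_h)
    return (e_h * num / den) ** 0.5

# Example: a water-like host with 30% lossy inclusions (illustrative indices).
m_eff = maxwell_garnett(1.33 + 0j, 2.9 + 0.5j, 0.3)
```

    The rule reduces to the host index at f = 0 and to the inclusion index at f = 1, with the effective index interpolating in between.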

  5. An exploration in acoustic radiation force experienced by cylindrical shells via resonance scattering theory.

    PubMed

    Rajabi, Majid; Behzad, Mehdi

    2014-04-01

    In the nonlinear acoustic regime, a body insonified by a sound field is known to experience a steady force called the acoustic radiation force (RF). This force is a second-order quantity of the velocity potential function of the ambient medium. Exploiting the sufficiency of the linear solution representation of the potential function in the RF formulation, and following the classical resonance scattering theory (RST), which suggests the scattered field as a superposition of the resonant field and a background (non-resonant) component, we show that the radiation force is a composition of three components: a background part, a resonant part and their interaction. Due to the nonlinearity effects, each part contains the contribution of pure partial waves in addition to their mutual interaction. The numerical results propose the residue component (i.e., the subtraction of the background component from the RF) as a good indicator of the contribution of circumferential surface waves to the RF. Defining the modal series of the radiation force function and its components, it is shown that within each partial wave, the resonance contribution can be synthesized in the Breit-Wigner form for adequately separated resonant frequencies. The proposed formulation may be helpful essentially due to its inherent value as a canonical subject in physical acoustics. Furthermore, it may offer a route to reducing the effects of circumferential resonances on radiation forces. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Three-dimensional polarization states of monochromatic light fields.

    PubMed

    Azzam, R M A

    2011-11-01

    The 3×1 generalized Jones vectors (GJVs) [E(x) E(y) E(z)](t) (t indicates the transpose) that describe the linear, circular, and elliptical polarization states of an arbitrary three-dimensional (3-D) monochromatic light field are determined in terms of the geometrical parameters of the 3-D vibration of the time-harmonic electric field. In three dimensions, there are as many distinct linear polarization states as there are points on the surface of a hemisphere, and the number of distinct 3-D circular polarization states equals that of all two-dimensional (2-D) polarization states on the Poincaré sphere, of which only two are circular states. The subset of 3-D polarization states that results from the superposition of three mutually orthogonal x, y, and z field components of equal amplitude is considered as a function of their relative phases. Interesting contours of equal ellipticity and equal inclination of the normal to the polarization ellipse with respect to the x axis are obtained in 2-D phase space. Finally, the 3×3 generalized Jones calculus, in which elastic scattering (e.g., by a nano-object in the near field) is characterized by the 3-D linear transformation E(s)=T E(i), is briefly introduced. In such a matrix transformation, E(i) and E(s) are the 3×1 GJVs of the incident and scattered waves and T is the 3×3 generalized Jones matrix of the scatterer at a given frequency and for given directions of incidence and scattering.
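
    The 3×3 generalized Jones formalism introduced at the end of the abstract amounts to a single matrix-vector product, E_s = T E_i. The helper below builds the equal-amplitude generalized Jones vector discussed in the text from the two relative phases; the unit normalization and the example matrix are our own illustrative conventions.

```python
import numpy as np

def gjv(delta_y, delta_z):
    """Generalized Jones vector [Ex Ey Ez]^t for equal-amplitude x, y, z
    field components with relative phases delta_y and delta_z
    (unit-normalized for convenience)."""
    v = np.array([1.0, np.exp(1j * delta_y), np.exp(1j * delta_z)])
    return v / np.linalg.norm(v)

def scatter(T, E_i):
    """Elastic scattering as the 3-D linear transformation E_s = T @ E_i,
    with T the 3x3 generalized Jones matrix of the scatterer."""
    return T @ E_i
```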

  7. Investigation on the Accuracy of Superposition Predictions of Film Cooling Effectiveness

    NASA Astrophysics Data System (ADS)

    Meng, Tong; Zhu, Hui-ren; Liu, Cun-liang; Wei, Jian-sheng

    2018-05-01

    Film cooling effectiveness on flat plates with double rows of holes has been studied experimentally and numerically in this paper. This configuration is widely used to simulate multi-row film cooling on a turbine vane. The film cooling effectiveness of double rows of holes and of each single row was used to study the accuracy of superposition predictions. A stable infrared measurement technique was used to measure the surface temperature on the flat plate. This paper analyzes the factors that affect the film cooling effectiveness, including hole shape, hole arrangement, row-to-row spacing and blowing ratio. Numerical simulations were performed to analyze the flow structure and film cooling mechanisms between the film cooling rows. Results show that the blowing ratio, within the range of 0.5 to 2, has a significant influence on the accuracy of superposition predictions. At low blowing ratios, results obtained by the superposition method agree well with the experimental data, while at high blowing ratios the accuracy of the superposition prediction decreases. Another significant factor is the hole arrangement. Results obtained by superposition prediction are nearly the same as the experimental values for staggered arrangements, whereas for in-line configurations the superposition values of film cooling effectiveness are much higher than the experimental data. Among the hole shapes, the accuracy of superposition predictions for converging-expanding holes is better than for cylindrical holes and compound-angle holes. For the two hole-spacing structures considered in this paper, predictions show good agreement with the experimental results.
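
    Row-by-row superposition predictions of film cooling effectiveness are commonly computed with a Sellers-type product rule. The sketch below assumes that form (the abstract does not spell out its formula), with hypothetical per-row effectiveness values.

```python
def superposed_effectiveness(etas):
    """Sellers-type superposition of adiabatic film cooling effectiveness
    for successive rows: eta_total = 1 - prod(1 - eta_i). A sketch of the
    kind of superposition prediction evaluated against experiment."""
    remainder = 1.0
    for eta in etas:
        remainder *= (1.0 - eta)   # fraction of hot-gas influence left
    return 1.0 - remainder

# Example: two rows with hypothetical single-row effectiveness 0.3 and 0.4.
eta_double = superposed_effectiveness([0.3, 0.4])
```

    Adding a row always raises the predicted effectiveness but never above 1, which is the qualitative behavior the paper compares against measured double-row data.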

  8. Quantum superposition at the half-metre scale.

    PubMed

    Kovachy, T; Asenbaum, P; Overstreet, C; Donnelly, C A; Dickerson, S M; Sugarbaker, A; Hogan, J M; Kasevich, M A

    2015-12-24

    The quantum superposition principle allows massive particles to be delocalized over distant positions. Though quantum mechanics has proved adept at describing the microscopic world, quantum superposition runs counter to intuitive conceptions of reality and locality when extended to the macroscopic scale, as exemplified by the thought experiment of Schrödinger's cat. Matter-wave interferometers, which split and recombine wave packets in order to observe interference, provide a way to probe the superposition principle on macroscopic scales and explore the transition to classical physics. In such experiments, large wave-packet separation is impeded by the need for long interaction times and large momentum beam splitters, which cause susceptibility to dephasing and decoherence. Here we use light-pulse atom interferometry to realize quantum interference with wave packets separated by up to 54 centimetres on a timescale of 1 second. These results push quantum superposition into a new macroscopic regime, demonstrating that quantum superposition remains possible at the distances and timescales of everyday life. The sub-nanokelvin temperatures of the atoms and a compensation of transverse optical forces enable a large separation while maintaining an interference contrast of 28 per cent. In addition to testing the superposition principle in a new regime, large quantum superposition states are vital to exploring gravity with atom interferometers in greater detail. We anticipate that these states could be used to increase sensitivity in tests of the equivalence principle, measure the gravitational Aharonov-Bohm effect, and eventually detect gravitational waves and phase shifts associated with general relativity.

  9. Thermalization as an invisibility cloak for fragile quantum superpositions

    NASA Astrophysics Data System (ADS)

    Hahn, Walter; Fine, Boris V.

    2017-07-01

    We propose a method for protecting fragile quantum superpositions in many-particle systems from dephasing by external classical noise. We call superpositions "fragile" if dephasing occurs particularly fast, because the noise couples very differently to the superposed states. The method consists of letting a quantum superposition evolve under the internal thermalization dynamics of the system, followed by a time-reversal manipulation known as a Loschmidt echo. The thermalization dynamics makes the superposed states almost indistinguishable during most of the above procedure. We validate the method by applying it to a cluster of spin-1/2 particles.
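
    The echo step can be illustrated on a toy two-spin system: evolving forward under a Hamiltonian H and then under -H for the same time returns the initial state exactly. A minimal numerical sketch (the Hamiltonian below is illustrative, not the spin-cluster model of the paper):

```python
import numpy as np
from scipy.linalg import expm

# Minimal Loschmidt-echo sketch: forward evolution under H followed by
# time-reversed evolution under -H restores the initial state, so the
# fidelity |<psi0|psi>|^2 returns to 1.

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

# Two-spin Hamiltonian with illustrative exchange couplings:
H = np.kron(sx, sx) + 0.5 * np.kron(sz, sz)

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                          # start in |up, up>
t = 2.7
psi = expm(-1j * H * t) @ psi0         # "thermalization" dynamics
psi = expm(+1j * H * t) @ psi          # time-reversal manipulation (echo)

print(abs(np.vdot(psi0, psi))**2)      # perfect echo: fidelity ~ 1
```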

  10. Design and simulation of a superposition compound eye system based on hybrid diffractive-refractive lenses.

    PubMed

    Zhang, Shuqing; Zhou, Luyang; Xue, Changxi; Wang, Lei

    2017-09-10

    Compound eyes offer a promising route to miniaturized imaging systems. A superposition compound eye system forms a composite image by superposing the images produced by its different channels. The geometric configuration of superposition compound eye systems is achieved by three micro-lens arrays with different pitches and focal lengths. High resolution is indispensable for the practicability of superposition compound eye systems. In this paper, hybrid diffractive-refractive lenses are introduced into the design of a compound eye system for this purpose. With the help of ZEMAX, two superposition compound eye systems, with and without hybrid diffractive-refractive lenses, were designed separately. We then demonstrate the effectiveness of using a hybrid diffractive-refractive lens to improve the image quality.

  11. Supernova Neutrino Opacity from Nucleon-Nucleon Bremsstrahlung and Related Processes

    NASA Astrophysics Data System (ADS)

    Hannestad, Steen; Raffelt, Georg

    1998-11-01

    Elastic scattering on nucleons, νN --> Nν, is the dominant supernova (SN) opacity source for μ and τ neutrinos. The dominant energy- and number-changing processes were thought to be νe- --> e-ν and νν¯<-->e+e- until Suzuki showed that the bremsstrahlung process νν¯NN<-->NN was actually more important. We find that for energy exchange, the related ``inelastic scattering process'' νNN<-->NNν is even more effective by about a factor of 10. A simple estimate implies that the νμ and ντ spectra emitted during the Kelvin-Helmholtz cooling phase are much closer to that of ν¯e than had been thought previously. To facilitate a numerical study of the spectra formation we derive a scattering kernel that governs both bremsstrahlung and inelastic scattering and give an analytic approximation formula. We consider only neutron-neutron interactions; we use a one-pion exchange potential in Born approximation, nonrelativistic neutrons, and the long-wavelength limit, simplifications that appear justified for the surface layers of an SN core. We include the pion mass in the potential, and we allow for an arbitrary degree of neutron degeneracy. Our treatment does not include the neutron-proton process and does not include nucleon-nucleon correlations. Our perturbative approach applies only to the SN surface layers, i.e., to densities below about 1014 g cm-3.

  12. Antecedent Synoptic Environments Conducive to North American Polar/Subtropical Jet Superpositions

    NASA Astrophysics Data System (ADS)

    Winters, A. C.; Keyser, D.; Bosart, L. F.

    2017-12-01

    The atmosphere often exhibits a three-step pole-to-equator tropopause structure, with each break in the tropopause associated with a jet stream. The polar jet stream (PJ) typically resides in the break between the polar and subtropical tropopause and is positioned atop the strongly baroclinic, tropospheric-deep polar front around 50°N. The subtropical jet stream (STJ) resides in the break between the subtropical and the tropical tropopause and is situated on the poleward edge of the Hadley cell around 30°N. On occasion, the latitudinal separation between the PJ and the STJ can vanish, resulting in a vertical jet superposition. Prior case study work indicates that jet superpositions are often attended by a vigorous transverse vertical circulation that can directly impact the production of extreme weather over North America. Furthermore, this work suggests that there is considerable variability among antecedent environments conducive to the production of jet superpositions. These considerations motivate a comprehensive study to examine the synoptic-dynamic mechanisms that operate within the double-jet environment to produce North American jet superpositions. This study focuses on the identification of North American jet superposition events in the CFSR dataset during November-March 1979-2010. Superposition events will be classified into three characteristic types: "Polar Dominant" events will consist of events during which only the PJ is characterized by a substantial excursion from its climatological latitude band; "Subtropical Dominant" events will consist of events during which only the STJ is characterized by a substantial excursion from its climatological latitude band; and "Hybrid" events will consist of those events characterized by an excursion of both the PJ and STJ from their climatological latitude bands. 
Following their classification, frequency distributions of jet superpositions will be constructed to highlight the geographical locations most often associated with jet superpositions for each event type. PV inversion and composite analysis will also be performed on each event type in an effort to illustrate the antecedent environments and the dominant synoptic-dynamic mechanisms that favor the production of North American jet superpositions for each event type.
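
    The three-way classification described above can be expressed as a simple decision rule on the latitudinal excursions of the two jets. A schematic sketch, where the excursion threshold is a hypothetical cutoff, not a value from the study:

```python
# Schematic classification of a jet-superposition event by which jet(s)
# departed substantially from their climatological latitude bands.
# The 5-degree threshold is a hypothetical cutoff for illustration.

def classify_superposition(pj_excursion_deg, stj_excursion_deg, threshold=5.0):
    pj = abs(pj_excursion_deg) > threshold    # polar jet excursion
    stj = abs(stj_excursion_deg) > threshold  # subtropical jet excursion
    if pj and stj:
        return "Hybrid"
    if pj:
        return "Polar Dominant"
    if stj:
        return "Subtropical Dominant"
    return "No substantial excursion"

print(classify_superposition(-12.0, 2.0))  # Polar Dominant
```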

  13. Modularized seismic full waveform inversion based on waveform sensitivity kernels - The software package ASKI

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang; Lamara, Samir; Gutt, Phillip; Paffrath, Marcel

    2015-04-01

    We present a seismic full waveform inversion concept for applications ranging from seismological to engineering contexts, based on sensitivity kernels for full waveforms. The kernels are derived from Born scattering theory as the Fréchet derivatives of linearized frequency-domain full waveform data functionals, quantifying the influence of elastic earth model parameters and density on the data values. For a specific source-receiver combination, the kernel is computed from the displacement and strain field spectrum originating from the source evaluated throughout the inversion domain, as well as the Green function spectrum and its strains originating from the receiver. By storing the wavefield spectra of specific sources/receivers, they can be re-used for kernel computation for different specific source-receiver combinations, optimizing the total number of required forward simulations. In the iterative inversion procedure, the solution of the forward problem, the computation of sensitivity kernels and the derivation of a model update are kept completely separate. In particular, the model description for the forward problem and the description of the inverted model update are kept independent. Hence, the resolution of the inverted model as well as the complexity of solving the forward problem can be iteratively increased (with increasing frequency content of the inverted data subset). This may regularize the overall inverse problem and optimize the computational effort of both solving the forward problem and computing the model update. The required interconnection of arbitrary unstructured volume and point grids is realized by generalized high-order integration rules and 3D-unstructured interpolation methods. The model update is inferred by solving a minimization problem in a least-squares sense, resulting in Gauss-Newton convergence of the overall inversion process. 
The inversion method was implemented in the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion), which provides a generalized interface to arbitrary external forward modelling codes. So far, the 3D spectral-element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995), in both Cartesian and spherical frameworks, are supported. The creation of interfaces to further forward codes is planned in the near future. ASKI is freely available under the terms of the GPL at www.rub.de/aski . Since the independent modules of ASKI must communicate via file output/input, large storage capacities need to be accessible conveniently. Storing the complete sensitivity matrix to file, however, gives the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion. In the presentation, we will show some aspects of the theory behind the full waveform inversion method and its practical realization by the software package ASKI, as well as synthetic and real-data applications from different scales and geometries.
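
    The kernel construction described above (source wavefield times receiver Green function, evaluated throughout the inversion domain) can be sketched in a scalar acoustic simplification. The fields, grid, and frequency factor below are illustrative stand-ins, not ASKI's elastic kernels:

```python
import numpy as np

# Schematic scalar Born sensitivity kernel: at each grid point the
# kernel is a product of the wavefield spectrum radiated from the
# source and the Green-function spectrum from the receiver
# (an acoustic simplification of the elastic case).

def scalar_kernel(u_source, g_receiver, omega):
    """u_source, g_receiver: complex spectra sampled on the grid."""
    return omega**2 * u_source * g_receiver

# Hypothetical 1-D grid of far-field spectra between a source at x=0
# and a receiver at x=10:
x = np.linspace(1.0, 9.0, 50)
omega = 2 * np.pi * 0.5
u = np.exp(1j * omega * x) / x                  # field from the source
g = np.exp(1j * omega * (10.0 - x)) / (10.0 - x)  # field from the receiver
K = scalar_kernel(u, g, omega)
print(K.shape)  # (50,)
```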

  14. On magnetic structure of CuFe2Ge2: Constraints from 57Fe Mössbauer spectroscopy

    DOE PAGES

    Bud’ko, Sergey L.; Jo, Na Hyun; Downing, Savannah S.; ...

    2017-09-20

    57Fe Mössbauer spectroscopy measurements were performed on a powdered CuFe2Ge2 sample that orders antiferromagnetically at ~175 K. Whereas a paramagnetic doublet was observed above the Néel temperature, a superposition of a paramagnetic doublet and a magnetic sextet (in approximately 0.5:0.5 ratio) was observed in the magnetically ordered state, suggesting a magnetic structure similar to a double-Q spin density wave with half of the Fe sites paramagnetic and the other half bearing a static moment of ~0.5–1 μB. Lastly, these results call for a re-evaluation of the recent neutron scattering data and band structure calculations, as well as for a deeper examination of details of sample preparation techniques.

  15. Nonlinear propagation of vector extremely short pulses in a medium of symmetric and asymmetric molecules

    NASA Astrophysics Data System (ADS)

    Sazonov, S. V.; Ustinov, N. V.

    2017-02-01

    The nonlinear propagation of extremely short electromagnetic pulses in a medium of symmetric and asymmetric molecules placed in static magnetic and electric fields is theoretically studied. Asymmetric molecules differ in that they have nonzero permanent dipole moments in stationary quantum states. A system of wave equations is derived for the ordinary and extraordinary components of pulses. It is shown that this system can be reduced in some cases to a system of coupled Ostrovsky equations and to equations integrable by the inverse scattering transform method, including the vector version of the Ostrovsky-Vakhnenko equation. Different types of solutions of this system are considered. Only the solutions representing a superposition of periodic solutions are single-valued, whereas the soliton and breather solutions are multivalued.

  16. Non-classical State via Superposition of Two Opposite Coherent States

    NASA Astrophysics Data System (ADS)

    Ren, Gang; Du, Jian-ming; Yu, Hai-jun

    2018-04-01

    We study the non-classical properties of the states generated by superpositions of two opposite coherent states with arbitrary relative phase factors. We show that the relative phase factor plays an important role in these superpositions. We demonstrate this result by discussing their squeezing properties, quantum statistical properties, and fidelity in principle.

  17. Ultrafast creation of large Schrödinger cat states of an atom.

    PubMed

    Johnson, K G; Wong-Campos, J D; Neyenhuis, B; Mizrahi, J; Monroe, C

    2017-09-26

    Mesoscopic quantum superpositions, or Schrödinger cat states, are widely studied for fundamental investigations of quantum measurement and decoherence as well as applications in sensing and quantum information science. The generation and maintenance of such states relies upon a balance between efficient external coherent control of the system and sufficient isolation from the environment. Here we create a variety of cat states of a single trapped atom's motion in a harmonic oscillator using ultrafast laser pulses. These pulses produce high fidelity impulsive forces that separate the atom into widely separated positions, without restrictions that typically limit the speed of the interaction or the size and complexity of the resulting motional superposition. This allows us to quickly generate and measure cat states larger than previously achieved in a harmonic oscillator, and create complex multi-component superposition states in atoms. Generation of mesoscopic quantum superpositions requires both reliable coherent control and isolation from the environment. Here, the authors succeed in creating a variety of cat states of a single trapped atom, mapping spin superpositions into spatial superpositions using ultrafast laser pulses.

  18. Simultaneous inversion of intrinsic and scattering attenuation parameters incorporating multiple scattering effect

    NASA Astrophysics Data System (ADS)

    Ogiso, M.

    2017-12-01

    Heterogeneous attenuation structure is important not only for understanding earth structure and seismotectonics, but also for ground motion prediction. Attenuation of ground motion in the high-frequency range is often characterized by the distribution of intrinsic and scattering attenuation parameters (intrinsic Q and the scattering coefficient). From the viewpoint of ground motion prediction, both intrinsic and scattering attenuation affect the maximum amplitude of ground motion, while scattering attenuation also affects the duration of ground motion. Hence, estimating both attenuation parameters will improve ground motion prediction. In this study, we estimate both parameters in southwestern Japan in a tomographic manner. We conduct envelope fitting of seismic coda, since the coda is sensitive to both intrinsic attenuation and scattering coefficients. Recently, Takeuchi (2016) successfully calculated the differential envelope when these parameters have fluctuations. We adopted his equations to calculate partial derivatives with respect to these parameters, since we did not need to assume a homogeneous velocity structure. The matrix for inversion of structural parameters would become too large to solve in a straightforward manner, so we adopted the ART-type Bayesian reconstruction method (Hirahara, 1998) to project the difference of envelopes onto structural parameters iteratively. We conducted a checkerboard reconstruction test, assuming a checkerboard pattern with 0.4 degree intervals in the horizontal direction and 20 km in the depth direction. The reconstructed structures reproduced the assumed pattern well in the shallower part but not in the deeper part. Since the inversion kernel has large sensitivity around the source and stations, resolution in the deeper part would be limited by the sparse distribution of earthquakes. To apply the inversion method described above to actual waveforms, we have to correct for the effects of the source and site amplification terms. 
We consider these issues in estimating the actual intrinsic and scattering structures of the target region. Acknowledgment: We used the waveforms of Hi-net, NIED. This study was supported by the Earthquake Research Institute of the University of Tokyo cooperative research program.

  19. Kernel abortion in maize : I. Carbohydrate concentration patterns and Acid invertase activity of maize kernels induced to abort in vitro.

    PubMed

    Hanft, J M; Jones, R J

    1986-06-01

    Kernels cultured in vitro were induced to abort by high temperature (35 degrees C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35 degrees C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth.

  20. Practical purification scheme for decohered coherent-state superpositions via partial homodyne detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, Shigenari; Department of Electronics and Electrical Engineering, Keio University, 3-14-1, Hiyoshi, Kohoku-ku, Yokohama, 223-8522; Takeoka, Masahiro

    2006-04-15

    We present a simple protocol to purify a coherent-state superposition that has undergone a linear lossy channel. The scheme consists of only a single beam splitter and a homodyne detector, and is thus experimentally feasible. In practice, a superposition of coherent states is transformed into a classical mixture of coherent states by linear loss, which is usually the dominant decoherence mechanism in optical systems. We also address the possibility of producing a larger-amplitude superposition state from decohered states, and show that in most cases the decoherence of the states is amplified along with the amplitude.

  1. The principle of superposition and its application in ground-water hydraulics

    USGS Publications Warehouse

    Reilly, Thomas E.; Franke, O. Lehn; Bennett, Gordon D.

    1987-01-01

    The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important applications in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that problem solutions can be added together to obtain composite solutions. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to ground-water hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader.
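
    A concrete instance of the principle: in a linear confined-aquifer system, the Theis drawdowns of individual pumping wells simply add to give the composite drawdown. A minimal sketch with illustrative parameter values (not from the report):

```python
import numpy as np
from scipy.special import exp1

# Superposition in a linear ground-water system: the Theis drawdown
# s = Q/(4*pi*T) * W(u), with u = r^2 S / (4 T t) and well function
# W(u) = exp1(u), is linear in Q, so drawdowns of separate wells add.

def theis_drawdown(Q, T, S, r, t):
    """Drawdown (m) at distance r (m) and time t (s) for pumping rate
    Q (m^3/s), transmissivity T (m^2/s), storativity S (-)."""
    u = r**2 * S / (4 * T * t)
    return Q / (4 * np.pi * T) * exp1(u)

T, S, t = 1e-3, 1e-4, 86400.0          # illustrative aquifer, one day
s1 = theis_drawdown(0.01, T, S, r=50.0, t=t)   # well 1 alone
s2 = theis_drawdown(0.02, T, S, r=120.0, t=t)  # well 2 alone
s_both = s1 + s2                                # composite solution
print(round(s_both, 3))
```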

  2. The principle of superposition and its application in ground-water hydraulics

    USGS Publications Warehouse

    Reilly, T.E.; Franke, O.L.; Bennett, G.D.

    1984-01-01

    The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important applications in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that solutions to individual problems can be added together to obtain solutions to complex problems. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to groundwater hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader. (USGS)

  3. Radiative Heat Transfer in Finite Cylindrical Enclosures with Nonhomogeneous Participating Media

    NASA Technical Reports Server (NTRS)

    Hsu, Pei-Feng; Ku, Jerry C.

    1994-01-01

    Results of a numerical solution for radiative heat transfer in homogeneous and nonhomogeneous participating media are presented. The geometry of interest is a finite axisymmetric cylindrical enclosure. The integral formulation for radiative transport is solved by the YIX method. A three-dimensional solution scheme is applied to two-dimensional axisymmetric geometry to simplify kernel calculations and to avoid difficulties associated with treating boundary conditions. As part of the effort to improve modeling capabilities for turbulent jet diffusion flames, predicted distributions for flame temperature and soot volume fraction are used to calculate radiative heat transfer from soot particles in such flames. It is shown that the nonhomogeneity of radiative property has very significant effects. The peak value of the divergence of radiative heat flux could be underestimated by a factor of 7 if a mean homogeneous radiative property is used. Since recent studies have shown that scattering by soot agglomerates is significant in flames, the effect of magnitude of scattering is also investigated and found to be nonnegligible.

  4. Global multiresolution models of surface wave propagation: comparing equivalently regularized Born and ray theoretical solutions

    NASA Astrophysics Data System (ADS)

    Boschi, Lapo

    2006-10-01

    I invert a large set of teleseismic phase-anomaly observations, to derive tomographic maps of fundamental-mode surface wave phase velocity, first via ray theory, then accounting for finite-frequency effects through scattering theory, in the far-field approximation and neglecting mode coupling. I make use of a multiple-resolution pixel parametrization which, in the assumption of sufficient data coverage, should be adequate to represent strongly oscillatory Fréchet kernels. The parametrization is finer over North America, a region particularly well covered by the data. For each surface-wave mode where phase-anomaly observations are available, I derive a wide spectrum of plausible, differently damped solutions; I then conduct a trade-off analysis, and select as optimal solution model the one associated with the point of maximum curvature on the trade-off curve. I repeat this exercise in both theoretical frameworks, to find that selected scattering and ray theoretical phase-velocity maps are coincident in pattern, and differ only slightly in amplitude.
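
    The maximum-curvature selection on the trade-off curve can be sketched numerically. The synthetic misfit/model-norm branches below stand in for the real inversion output; only the selection mechanism is illustrated:

```python
import numpy as np

# Selecting the optimal damping on a trade-off (L-) curve by locating
# the point of maximum curvature along the sampled curve.

def max_curvature_index(x, y):
    """Index of maximum curvature along a sampled curve (x_i, y_i)."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    return int(np.argmax(kappa))

damping = np.logspace(-3, 1, 41)
misfit = 1.0 + damping**2         # grows with damping (hypothetical)
model_norm = 1.0 + 1.0 / damping**2  # shrinks with damping (hypothetical)

# Work in log-log space, where the corner of the L-curve is sharpest:
i_opt = max_curvature_index(np.log(misfit), np.log(model_norm))
print(damping[i_opt])
```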

  5. Data-driven sensitivity inference for Thomson scattering electron density measurement systems.

    PubMed

    Fujii, Keisuke; Yamada, Ichihiro; Hasuo, Masahiro

    2017-01-01

    We developed a method to infer the calibration parameters of multichannel measurement systems, such as channel variations of sensitivity and noise amplitude, from experimental data. We regard such uncertainties of the calibration parameters as dependent noise. The statistical properties of the dependent noise and those of the latent functions were modeled and implemented in the Gaussian process kernel. Based on their statistical difference, both parameters were inferred from the data. We applied this method to the electron density measurement system by Thomson scattering for the Large Helical Device plasma, which is equipped with 141 spatial channels. Based on 210 sets of experimental data, we evaluated the correction factor of the sensitivity and noise amplitude for each channel. The correction factor varies by ≈10%, and the random noise amplitude is ≈2%; i.e., the measurement accuracy increases by a factor of 5 after this sensitivity correction. An improvement in the certainty of the spatial-derivative inference was also demonstrated.

  6. 7 CFR 810.602 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of flaxseed kernels that are badly ground-damaged, badly weather... instructions. Also, underdeveloped, shriveled, and small pieces of flaxseed kernels removed in properly... recleaning. (c) Heat-damaged kernels. Kernels and pieces of flaxseed kernels that are materially discolored...

  7. Performance evaluation of four directional emissivity analytical models with thermal SAIL model and airborne images.

    PubMed

    Ren, Huazhong; Liu, Rongyuan; Yan, Guangjian; Li, Zhao-Liang; Qin, Qiming; Liu, Qiang; Nerry, Françoise

    2015-04-06

    Land surface emissivity is a crucial parameter in the surface status monitoring. This study aims at the evaluation of four directional emissivity models, including two bi-directional reflectance distribution function (BRDF) models and two gap-frequency-based models. Results showed that the kernel-driven BRDF model could well represent directional emissivity with an error less than 0.002, and was consequently used to retrieve emissivity with an accuracy of about 0.012 from an airborne multi-angular thermal infrared data set. Furthermore, we updated the cavity effect factor relating to multiple scattering inside canopy, which improved the performance of the gap-frequency-based models.
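
    Kernel-driven BRDF models of the kind evaluated above are linear in their coefficients, R = f_iso + f_vol * K_vol + f_geo * K_geo, so the coefficients follow from linear least squares over the multi-angular observations. A schematic sketch; the kernel values and coefficients below are stand-ins, not Ross-Li evaluations from the study:

```python
import numpy as np

# Fitting a kernel-driven BRDF model by linear least squares:
# R = f_iso + f_vol * K_vol + f_geo * K_geo at each observation angle.

rng = np.random.default_rng(1)
n = 12                                    # multi-angular observations
K_vol = rng.uniform(-0.2, 0.6, n)         # stand-in volumetric kernel values
K_geo = rng.uniform(-1.5, 0.0, n)         # stand-in geometric kernel values
A = np.column_stack([np.ones(n), K_vol, K_geo])

f_true = np.array([0.95, 0.05, 0.02])     # hypothetical coefficients
R = A @ f_true + rng.normal(0, 1e-4, n)   # simulated directional signal

f_hat, *_ = np.linalg.lstsq(A, R, rcond=None)
print(np.round(f_hat, 3))
```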

  8. McStas-model of the delft SESANS

    NASA Astrophysics Data System (ADS)

    Knudsen, E.; Udby, L.; Willendrup, P. K.; Lefmann, K.; Bouwman, W. G.

    2011-06-01

    We present simulation results taking first virtual data from a model of the Spin-Echo Small Angle Scattering (SESANS) instrument situated in Delft, in the framework of the McStas Monte Carlo software package. The main focus has been on making a model of the Delft SESANS instrument, and we can now present the first virtual data from it, using a refracting prism-like sample model. In consequence, polarisation instrumentation is now included natively in the McStas kernel, including options for magnetic fields and a number of utility components. This development has brought us to a point where realistic models of polarisation-enabled instrumentation can be built.

  9. Kernel Abortion in Maize 1

    PubMed Central

    Hanft, Jonathan M.; Jones, Robert J.

    1986-01-01

    Kernels cultured in vitro were induced to abort by high temperature (35°C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35°C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth. PMID:16664846

  10. On possibility of time reversal symmetry violation in neutrino elastic scattering on polarized electron target

    NASA Astrophysics Data System (ADS)

    Sobków, W.; Błaut, A.

    2018-03-01

    In this paper we indicate a possibility of utilizing the elastic scattering of Dirac low-energy (~1 MeV) electron neutrinos (ν_e) on a polarized electron target (PET) in testing time reversal symmetry violation (TRSV). We consider a scenario in which the incoming ν_e beam is a superposition of left chiral (LC) and right chiral (RC) states. LC ν_e interact mainly by the standard V-A interaction and a small admixture of non-standard scalar S_L, pseudoscalar P_L, and tensor T_L interactions, while RC ones are detected only by the exotic V+A and S_R, P_R, T_R interactions. As a result of the superposition of the two chiralities, the transverse components of the ν_e spin polarization (T-even and T-odd) may appear. We compute the differential cross section as a function of the recoil electron azimuthal angle and scattered electron energy, and show how the interference terms between the standard V-A and exotic S_R, P_R, T_R couplings depend on the various angular correlations among the transversal ν_e spin polarization, the polarization of the electron target, the incoming neutrino momentum and the outgoing electron momentum in the limit of relativistic ν_e. We illustrate how the maximal value of the recoil-electron azimuthal asymmetry and the asymmetry-axis location of outgoing electrons depend on the azimuthal angle of the transversal component of the ν_e spin polarization, both for time reversal symmetry conservation (TRSC) and TRSV. Next, we show that the electron energy spectrum and polar angle distribution of the recoil electrons are also sensitive to the interference terms between the V-A and S_R, P_R, T_R couplings, proportional to the T-even and T-odd angular correlations among the transversal ν_e polarization, the electron polarization of the target, and the incoming ν_e momentum, respectively. 
We also discuss the possibility of testing TRSV by observing the azimuthal asymmetry of outgoing electrons, using the PET without the impact of the transversal ν polarization related to the production process. In this scenario the predicted effects depend only on the interferences between the S_R and T_R couplings. Our model-independent analysis is carried out for the flavor ν_e. To make such tests feasible, an intense (polarized) artificial ν_e source, a PET, and an appropriate detector measuring the directionality of the outgoing electrons and/or the recoil electron energy with high resolution have to be identified.

  11. A systematic approach to sketch Bethe-Salpeter equation

    NASA Astrophysics Data System (ADS)

    Qin, Si-xue

    2016-03-01

    To study meson properties, one needs to solve the gap equation for the quark propagator and the Bethe-Salpeter (BS) equation for the meson wavefunction, self-consistently. The gluon propagator, the quark-gluon vertex, and the quark-anti-quark scattering kernel are key pieces to solve those equations. Predicted by lattice-QCD and Dyson-Schwinger analyses of QCD's gauge sector, gluons are non-perturbatively massive. In the matter sector, the modeled gluon propagator which can produce a veracious description of meson properties needs to possess a mass scale, accordingly. Solving the well-known longitudinal Ward-Green-Takahashi identities (WGTIs) and the less-known transverse counterparts together, one obtains a nontrivial solution which can shed light on the structure of the quark-gluon vertex. It is highlighted that the phenomenologically proposed anomalous chromomagnetic moment (ACM) vertex originates from the QCD Lagrangian symmetries and its strength is proportional to the magnitude of dynamical chiral symmetry breaking (DCSB). The color-singlet vector and axial-vector WGTIs can relate the BS kernel and the dressed quark-gluon vertex to each other. Using the relation, one can truncate the gap equation and the BS equation, systematically, without violating crucial symmetries, e.g., gauge symmetry and chiral symmetry.

  12. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies have been devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme for utilizing the information in the data, which may characterize the data similarity better. Kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension to out-of-sample data points and thus cannot be applied to inductive learning. In this paper, we show how to make nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over state-of-the-art parametric kernel methods.
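    The out-of-sample problem described above can be made concrete with a simplified sketch: regress a learned nonparametric Gram matrix onto an ordinary base kernel, then evaluate the fitted map at new points. This is not the paper's hyper-RKHS formulation; the RBF base kernel, the ridge parameter, and the data below are all illustrative assumptions.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    # Base kernel used to extrapolate the learned Gram matrix.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def extend_kernel(X_train, K_learned, X_new, lam=1e-3, gamma=1.0):
    """Ridge-regress the learned nonparametric Gram matrix onto a base
    kernel, then evaluate the fitted map at out-of-sample points
    (a Nystrom-like sketch, not the paper's hyper-RKHS method)."""
    B = rbf(X_train, X_train, gamma)            # base kernel on training data
    n = B.shape[0]
    # Solve (B + lam*I) C = K_learned, i.e. B @ C approximates K_learned.
    C = np.linalg.solve(B + lam * np.eye(n), K_learned)
    return rbf(X_new, X_train, gamma) @ C       # kernel values between new and training points

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
K = rbf(X, X, 0.5)                              # stand-in for a learned nonparametric kernel
K_oos = extend_kernel(X, K, rng.normal(size=(5, 3)))
print(K_oos.shape)                              # (5, 20)
```

    Evaluating the fitted map back at the training points approximately recovers the learned Gram matrix, which is the consistency property an out-of-sample extension needs.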

  13. 7 CFR 810.1202 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... kernels. Kernels, pieces of rye kernels, and other grains that are badly ground-damaged, badly weather.... Also, underdeveloped, shriveled, and small pieces of rye kernels removed in properly separating the...-damaged kernels. Kernels, pieces of rye kernels, and other grains that are materially discolored and...

  14. The Genetic Basis of Natural Variation in Kernel Size and Related Traits Using a Four-Way Cross Population in Maize.

    PubMed

    Chen, Jiafa; Zhang, Luyan; Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang

    2016-01-01

    Kernel size is an important component of grain yield in maize breeding programs. To extend the understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off between kernel size and yield components is discussed.

  15. The Genetic Basis of Natural Variation in Kernel Size and Related Traits Using a Four-Way Cross Population in Maize

    PubMed Central

    Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang

    2016-01-01

    Kernel size is an important component of grain yield in maize breeding programs. To extend the understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off between kernel size and yield components is discussed. PMID:27070143

  16. Teleportation of Unknown Superpositions of Collective Atomic Coherent States

    NASA Astrophysics Data System (ADS)

    Zheng, Shi-Biao

    2001-06-01

    We propose a scheme to teleport an unknown superposition of two atomic coherent states with different phases. Our scheme is based on resonant and dispersive atom-field interaction, and provides a possibility of teleporting macroscopic superposition states of many atoms for the first time. The project was supported by the National Natural Science Foundation of China under Grant No. 60008003.

  17. Student Ability to Distinguish between Superposition States and Mixed States in Quantum Mechanics

    ERIC Educational Resources Information Center

    Passante, Gina; Emigh, Paul J.; Shaffer, Peter S.

    2015-01-01

    Superposition gives rise to the probabilistic nature of quantum mechanics and is therefore one of the concepts at the heart of quantum mechanics. Although we have found that many students can successfully use the idea of superposition to calculate the probabilities of different measurement outcomes, they are often unable to identify the…
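    The superposition/mixture distinction at issue can be illustrated with density matrices (a standard textbook construction, not taken from the study): an equal superposition and a 50/50 mixture predict identical outcome probabilities in the measurement basis, and differ only in their off-diagonal coherences and purity.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Pure superposition |psi> = (|0> + |1>)/sqrt(2)
psi = (ket0 + ket1) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

# 50/50 statistical mixture of |0> and |1>
rho_mixed = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ket1, ket1)

# Identical populations: both give probabilities 1/2, 1/2 in this basis.
print(np.diag(rho_pure), np.diag(rho_mixed))

# Only the superposition carries an off-diagonal coherence (1/2 vs 0),
# and only the pure state has purity Tr(rho^2) = 1 (mixture: 1/2).
print(rho_pure[0, 1], rho_mixed[0, 1])
print(np.trace(rho_pure @ rho_pure), np.trace(rho_mixed @ rho_mixed))
```

    A measurement in a rotated basis (e.g. the |±⟩ basis) exposes the difference operationally: the superposition gives deterministic outcomes, while the mixture stays 50/50.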

  18. Nonclassical Properties of Q-Deformed Superposition Light Field State

    NASA Technical Reports Server (NTRS)

    Ren, Min; Shenggui, Wang; Ma, Aiqun; Jiang, Zhuohong

    1996-01-01

    In this paper, the squeezing, bunching, and anti-bunching effects of a superposition light-field state involving the q-deformed vacuum state and the q-Glauber coherent state are studied, and the dependence of these effects on the controllable q-parameter is obtained.

  19. A Practical Cone-beam CT Scatter Correction Method with Optimized Monte Carlo Simulations for Image-Guided Radiation Therapy

    PubMed Central

    Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun

    2015-01-01

    Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 HU to 3 HU and from 78 HU to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. 
With all the techniques employed, we achieved a computation time of less than 30 s, including both the scatter-estimation and CBCT-reconstruction steps. The efficacy and high computational efficiency of our method make it attractive for clinical use. PMID:25860299
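    The core subtraction step of such a workflow can be sketched as follows; the function name, the uniform scatter field, and the normalization intensity are illustrative assumptions, and the registration, MC simulation, denoising, and reconstruction stages are separate steps not shown here.

```python
import numpy as np

def scatter_correct(raw, scatter, I0, floor=1e-6):
    """Remove an MC-estimated scatter field from a raw projection, then
    convert to line integrals for reconstruction. A sketch of the
    subtraction step only; clipping keeps the estimated primary signal
    positive before the logarithm."""
    primary = np.clip(raw - scatter, floor, None)
    return -np.log(primary / I0)

I0 = 1.0
primary_true = 0.3 * np.ones((4, 4))    # hypothetical primary fluence
scatter = 0.2 * np.ones((4, 4))         # hypothetical smooth scatter field
raw = primary_true + scatter
p = scatter_correct(raw, scatter, I0)
print(np.allclose(p, -np.log(0.3)))     # recovers the true line integral
```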

  20. Spectrally enhanced image resolution of tooth enamel surfaces

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Nelson, Leonard Y.; Berg, Joel H.; Seibel, Eric J.

    2012-01-01

    Short-wavelength 405 nm laser illumination of surface dental enamel using an ultrathin scanning fiber endoscope (SFE) produced enhanced detail of dental topography. The surfaces of human extracted teeth and artificial erosions were imaged with 405 nm, 444 nm, 532 nm, or 635 nm illumination lasers. The obtained images were then processed offline to compensate for any differences in the illumination beam diameters between the different lasers. Scattering and absorption coefficients for a Monte Carlo model of light propagation in dental enamel at 405 nm were scaled from published data at 532 nm and 633 nm, with the scattering coefficient scaled by the inverse third power of the wavelength. Simulations showed that the penetration depth of short-wavelength illumination is localized close to the enamel surface, while long-wavelength illumination travels much further and is backscattered from greater depths. Therefore, images obtained using a short-wavelength laser are not contaminated by the superposition of light reflected from enamel tissue at greater depths. Hence, the SFE with short-wavelength illumination may make it possible to visualize surface manifestations of phenomena such as demineralization, thus better aiding the clinician in the detection of early caries.
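    The inverse-third-power wavelength scaling used for the model input can be written out directly; the numeric value of the 532 nm scattering coefficient below is a placeholder, not the paper's figure.

```python
def scale_scattering(mu_s_ref, lam_ref_nm, lam_nm):
    # Scale a scattering coefficient to a new wavelength assuming the
    # lambda^-3 dependence stated in the abstract.
    return mu_s_ref * (lam_ref_nm / lam_nm) ** 3

mu_s_532 = 4.5  # hypothetical scattering coefficient at 532 nm, 1/mm
mu_s_405 = scale_scattering(mu_s_532, 532, 405)
print(mu_s_405)  # larger than at 532 nm, consistent with shallower penetration
```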

  1. Quantitative Laser-Saturated Fluorescence Measurements of Nitric Oxide in a Heptane Spray Flame

    NASA Technical Reports Server (NTRS)

    Cooper, Clayton S.; Laurendeau, Normand M.; Lee, Chi (Technical Monitor)

    1997-01-01

    We report spatially resolved laser-saturated fluorescence measurements of NO concentration in a pre-heated, lean-direct injection (LDI) spray flame at atmospheric pressure. The spray is produced by a hollow-cone, pressure-atomized nozzle supplied with liquid heptane. NO is excited via the Q2(26.5) transition of the gamma(0,0) band. Detection is performed in a 2-nm region centered on the gamma(0,1) band. Because of the relatively close spectral spacing between the excitation (226 nm) and detection wavelengths (236 nm), the gamma(0,1) band of NO cannot be isolated from the spectral wings of the Mie scattering signal produced by the spray. To account for the resulting superposition of the fluorescence and scattering signals, a background subtraction method has been developed that utilizes a nearby non-resonant wavelength. Excitation scans have been performed to locate the optimum off-line wavelength. Detection scans have been performed at problematic locations in the flame to determine possible fluorescence interferences from UHCs and PAHs at both the on-line and off-line excitation wavelengths. Quantitative radial NO profiles are presented and analyzed so as to better understand the operation of lean-direct injectors for gas turbine combustors.

  2. Numerical research on the lateral global buckling characteristics of a high temperature and pressure pipeline with two initial imperfections

    PubMed Central

    Liu, Wenbin; Liu, Aimin

    2018-01-01

    With the exploitation of offshore oil and gas gradually moving to deep water, higher temperature and pressure differences are applied to the pipeline system, making global buckling of the pipeline more serious. For unburied deep-water pipelines, lateral buckling is the major buckling form. Initial imperfections widely exist in pipeline systems due to manufacturing defects or the influence of an uneven seabed, and the distribution and geometry of initial imperfections are random. They can be divided into two kinds based on shape: single-arch imperfections and double-arch imperfections. This paper analyzes the global buckling process of a pipeline with two initial imperfections using a numerical simulation method and reveals how the ratio of the imperfections' spacing to the imperfection wavelength and the combination of imperfections affect the buckling process. The results show that a pipeline with two initial imperfections may suffer superposition of global buckling. The growth ratios of buckling displacement, axial force and bending moment in the superposition zone are several times larger than those of a pipeline without buckling superposition. The ratio of the imperfections' spacing to the imperfection wavelength decides whether a pipeline suffers buckling superposition. The potential failure point of a pipeline exhibiting buckling superposition is the same as that of a pipeline without buckling superposition, but the failure risk is much higher. The shape and direction of two nearby imperfections also affect the failure risk of a pipeline exhibiting global buckling superposition. The failure risk of a pipeline with two double-arch imperfections is higher than that of a pipeline with two single-arch imperfections. PMID:29554123

  3. On the Use of Material-Dependent Damping in ANSYS for Mode Superposition Transient Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nie, J.; Wei, X.

    The mode superposition method is often used for dynamic analysis of complex structures, such as the seismic Category I structures in nuclear power plants, in place of the less efficient full method, which uses the full system matrices for calculation of the transient responses. In such applications, specification of material-dependent damping is usually desirable because complex structures can consist of multiple types of materials that may have different energy dissipation capabilities. A recent review of the ANSYS manual for several releases found that the use of material-dependent damping is not clearly explained for performing a mode superposition transient dynamic analysis. This paper includes several mode superposition transient dynamic analyses using different ways to specify damping in ANSYS, in order to determine how material-dependent damping can be specified conveniently in a mode superposition transient dynamic analysis.
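    One common way material-dependent damping enters a mode superposition analysis is as a strain-energy-weighted effective modal damping ratio; the sketch below assumes that scheme with illustrative numbers and does not reproduce ANSYS's exact implementation (consult the ANSYS theory manual for the actual formulation).

```python
import numpy as np

def composite_modal_damping(strain_energy, zeta_material):
    """Strain-energy-weighted effective modal damping ratio per mode.
    strain_energy[i, j]: strain energy of mode i stored in material j.
    zeta_material[j]:    damping ratio assigned to material j.
    (Illustrative formula, not ANSYS's exact scheme.)"""
    E = np.asarray(strain_energy, float)
    z = np.asarray(zeta_material, float)
    return (E * z).sum(axis=1) / E.sum(axis=1)

# Two modes, two materials (e.g. steel at 2% and concrete at 5% damping):
# a mode dominated by the lightly damped material inherits mostly its ratio.
E = [[0.8, 0.2],
     [0.3, 0.7]]
print(composite_modal_damping(E, [0.02, 0.05]))  # mode-wise effective ratios
```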

  4. Testing the quantum superposition principle: matter waves and beyond

    NASA Astrophysics Data System (ADS)

    Ulbricht, Hendrik

    2015-05-01

    New technological developments allow us to explore the quantum properties of very complex systems, bringing within experimental reach the question of whether macroscopic systems also share such features. Interest in this question is increased by the fact that, on the theory side, many suggest that the quantum superposition principle is not exact, with departures from it growing larger the more macroscopic the system. Testing the superposition principle intrinsically also means testing suggested extensions of quantum theory, so-called collapse models. We will report on three new proposals to experimentally test the superposition principle with nanoparticle interferometry, optomechanical devices and spectroscopic experiments in the frequency domain. We will also report on the status of optical levitation and cooling experiments with nanoparticles in our labs, towards an Earth-bound matter-wave interferometer to test the superposition principle for a particle mass of one million amu (atomic mass units).

  5. 7 CFR 810.802 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of grain kernels for which standards have been established under the.... (d) Heat-damaged kernels. Kernels and pieces of grain kernels for which standards have been...

  6. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  7. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  8. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  9. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  10. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    PubMed

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of the single-hidden-layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
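    A minimal sketch of the weighted composite kernel idea, without the QPSO step the paper uses to tune the weights, kernel parameters, and regularization jointly, might look like this; the two base kernels, their parameters, and the toy data are assumptions for illustration.

```python
import numpy as np

def gaussian(X, Y, s):
    # Gaussian base kernel.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s ** 2))

def poly(X, Y, d):
    # Polynomial base kernel.
    return (X @ Y.T + 1.0) ** d

def composite(X, Y, w, s=1.0, d=2):
    # Weighted combination of base kernels; in the paper the weights w,
    # the kernel parameters, and C are all optimized together by QPSO.
    return w[0] * gaussian(X, Y, s) + w[1] * poly(X, Y, d)

def kelm_fit(X, T, w, C=10.0):
    # KELM output weights: beta = (K + I/C)^-1 T
    K = composite(X, X, w)
    return np.linalg.solve(K + np.eye(len(X)) / C, T)

def kelm_predict(X_new, X, beta, w):
    return composite(X_new, X, w) @ beta

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 4))
T = (X[:, 0] > 0).astype(float) * 2 - 1       # toy +/-1 labels
w = [0.6, 0.4]                                # hand-picked weights (QPSO's job)
beta = kelm_fit(X, T, w)
acc = (np.sign(kelm_predict(X, X, beta, w)) == T).mean()
print(acc)                                    # training accuracy on the toy set
```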

  11. Quantum state engineering by a coherent superposition of photon subtraction and addition

    NASA Astrophysics Data System (ADS)

    Lee, Su-Yong; Nha, Hyunchul

    2011-10-01

    We study a coherent superposition tâ + râ† of the field annihilation and creation operators acting on continuous-variable systems and propose its application for quantum state engineering. We propose an experimental scheme to implement this elementary coherent operation and discuss its usefulness for producing an arbitrary superposition of number states involving up to two photons.
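    Using the standard bosonic relations (textbook algebra, not quoted from the paper), the action of this coherent operation on the lowest Fock states makes the "up to two photons" statement concrete:

```latex
\hat{a}\,|n\rangle = \sqrt{n}\,|n-1\rangle, \qquad
\hat{a}^{\dagger}\,|n\rangle = \sqrt{n+1}\,|n+1\rangle,
```

    so that

```latex
(t\hat{a} + r\hat{a}^{\dagger})\,|0\rangle = r\,|1\rangle, \qquad
(t\hat{a} + r\hat{a}^{\dagger})\bigl(c_0|0\rangle + c_1|1\rangle\bigr)
  = t c_1\,|0\rangle + r c_0\,|1\rangle + \sqrt{2}\,r c_1\,|2\rangle .
```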

  12. Comparison of linear and square superposition hardening models for the surface nanoindentation of ion-irradiated materials

    NASA Astrophysics Data System (ADS)

    Xiao, Xiazi; Yu, Long

    2018-05-01

    Linear and square superposition hardening models are compared for the surface nanoindentation of ion-irradiated materials. Hardening mechanisms of both dislocations and defects within the plasticity affected region (PAR) are considered. Four sets of experimental data for ion-irradiated materials are adopted to compare with theoretical results of the two hardening models. It is indicated that both models describe experimental data equally well when the PAR is within the irradiated layer; whereas, when the PAR is beyond the irradiated region, the square superposition hardening model performs better. Therefore, the square superposition model is recommended to characterize the hardening behavior of ion-irradiated materials.
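    The two superposition rules compared in the paper can be stated in a few lines; the hardening increments below are illustrative numbers, not the paper's data. Since √(a² + b²) ≤ a + b, the square rule never exceeds the linear one.

```python
import numpy as np

def hardness_linear(dH_dis, dH_def, H0):
    # Linear superposition: dislocation and defect contributions add directly.
    return H0 + dH_dis + dH_def

def hardness_square(dH_dis, dH_def, H0):
    # Square (root-sum-square) superposition of the two obstacle populations.
    return H0 + np.sqrt(dH_dis ** 2 + dH_def ** 2)

# Illustrative values only (GPa): substrate hardness plus two increments.
dH_dis, dH_def, H0 = 1.2, 0.9, 2.0
print(hardness_linear(dH_dis, dH_def, H0))
print(hardness_square(dH_dis, dH_def, H0))
```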

  13. Composite and case study analyses of the large-scale environments associated with West Pacific Polar and subtropical vertical jet superposition events

    NASA Astrophysics Data System (ADS)

    Handlos, Zachary J.

    Though considerable research attention has been devoted to examination of the Northern Hemispheric polar and subtropical jet streams, relatively little has been directed toward understanding the circumstances that conspire to produce the relatively rare vertical superposition of these usually separate features. This dissertation investigates the structure and evolution of large-scale environments associated with jet superposition events in the northwest Pacific. An objective identification scheme, using NCEP/NCAR Reanalysis 1 data, is employed to identify all jet superpositions in the west Pacific (30-40°N, 135-175°E) for boreal winters (DJF) between 1979/80 - 2009/10. The analysis reveals that environments conducive to west Pacific jet superposition share several large-scale features usually associated with East Asian Winter Monsoon (EAWM) northerly cold surges, including the presence of an enhanced Hadley Cell-like circulation within the jet entrance region. It is further demonstrated that several EAWM indices are statistically significantly correlated with jet superposition frequency in the west Pacific. The life cycle of EAWM cold surges promotes interaction between tropical convection and internal jet dynamics. Low potential vorticity (PV), high-θe tropical boundary layer air, exhausted by anomalous convection in the west Pacific lower latitudes, is advected poleward towards the equatorward side of the jet in upper tropospheric isentropic layers, resulting in anomalous anticyclonic wind shear that accelerates the jet. This, along with geostrophic cold air advection in the left jet entrance region that drives the polar tropopause downward through the jet core, promotes the development of the deep, vertical PV wall characteristic of superposed jets. West Pacific jet superpositions preferentially form within an environment favoring the aforementioned characteristics regardless of EAWM seasonal strength. 
Post-superposition, it is shown that the west Pacific jet extends eastward and is associated with an upper tropospheric cyclonic (anticyclonic) anomaly in its left (right) exit region. A downstream ridge is present over northwest Canada, and within the strong EAWM environment, a wavier flow over North America is observed relative to the neutral EAWM environment. Preliminary investigation of the two weak EAWM season superpositions reveals a Kona Low type feature post-superposition. This is associated with anomalous convection reminiscent of an atmospheric river southwest of Mexico.

  14. Classification With Truncated Distance Kernel.

    PubMed

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel adapts well to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be used directly in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with its parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
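    The TL1 kernel is commonly written as K(x, y) = max(ρ − ‖x − y‖₁, 0); a direct transcription with an assumed truncation radius ρ and toy points shows the local support that produces the subregion-wise linear behavior:

```python
import numpy as np

def tl1_kernel(X, Y, rho):
    """Truncated L1-distance kernel: K(x, y) = max(rho - ||x - y||_1, 0).
    Nonzero only inside an L1 ball of radius rho, so each point interacts
    only with its local neighborhood."""
    d1 = np.abs(X[:, None, :] - Y[None, :, :]).sum(-1)
    return np.maximum(rho - d1, 0.0)

X = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 4.0]])
K = tl1_kernel(X, X, rho=2.0)
print(K)
# Points 0 and 1 interact (L1 distance 1 < rho); point 2 lies outside
# both truncation radii, so its off-diagonal entries vanish.
```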

  15. Linearized inversion of multiple scattering seismic energy

    NASA Astrophysics Data System (ADS)

    Aldawood, Ali; Hoteit, Ibrahim; Zuberi, Mohammad

    2014-05-01

    Internal multiples deteriorate the quality of the migrated image obtained conventionally by imaging single-scattering energy. Imaging seismic data under the single-scattering assumption therefore does not locate multiple-bounce events in their actual subsurface positions. However, imaging internal multiples properly has the potential to enhance the migrated image because they illuminate zones in the subsurface that are poorly illuminated by single-scattering energy, such as nearly vertical faults. Standard migration of these multiples provides subsurface reflectivity distributions with low spatial resolution and migration artifacts due to the limited recording aperture, coarse source and receiver sampling, and the band-limited nature of the source wavelet. The resultant image obtained by the adjoint operator is a smoothed depiction of the true subsurface reflectivity model and is heavily masked by migration artifacts and the source-wavelet fingerprint that needs to be properly deconvolved. Hence, we propose a linearized least-squares inversion scheme to mitigate the effect of the migration artifacts, enhance the spatial resolution, and provide more accurate amplitude information when imaging internal multiples. The proposed algorithm uses the least-squares image based on the single-scattering assumption as a constraint to invert for the part of the image that is illuminated by internal scattering energy. We then pose the problem of imaging double-scattering energy as a least-squares minimization problem that requires solving the normal equation of the form: G^T G v = G^T d, (1) where G is a linearized forward modeling operator that predicts double-scattered seismic data and G^T is the linearized adjoint operator that images double-scattered seismic data. Gradient-based optimization algorithms solve this linear system; hence, we use a quasi-Newton optimization technique to find the least-squares minimizer. 
In this approach, an estimate of the Hessian matrix that contains curvature information is modified at every iteration by a low-rank update based on gradient changes at every step. At each iteration, the data residual is imaged using G^T to determine the model update. Application of the linearized inversion to synthetic data to image a vertical fault plane demonstrates the effectiveness of this methodology in properly delineating the vertical fault plane and giving better amplitude information than the standard migrated image obtained using the adjoint operator that takes internal multiples into account. Thus, least-squares imaging of multiple scattering enhances the spatial resolution of the events illuminated by internal scattering energy. It also deconvolves the source signature and helps remove the fingerprint of the acquisition geometry. The final image is obtained by the superposition of the least-squares solution based on the single-scattering assumption and the least-squares solution based on the double-scattering assumption.
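    A normal-equation system of this kind can be solved with any gradient-based scheme; the sketch below uses conjugate gradients instead of the paper's quasi-Newton update, with a small dense matrix standing in for the linearized operator G.

```python
import numpy as np

def lsq_normal_equations(G, d, iters=50, tol=1e-10):
    """Solve G^T G v = G^T d by conjugate gradients on the normal equations.
    G stands in for the linearized double-scattering modeling operator;
    here it is just a small dense matrix for illustration."""
    GtG, Gtd = G.T @ G, G.T @ d
    v = np.zeros_like(Gtd)
    r = Gtd - GtG @ v          # residual of the normal equations
    p = r.copy()               # search direction
    for _ in range(iters):
        Ap = GtG @ p
        alpha = (r @ r) / (p @ Ap)
        v = v + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return v

rng = np.random.default_rng(0)
G = rng.normal(size=(30, 10))      # overdetermined "forward operator"
v_true = rng.normal(size=10)
d = G @ v_true                     # noise-free data for the sketch
v = lsq_normal_equations(G, d)
print(np.linalg.norm(v - v_true))  # small: the model is recovered
```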

  16. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach

    PubMed Central

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-01-01

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of the single-hidden-layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202

  17. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. 
The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
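    The "keep only positive eigenvalues" step for indefinite kernels can be sketched as below; the signed fractional power, the Gram-matrix centering, and the toy data are illustrative assumptions rather than the paper's exact recipe.

```python
import numpy as np

def frac_poly_kernel(X, Y, d=0.8):
    # Fractional power polynomial: (x . y)^d with 0 < d < 1 need not be
    # positive semidefinite. Handling negative dot products via the sign
    # is one common convention, assumed here for illustration.
    s = X @ Y.T
    return np.sign(s) * np.abs(s) ** d

def kernel_pca_positive(K, n_components=5):
    """Kernel PCA keeping only eigenvectors with positive eigenvalues,
    so the extracted features stay real for an indefinite kernel."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                       # double-center the Gram matrix
    w, V = np.linalg.eigh(Kc)
    keep = w > 1e-10                     # discard non-positive directions
    w, V = w[keep][::-1], V[:, keep][:, ::-1]
    k = min(n_components, len(w))
    return V[:, :k] * np.sqrt(w[:k])     # projections onto retained axes

rng = np.random.default_rng(2)
X = rng.normal(size=(25, 6))
feats = kernel_pca_positive(frac_poly_kernel(X, X), n_components=4)
print(feats.shape)
```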

  18. Lithiated Metal Clusters - Structures, Energy and Reactivity

    DTIC Science & Technology

    2016-04-01

    ...projection superposition approximation (PSA) algorithm through a more careful consideration of how to calculate cross sections for elongated molecules... superposition approximation (PSA) is now complete. We have made it available free of charge to the scientific community on a dedicated website at UCSB. We... by AFOSR.

  19. Multichannel Polarization-Controllable Superpositions of Orbital Angular Momentum States.

    PubMed

    Yue, Fuyong; Wen, Dandan; Zhang, Chunmei; Gerardot, Brian D; Wang, Wei; Zhang, Shuang; Chen, Xianzhong

    2017-04-01

    A facile metasurface approach is shown to realize polarization-controllable multichannel superpositions of orbital angular momentum (OAM) states with various topological charges. By manipulating the polarization state of the incident light, four kinds of superpositions of OAM states are realized using a single metasurface consisting of space-variant arrays of gold nanoantennas. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. A multi-label learning based kernel automatic recommendation method for support vector machine.

    PubMed

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, multiple kernel functions may perform equally well on the same classification problem. To automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge database is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. A kernel recommendation model is then constructed on the generated meta-knowledge database with a multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieves the highest classification performance.

  1. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, multiple kernel functions may perform equally well on the same classification problem. To automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge database is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. A kernel recommendation model is then constructed on the generated meta-knowledge database with a multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieves the highest classification performance. PMID:25893896
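    The recommendation pipeline described above (meta-features, then a multi-label model, then a recommended kernel set) can be sketched with a toy meta-knowledge base. The meta-features, the nearest-neighbour multi-label rule, and all numbers below are illustrative stand-ins, not the paper's actual 132-data-set experiment.

```python
import numpy as np

KERNELS = ["linear", "polynomial", "rbf", "sigmoid"]

# Toy meta-knowledge base: each row is a dataset's meta-feature vector
# (n_samples, n_features, minority-class fraction -- illustrative choices,
# not the paper's actual data characterization), and each label row marks
# which kernels performed acceptably on that dataset.
meta_features = np.array([
    [150.0,  4.0, 0.33],
    [1000.0, 20.0, 0.50],
    [300.0,  8.0, 0.10],
    [5000.0, 2.0, 0.48],
])
applicable = np.array([    # multi-label targets, one column per kernel
    [1, 1, 1, 0],
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 0, 0, 0],
])

def recommend_kernels(meta, labels, query, k=2):
    """Nearest-neighbour multi-label recommendation: average the label
    vectors of the k most similar datasets (standardized meta-features)
    and keep kernels supported by a strict majority."""
    mu, sd = meta.mean(0), meta.std(0) + 1e-12
    dist = np.linalg.norm((meta - mu) / sd - (query - mu) / sd, axis=1)
    scores = labels[np.argsort(dist)[:k]].mean(0)
    return [KERNELS[i] for i in np.flatnonzero(scores > 0.5)]

rec = recommend_kernels(meta_features, applicable, np.array([200.0, 6.0, 0.2]))
print(rec)
```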

  2. An effective field theory for forward scattering and factorization violation

    DOE PAGES

    Rothstein, Ira Z.; Stewart, Iain W.

    2016-08-03

    Starting with QCD, we derive an effective field theory description for forward scattering and factorization violation as part of the soft-collinear effective field theory (SCET) for high energy scattering. These phenomena are mediated by long distance Glauber gluon exchanges, which are static in time, localized in the longitudinal distance, and act as a kernel for forward scattering where |t| << s. In hard scattering, Glauber gluons can induce corrections which invalidate factorization. With SCET, Glauber exchange graphs can be calculated explicitly, and are distinct from graphs involving soft, collinear, or ultrasoft gluons. We derive a complete basis of operators which describe the leading power effects of Glauber exchange. Key ingredients include regulating light-cone rapidity singularities and subtractions which prevent double counting. Our results include a novel all orders gauge invariant pure glue soft operator which appears between two collinear rapidity sectors. The 1-gluon Feynman rule for the soft operator coincides with the Lipatov vertex, but it also contributes to emissions with ≥ 2 soft gluons. Our Glauber operator basis is derived using tree level and one-loop matching calculations from full QCD to both SCET II and SCET I. The one-loop amplitude's rapidity renormalization involves mixing of color octet operators and yields gluon Reggeization at the amplitude level. The rapidity renormalization group equations for the leading soft and collinear functions in the forward scattering cross section are each given by the BFKL equation. Various properties of Glauber gluon exchange in the context of both forward scattering and hard scattering factorization are described. For example, we derive an explicit rule for when eikonalization is valid, and provide a direct connection to the picture of multiple Wilson lines crossing a shockwave.
    In hard scattering operators, Glauber subtractions for soft and collinear loop diagrams ensure that we are not sensitive to the directions of the soft and collinear Wilson lines. Conversely, certain Glauber interactions can be absorbed into these soft and collinear Wilson lines by taking them to be in specific directions. Finally, we also discuss criteria for factorization violation.

  3. 7 CFR 981.7 - Edible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...

  4. Kernel K-Means Sampling for Nyström Approximation.

    PubMed

    He, Li; Zhang, Hong

    2018-05-01

    A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which is shown in our work to minimize the upper bound of the matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius-norm error upper bound. Experimental results, with both the Gaussian kernel and the polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over state-of-the-art methods.
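    A minimal NumPy sketch of the idea: build the Nyström approximation K ≈ C W⁺ Cᵀ from landmark points chosen as k-means centers. Plain k-means in input space is used here as a stand-in for kernel k-means (closely related for an RBF kernel, but not identical), and the clustered toy data are an assumption for illustration.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kmeans_centers(X, k, iters=25, seed=0):
    """Plain Lloyd k-means in input space (a practical stand-in for
    kernel k-means when the kernel is an RBF)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                C[j] = members.mean(0)
    return C

def nystrom(X, landmarks, kernel):
    """Nystrom approximation K ~ C W^+ C^T built from landmark points."""
    Cm = kernel(X, landmarks)
    W = kernel(landmarks, landmarks)
    return Cm @ np.linalg.pinv(W) @ Cm.T

rng = np.random.default_rng(1)
# Three well-separated clusters, 60 points each.
X = np.concatenate([rng.normal(m, 0.3, size=(60, 2)) for m in (-2.0, 0.0, 2.0)])
K = rbf(X, X)

err_km = np.linalg.norm(K - nystrom(X, kmeans_centers(X, 6), rbf), "fro")
err_rand = np.linalg.norm(K - nystrom(X, X[rng.choice(len(X), 6, replace=False)], rbf), "fro")
print(err_km, err_rand)
```

    On clustered data like this, landmarks at k-means centers typically cover every cluster, while random landmarks may not.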

  5. Coalescence of repelling colloidal droplets: a route to monodisperse populations.

    PubMed

    Roger, Kevin; Botet, Robert; Cabane, Bernard

    2013-05-14

    Populations of droplets or particles dispersed in a liquid may evolve through Brownian collisions, aggregation, and coalescence. We have found a set of conditions under which these populations evolve spontaneously toward a narrow size distribution. The experimental system consists of poly(methyl methacrylate) (PMMA) nanodroplets dispersed in a solvent (acetone) + nonsolvent (water) mixture. These droplets carry electrical charges, located on the ionic end groups of the macromolecules. We used time-resolved small angle X-ray scattering to determine their size distribution. We find that the droplets grow through coalescence events: the average radius ⟨R⟩ increases logarithmically with elapsed time while the relative width σ_R/⟨R⟩ of the distribution decreases as the inverse square root of ⟨R⟩. We interpret this evolution as resulting from coalescence events that are hindered by ionic repulsions between droplets. We generalize this evolution through a simulation of the Smoluchowski kinetic equation, with a kernel that takes into account the interactions between droplets. In the case of vanishing or attractive interactions, all droplet encounters lead to coalescence. The corresponding kernel leads to the well-known "self-preserving" particle distribution of the coalescence process, where σ_R/⟨R⟩ increases to a plateau value. However, for droplets that interact through long-range ionic repulsions, "large + small" droplet encounters are more successful at coalescence than "large + large" encounters. We show that the corresponding kernel leads to a particular scaling of the droplet-size distribution, known as the "second-scaling law" in the theory of critical phenomena, where σ_R/⟨R⟩ decreases as 1/√⟨R⟩ and becomes independent of the initial distribution. We argue that this scaling explains the narrow size distributions of colloidal dispersions that have been synthesized through aggregation processes.
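    The hindered-coalescence picture can be sketched with the discrete Smoluchowski equations and a kernel whose repulsion factor suppresses "large + large" encounters. The exponential barrier below is a crude illustrative stand-in, not the paper's ionic-repulsion kernel.

```python
import numpy as np

def smoluchowski_step(n, K, dt):
    """One explicit Euler step of the discrete Smoluchowski equations;
    n[i] is the number density of clusters containing i+1 monomers."""
    N = len(n)
    dn = np.zeros_like(n)
    for i in range(N):
        for j in range(N):
            rate = K[i, j] * n[i] * n[j]
            dn[i] -= rate                      # cluster i is consumed
            if i + j + 1 < N:                  # merged size: (i+1)+(j+1) monomers
                dn[i + j + 1] += 0.5 * rate    # 1/2: each pair is counted twice
    return n + dt * dn

N = 40
sizes = np.arange(1, N + 1)
# Illustrative kernel: a barrier factor that suppresses "large + large"
# encounters, as a crude stand-in for ionic-repulsion hindrance.
K = np.exp(-0.05 * np.minimum.outer(sizes, sizes)).astype(float)

n = np.zeros(N)
n[0] = 1.0                                     # monomers only at t = 0
for _ in range(400):
    n = smoluchowski_step(n, K, dt=0.05)       # clusters above size N are dropped

mean_size = (sizes * n).sum() / n.sum()
rel_width = np.sqrt((sizes**2 * n).sum() / n.sum() - mean_size**2) / mean_size
print(mean_size, rel_width)
```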

  6. A high-order strong stability preserving Runge-Kutta method for three-dimensional full waveform modeling and inversion of anelastic models

    NASA Astrophysics Data System (ADS)

    Wang, N.; Shen, Y.; Yang, D.; Bao, X.; Li, J.; Zhang, W.

    2017-12-01

    Accurate and efficient forward modeling methods are important for high-resolution full waveform inversion. Compared with the elastic case, solving the anelastic wave equation requires more computational time because of the need to compute additional material-independent anelastic functions. A numerical scheme with a large Courant-Friedrichs-Lewy (CFL) condition number enables us to use a large time step to simulate wave propagation, which improves computational efficiency. In this work, we apply the fourth-order strong stability preserving Runge-Kutta method with an optimal CFL coefficient to solve the anelastic wave equation. We use a fourth-order DRP/opt MacCormack scheme for the spatial discretization, and we approximate the rheological behavior of the Earth using the generalized Maxwell body model. With a larger CFL condition number, we find that the computational efficiency is significantly improved compared with the traditional fourth-order Runge-Kutta method. Then, we apply the scattering-integral method for calculating travel-time and amplitude sensitivity kernels with respect to velocity and attenuation structures. For each source, we carry out one forward simulation and save the time-dependent strain tensor. For each station, we carry out three `backward' simulations for the three components and save the corresponding strain tensors. The sensitivity kernels at each point in the medium are the convolution of the two sets of strain tensors. Finally, we show several synthetic tests to verify the effectiveness of the strong stability preserving Runge-Kutta method in generating accurate synthetics in full waveform modeling, and in generating accurate strain tensors for calculating sensitivity kernels at regional and global scales.
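    The optimized fourth-order SSP coefficients used in the abstract are not reproduced here; the idea of a strong-stability-preserving Runge-Kutta scheme can be illustrated with the classical three-stage, third-order Shu-Osher method, which is a convex combination of forward-Euler steps:

```python
import numpy as np

def ssprk3_step(f, u, dt):
    """Three-stage, third-order strong-stability-preserving Runge-Kutta
    (Shu-Osher form): every stage is a convex combination of forward-Euler
    steps, so any stability property of forward Euler carries over."""
    u1 = u + dt * f(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * f(u2))

# Convergence check on u' = -u, u(0) = 1, integrated to t = 1.
def integrate(n_steps):
    u, dt = 1.0, 1.0 / n_steps
    for _ in range(n_steps):
        u = ssprk3_step(lambda v: -v, u, dt)
    return u

exact = np.exp(-1.0)
e1 = abs(integrate(40) - exact)
e2 = abs(integrate(80) - exact)
order = np.log2(e1 / e2)
print(order)   # close to 3, confirming third-order accuracy
```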

  7. Transition operators in acoustic-wave diffraction theory. I - General theory. II - Short-wavelength behavior, dominant singularities of Z_k0 and Z_k0^-1

    NASA Technical Reports Server (NTRS)

    Hahne, G. E.

    1991-01-01

    A formal theory of the scattering of time-harmonic acoustic scalar waves from impenetrable, immobile obstacles is established. The time-independent formal scattering theory of nonrelativistic quantum mechanics, in particular the theory of the complete Green's function and the transition (T) operator, provides the model. The quantum-mechanical approach is modified to allow the treatment of acoustic-wave scattering with imposed boundary conditions of impedance type on the surface (delta-Omega) of an impenetrable obstacle. With k0 as the free-space wavenumber of the signal, a simplified expression is obtained for the k0-dependent T operator for a general case of homogeneous impedance boundary conditions for the acoustic wave on delta-Omega. All the nonelementary operators entering the expression for the T operator are formally simple rational algebraic functions of a certain invertible linear radiation impedance operator which maps any sufficiently well-behaved complex-valued function on delta-Omega into another such function on delta-Omega. In the subsequent study, the short-wavelength and the long-wavelength behavior of the radiation impedance operator and its inverse (the 'radiation admittance' operator) as two-point kernels on a smooth delta-Omega are studied for pairs of points that are close together.

  8. Nonlinear propagation of vector extremely short pulses in a medium of symmetric and asymmetric molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sazonov, S. V., E-mail: sazonov.sergey@gmail.com; Ustinov, N. V., E-mail: n-ustinov@mail.ru

    The nonlinear propagation of extremely short electromagnetic pulses in a medium of symmetric and asymmetric molecules placed in static magnetic and electric fields is theoretically studied. Asymmetric molecules differ in that they have nonzero permanent dipole moments in stationary quantum states. A system of wave equations is derived for the ordinary and extraordinary components of the pulses. It is shown that this system can be reduced in some cases to a system of coupled Ostrovsky equations and to an equation integrable by the inverse scattering transform method, including the vector version of the Ostrovsky–Vakhnenko equation. Different types of solutions of this system are considered. Only solutions representing the superposition of periodic solutions are single-valued, whereas soliton and breather solutions are multivalued.

  9. Prediction of the acoustic pressure above periodically uneven facings in industrial workplaces

    NASA Astrophysics Data System (ADS)

    Ducourneau, J.; Bos, L.; Planeau, V.; Faiz, Adil; Skali Lami, Salah; Nejade, A.

    2010-05-01

    The aim of this work is to predict the sound pressure in front of wall facings with periodic sound-scattering surface profiles. The method involves investigating plane-wave reflections randomly incident upon an uneven surface. The waveguide approach is well suited to the geometries usually encountered in industrial workplaces. This method simplifies the profile geometry by using elementary rectangular volumes. The acoustic field in the profile interstices can then be expressed as a superposition of waveguide modes. In past work, the walls considered were of infinite extent and had a periodic surface profile in only one direction. We therefore generalise this approach by extending its applicability to "double-periodic" wall facings. Free-field measurements have been taken, and the observed agreement between numerical and experimental results supports the validity of the waveguide method.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volkoff, T. J., E-mail: adidasty@gmail.com

    We motivate and introduce a class of “hierarchical” quantum superposition states of N coupled quantum oscillators. Unlike other well-known multimode photonic Schrödinger-cat states such as entangled coherent states, the hierarchical superposition states are characterized as two-branch superpositions of tensor products of single-mode Schrödinger-cat states. In addition to analyzing the photon statistics and quasiprobability distributions of prominent examples of these nonclassical states, we consider their usefulness for high-precision quantum metrology of nonlinear optical Hamiltonians and quantify their mode entanglement. We propose two methods for generating hierarchical superpositions in N = 2 coupled microwave cavities, exploiting currently existing quantum optical technology for generating entanglement between spatially separated electromagnetic field modes.

  11. Space-variant polarization patterns of non-collinear Poincaré superpositions

    NASA Astrophysics Data System (ADS)

    Galvez, E. J.; Beach, K.; Zeosky, J. J.; Khajavi, B.

    2015-03-01

    We present analysis and measurements of the polarization patterns produced by non-collinear superpositions of Laguerre-Gauss spatial modes in orthogonal polarization states, which are known as Poincaré modes. Our findings agree with the prediction (I. Freund, Opt. Lett. 35, 148-150 (2010)) that superpositions containing a C-point lead to a rotation of the polarization ellipse in 3 dimensions. Here we perform imaging polarimetry of superpositions of first- and zero-order spatial modes at relative beam angles of 0-4 arcmin. We find Poincaré-type polarization patterns showing fringes in polarization orientation, but which preserve the polarization-singularity index for all three cases of C-points: lemons, stars and monstars.

  12. Non-coaxial superposition of vector vortex beams.

    PubMed

    Aadhi, A; Vaity, Pravin; Chithrabhanu, P; Reddy, Salla Gangi; Prabakar, Shashi; Singh, R P

    2016-02-10

    Vector vortex beams are classified into four types depending upon spatial variation in their polarization vector. We have generated all four of these types of vector vortex beams by using a modified polarization Sagnac interferometer with a vortex lens. Further, we have studied the non-coaxial superposition of two vector vortex beams. It is observed that the superposition of two vector vortex beams with same polarization singularity leads to a beam with another kind of polarization singularity in their interaction region. The results may be of importance in ultrahigh security of the polarization-encrypted data that utilizes vector vortex beams and multiple optical trapping with non-coaxial superposition of vector vortex beams. We verified our experimental results with theory.

  13. Oblique superposition of two elliptically polarized lightwaves using geometric algebra: is energy-momentum conserved?

    PubMed

    Sze, Michelle Wynne C; Sugon, Quirino M; McNamara, Daniel J

    2010-11-01

    In this paper, we use Clifford (geometric) algebra Cl(3,0) to verify if electromagnetic energy-momentum density is still conserved for oblique superposition of two elliptically polarized plane waves with the same frequency. We show that energy-momentum conservation is valid at any time only for the superposition of two counter-propagating elliptically polarized plane waves. We show that the time-average energy-momentum of the superposition of two circularly polarized waves with opposite handedness is conserved regardless of the propagation directions of the waves. And, we show that the resulting momentum density of the superposed waves generally has a vector component perpendicular to the momentum densities of the individual waves.

  14. Exploiting graph kernels for high performance biomedical relation extraction.

    PubMed

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph (APG) kernel and the Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high-performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance, and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant. 
In our evaluation of ASM for the PPI task, ASM performed better than the APG kernel on the BioInfer dataset in the Area Under Curve (AUC) measure (74% vs 69%). However, for all the other PPI datasets, namely AIMed, HPRD50, IEPA and LLL, ASM is substantially outperformed by the APG kernel in F-score and AUC measures. We demonstrate high-performance Chemical-Induced Disease relation extraction without employing external knowledge sources or task-specific heuristics. Our work shows that graph kernels are effective in extracting relations that are expressed in multiple sentences, and that the graph kernels, namely the ASM and APG kernels, substantially outperform the tree kernels. Among the graph kernels, the ASM kernel is effective for biomedical relation extraction, with performance comparable to the APG kernel on datasets such as CID sentence-level relation extraction and BioInfer in PPI. Overall, the APG kernel is significantly more accurate than the ASM kernel, achieving better performance on most datasets.

  15. 7 CFR 810.2202 - Definition of other terms.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kernels, foreign material, and shrunken and broken kernels. The sum of these three factors may not exceed... the removal of dockage and shrunken and broken kernels. (g) Heat-damaged kernels. Kernels, pieces of... sample after the removal of dockage and shrunken and broken kernels. (h) Other grains. Barley, corn...

  16. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  17. 7 CFR 51.1415 - Inedible kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or otherwise...

  18. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
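    The computational virtue being exploited is that circulant structure is diagonalized by the discrete Fourier transform, so matrix-vector products and solves cost O(n log n). A one-level sketch follows (multilevel circulant matrices generalize this to nested block structure with multidimensional FFTs):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply by the circulant matrix with first column c in O(n log n):
    C x is the circular convolution of c and x, diagonalized by the DFT."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def circulant_solve(c, b):
    """Solve C y = b by dividing in the Fourier domain."""
    return np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))

n = 8
rng = np.random.default_rng(0)
c = rng.normal(size=n)
c[0] += 3.0 * n          # make C safely invertible (diagonally dominant)
x = rng.normal(size=n)

# Dense circulant matrix for comparison: column k is c rolled down by k.
C = np.array([np.roll(c, k) for k in range(n)]).T
print(np.allclose(C @ x, circulant_matvec(c, x)))   # True
print(np.allclose(x, circulant_solve(c, C @ x)))    # True
```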

  19. Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.

    PubMed

    Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit

    2018-02-13

    Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
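    The source/sink allocation idea can be caricatured in a few lines: each day the kernels demand their potential growth rate, and any shortfall in source supply is partly met by remobilizing reserves. The rules and numbers below are illustrative assumptions, not GREENLAB-Maize-Kernel's actual allocation model.

```python
import numpy as np

def fill_kernels(potential_rates, source_supply, reserve, days):
    """Toy source/sink allocation: each day, kernels demand their potential
    growth rate; if the daily source cannot meet total demand, the shortfall
    is covered by remobilizing reserves as far as they last, and any
    remaining deficit scales all kernels down proportionally.
    Illustrative only -- not GREENLAB's allocation rules."""
    weights = np.zeros_like(potential_rates)
    for _ in range(days):
        demand = potential_rates.sum()
        supply = source_supply
        if supply < demand:
            remob = min(reserve, demand - supply)   # remobilize reserves
            reserve -= remob
            supply += remob
        ratio = min(1.0, supply / demand)
        weights += potential_rates * ratio          # dry matter gained today
    return weights, reserve

rates = np.array([1.0, 0.8, 0.5])   # mg/day, hypothetical potential sink rates
w, r = fill_kernels(rates, source_supply=2.0, reserve=3.0, days=10)
print(w, r)   # [10. 8. 5.] 0.0 -- reserves exactly cover the 0.3 mg/day gap
```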

  20. Spectral and entropic characterizations of Wigner functions: applications to model vibrational systems.

    PubMed

    Luzanov, A V

    2008-09-07

    The Wigner function for pure quantum states is used as an integral kernel of the non-Hermitian operator K, to which the standard singular value decomposition (SVD) is applied. It provides a set of squared singular values treated as probabilities of the individual phase-space processes, the latter being described by eigenfunctions of KK^+ (for coordinate variables) and K^+K (for momentum variables). Such an SVD representation is employed to obviate the well-known difficulties in the definition of phase-space entropy measures in terms of the Wigner function, which usually takes negative values. In particular, new measures of nonclassicality are constructed in a form that automatically satisfies additivity for systems composed of noninteracting parts. Furthermore, emphasis is placed on the geometrical interpretation of the full entropy measure as the effective phase-space volume in the Wigner picture of quantum mechanics. The approach is exemplified by considering some generic vibrational systems. Specifically, for eigenstates of the harmonic oscillator and a superposition of coherent states, the singular value spectrum is evaluated analytically. Numerical computations are given for nonlinear problems (the Morse and double-well oscillators, and the Hénon-Heiles system). We also discuss the difficulties in implementing a similar technique for electronic problems.
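    The central device, treating a two-variable function as an integral kernel, taking its SVD, and reading the normalized squared singular values as probabilities, can be sketched numerically. The "cat-state-like" Wigner function below is schematic and unnormalized; a separable (coherent-state-like) Gaussian gives a rank-one kernel and hence zero entropy.

```python
import numpy as np

def kernel_entropy(K):
    """SVD-based entropy: normalize the squared singular values to
    probabilities p_i = s_i^2 / sum s^2 and return the Shannon entropy (nats)."""
    s = np.linalg.svd(K, compute_uv=False)
    p = s**2 / (s**2).sum()
    p = p[p > 1e-15]          # drop numerically zero components
    return -(p * np.log(p)).sum()

q = np.linspace(-6, 6, 201)
p = np.linspace(-6, 6, 201)
Q, P = np.meshgrid(q, p, indexing="ij")

# Coherent-state-like Wigner function: a separable Gaussian => rank-1 kernel.
W_coh = np.exp(-(Q - 2.0)**2 - P**2)

# Schematic cat-state-like Wigner function (unnormalized): two Gaussian
# lobes plus an interference term oscillating in p.
a = 2.0
W_cat = (np.exp(-(Q - a)**2 - P**2) + np.exp(-(Q + a)**2 - P**2)
         + 2 * np.exp(-Q**2 - P**2) * np.cos(2 * a * P))

print(kernel_entropy(W_coh), kernel_entropy(W_cat))
```

    The separable Gaussian yields (numerically) zero entropy, while the interference term raises the entropy of the cat-like kernel toward ln 2.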

  1. Unconventional protein sources: apricot seed kernels.

    PubMed

    Gabrial, G N; El-Nahry, F I; Awadalla, M Z; Girgis, S M

    1981-09-01

    Hamawy apricot seed kernels (sweet), Amar apricot seed kernels (bitter) and treated Amar apricot kernels (bitterness removed) were evaluated biochemically. All kernels were found to be high in fat (42.2-50.91%), protein (23.74-25.70%) and fiber (15.08-18.02%). Phosphorus, calcium, and iron were determined in all experimental samples. The three different apricot seed kernels were used for an extensive study including the qualitative determination of the amino acid constituents by acid hydrolysis, quantitative determination of some amino acids, and biological evaluation of the kernel proteins in order to use them as new protein sources. Weanling albino rats failed to grow on diets containing the Amar apricot seed kernels owing to low food consumption caused by the bitterness, although they did not lose weight. The Protein Efficiency Ratio data and blood analysis results showed the Hamawy apricot seed kernels to be higher in biological value than the treated apricot seed kernels. The Net Protein Ratio data, which account for both weight maintenance and growth, showed the treated apricot seed kernels to be higher in biological value than both the Hamawy and Amar kernels. The Net Protein Ratio values for the latter two kernels were nearly equal.

  2. The effect of tandem-ovoid titanium applicator on points A, B, bladder, and rectum doses in gynecological brachytherapy using 192Ir.

    PubMed

    Sadeghi, Mohammad Hosein; Sina, Sedigheh; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani

    2018-02-01

    The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is to estimate the effects of the tandem-and-ovoid applicator on the dose distribution inside the phantom by MCNP5 Monte Carlo simulations. In this study, the superposition method is used to obtain the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy treatment (superposition-1). Then, the sources are simulated inside the tandem-and-ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the dose at points A, B, bladder, and rectum is compared between the two cases. The exact dwell positions and times of the source, and the positions of the dosimetry points, were determined from images and treatment data of an adult woman patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and sources. The results of this study showed no significant differences between the results of the superposition method and the MC simulations for the different dosimetry points: the difference at all important dosimetry points was found to be less than 5%. According to these results, applicator attenuation has no significant effect on the calculated point doses, and the superposition method, adding the dose of each source obtained by MC simulation, can estimate the dose to points A, B, bladder, and rectum with good accuracy.
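    The superposition method referred to above is simply the sum, at each dosimetry point, of the dose contribution of every dwell position weighted by its dwell time. A bare-bones inverse-square sketch follows; the geometry and source strength are hypothetical, and a clinical TG-43 calculation would also apply radial dose and anisotropy functions.

```python
import numpy as np

def superposition_dose(dwell_positions, dwell_times, points, strength=1.0):
    """Dose at each point as the sum over dwell positions of an
    inverse-square point-source contribution weighted by dwell time.
    (Illustrative only: real brachytherapy dosimetry uses the TG-43
    formalism with radial dose and anisotropy functions.)"""
    doses = np.zeros(len(points))
    for k, pt in enumerate(points):
        r = np.linalg.norm(dwell_positions - pt, axis=1)   # cm
        doses[k] = np.sum(strength * dwell_times / r**2)
    return doses

# Hypothetical geometry (cm): three dwell positions along a tandem, with
# dosimetry points loosely standing in for A, B, bladder, and rectum.
dwells = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.5], [0.0, 0.0, 1.0]])
times = np.array([10.0, 12.0, 10.0])            # seconds, hypothetical
points = np.array([[2.0, 0.0, 0.5],             # "point A"-like, 2 cm lateral
                   [5.0, 0.0, 0.5],             # "point B"-like, 5 cm lateral
                   [0.0, 2.5, 1.5],             # "bladder"-like
                   [0.0, -2.5, -0.5]])          # "rectum"-like
doses = superposition_dose(dwells, times, points)
print(doses)   # the closer "point A" receives more dose than "point B"
```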

  3. An introduction to kernel-based learning algorithms.

    PubMed

    Müller, K R; Mika, S; Rätsch, G; Tsuda, K; Schölkopf, B

    2001-01-01

    This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples of successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel-based learning in supervised and unsupervised scenarios, including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.
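    One of the surveyed methods, kernel principal component analysis, can be sketched compactly: build a kernel matrix, center it in feature space, and eigendecompose. A minimal numpy sketch (the RBF kernel choice and parameters are illustrative):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA sketch: eigendecompose the centered kernel matrix."""
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    Kc = J @ K @ J                           # center in feature space
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # projections of the training points onto the principal components
    return vecs * np.sqrt(np.maximum(vals, 0))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
Z = kernel_pca(X, n_components=2)
```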

  4. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  5. Design of CT reconstruction kernel specifically for clinical lung imaging

    NASA Astrophysics Data System (ADS)

    Cody, Dianna D.; Hsieh, Jiang; Gladish, Gregory W.

    2005-04-01

    In this study we developed a new reconstruction kernel specifically for chest CT imaging. An experimental flat-panel CT scanner was used on large dogs to produce "ground-truth" reference chest CT images. These dogs were also examined using a clinical 16-slice CT scanner. We concluded from the dog images acquired on the clinical scanner that the loss of subtle lung structures was due mostly to the presence of the background noise texture when using currently available reconstruction kernels. This qualitative evaluation of the dog CT images prompted the design of a new reconstruction kernel. This new kernel combined a low-pass and a high-pass kernel to produce a new reconstruction kernel, called the "Hybrid" kernel. The performance of this Hybrid kernel fell between the two kernels on which it was based, as expected. This Hybrid kernel was also applied to a set of 50 patient data sets; the analysis of these clinical images is underway. We are hopeful that this Hybrid kernel will produce clinical images with an acceptable tradeoff of lung detail, reliable HU, and image noise.
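    The "Hybrid" construction described here, blending a smooth and a sharp reconstruction kernel, can be sketched in one dimension. The filter shapes below are illustrative stand-ins, not the actual clinical kernels:

```python
import numpy as np

def smooth_kernel(n):
    # "Soft" kernel: ramp |f| rolled off by a cosine window (suppresses noise)
    f = np.fft.fftfreq(n)
    return np.abs(f) * np.cos(np.pi * f) ** 2

def sharp_kernel(n):
    # "Sharp" kernel: ramp |f| boosted at high frequencies (enhances edges)
    f = np.fft.fftfreq(n)
    return np.abs(f) * (1.0 + 2.0 * (2 * np.abs(f)) ** 2)

def hybrid_kernel(n, w=0.5):
    """Weighted blend of a smooth and a sharp kernel; its frequency response
    lies between the two parents, as the abstract reports for the Hybrid kernel."""
    return w * smooth_kernel(n) + (1 - w) * sharp_kernel(n)

n = 256
h = hybrid_kernel(n, w=0.5)
lo, hi = smooth_kernel(n), sharp_kernel(n)
```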

  6. Quality changes in macadamia kernel between harvest and farm-gate.

    PubMed

    Walton, David A; Wallace, Helen M

    2011-02-01

    Macadamia integrifolia, Macadamia tetraphylla and their hybrids are cultivated for their edible kernels. After harvest, nuts-in-shell are partially dried on-farm and sorted to eliminate poor-quality kernels before consignment to a processor. During these operations, kernel quality may be lost. In this study, macadamia nuts-in-shell were sampled at five points of an on-farm postharvest handling chain from dehusking to the final storage silo to assess quality loss prior to consignment. Shoulder damage, weight of pieces and unsound kernel were assessed for raw kernels, and colour, mottled colour and surface damage for roasted kernels. Shoulder damage, weight of pieces and unsound kernel for raw kernels increased significantly between the dehusker and the final silo. Roasted kernels displayed a significant increase in dark colour, mottled colour and surface damage during on-farm handling. Significant loss of macadamia kernel quality occurred on a commercial farm during sorting and storage of nuts-in-shell before nuts were consigned to a processor. Nuts-in-shell should be dried as quickly as possible and on-farm handling minimised to maintain optimum kernel quality. 2010 Society of Chemical Industry.

  7. A new discriminative kernel from probabilistic models.

    PubMed

    Tsuda, Koji; Kawanabe, Motoaki; Rätsch, Gunnar; Sonnenburg, Sören; Müller, Klaus-Robert

    2002-10-01

    Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel, derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.

  8. Time delay and distance measurement

    NASA Technical Reports Server (NTRS)

    Abshire, James B. (Inventor); Sun, Xiaoli (Inventor)

    2011-01-01

    A method for measuring time delay and distance may include providing an electromagnetic radiation carrier frequency and modulating one or more of amplitude, phase, frequency, polarization, and pointing angle of the carrier frequency with a return to zero (RZ) pseudo random noise (PN) code. The RZ PN code may have a constant bit period and a pulse duration that is less than the bit period. A receiver may detect the electromagnetic radiation and calculate the scattering profile versus time (or range) by computing a cross correlation function between the recorded received signal and a three-state RZ PN code kernel in the receiver. The method also may be used for pulse delay time (i.e., PPM) communications.
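    The ranging scheme can be illustrated numerically: correlate a noisy, delayed copy of an RZ PN waveform against a code kernel and locate the correlation peak. The zero-mean kernel below is a simple stand-in for the patent's three-state kernel, and all signal parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Return-to-zero (RZ) PN code: each '1' bit is a short pulse within the bit period
bits = rng.integers(0, 2, 127)
samples_per_bit, pulse_len = 8, 2        # pulse duration < bit period (RZ)
tx = np.zeros(len(bits) * samples_per_bit)
for i, b in enumerate(bits):
    tx[i * samples_per_bit : i * samples_per_bit + pulse_len] = b

true_delay = 37                          # round-trip delay in samples
rx = np.zeros(len(tx) + 100)
rx[true_delay : true_delay + len(tx)] += 0.5 * tx   # attenuated, delayed echo
rx += 0.05 * rng.normal(size=len(rx))               # receiver noise

# Cross-correlate the recorded signal with a zero-mean code kernel
# (a hedged stand-in for the patent's three-state RZ PN kernel)
kernel = tx - tx.mean()
corr = np.correlate(rx, kernel, mode="valid")
est_delay = int(np.argmax(corr))         # peak of the scattering profile -> range
```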

  9. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centering step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
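    The (non-incremental) NPT itself can be sketched briefly: eigendecompose the kernel matrix to obtain explicit coordinates whose inner products reproduce the kernel, then embed a new sample from its kernel values against the old data without moving the old coordinates. A minimal sketch that skips centering, as the abstract suggests (the RBF kernel choice is illustrative):

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def npt_coordinates(K, tol=1e-10):
    """Nonlinear projection trick (sketch): from K = U L U^T, explicit
    coordinates are U L^{1/2}, so that coords @ coords.T reproduces K."""
    vals, vecs = np.linalg.eigh(K)
    keep = vals > tol
    return vecs[:, keep] * np.sqrt(vals[keep]), (vecs[:, keep], vals[keep])

def npt_embed_new(k_new, basis):
    """Coordinates of a new sample from its kernel values against old data:
    z = L^{-1/2} U^T k_new, leaving the old coordinates unchanged."""
    U, L = basis
    return (U / np.sqrt(L)).T @ k_new

rng = np.random.default_rng(2)
X = rng.normal(size=(15, 4))
K = rbf(X, X)
Z, basis = npt_coordinates(K)
```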

  10. Measurement of the modulation transfer function of x-ray scintillators via heterodyne speckles (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Manfredda, Michele; Giglio, Marzio

    2016-09-01

    The approach can be seen as the optical transposition of what is done in electronics when a system is fed with white noise (the input signal autocorrelation is a Dirac delta) and the autocorrelation of the output signal is then taken, yielding the Point Spread Function (PSF) of the system (whose Fourier transform is the MTF). In the realm of optics, the tricky task is the generation and handling of a suitable random noise, which must be produced via scattering. Ideally, pure 2D white noise (a random superposition of sinusoidal intensity modulations at all spatial frequencies in all directions) would be produced by ideal point-like scatterers illuminated with completely coherent radiation: interference between scattered waves would generate high-frequency fringes, realizing the sought noise signal. In practice, limited scatterer size and limited coherence properties of the radiation limit the spatial bandwidth of the illuminating field. Whereas information about the particle-size effect can be promptly obtained from the form factor of the sample used, which is very well known in the case of spherical particles, the beam coherence is usually not known with adequate accuracy, especially at x-ray wavelengths. In the particular configuration used, speckles are produced by interfering the scattered waves with the strong transmitted beam (heterodyne speckles), in contrast to the common case where speckles are produced by the mutual interference between scattered waves, without any transmitted beam acting as local oscillator (homodyne speckles). The use of a heterodyne speckle field, thanks to its self-referencing scheme, allows one to gather, at a fixed distance, response curves spanning a wide range of wavevectors. By combining the information from curves acquired at a few distances (e.g. 2-3), it is possible to experimentally separate the contribution of spurious effects (such as limited coherence) and identify the spectral component, due to the response of the test system, that is responsible for the broadening of the optical input signal.
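    The electronics analogy above can be checked numerically: drive a system with unit-variance white noise and average the output power spectrum, which converges to |MTF|^2. A 1-D sketch with a hypothetical Gaussian PSF standing in for the scintillator response:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4096
x = np.arange(n)

# Hypothetical detector PSF: a Gaussian blur (stand-in for the real response)
sigma = 4.0
psf = np.exp(-0.5 * ((x - n // 2) / sigma) ** 2)
psf /= psf.sum()
H = np.fft.fft(np.fft.ifftshift(psf))    # system transfer function

# Feed the system with white noise and average the output power spectrum:
# E|FFT(out)|^2 = n * |H|^2 for unit-variance noise, so the average -> |MTF|^2
trials = 200
acc = np.zeros(n)
for _ in range(trials):
    w = rng.normal(size=n)
    out = np.real(np.fft.ifft(np.fft.fft(w) * H))
    acc += np.abs(np.fft.fft(out)) ** 2
mtf2_est = acc / (trials * n)
mtf2_true = np.abs(H) ** 2
```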

  11. Born scattering of long-period body waves

    NASA Astrophysics Data System (ADS)

    Dalkolmo, Jörg; Friederich, Wolfgang

    2000-09-01

    The Born approximation is applied to the modelling of the propagation of deeply turning long-period body waves through heterogeneities in the lowermost mantle. We use an exact Green's function for a spherically symmetric earth model that also satisfies the appropriate boundary conditions at internal boundaries and the surface of the earth. The scattered displacement field is obtained by a numerical quadrature of the product of the Green's function, the exciting wavefield and structural perturbations. We study three examples: scattering of long-period P waves from a plume rising from the core-mantle boundary (CMB), generation of long-period precursors to PKIKP by strong, localized scatterers at the CMB, and propagation of core-diffracted P waves through large-scale heterogeneities in D''. The main results are as follows: (1) the signals scattered from a realistic plume are small with relative amplitudes of less than 2 per cent at a period of 20 s, rendering plume detection a fairly difficult task; (2) strong heterogeneities at the CMB of appropriate size may produce observable long-period precursors to PKIKP in spite of the presence of a diffraction from the PKP-B caustic; (3) core-diffracted P waves (Pdiff) are sensitive to structure in D'' far off the geometrical ray path and also far beyond the entry and exit points of the ray into and out of D''; sensitivity kernels exhibit ring-shaped patterns of alternating sign reminiscent of Fresnel zones; (4) Pdiff also shows a non-negligible sensitivity to shear wave velocity in D''; (5) down to periods of 40 s, the Born approximation is sufficiently accurate to allow waveform modelling of Pdiff through large-scale heterogeneities in D'' of up to 5 per cent.

  12. Entanglement and quantum superposition induced by a single photon

    NASA Astrophysics Data System (ADS)

    Lü, Xin-You; Zhu, Gui-Lei; Zheng, Li-Li; Wu, Ying

    2018-03-01

    We predict the occurrence of single-photon-induced entanglement and quantum superposition in a hybrid quantum model, introducing an optomechanical coupling into the Rabi model. It originates from the photon-dependent quantum property of the ground state featured by the proposed hybrid model. It is associated with a single-photon-induced quantum phase transition, and is immune to the A2 term of the spin-field interaction. Moreover, the obtained quantum superposition state is actually a squeezed cat state, which can significantly enhance precision in quantum metrology. This work offers an approach to manipulate entanglement and quantum superposition with a single photon, which might have potential applications in the engineering of new single-photon quantum devices, and also fundamentally broadens the regime of cavity QED.

  13. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10^-11 compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10^-11 and invasion time error to <5%.
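    The two discretization methods compared above can be sketched directly: sample the Gaussian density at cell centers, or integrate it over each cell with the error function. For σ well below one cell the two kernels differ noticeably, consistent with the errors reported; the sizes and σ values here are illustrative:

```python
import math
import numpy as np

def gaussian_kernel_center(radius, sigma):
    """Cell-center method: sample the Gaussian density at each cell center."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-0.5 * (x / sigma) ** 2)
    k = np.outer(g, g)
    return k / k.sum()

def gaussian_kernel_integrated(radius, sigma):
    """Cell-integration method: integrate the density over each cell
    using the error function; more accurate for kernels small vs the grid."""
    edges = np.arange(-radius - 0.5, radius + 1.5)
    cdf = np.array([0.5 * (1 + math.erf(e / (sigma * math.sqrt(2)))) for e in edges])
    g = np.diff(cdf)
    k = np.outer(g, g)
    return k / k.sum()

# For sigma well below one cell, the two discretizations differ noticeably
kc = gaussian_kernel_center(3, 0.5)
ki = gaussian_kernel_integrated(3, 0.5)
```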

  14. Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima blume)

    NASA Astrophysics Data System (ADS)

    Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.

    2016-08-01

    Anthraquinones (AQS) represent a group of secondary metabolic products in plants. AQS often occur naturally in plants and microorganisms. In a previous study, we found that AQS were produced by the enzymatic browning reaction in Chinese chestnut kernels. To find out whether the non-enzymatic browning reaction in the kernels could also produce AQS, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatically browned kernels, and browned kernels, and the contents of AQS were determined. High performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained a high amount of AQS. Thus, we confirmed that AQS could be produced during both enzymatic and non-enzymatic browning processes. Rhein and emodin were the main components of AQS in the browned kernels.

  15. Broken rice kernels and the kinetics of rice hydration and texture during cooking.

    PubMed

    Saleh, Mohammed; Meullenet, Jean-Francois

    2013-05-01

    During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear as to how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work intended to study the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels forming treatments of 0, 40, 150, 350 or 1000 g kg(-1) broken kernels ratio. Rice samples were then cooked and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, rice sample texture became increasingly softer (P < 0.05) but the unbroken kernels became significantly harder. Moisture content and moisture uptake rate were positively correlated, and cooked rice hardness was negatively correlated to the percentage of broken kernels in rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.

  16. Estimating Concentrations of Road-Salt Constituents in Highway-Runoff from Measurements of Specific Conductance

    USGS Publications Warehouse

    Granato, Gregory E.; Smith, Kirk P.

    1999-01-01

    Discrete or composite samples of highway runoff may not adequately represent in-storm water-quality fluctuations because continuous records of water stage, specific conductance, pH, and temperature of the runoff indicate that these properties fluctuate substantially during a storm. Continuous records of water-quality properties can be used to maximize the information obtained about the stormwater runoff system being studied and can provide the context needed to interpret analyses of water samples. Concentrations of the road-salt constituents calcium, sodium, and chloride in highway runoff were estimated from theoretical and empirical relations between specific conductance and the concentrations of these ions. These relations were examined using the analysis of 233 highway-runoff samples collected from August 1988 through March 1995 at four highway-drainage monitoring stations along State Route 25 in southeastern Massachusetts. Theoretically, the specific conductance of a water sample is the sum of the individual conductances attributed to each ionic species in solution (the product of the concentration of each ion in milliequivalents per liter (meq/L) multiplied by the equivalent ionic conductance at infinite dilution), thereby establishing the principle of superposition. Superposition provides an estimate of actual specific conductance that is within measurement error throughout the conductance range of many natural waters, with errors of less than ±5 percent below 1,000 microsiemens per centimeter (µS/cm) and ±10 percent between 1,000 and 4,000 µS/cm if all major ionic constituents are accounted for. A semi-empirical method (adjusted superposition) was used to adjust for concentration effects (superposition-method prediction errors at high and low concentrations) and to relate measured specific conductance to that calculated using superposition. The adjusted superposition method, which was developed to interpret the State Route 25 highway-runoff records, accounts for contributions of constituents other than calcium, sodium, and chloride in dilute waters. The adjusted superposition method also accounts for the attenuation of each constituent's contribution to conductance as ionic strength increases. Use of the adjusted superposition method generally reduced predictive error to within measurement error throughout the range of specific conductance (from 37 to 51,500 µS/cm) in the highway runoff samples. The effects of pH, temperature, and organic constituents on the relation between concentrations of dissolved constituents and measured specific conductance were examined, but these properties did not substantially affect interpretation of the Route 25 data set. Predictive abilities of the adjusted superposition method were similar to results obtained by standard regression techniques, but the adjusted superposition method has several advantages. Adjusted superposition can be applied using available published data about the constituents in precipitation, highway runoff, and the deicing chemicals applied to a highway. This semi-empirical method can be used as a predictive and diagnostic tool before a substantial number of samples are collected, whereas the power of the regression method depends on a large number of water-quality analyses that may be affected by bias in the data.
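    The superposition principle described above reduces to a short computation: multiply each ion's concentration in meq/L by its equivalent ionic conductance at infinite dilution and sum. A sketch using textbook conductance values (listed here as assumptions, not taken from the report):

```python
# Equivalent ionic conductances at infinite dilution, 25 C (S*cm^2/meq);
# approximate textbook values, listed as assumptions
LAMBDA = {"Ca": 59.5, "Na": 50.1, "Cl": 76.3}

def superposition_sc(meq_per_L):
    """Superposition estimate of specific conductance (uS/cm): the sum over
    major ions of concentration (meq/L) x equivalent ionic conductance."""
    return sum(meq_per_L[ion] * LAMBDA[ion] for ion in meq_per_L)

# Dilute road-salt runoff dominated by NaCl: 2 meq/L Na+ and 2 meq/L Cl-
sc = superposition_sc({"Na": 2.0, "Cl": 2.0})
```

    As the abstract notes, this plain sum is accurate for dilute waters; at high ionic strength each constituent's contribution is attenuated, which the adjusted method corrects for.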

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rothstein, Ira Z.; Stewart, Iain W.

    Starting with QCD, we derive an effective field theory description for forward scattering and factorization violation as part of the soft-collinear effective field theory (SCET) for high energy scattering. These phenomena are mediated by long distance Glauber gluon exchanges, which are static in time, localized in the longitudinal distance, and act as a kernel for forward scattering where |t| << s. In hard scattering, Glauber gluons can induce corrections which invalidate factorization. With SCET, Glauber exchange graphs can be calculated explicitly, and are distinct from graphs involving soft, collinear, or ultrasoft gluons. We derive a complete basis of operators which describe the leading power effects of Glauber exchange. Key ingredients include regulating light-cone rapidity singularities and subtractions which prevent double counting. Our results include a novel all orders gauge invariant pure glue soft operator which appears between two collinear rapidity sectors. The 1-gluon Feynman rule for the soft operator coincides with the Lipatov vertex, but it also contributes to emissions with ≥ 2 soft gluons. Our Glauber operator basis is derived using tree level and one-loop matching calculations from full QCD to both SCET II and SCET I. The one-loop amplitude's rapidity renormalization involves mixing of color octet operators and yields gluon Reggeization at the amplitude level. The rapidity renormalization group equations for the leading soft and collinear functions in the forward scattering cross section are each given by the BFKL equation. Various properties of Glauber gluon exchange in the context of both forward scattering and hard scattering factorization are described. For example, we derive an explicit rule for when eikonalization is valid, and provide a direct connection to the picture of multiple Wilson lines crossing a shockwave.
In hard scattering operators, Glauber subtractions for soft and collinear loop diagrams ensure that we are not sensitive to the directions of the soft and collinear Wilson lines. Conversely, certain Glauber interactions can be absorbed into these soft and collinear Wilson lines by taking them to be in specific directions. Finally, we also discuss criteria for factorization violation.

  18. Generalized Rayleigh scattering. I. Basic theory.

    NASA Astrophysics Data System (ADS)

    Ivanov, V. V.

    1995-11-01

    The classical problem of multiple molecular (in particular, Rayleigh) scattering in plane-parallel atmospheres is considered from a somewhat broader viewpoint than usual. The general approach and ideology are borrowed from non-LTE line formation theory. The main emphasis is on the depth dependence of the corresponding source matrix rather than on the emergent radiation. We study the azimuth-averaged radiation field of polarized radiation in a semi-infinite atmosphere with embedded primary sources. The corresponding 2×2 phase matrix of molecular scattering is P = (1-W) P_I + W P_R, where P_I and P_R are the phase matrices of scalar isotropic scattering and of Rayleigh scattering, respectively, and W is the depolarization parameter. Contrary to the usual assumption that W ∈ [0,1], we assume W ∈ [0,∞) and call this generalized Rayleigh scattering (GRS). Using the factorization of P, which is intimately related to its dyadic expansion, we reduce the problem to an integral equation for the source matrix S(τ) with a matrix displacement kernel. In operator form this equation is S = ΛS + S*, where Λ is the matrix Λ-operator and S* is the primary source term. This leads to a new concept, the matrix albedo of single scattering λ = diag(λ_I, λ_Q), where λ_I is the usual (scalar) single-scattering albedo and λ_Q = 0.7Wλ_I. Its use enables one to formulate matrix equivalents of many of the results of the scalar theory in exactly the same form as in the scalar case. Of crucial importance is the matrix equivalent of the √ε law of the scalar theory. Another useful new concept is the λ-plane, i.e., the plane with the axes (λ_I, λ_Q). Systematic use of the matrix √ε law and of the λ-plane proved to be a useful instrument in classifying various limiting and particular cases of GRS and in discussing numerical data on the matrix source functions (to be given in Paper II of the series).

  19. The effect of tandem-ovoid titanium applicator on points A, B, bladder, and rectum doses in gynecological brachytherapy using 192Ir

    PubMed Central

    Sadeghi, Mohammad Hosein; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani

    2018-01-01

    Purpose The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is an estimation of the effects of the tandem and ovoid applicator on dose distribution inside the phantom by MCNP5 Monte Carlo simulations. Material and methods In this study, the superposition method is used for obtaining the dose distribution in the phantom without using the applicator for a typical gynecological brachytherapy (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A, B, bladder, and rectum were compared with the results of superposition. The exact dwell positions and times of the source and the positions of the dosimetry points were determined from the images and treatment data of an adult woman patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and sources. Results The results of this study showed no significant differences between the superposition method and the MC simulations for the different dosimetry points. The difference at all important dosimetry points was found to be less than 5%. Conclusions According to the results, applicator attenuation has no significant effect on the calculated point doses; the superposition method, which adds the dose of each source obtained by the MC simulation, can estimate the dose to points A, B, bladder, and rectum with good accuracy. PMID:29619061

  20. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists of learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels, and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show clear gains compared to several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.
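    A toy version of such a deep kernel can be sketched: combine elementary kernels, pass the result through an elementwise nonlinearity that preserves positive semi-definiteness, and rescale. The specific activation (an elementwise exponential, PSD-preserving by the Schur product theorem) and the fixed weights are illustrative assumptions, not the paper's learned networks:

```python
import numpy as np

def linear_kernel(X, Y):
    return X @ Y.T

def rbf_kernel(X, Y, gamma=0.1):
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def deep_kernel(X, w1=(0.5, 0.5), w2=1.0):
    """Toy two-layer 'deep multiple kernel': layer 1 is a convex combination
    of elementary kernels passed through an elementwise exponential
    (PSD-preserving); layer 2 rescales the result."""
    K1 = w1[0] * linear_kernel(X, X) + w1[1] * rbf_kernel(X, X)
    return w2 * np.exp(K1 - K1.max())    # subtract max for numerical stability

rng = np.random.default_rng(4)
X = rng.normal(size=(12, 5))
K = deep_kernel(X)
```

    In the paper the layer weights are learned (supervised or semi-supervised); here they are fixed purely for illustration.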

  1. Multineuron spike train analysis with R-convolution linear combination kernel.

    PubMed

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
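    The linear combination construction can be sketched: pick any single-neuron spike train kernel and take a weighted sum across neurons. The Gaussian pair-sum kernel below is an illustrative choice, not the paper's:

```python
import numpy as np

def spike_kernel(s, t, tau=0.05):
    """Single-neuron spike train kernel: sum of Gaussians over all spike
    pairs (a common smoothing-based kernel; the choice is illustrative)."""
    s, t = np.asarray(s, float), np.asarray(t, float)
    if len(s) == 0 or len(t) == 0:
        return 0.0
    d = s[:, None] - t[None, :]
    return float(np.exp(-0.5 * (d / tau) ** 2).sum())

def linear_combination_kernel(S, T, weights):
    """Multineuron extension: weighted sum of per-neuron kernels, a simple
    member of the R-convolution linear combination family."""
    return sum(w * spike_kernel(s, t) for w, s, t in zip(weights, S, T))

# Two neurons, two trials (spike times in seconds)
trial_a = [[0.10, 0.32, 0.55], [0.20, 0.41]]
trial_b = [[0.11, 0.33, 0.54], [0.22]]
k = linear_combination_kernel(trial_a, trial_b, weights=[1.0, 0.5])
```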

  2. Study on Energy Productivity Ratio (EPR) at palm kernel oil processing factory: case study on PT-X at Sumatera Utara Plantation

    NASA Astrophysics Data System (ADS)

    Haryanto, B.; Bukit, R. Br; Situmeang, E. M.; Christina, E. P.; Pandiangan, F.

    2018-02-01

    The purpose of this study was to determine the performance, productivity and feasibility of the operation of a palm kernel processing plant based on the Energy Productivity Ratio (EPR). EPR is expressed as the ratio of output energy and by-product to input energy. The palm kernel plant processes palm kernels into palm kernel oil. The procedure started with collecting the data needed as energy input, such as palm kernel prices, energy demand and depreciation of the factory. The energy output and by-products comprise the whole production value, such as the palm kernel oil price and the prices of the remaining products such as shells and pulp. The energy equivalence of palm kernel oil was calculated to analyze the value of the Energy Productivity Ratio (EPR) based on processing capacity per year. The investigation was carried out at the kernel oil processing plant PT-X at a Sumatera Utara plantation. The value of EPR was 1.54 (EPR > 1), which indicated that the processing of palm kernel into palm kernel oil is feasible to operate based on energy productivity.
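    The EPR computation itself is a simple ratio. The sketch below uses invented numbers chosen only to reproduce the reported value of 1.54; the study's actual inputs are plant-specific prices and energy figures:

```python
def energy_productivity_ratio(output_value, byproduct_value, input_value):
    """EPR = (output energy/value + by-product value) / (input energy/value).
    EPR > 1 indicates the operation is feasible."""
    return (output_value + byproduct_value) / input_value

# Illustrative numbers only (the study reports EPR = 1.54 for the plant studied)
epr = energy_productivity_ratio(output_value=120.0, byproduct_value=34.0,
                                input_value=100.0)
```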

  3. Electronic polarization effect on low-frequency infrared and Raman spectra of aprotic solvent: Molecular dynamics simulation study with charge response kernel by second order Møller-Plesset perturbation method

    NASA Astrophysics Data System (ADS)

    Isegawa, Miho; Kato, Shigeki

    2007-12-01

    Low-frequency infrared (IR) and depolarized Raman scattering (DRS) spectra of acetonitrile, methylene chloride, and acetone liquids are simulated via molecular dynamics calculations with the charge response kernel (CRK) model obtained at the second order Møller-Plesset perturbation (MP2) level. For this purpose, the analytical second derivative technique for the MP2 energy is employed to evaluate the CRK matrices. The calculated IR spectra reasonably agree with the experiments. In particular, the agreement is excellent for acetone because the present CRK model well reproduces the experimental polarizability in the gas phase. The importance of interaction-induced dipole moments in characterizing the spectral shapes is stressed. The DRS spectrum of acetone is mainly discussed because the experimental spectrum is available only for this molecule. The calculated spectrum is close to the experiment. The comparison of the present results with those by the multiple random telegraph model is also made. By decomposing the polarizability anisotropy time correlation function into the contributions from the permanent polarizability, the induced polarizability and their cross term, a discrepancy from the previous calculations is observed in the sign of the permanent-induced cross-term contribution. The origin of this discrepancy is discussed by analyzing the correlation functions for acetonitrile.

  4. Understanding the large-distance behavior of transverse-momentum-dependent parton densities and the Collins-Soper evolution kernel

    NASA Astrophysics Data System (ADS)

    Collins, John; Rogers, Ted

    2015-04-01

    There is considerable controversy about the size and importance of nonperturbative contributions to the evolution of transverse-momentum-dependent (TMD) parton distribution functions. Standard fits to relatively high-energy Drell-Yan data give evolution that when taken to lower Q is too rapid to be consistent with recent data in semi-inclusive deeply inelastic scattering. Some authors provide very different forms for TMD evolution, even arguing that nonperturbative contributions at large transverse distance bT are not needed or are irrelevant. Here, we systematically analyze the issues, both perturbative and nonperturbative. We make a motivated proposal for the parametrization of the nonperturbative part of the TMD evolution kernel that could give consistency: with the variety of apparently conflicting data, with theoretical perturbative calculations where they are applicable, and with general theoretical nonperturbative constraints on correlation functions at large distances. We propose and use a scheme- and scale-independent function A (bT) that gives a tool to compare and diagnose different proposals for TMD evolution. We also advocate for phenomenological studies of A (bT) as a probe of TMD evolution. The results are important generally for applications of TMD factorization. In particular, they are important to making predictions for proposed polarized Drell-Yan experiments to measure the Sivers function.

  5. A mathematical deconvolution formulation for superficial dose distribution measurement by Cerenkov light dosimetry.

    PubMed

    Brost, Eric Edward; Watanabe, Yoichi

    2018-06-01

    Cerenkov photons are created by high-energy radiation beams used for radiation therapy. In this study, we developed a Cerenkov light dosimetry technique to obtain a two-dimensional dose distribution in a superficial region of medium from the images of Cerenkov photons by using a deconvolution method. An integral equation was derived to represent the Cerenkov photon image acquired by a camera for a given incident high-energy photon beam by using convolution kernels. Subsequently, an equation relating the planar dose at a depth to a Cerenkov photon image using the well-known relationship between the incident beam fluence and the dose distribution in a medium was obtained. The final equation contained a convolution kernel called the Cerenkov dose scatter function (CDSF). The CDSF function was obtained by deconvolving the Cerenkov scatter function (CSF) with the dose scatter function (DSF). The GAMOS (Geant4-based Architecture for Medicine-Oriented Simulations) Monte Carlo particle simulation software was used to obtain the CSF and DSF. The dose distribution was calculated from the Cerenkov photon intensity data using an iterative deconvolution method with the CDSF. The theoretical formulation was experimentally evaluated by using an optical phantom irradiated by high-energy photon beams. The intensity of the deconvolved Cerenkov photon image showed linear dependence on the dose rate and the photon beam energy. The relative intensity showed a field size dependence similar to the beam output factor. Deconvolved Cerenkov images showed improvement in dose profiles compared with the raw image data. In particular, the deconvolution significantly improved the agreement in the high dose gradient region, such as in the penumbra. Deconvolution with a single iteration was found to provide the most accurate solution of the dose. 
Two-dimensional dose distributions of the deconvolved Cerenkov images agreed well with the reference distributions for both square fields and a multileaf collimator (MLC)-defined, irregularly shaped field. The proposed technique improved the accuracy of Cerenkov photon dosimetry in the penumbra region. The results of this study showed initial validation of the deconvolution method for beam profile measurements in a homogeneous medium. The new formulation accounted for the physical processes of Cerenkov photon transport in the medium more accurately than previously published methods. © 2018 American Association of Physicists in Medicine.
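    The abstract does not spell out the iteration scheme; as a stand-in illustration of iterative deconvolution with a scatter kernel, here is a classic Van Cittert-style sketch on a 1D toy "dose" profile (the kernel and profile are made up, not the paper's CDSF):

    ```python
    import numpy as np

    def van_cittert_deconvolve(image, kernel, iterations=1):
        """Van Cittert-style iterative deconvolution sketch: repeatedly apply
        the additive update d_{k+1} = d_k + (image - kernel * d_k)."""
        estimate = image.copy()
        for _ in range(iterations):
            reblurred = np.convolve(estimate, kernel, mode="same")
            estimate = estimate + (image - reblurred)
        return estimate

    # Toy example: blur a step 'dose' profile with a small scatter kernel,
    # then sharpen it back with a single iteration (the paper found one
    # iteration gave the most accurate solution for its kernel).
    kernel = np.array([0.25, 0.5, 0.25])
    dose = np.zeros(32)
    dose[10:22] = 1.0
    measured = np.convolve(dose, kernel, mode="same")
    recovered = van_cittert_deconvolve(measured, kernel, iterations=1)
    ```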

  6. Accuracy of a teleported squeezed coherent-state superposition trapped into a high-Q cavity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sales, J. S.; Silva, L. F. da; Almeida, N. G. de

    2011-03-15

    We propose a scheme to teleport a superposition of squeezed coherent states from one mode of a lossy cavity to one mode of a second lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity demonstrating that accurate quantum teleportation can be achieved for some parameters of the squeezed coherent states superposition. The signature of successful quantum teleportation is present in the negative values of the Wigner function.

  7. Accuracy of a teleported squeezed coherent-state superposition trapped into a high-Q cavity

    NASA Astrophysics Data System (ADS)

    Sales, J. S.; da Silva, L. F.; de Almeida, N. G.

    2011-03-01

    We propose a scheme to teleport a superposition of squeezed coherent states from one mode of a lossy cavity to one mode of a second lossy cavity. Based on current experimental capabilities, we present a calculation of the fidelity demonstrating that accurate quantum teleportation can be achieved for some parameters of the squeezed coherent states superposition. The signature of successful quantum teleportation is present in the negative values of the Wigner function.

  8. The Statistics of Radio Astronomical Polarimetry: Disjoint, Superposed, and Composite Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straten, W. van; Tiburzi, C., E-mail: willem.van.straten@aut.ac.nz

    2017-02-01

    A statistical framework is presented for the study of the orthogonally polarized modes of radio pulsar emission via the covariances between the Stokes parameters. To accommodate the typically heavy-tailed distributions of single-pulse radio flux density, the fourth-order joint cumulants of the electric field are used to describe the superposition of modes with arbitrary probability distributions. The framework is used to consider the distinction between superposed and disjoint modes, with particular attention to the effects of integration over finite samples. If the interval over which the polarization state is estimated is longer than the timescale for switching between two or more disjoint modes of emission, then the modes are unresolved by the instrument. The resulting composite sample mean exhibits properties that have been attributed to mode superposition, such as depolarization. Because the distinction between disjoint modes and a composite sample of unresolved disjoint modes depends on the temporal resolution of the observing instrumentation, the arguments in favor of superposed modes of pulsar emission are revisited, and observational evidence for disjoint modes is described. In principle, the four-dimensional covariance matrix that describes the distribution of sample mean Stokes parameters can be used to distinguish between disjoint modes, superposed modes, and a composite sample of unresolved disjoint modes. More comprehensive and conclusive interpretation of the covariance matrix requires more detailed consideration of various relevant phenomena, including temporally correlated subpulse modulation (e.g., jitter), statistical dependence between modes (e.g., covariant intensities and partial coherence), and multipath propagation effects (e.g., scintillation and scattering).

  9. GPU-Based Point Cloud Superpositioning for Structural Comparisons of Protein Binding Sites.

    PubMed

    Leinweber, Matthias; Fober, Thomas; Freisleben, Bernd

    2018-01-01

    In this paper, we present a novel approach to solve the labeled point cloud superpositioning problem for performing structural comparisons of protein binding sites. The solution is based on a parallel evolution strategy that operates on large populations and runs on GPU hardware. The proposed evolution strategy reduces the likelihood of getting stuck in a local optimum of the multimodal real-valued optimization problem represented by labeled point cloud superpositioning. The performance of the GPU-based parallel evolution strategy is compared to a previously proposed CPU-based sequential approach for labeled point cloud superpositioning, indicating that the GPU-based parallel evolution strategy leads to qualitatively better results and significantly shorter runtimes, with speed improvements of up to a factor of 1,500 for large populations. Binary classification tests based on the ATP, NADH, and FAD protein subsets of CavBase, a database containing putative binding sites, show average classification rate improvements from about 92 percent (CPU) to 96 percent (GPU). Further experiments indicate that the proposed GPU-based labeled point cloud superpositioning approach can be superior to traditional protein comparison approaches based on sequence alignments.

  10. Coherent superposition of propagation-invariant laser beams

    NASA Astrophysics Data System (ADS)

    Soskind, R.; Soskind, M.; Soskind, Y. G.

    2012-10-01

    The coherent superposition of propagation-invariant laser beams represents an important beam-shaping technique, and results in new beam shapes that retain the unique property of propagation invariance. Propagation-invariant laser beam shapes depend on the order of the propagating beam, and include Hermite-Gaussian and Laguerre-Gaussian beams, as well as the recently introduced Ince-Gaussian beams, which additionally depend on the beam ellipticity parameter. While the superposition of Hermite-Gaussian and Laguerre-Gaussian beams has been discussed in the past, the coherent superposition of Ince-Gaussian laser beams has not received significant attention in the literature. In this paper, we present the formation of propagation-invariant laser beams based on the coherent superposition of Hermite-Gaussian, Laguerre-Gaussian, and Ince-Gaussian beams of different orders. We also show the resulting field distributions of the superimposed Ince-Gaussian laser beams as a function of the ellipticity parameter. By changing the beam ellipticity parameter, we compare the various shapes of the superimposed propagation-invariant laser beams transitioning from Laguerre-Gaussian beams at one ellipticity extreme to Hermite-Gaussian beams at the other extreme.
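    In a coherent superposition the complex mode fields add before the intensity is taken. As a sketch, superposing Hermite-Gaussian HG10 and HG01 modes with a 90° relative phase yields a ring-shaped, Laguerre-Gaussian-like beam; the waist-plane profiles below are unnormalized and ignore propagation phases:

    ```python
    import numpy as np
    from numpy.polynomial.hermite import hermval

    def hg_mode(n, m, x, y, w=1.0):
        """Unnormalized Hermite-Gaussian HG_nm transverse profile at the
        beam waist (no Gouy or propagation phase)."""
        cn = [0] * n + [1]  # coefficient vector selecting H_n
        cm = [0] * m + [1]  # coefficient vector selecting H_m
        envelope = np.exp(-(x**2 + y**2) / w**2)
        return (hermval(np.sqrt(2) * x / w, cn)
                * hermval(np.sqrt(2) * y / w, cm) * envelope)

    # Coherent superposition: complex amplitudes add, then intensity = |field|^2.
    # HG10 + i*HG01 gives a doughnut-shaped, vortex-carrying beam.
    x = np.linspace(-3, 3, 129)
    X, Y = np.meshgrid(x, x)
    field = hg_mode(1, 0, X, Y) + 1j * hg_mode(0, 1, X, Y)
    intensity = np.abs(field) ** 2
    ```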

  11. Bäcklund transformations for the Boussinesq equation and merging solitons

    NASA Astrophysics Data System (ADS)

    Rasin, Alexander G.; Schiff, Jeremy

    2017-08-01

    The Bäcklund transformation (BT) for the ‘good’ Boussinesq equation and its superposition principles are presented and applied. Unlike other standard integrable equations, the Boussinesq equation does not have a strictly algebraic superposition principle for 2 BTs, but it does for 3. We present this and discuss associated lattice systems. Applying the BT to the trivial solution generates both standard solitons and what we call ‘merging solitons’—solutions in which two solitary waves (with related speeds) merge into a single one. We use the superposition principles to generate a variety of interesting solutions, including superpositions of a merging soliton with 1 or 2 regular solitons, and solutions that develop a singularity in finite time which then disappears at a later finite time. We prove a Wronskian formula for the solutions obtained by applying a general sequence of BTs on the trivial solution. Finally, we obtain the standard conserved quantities of the Boussinesq equation from the BT, and show how the hierarchy of local symmetries follows in a simple manner from the superposition principle for 3 BTs.

  12. A modified homotopy perturbation method and the axial secular frequencies of a non-linear ion trap.

    PubMed

    Doroudi, Alireza

    2012-01-01

    In this paper, a modified version of the homotopy perturbation method, which has been applied to non-linear oscillations by V. Marinca, is used for the calculation of the axial secular frequencies of a non-linear ion trap with hexapole and octopole superpositions. The axial equation of ion motion in the rapidly oscillating field of an ion trap can be transformed to a Duffing-like equation. With only the octopole superposition the resulting non-linear equation is symmetric; however, in the presence of both hexapole and octopole superpositions, it is asymmetric. This modified homotopy perturbation method is used for solving the resulting non-linear equations. As a result, the ion secular frequencies are obtained as a function of the non-linear field parameters. The calculated secular frequencies are compared with the results of the homotopy perturbation method and the exact results. With only the hexapole superposition, the results of this paper and the homotopy perturbation method are the same, and with both hexapole and octopole superpositions, the results of this paper are much closer to the exact results than those of the homotopy perturbation method.
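    A generic Duffing-like form consistent with the description (the coefficients α and β are schematic placeholders tied to the hexapole and octopole field strengths, not the paper's derived expressions) is

    ```latex
    \ddot{z} + \omega_0^{2}\, z + \alpha z^{2} + \beta z^{3} = 0,
    ```

    where the quadratic (hexapole) term makes the equation asymmetric under z → −z, while a purely cubic (octopole) term leaves it symmetric.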

  13. Predicting complex traits using a diffusion kernel on genetic markers with an application to dairy cattle and wheat data

    PubMed Central

    2013-01-01

    Background Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has a very small impact on prediction. Our results suggest that use of the black-box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
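    For a concrete contrast between the two kernels on discrete marker codes, here is a sketch using the complete-graph diffusion kernel of Kondor and Lafferty as a per-locus product next to a standard Gaussian kernel (the genotype matrix and parameter values are illustrative; the paper's exact construction may differ):

    ```python
    import numpy as np

    def diffusion_kernel(X, beta=0.5, a=3):
        """Diffusion kernel for discrete genotype codes (0/1/2), built as a
        product of per-locus complete-graph terms (Kondor-Lafferty form);
        a is the alphabet size. Parameter values are illustrative."""
        same = (1.0 + (a - 1) * np.exp(-a * beta)) / a  # matching alleles
        diff = (1.0 - np.exp(-a * beta)) / a            # differing alleles
        n = X.shape[0]
        K = np.ones((n, n))
        for i in range(n):
            for j in range(n):
                match = X[i] == X[j]
                K[i, j] = np.prod(np.where(match, same, diff))
        return K

    def gaussian_kernel(X, theta=1.0):
        """Standard Gaussian (RBF) kernel on the same marker matrix."""
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / theta)

    X = np.array([[0, 1, 2], [0, 1, 1], [2, 2, 0]])  # toy genotype matrix
    Kd = diffusion_kernel(X)
    Kg = gaussian_kernel(X)
    ```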

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Yongbin; White, R. D.

    In calculating the linearized Boltzmann collision operator for an inverse-square force law (Coulomb) interaction F(r) = κ/r^2, we found that the widely used scattering-angle cutoff θ ≥ θ_min is an incorrect practice, since the divergence persists after the cutoff has been made. When the correct velocity-change cutoff |v′−v| ≥ δ_min is employed, the scattering angle can be integrated. A unified linearized Boltzmann collision operator for both inverse-square force law and rigid-sphere interactions is obtained. As with many other unified quantities such as transition moments, Fokker-Planck expansion coefficients and energy exchange rates obtained recently [Y. B. Chang and L. A. Viehland, AIP Adv. 1, 032128 (2011)], the difference between the two kinds of interactions is characterized by a parameter, γ, which is 1 for rigid-sphere interactions and −3 for inverse-square force law interactions. When the cutoff is removed by setting δ_min = 0, Hilbert's well-known kernel for rigid-sphere interactions is recovered for γ = 1.

  15. Multi-Regge kinematics and the moduli space of Riemann spheres with marked points

    DOE PAGES

    Del Duca, Vittorio; Druc, Stefan; Drummond, James; ...

    2016-08-25

    We show that scattering amplitudes in planar N = 4 Super Yang-Mills in multi-Regge kinematics can naturally be expressed in terms of single-valued iterated integrals on the moduli space of Riemann spheres with marked points. As a consequence, scattering amplitudes in this limit can be expressed as convolutions that can easily be computed using Stokes' theorem. We apply this framework to MHV amplitudes to leading-logarithmic accuracy (LLA), and we prove that at L loops all MHV amplitudes are determined by amplitudes with up to L + 4 external legs. We also investigate non-MHV amplitudes, and we show that they can be obtained by convoluting the MHV results with a certain helicity flip kernel. We classify all leading singularities that appear at LLA in the Regge limit for arbitrary helicity configurations and any number of external legs. In conclusion, we use our new framework to obtain explicit analytic results at LLA for all MHV amplitudes up to five loops and all non-MHV amplitudes with up to eight external legs and four loops.

  16. Accuracy of a simplified method for shielded gamma-ray skyshine sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bassett, M.S.; Shultis, J.K.

    1989-11-01

    Rigorous transport or Monte Carlo methods for estimating far-field gamma-ray skyshine doses generally are computationally intensive. Consequently, several simplified techniques such as point-kernel methods and methods based on beam response functions have been proposed. For unshielded skyshine sources, these simplified methods have been shown to be quite accurate by comparison with benchmark problems and benchmark experimental results. For shielded sources, the simplified methods typically use exponential attenuation and photon buildup factors to describe the effect of the shield. However, the energy and directional redistribution of photons scattered in the shield is usually ignored, i.e., scattered photons are assumed to emerge from the shield with the same energy and direction as the uncollided photons. The accuracy of this shield treatment is largely unknown due to the paucity of benchmark results for shielded sources. In this paper, the validity of such a shield treatment is assessed by comparison to a composite method, which accurately calculates the energy and angular distribution of photons penetrating the shield.

  17. 7 CFR 981.9 - Kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...

  18. An SVM model with hybrid kernels for hydrological time series

    NASA Astrophysics Data System (ADS)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as the hydrologic response to climate/weather. When using an SVM, the choice of kernel function plays the key role. Conventional SVM models mostly use a single type of kernel function, e.g., the radial basis kernel function. Given that several kernel functions are available, each with its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to the SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of the radial basis kernel and the polynomial kernel for the forecast of monthly flowrate at two gaging stations using the SVM approach. The results indicate significant improvement in the accuracy of the predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such a hybrid kernel approach for SVM applications.
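    A linear combination of kernels of the kind described can be sketched directly; a convex combination of positive semi-definite kernels is itself a valid (Mercer) kernel. The weight and kernel parameters below are illustrative, not the values fitted in the paper:

    ```python
    import numpy as np

    def rbf_kernel(X, Y, gamma=0.5):
        """Radial basis (Gaussian) kernel matrix."""
        sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    def poly_kernel(X, Y, degree=2, coef0=1.0):
        """Polynomial kernel matrix."""
        return (X @ Y.T + coef0) ** degree

    def hybrid_kernel(X, Y, w=0.7):
        """Convex combination w*K_rbf + (1-w)*K_poly of the two kernels."""
        return w * rbf_kernel(X, Y) + (1.0 - w) * poly_kernel(X, Y)

    # The combined Gram matrix stays symmetric positive semi-definite.
    X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
    K = hybrid_kernel(X, X)
    ```

    In scikit-learn, a callable with this `(X, Y) -> Gram matrix` signature can be passed directly as the `kernel` argument of `SVR`/`SVC`.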

  19. Superposition Quantification

    NASA Astrophysics Data System (ADS)

    Chang, Li-Na; Luo, Shun-Long; Sun, Yuan

    2017-11-01

    The principle of superposition is universal and lies at the heart of quantum theory. Although ever since the inception of quantum mechanics a century ago, superposition has occupied a central and pivotal place, rigorous and systematic studies of the quantification issue have attracted significant interest only in recent years, and many related problems remain to be investigated. In this work we introduce a figure of merit which quantifies superposition from an intuitive and direct perspective, investigate its fundamental properties, connect it to some coherence measures, illustrate it through several examples, and apply it to analyze wave-particle duality. Supported by the Science Challenge Project under Grant No. TZ2016002; the Laboratory of Computational Physics, Institute of Applied Physics and Computational Mathematics, Beijing; and the Key Laboratory of Random Complex Structures and Data Science, Chinese Academy of Sciences, under Grant No. 2008DP173182.

  20. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation model works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
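    The scalability bottleneck is the full n × n kernel matrix. AKCL's sampling-based subspace learning is in the same spirit as the generic Nyström approximation sketched below (a stand-in illustration under that assumption, not the paper's exact construction):

    ```python
    import numpy as np

    def rbf(X, Y, gamma=0.5):
        """Gaussian (RBF) kernel matrix between two point sets."""
        sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    def nystrom_approx(X, m, gamma=0.5, seed=0):
        """Nystrom sketch: approximate the full Gram matrix K ~ C W^+ C^T
        from m sampled landmark points, avoiding O(n^2) kernel storage."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(X), size=m, replace=False)
        C = rbf(X, X[idx], gamma)        # n x m cross-kernel block
        W = rbf(X[idx], X[idx], gamma)   # m x m landmark kernel block
        return C @ np.linalg.pinv(W) @ C.T

    # With m = n the approximation is exact (up to floating point).
    X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 1.0]])
    K_exact = rbf(X, X)
    K_hat = nystrom_approx(X, m=4)
    ```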

  1. High Resolution Deformation Time Series Estimation for Distributed Scatterers Using Terrasar-X Data

    NASA Astrophysics Data System (ADS)

    Goel, K.; Adam, N.

    2012-07-01

    In recent years, several SAR satellites such as TerraSAR-X, COSMO-SkyMed and Radarsat-2 have been launched. These satellites provide high resolution data suitable for sophisticated interferometric applications. With shorter repeat cycles, smaller orbital tubes and higher bandwidth of the satellites; deformation time series analysis of distributed scatterers (DSs) is now supported by a practical data basis. Techniques for exploiting DSs in non-urban (rural) areas include the Small Baseline Subset Algorithm (SBAS). However, it involves spatial phase unwrapping, and phase unwrapping errors are typically encountered in rural areas and are difficult to detect. In addition, the SBAS technique involves a rectangular multilooking of the differential interferograms to reduce phase noise, resulting in a loss of resolution and superposition of different objects on ground. In this paper, we introduce a new approach for deformation monitoring with a focus on DSs, wherein, there is no need to unwrap the differential interferograms and the deformation is mapped at object resolution. It is based on a robust object adaptive parameter estimation using single look differential interferograms, where, the local tilts of deformation velocity and local slopes of residual DEM in range and azimuth directions are estimated. We present here the technical details and a processing example of this newly developed algorithm.

  2. Image recovery by removing stochastic artefacts identified as local asymmetries

    NASA Astrophysics Data System (ADS)

    Osterloh, K.; Bücherl, T.; Zscherpel, U.; Ewert, U.

    2012-04-01

    Stochastic artefacts are frequently encountered in digital radiography and tomography with neutrons. Most obviously, they are caused by ubiquitous scattered radiation hitting the CCD sensor. They appear as scattered dots and, at higher frequencies of occurrence, they may obscure the image. Some of these dotted interferences vary with time; however, a large portion of them remains persistent, so the problem cannot be resolved by collecting stacks of images and merging them into a median image. The situation becomes even worse in computed tomography (CT), where each artefact causes a circular pattern in the reconstructed plane. Therefore, these stochastic artefacts have to be removed completely and automatically while leaving the original image content untouched. A simplified image acquisition and artefact removal tool was developed at BAM and is available to interested users. Furthermore, an algorithm complying with all the requirements mentioned above was developed that reliably removes artefacts that may even exceed the size of a single pixel without affecting other parts of the image. It consists of an iterative two-step algorithm adjusting pixel values within a 3 × 3 matrix inside a 5 × 5 kernel and the centre pixel within a 3 × 3 kernel, respectively. It has been applied to thousands of images obtained from the NECTAR facility at the FRM II in Garching, Germany, without any need for visual control. In essence, the procedure consists of identifying and tackling asymmetric intensity distributions locally while recording each treatment of a pixel. Searching for the local asymmetry and applying a subsequent correction, rather than replacing individually identified pixels, constitutes the basic idea of the algorithm. The efficiency of the proposed algorithm is demonstrated on a severely spoiled example of neutron radiography and tomography, compared with median filtering, the most convenient alternative approach, by visual check, histogram and power spectra analysis.
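    The BAM algorithm's exact two-step 3 × 3 / 5 × 5 update is not spelled out in the abstract; the simplified sketch below only captures the core idea of detecting a local asymmetry (a pixel that is an outlier against its neighbourhood) and correcting that pixel alone, leaving the rest of the image untouched:

    ```python
    import numpy as np

    def remove_spot_artefacts(img, threshold=3.0):
        """Replace a pixel by its 3x3 neighbourhood median only when it
        deviates from that median by more than `threshold` times the local
        median absolute deviation; all other pixels are left untouched."""
        padded = np.pad(img, 1, mode="edge")
        out = img.copy()
        h, w = img.shape
        for i in range(h):
            for j in range(w):
                block = padded[i:i + 3, j:j + 3]
                med = np.median(block)
                mad = np.median(np.abs(block - med)) + 1e-12
                if abs(img[i, j] - med) > threshold * mad:
                    out[i, j] = med
        return out

    # Toy test image: a flat field with one hot 'scatter dot'.
    img = np.ones((10, 10))
    img[5, 5] = 100.0
    cleaned = remove_spot_artefacts(img)
    ```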

  3. A novel approach to EPID-based 3D volumetric dosimetry for IMRT and VMAT QA

    NASA Astrophysics Data System (ADS)

    Alhazmi, Abdulaziz; Gianoli, Chiara; Neppl, Sebastian; Martins, Juliana; Veloza, Stella; Podesta, Mark; Verhaegen, Frank; Reiner, Michael; Belka, Claus; Parodi, Katia

    2018-06-01

    Intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) are relatively complex treatment delivery techniques and require quality assurance (QA) procedures. Pre-treatment dosimetric verification represents a fundamental QA procedure in daily clinical routine in radiation therapy. The purpose of this study is to develop an EPID-based approach to reconstruct a 3D dose distribution as imparted to a virtual cylindrical water phantom, to be used for plan-specific pre-treatment dosimetric verification of IMRT and VMAT plans. For each depth, the planar 2D dose distributions acquired in air were back-projected and convolved with depth-specific scatter and attenuation kernels. The kernels were obtained by making use of scatter and attenuation models to iteratively estimate the parameters from a set of reference measurements. The derived parameters served as a look-up table for the reconstruction of arbitrary measurements. The summation of the reconstructed 3D dose distributions resulted in the integrated 3D dose distribution of the treatment delivery. The accuracy of the proposed approach was validated on clinical IMRT and VMAT plans by means of gamma evaluation, comparing the reconstructed 3D dose distributions with Octavius measurements. The comparison was carried out using (3%, 3 mm) criteria, scoring 99% and 96% passing rates for IMRT and VMAT, respectively. An accuracy comparable to that of the commercial device for 3D volumetric dosimetry was demonstrated. In addition, five IMRT and five VMAT plans were validated against the 3D dose calculation performed by the TPS in a water phantom using the same passing rate criteria. The median passing rate across the ten treatment plans was 97.3%, and the lowest was 95%. Besides, the reconstructed 3D distribution is obtained without predictions relying on forward dose calculation and without an external phantom or dosimetric devices. 
Thus, the approach provides a fully automated, fast and easy QA procedure for plan-specific pre-treatment dosimetric verification.
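    The per-depth step (convolve the back-projected EPID image with a depth-specific kernel, then stack the planes) can be sketched as follows; the kernels here are generic placeholders, not the fitted look-up-table kernels of the paper:

    ```python
    import numpy as np

    def reconstruct_depth_dose(epid_image, depth_kernels):
        """For each depth, convolve the (back-projected) EPID image with a
        depth-specific scatter/attenuation kernel via FFT (circular
        convolution), and stack the resulting planes into a 3D volume."""
        planes = []
        F = np.fft.rfft2(epid_image)
        for k in depth_kernels:
            # Kernel is centred; shift its centre to the origin for the FFT.
            K = np.fft.rfft2(np.fft.ifftshift(k))
            planes.append(np.fft.irfft2(F * K, s=epid_image.shape))
        return np.stack(planes)

    # Sanity check: a delta kernel at the centre acts as the identity.
    img = np.zeros((8, 8))
    img[2:6, 2:6] = 1.0
    delta = np.zeros((8, 8))
    delta[4, 4] = 1.0
    volume = reconstruct_depth_dose(img, [delta, delta])
    ```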

  4. SU-E-T-236: Deconvolution of the Total Nuclear Cross-Sections of Therapeutic Protons and the Characterization of the Reaction Channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulmer, W.

    2015-06-15

    Purpose: Knowledge of the total nuclear cross-section Qtot(E) of therapeutic protons provides important information in advanced radiotherapy with protons, such as the decrease of fluence of primary protons, the release of secondary particles (neutrons, protons, deuterons, etc.), and the production of nuclear fragments (heavy recoils), which usually undergo β+/− decay with emission of γ-quanta. Therefore, the determination of Qtot(E) is an important tool for sophisticated calculation algorithms of dose distributions. This cross-section can be determined by a linear combination of shifted Gaussian kernels and an error function. The resonances resulting from deconvolutions in the energy space can be associated with typical nuclear reactions. Methods: The described method for the determination of Qtot(E) results from an extension of the Breit-Wigner formula and a rather extended version of nuclear shell theory to include nuclear correlation effects, clusters and highly excited/virtually excited nuclear states. The elastic energy transfer of protons to nucleons (the quantum numbers of the target nucleus remain constant) can be removed by the mentioned deconvolution. Results: The deconvolution of the error-function term of the type c_erf·erf((E − E_Th)/σ_erf) is the main contribution to obtaining the various nuclear reactions as resonances, since the elastic part of the energy transfer is removed. The nuclear products of various elements of therapeutic interest, such as oxygen and calcium, are classified and calculated. Conclusions: The release of neutrons is completely underestimated, in particular for low-energy protons. The transport of secondary particles, e.g. cluster formation by deuterium, tritium and α-particles, makes an essential contribution to the secondary particles, and the heavy recoils, which create γ-quanta by decay reactions, lead to a broadening of the scatter profiles. 
These contributions cannot be accounted for by one single Gaussian kernel for the description of lateral scatter.

  5. Multiple kernels learning-based biological entity relationship extraction method.

    PubMed

    Dongliang, Xu; Jingchang, Pan; Bailing, Wang

    2017-09-20

    Automatically extracting protein entity interaction information from the biomedical literature can help to build protein relation networks and design new drugs. MEDLINE, the most authoritative textual database in the field of biomedicine, contains more than 20 million literature abstracts, and the collection grows exponentially over time. This rapid expansion of the biomedical literature is difficult to absorb or analyze manually, so efficient, automated search engines based on text mining techniques are necessary to explore it. The P, R, and F values of the tag graph method on the AIMed corpus are 50.82, 69.76, and 58.61%, respectively. The P, R, and F values of the tag graph kernel method on the other four evaluation corpora are 2-5% higher than those of the all-paths graph kernel. The P, R, and F values of the two fusion methods combining the feature kernel and the tag graph kernel are 53.43, 71.62, and 61.30% and 55.47, 70.29, and 60.37%, respectively, indicating that the performance of the two kernel fusion methods is better than that of a simple kernel. In comparison with the all-paths graph kernel method, the tag graph kernel method is superior in terms of overall performance. Experiments show that the performance of the multi-kernels method is better than that of the three separate single-kernel methods and the dual-mutually fused kernel method used here on the five corpus sets.
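    Kernel fusion of the kind described can be sketched as a weighted sum of Gram matrices, which remains a valid (positive semidefinite) kernel. The linear and RBF kernels below are generic stand-ins for the feature kernel and tag graph kernel, whose construction the abstract does not detail:

    ```python
    import numpy as np

    def linear_kernel(X):
        return X @ X.T

    def rbf_kernel(X, gamma=0.5):
        sq = np.sum(X**2, axis=1)
        return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

    def fuse_kernels(grams, weights):
        """Weighted sum of Gram matrices: a nonnegative combination of positive
        semidefinite kernels is itself a valid kernel."""
        return sum(w * K for w, K in zip(weights, grams))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 5))     # stand-in sentence feature vectors
    K = fuse_kernels([linear_kernel(X), rbf_kernel(X)], [0.4, 0.6])
    ```

    The fused Gram matrix K can then be handed to any kernel machine (e.g. an SVM accepting precomputed kernels); the weights are hyperparameters.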

  6. 7 CFR 51.2295 - Half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...

  7. 7 CFR 810.206 - Grades and grade requirements for barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... weight per bushel (pounds) Sound barley (percent) Maximum Limits of— Damaged kernels 1 (percent) Heat damaged kernels (percent) Foreign material (percent) Broken kernels (percent) Thin barley (percent) U.S... or otherwise of distinctly low quality. 1 Includes heat-damaged kernels. Injured-by-frost kernels and...

  8. Finite-Length Line Source Superposition Model (FLLSSM)

    NASA Astrophysics Data System (ADS)

    1980-03-01

    A linearized thermal conduction model was developed to economically determine media temperatures in geologic repositories for nuclear wastes. Individual canisters containing either high level waste or spent fuel assemblies were represented as finite length line sources in a continuous media. The combined effects of multiple canisters in a representative storage pattern were established at selected points of interest by superposition of the temperature rises calculated for each canister. The methodology is outlined and the computer code FLLSSM which performs required numerical integrations and superposition operations is described.
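    A steady-state simplification of the superposition idea can be sketched as follows (FLLSSM itself performs transient numerical integrations): the temperature rise from one finite line source in an infinite medium has a closed-form integral of the point-source kernel, and multiple canisters are handled by summing contributions. All numerical values are illustrative:

    ```python
    import numpy as np

    def line_source_dT(point, src_xy, L, q_per_len, k_cond):
        """Steady-state temperature rise at `point` from a vertical finite line
        source of length L centred at z = 0, using the closed-form integral
        of the point-source kernel along the line."""
        x, y, z = point
        r = np.hypot(x - src_xy[0], y - src_xy[1])
        a, b = z + L / 2.0, z - L / 2.0
        integral = np.log((a + np.hypot(r, a)) / (b + np.hypot(r, b)))
        return q_per_len / (4.0 * np.pi * k_cond) * integral

    def superpose(point, sources, L, q_per_len, k_cond):
        """Combined temperature rise from all canisters by linear superposition."""
        return sum(line_source_dT(point, s, L, q_per_len, k_cond) for s in sources)

    # Square storage pattern of four canisters; temperature rise at pattern centre.
    sources = [(-5.0, -5.0), (-5.0, 5.0), (5.0, -5.0), (5.0, 5.0)]
    dT = superpose((0.0, 0.0, 0.0), sources, L=3.0, q_per_len=1000.0, k_cond=2.5)
    ```

    Because conduction is linear in the heat sources, the rise at any point of interest is just the sum over canisters, which is the key property the code exploits.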

  9. 7 CFR 51.1449 - Damage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...

  10. 7 CFR 51.1449 - Damage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...

  11. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will not...

  12. 7 CFR 51.2296 - Three-fourths half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...

  13. The Classification of Diabetes Mellitus Using Kernel k-means

    NASA Astrophysics Data System (ADS)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes Mellitus is a metabolic disorder characterized by chronically elevated blood glucose. Automatic detection of diabetes mellitus remains challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is an algorithm developed from the k-means algorithm: it uses kernel learning and is therefore able to handle data that are not linearly separable, which distinguishes it from common k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means performs well and considerably better than SOM.
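    A minimal kernel k-means sketch, operating only on the Gram matrix so that any kernel can be plugged in. The anchor-point initialization and the toy two-group data are assumptions for a deterministic illustration, not details from the paper:

    ```python
    import numpy as np

    def rbf_gram(X, gamma=0.5):
        sq = np.sum(X**2, axis=1)
        return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

    def kernel_kmeans(K, anchors, n_iter=50):
        """Kernel k-means using only the Gram matrix K: the squared feature-space
        distance to a cluster mean is
        K_xx - (2/|C|) sum_j K_xj + (1/|C|^2) sum_jl K_jl."""
        n = K.shape[0]
        diag = np.diag(K)
        # Initial assignment: nearest anchor point in feature space.
        labels = np.stack([diag - 2 * K[:, a] + K[a, a] for a in anchors], 1).argmin(1)
        for _ in range(n_iter):
            dist = np.full((n, len(anchors)), np.inf)
            for c in range(len(anchors)):
                idx = np.flatnonzero(labels == c)
                if idx.size:
                    dist[:, c] = (diag - 2.0 * K[:, idx].mean(axis=1)
                                  + K[np.ix_(idx, idx)].mean())
            new = dist.argmin(axis=1)
            if np.array_equal(new, labels):
                break
            labels = new
        return labels

    # Two synthetic groups; anchors 0 and 39 seed the two clusters.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(8.0, 0.3, (20, 2))])
    labels = kernel_kmeans(rbf_gram(X), anchors=(0, 39))
    ```

    Nothing in the update step touches the raw features, only K, which is exactly what lets the method separate data that are not linearly separable in input space.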

  14. UNICOS Kernel Internals Application Development

    NASA Technical Reports Server (NTRS)

    Caredo, Nicholas; Craw, James M. (Technical Monitor)

    1995-01-01

    An understanding of UNICOS kernel internals is valuable. However, having the knowledge is only half the value; the second half comes with knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be utilized. This paper discusses the intricacies of developing utilities that utilize kernel information. In addition, algorithms, logic, and code for accessing kernel information are discussed. Code segments are provided that demonstrate how to locate and read kernel structures. Types of applications that can utilize kernel information are also discussed.

  15. Detection of maize kernels breakage rate based on K-means clustering

    NASA Astrophysics Data System (ADS)

    Yang, Liang; Wang, Zhuo; Gao, Lei; Bai, Xiaoping

    2017-04-01

    To optimize the recognition accuracy and efficiency of maize kernel breakage detection, this paper detects maize kernel breakage with computer vision technology based on the K-means clustering algorithm. First, the collected RGB images are converted into Lab images; the clarity of the original images is then evaluated with the energy function of the 8-direction Sobel gradient. Finally, maize kernel breakage is detected using different pixel acquisition devices and different shooting angles. In this paper, broken maize kernels are identified by the color difference between intact kernels and broken kernels. The image clarity evaluation and the different shooting angles verify that the clarity and shooting angle of the images have a direct influence on feature extraction. The results show that the K-means clustering algorithm can distinguish broken maize kernels effectively.
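    The 8-direction Sobel gradient energy used for clarity evaluation can be sketched with plain numpy. Only four of the eight rotated kernels are evaluated, since the other four are sign flips whose squared responses are identical; the checkerboard test image is an assumption for illustration:

    ```python
    import numpy as np

    SOBEL_0 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)   # vertical edges
    SOBEL_45 = np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], float)  # diagonal edges

    def conv2_valid(img, k):
        """3x3 'valid' convolution written as shifted sums (no SciPy needed)."""
        h, w = img.shape
        out = np.zeros((h - 2, w - 2))
        for i in range(3):
            for j in range(3):
                out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
        return out

    def sobel8_energy(img):
        """Clarity score: total squared response of the Sobel kernel in 4 of the
        8 directions (the remaining 4 are sign-flipped duplicates)."""
        kernels = [SOBEL_0, SOBEL_0.T, SOBEL_45, np.rot90(SOBEL_45)]
        return sum(np.sum(conv2_valid(img, k) ** 2) for k in kernels)
    ```

    A sharp image scores higher than a blurred copy of itself, which is what makes the energy usable as a clarity check before feature extraction.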

  16. Modeling adaptive kernels from probabilistic phylogenetic trees.

    PubMed

    Nicotra, Luca; Micheli, Alessio

    2009-01-01

    Modeling phylogenetic interactions is an open issue in many computational biology problems. In the context of gene function prediction we introduce a class of kernels for structured data leveraging on a hierarchical probabilistic modeling of phylogeny among species. We derive three kernels belonging to this setting: a sufficient statistics kernel, a Fisher kernel, and a probability product kernel. The new kernels are used in the context of support vector machine learning. The kernels' adaptivity is obtained through the estimation of the parameters of a tree-structured model of evolution, using as observed data phylogenetic profiles encoding the presence or absence of specific genes in a set of fully sequenced genomes. We report results obtained in the prediction of the functional class of the proteins of the budding yeast Saccharomyces cerevisiae, which compare favorably to a standard vector-based kernel and to a non-adaptive tree kernel function. A further comparative analysis is performed in order to assess the impact of the different components of the proposed approach. We show that the key features of the proposed kernels are the adaptivity to the input domain and the ability to deal with structured data interpreted through a graphical model representation.
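    Of the three kernels, the probability product kernel is the easiest to sketch: for Bernoulli models of binary presence/absence profiles it has a closed form. This per-feature Bernoulli form and the profiles below are illustrative assumptions; the paper's kernels are built on a tree-structured model:

    ```python
    import numpy as np

    def prob_product_kernel(p, q, rho=0.5):
        """Probability product kernel between two Bernoulli profile models:
        K = prod_i sum_{x in {0,1}} P(x|p_i)^rho * P(x|q_i)^rho.
        rho = 0.5 gives the Bhattacharyya kernel (K = 1 iff the models coincide)."""
        term = (p * q) ** rho + ((1.0 - p) * (1.0 - q)) ** rho
        return float(np.prod(term))

    # Hypothetical phylogenetic profiles: per-genome presence probabilities of a gene.
    p = np.array([0.9, 0.1, 0.8, 0.7])
    q = np.array([0.85, 0.2, 0.75, 0.6])
    ```

    The kernel compares two probabilistic models rather than two feature vectors, which is what makes it natural when each gene is summarized by an estimated distribution.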

  17. Aflatoxin and nutrient contents of peanut collected from local market and their processed foods

    NASA Astrophysics Data System (ADS)

    Ginting, E.; Rahmianna, A. A.; Yusnawan, E.

    2018-01-01

    Peanut is susceptible to aflatoxin contamination, and both the source of the peanuts and the processing method considerably affect the aflatoxin content of the products. Therefore, a study of the aflatoxin and nutrient contents of peanuts collected from a local market and their processed foods was performed. Good kernels were prepared into fried peanut, pressed-fried peanut, peanut sauce, peanut press cake, fermented peanut press cake (tempe) and fried tempe, while blended kernels (good and poor kernels) were processed into peanut sauce and tempe, and poor kernels were processed only into tempe. The results showed that good and blended kernels, which had a high proportion of sound/intact kernels (82.46% and 62.09%), contained 9.8-9.9 ppb of aflatoxin B1, while a slightly higher level was seen in poor kernels (12.1 ppb). However, the moisture, ash, protein, and fat contents of the kernels were similar, as were those of the products. Peanut tempe and fried tempe showed the highest increase in protein content, while decreased fat contents were seen in all products. Aflatoxin B1 in peanut tempe increased most when prepared from poor kernels, followed by blended kernels and then good kernels; however, it decreased by 61.2% on average after deep-frying. Excluding peanut tempe and fried tempe, aflatoxin B1 levels in all products derived from good kernels were below the permitted level (15 ppb). This suggests that sorting peanut kernels as ingredients, followed by heat processing, would decrease the aflatoxin content in the products.

  18. Partial Deconvolution with Inaccurate Blur Kernel.

    PubMed

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to an inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with an inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of the estimated blur kernel, and partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternately. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by an inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
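    The core idea, inverting the blur only at Fourier entries where the estimated kernel is reliable, can be sketched in 1D with numpy. The threshold tau and the test signal are illustrative; the paper's partial-map detection and the wavelet/learning priors are not reproduced:

    ```python
    import numpy as np

    def partial_deconv(blurred, kernel_est, tau=1e-3):
        """Invert the blur only where |K(f)| > tau (the 'partial map');
        elsewhere the blurred data is kept unchanged."""
        n = blurred.size
        B = np.fft.fft(blurred)
        K = np.fft.fft(kernel_est, n)
        mask = np.abs(K) > tau
        X = np.where(mask, B / np.where(mask, K, 1.0), B)
        return np.real(np.fft.ifft(X))

    # Illustration: a box signal blurred by a circular Gaussian kernel.
    n = 128
    x = np.zeros(n); x[40:60] = 1.0
    i = np.arange(n)
    d = np.minimum(i, n - i)                 # circular distance from index 0
    k = np.exp(-0.5 * (d / 2.0) ** 2); k /= k.sum()
    blurred = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))
    restored = partial_deconv(blurred, k)
    ```

    Skipping the unreliable (near-zero) Fourier entries is what prevents the noise amplification and ringing that naive inverse filtering produces.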

  19. Dynamics of associating networks

    NASA Astrophysics Data System (ADS)

    Tang, Shengchang; Habicht, Axel; Wang, Muzhou; Li, Shuaili; Seiffert, Sebastian; Olsen, Bradley

    Associating polymers offer important technological solutions for renewable and self-healing materials, conducting electrolytes for energy storage and transport, and vehicles for cell and protein delivery. The interplay between polymer topology and association chemistry gives rise to interesting new physics in associating networks, yet poses significant challenges to studying these systems over a wide range of time and length scales. In a series of studies, we explored self-diffusion mechanisms of associating polymers above the percolation threshold by combining experimental measurements using forced Rayleigh scattering with analytical insights from a two-state model. Despite differences in molecular structure, a universal super-diffusion phenomenon is observed when the diffusion of molecular species is hindered by dissociation kinetics. The molecular dissociation rate can be used to renormalize shear rheology data, which yields an unprecedented time-temperature-concentration superposition. The resulting shear rheology master curves provide experimental evidence of the relaxation hierarchy in associating networks.
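    The renormalization behind such a superposition can be illustrated with a single-mode Maxwell model: plotting the loss modulus against frequency scaled by the relaxation (dissociation) rate 1/tau collapses curves for different relaxation times onto one master curve. This model and its parameters are a textbook stand-in, not the paper's data:

    ```python
    import numpy as np

    def loss_modulus(omega, G0, tau):
        """Loss modulus G''(omega) of a single-mode Maxwell element."""
        wt = omega * tau
        return G0 * wt / (1.0 + wt**2)

    # One reduced-frequency grid; each tau stands in for a different temperature
    # (or concentration) whose dissociation rate is 1/tau.
    u = np.logspace(-2, 2, 50)
    curves = [loss_modulus(u / tau, 1.0, tau) for tau in (0.1, 1.0, 10.0)]
    # Plotted against the reduced frequency u = omega * tau, all three curves
    # coincide: a minimal master-curve collapse.
    ```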

  20. Electrically tunable organic–inorganic hybrid polaritons with monolayer WS2

    PubMed Central

    Flatten, Lucas C.; Coles, David M.; He, Zhengyu; Lidzey, David G.; Taylor, Robert A.; Warner, Jamie H.; Smith, Jason M.

    2017-01-01

    Exciton-polaritons are quasiparticles consisting of a linear superposition of photonic and excitonic states, offering potential for nonlinear optical devices. The excitonic component of the polariton provides a finite Coulomb scattering cross section, such that the different types of exciton found in organic materials (Frenkel) and inorganic materials (Wannier-Mott) produce polaritons with different interparticle interaction strength. A hybrid polariton state with distinct excitons provides a potential technological route towards in situ control of nonlinear behaviour. Here we demonstrate a device in which hybrid polaritons are displayed at ambient temperatures, the excitonic component of which is part Frenkel and part Wannier-Mott, and in which the dominant exciton type can be switched with an applied voltage. The device consists of an open microcavity containing both organic dye and a monolayer of the transition metal dichalcogenide WS2. Our findings offer a perspective for electrically controlled nonlinear polariton devices at room temperature. PMID:28094281

  1. The anisotropic effective damping of thickness-dependent epitaxial Co2FeAl films studied by spin rectification

    NASA Astrophysics Data System (ADS)

    Chen, Zhendong; Kong, Wenwen; Mi, Kui; Chen, Guilin; Zhang, Peng; Fan, Xiaolong; Gao, Cunxu; Xue, Desheng

    2018-03-01

    Epitaxial Co2FeAl films with thickness varying from 26.4 nm to 4.6 nm were grown on MgO(001) substrates by molecular beam epitaxy. Spin rectification was adopted to study the dynamic magnetic properties of the Co2FeAl films, given the reported advantage of this technique: high sensitivity that is independent of sample thickness. At a fixed microwave frequency, the in-plane angular dependences of the resonance fields and their linewidths exhibit a superposition of a uniaxial and a fourfold anisotropy for all samples. The results reveal an anisotropic damping behavior of the films. Frequency-dependent resonance-field linewidths were investigated along different in-plane azimuths of the films. The anisotropic effective damping of the films with thickness varying from 26.4 nm to 4.6 nm was then analyzed and is attributed to two-magnon scattering.

  2. Chiral quantum optics.

    PubMed

    Lodahl, Peter; Mahmoodian, Sahand; Stobbe, Søren; Rauschenbeutel, Arno; Schneeweiss, Philipp; Volz, Jürgen; Pichler, Hannes; Zoller, Peter

    2017-01-25

    Advanced photonic nanostructures are currently revolutionizing the optics and photonics that underpin applications ranging from light technology to quantum-information processing. The strong light confinement in these structures can lock the local polarization of the light to its propagation direction, leading to propagation-direction-dependent emission, scattering and absorption of photons by quantum emitters. The possibility of such a propagation-direction-dependent, or chiral, light-matter interaction is not accounted for in standard quantum optics and its recent discovery brought about the research field of chiral quantum optics. The latter offers fundamentally new functionalities and applications: it enables the assembly of non-reciprocal single-photon devices that can be operated in a quantum superposition of two or more of their operational states and the realization of deterministic spin-photon interfaces. Moreover, engineered directional photonic reservoirs could lead to the development of complex quantum networks that, for example, could simulate novel classes of quantum many-body systems.

  3. Spatial Distortion of Vibration Modes via Magnetic Correlation of Impurities

    NASA Astrophysics Data System (ADS)

    Krasniqi, F. S.; Zhong, Y.; Epp, S. W.; Foucar, L.; Trigo, M.; Chen, J.; Reis, D. A.; Wang, H. L.; Zhao, J. H.; Lemke, H. T.; Zhu, D.; Chollet, M.; Fritz, D. M.; Hartmann, R.; Englert, L.; Strüder, L.; Schlichting, I.; Ullrich, J.

    2018-03-01

    Long wavelength vibrational modes in the ferromagnetic semiconductor Ga0.91 Mn0.09 As are investigated using time resolved x-ray diffraction. At room temperature, we measure oscillations in the x-ray diffraction intensity corresponding to coherent vibrational modes with well-defined wavelengths. When the correlation of magnetic impurities sets in, we observe the transition of the lattice into a disordered state that does not support coherent modes at large wavelengths. Our measurements point toward a magnetically induced broadening of long wavelength vibrational modes in momentum space and their quasilocalization in the real space. More specifically, long wavelength vibrational modes cannot be assigned to a single wavelength but rather should be represented as a superposition of plane waves with different wavelengths. Our findings have strong implications for the phonon-related processes, especially carrier-phonon and phonon-phonon scattering, which govern the electrical conductivity and thermal management of semiconductor-based devices.

  4. The role of magnetic loops in particle acceleration at nearly perpendicular shocks

    NASA Technical Reports Server (NTRS)

    Decker, R. B.

    1993-01-01

    The acceleration of superthermal ions is investigated when a planar shock that is on average nearly perpendicular propagates through a plasma in which the magnetic field is the superposition of a constant uniform component plus a random field of transverse hydromagnetic fluctuations. The importance of the broadband nature of the transverse magnetic fluctuations in mediating ion acceleration at nearly perpendicular shocks is pointed out. Specifically, the fluctuations are composed of short-wavelength components which scatter ions in pitch angle and long-wavelength components which are responsible for a spatial meandering of field lines about the mean field. At nearly perpendicular shocks the field line meandering produces a distribution of transient loops along the shock. As an application of this model, the acceleration of a superthermal monoenergetic population of seed protons at a perpendicular shock is investigated by integrating along the exact phase-space orbits.

  5. Development of activity pencil beam algorithm using measured distribution data of positron emitter nuclei generated by proton irradiation of targets containing (12)C, (16)O, and (40)Ca nuclei in preparation of clinical application.

    PubMed

    Miyatake, Aya; Nishio, Teiji; Ogino, Takashi

    2011-10-01

    The purpose of this study is to develop a new calculation algorithm that is satisfactory in terms of the requirements for both accuracy and calculation time for a simulation of imaging of the proton-irradiated volume in a patient body in clinical proton therapy. The activity pencil beam algorithm (APB algorithm), which is a new technique to apply the pencil beam algorithm generally used for proton dose calculations in proton therapy to the calculation of activity distributions, was developed as a calculation algorithm of the activity distributions formed by positron emitter nuclei generated from target nuclear fragment reactions. In the APB algorithm, activity distributions are calculated using an activity pencil beam kernel. In addition, the activity pencil beam kernel is constructed using measured activity distributions in the depth direction and calculations in the lateral direction. (12)C, (16)O, and (40)Ca nuclei were determined as the major target nuclei that constitute a human body that are of relevance for calculation of activity distributions. In this study, "virtual positron emitter nuclei" was defined as the integral yield of various positron emitter nuclei generated from each target nucleus by target nuclear fragment reactions with irradiated proton beam. Compounds, namely, polyethylene, water (including some gelatin) and calcium oxide, which contain plenty of the target nuclei, were irradiated using a proton beam. In addition, depth activity distributions of virtual positron emitter nuclei generated in each compound from target nuclear fragment reactions were measured using a beam ON-LINE PET system mounted a rotating gantry port (BOLPs-RGp). The measured activity distributions depend on depth or, in other words, energy. The irradiated proton beam energies were 138, 179, and 223 MeV, and measurement time was about 5 h until the measured activity reached the background level. 
    Furthermore, the activity pencil beam data were constructed using the activity pencil beam kernel, which was composed of the measured depth data and lateral data that include multiple Coulomb scattering approximated by a Gaussian function, and were used for calculating activity distributions. The measured depth activity distributions for every target nucleus and proton beam energy were obtained using BOLPs-RGp. The form of the depth activity distribution was verified, and the data were constructed in consideration of the time-dependent change of this form. The time dependence of an activity distribution's form could be represented by two half-lives. The Gaussian form of the lateral distribution of the activity pencil beam kernel was determined by the effect of multiple Coulomb scattering. Thus, time-dependent activity pencil beam data could be obtained in this study. The simulation of imaging of the proton-irradiated volume in a patient body using target nuclear fragment reactions was feasible with the developed APB algorithm taking time dependence into account. With the use of the APB algorithm, it was suggested that a simulation system for activity distributions with levels of both accuracy and calculation time appropriate for clinical use can be constructed.
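    The structure of the activity pencil beam kernel, a measured depth profile broadened laterally by a Gaussian that models multiple Coulomb scattering, can be sketched as an outer product. The depth profile and the lateral width below are hypothetical; in the APB algorithm these come from measurement and from scattering theory, and the width in general depends on depth:

    ```python
    import numpy as np

    def activity_pencil_beam(depth_activity, x, sigma):
        """2D activity kernel: each depth slice carries the measured depth activity,
        spread laterally by a unit-area Gaussian (multiple-Coulomb-scattering model)."""
        lateral = np.exp(-0.5 * (x / sigma) ** 2)
        lateral /= lateral.sum()
        return np.outer(depth_activity, lateral)

    depth = np.linspace(0.0, 1.0, 100) ** 2   # hypothetical measured depth profile
    x = np.linspace(-10.0, 10.0, 81)          # lateral positions (mm, illustrative)
    A = activity_pencil_beam(depth, x, sigma=2.0)
    ```

    Because the lateral profile integrates to one, the spreading conserves the total activity of each depth slice; superposing many such pencil beams across the field then builds the full activity distribution.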

  6. SU-E-T-91: Accuracy of Dose Calculation Algorithms for Patients Undergoing Stereotactic Ablative Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tajaldeen, A; Ramachandran, P; Geso, M

    2015-06-15

    Purpose: The purpose of this study was to investigate and quantify the variation in dose distributions in small-field lung cancer radiotherapy using seven different dose calculation algorithms. Methods: The study was performed in 21 lung cancer patients who underwent Stereotactic Ablative Body Radiotherapy (SABR). Two different methods, (i) the same dose coverage to the target volume (the same-dose method) and (ii) the same monitor units in all algorithms (the same-monitor-units method), were used to study the performance of seven different dose calculation algorithms in the XiO and Eclipse treatment planning systems. The seven dose calculation algorithms include the Superposition, Fast superposition, Fast Fourier Transform (FFT) Convolution, Clarkson, Anisotropic Analytic Algorithm (AAA), Acuros XB and pencil beam (PB) algorithms. Prior to this, a phantom study was performed to assess the accuracy of these algorithms. The Superposition algorithm was used as the reference algorithm in this study. The treatment plans were compared using different dosimetric parameters including conformity, heterogeneity and dose fall-off index. In addition, the doses to critical structures such as the lungs, heart, oesophagus and spinal cord were also studied. Statistical analysis was performed using Prism software. Results: The mean ± standard deviation of the conformity index for the Superposition, Fast superposition, Clarkson and FFT convolution algorithms was 1.29±0.13, 1.31±0.16, 2.2±0.7 and 2.17±0.59, respectively, whereas for AAA, pencil beam and Acuros XB it was 1.4±0.27, 1.66±0.27 and 1.35±0.24, respectively. Conclusion: Our study showed significant variations among the seven different algorithms. The Superposition and Acuros XB algorithms showed similar values for most of the dosimetric parameters, while the Clarkson, FFT convolution and pencil beam algorithms showed large differences compared to the Superposition algorithm.
    Based on our study, we recommend the Superposition and Acuros XB algorithms as the first choice in lung cancer radiotherapy involving small fields. However, further investigation by Monte Carlo simulation is required to confirm our results.
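    The plan-comparison metrics can be sketched on voxel arrays. The definitions below (an RTOG-style conformity index and a max-dose heterogeneity index) are common conventions assumed here, not definitions quoted from the abstract, and the toy dose grid is illustrative:

    ```python
    import numpy as np

    def conformity_index(dose, target_mask, presc):
        """RTOG-style CI: prescription-isodose volume over target volume
        (1.0 is ideal; > 1 means dose spills beyond the target)."""
        return np.count_nonzero(dose >= presc) / np.count_nonzero(target_mask)

    def heterogeneity_index(dose, target_mask, presc):
        """Maximum target dose divided by the prescription dose."""
        return dose[target_mask].max() / presc

    # Toy 2D dose grid: a 6x6 high-dose region around a 4x4 target.
    dose = np.zeros((10, 10))
    dose[2:8, 2:8] = 60.0
    target = np.zeros((10, 10), dtype=bool)
    target[3:7, 3:7] = True
    ci = conformity_index(dose, target, presc=60.0)   # 36 voxels / 16 voxels = 2.25
    hi = heterogeneity_index(dose, target, presc=60.0)
    ```

    On real plans the same functions would run on the 3D dose grid exported from the planning system, with the target mask taken from the structure set.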

  7. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  8. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  9. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  10. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  11. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  12. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  13. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  14. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  15. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  16. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  17. Wavelet SVM in Reproducing Kernel Hilbert Space for hyperspectral remote sensing image classification

    NASA Astrophysics Data System (ADS)

    Du, Peijun; Tan, Kun; Xing, Xiaoshi

    2010-12-01

    Combining the Support Vector Machine (SVM) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in a Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, SVM faces the bottleneck of kernel parameter selection, which results in time-consuming computation and low classification accuracy. The wavelet kernel in RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. Implications for semiparametric estimation are also proposed in this paper. An Airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing image with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to evaluate the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. In contrast with some traditional classifiers, including Spectral Angle Mapping (SAM) and Minimum Distance Classification (MDC), and an SVM classifier using the Radial Basis Function kernel, the proposed wavelet SVM classifier using the wavelet kernel function in a Reproducing Kernel Hilbert Space clearly improves classification accuracy.
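    A wavelet kernel of the kind used in WSVM can be sketched with the Morlet-type mother wavelet h(u) = cos(1.75u)·exp(-u²/2). The abstract's best result used a Coiflet kernel, which has no simple closed form, so the Morlet form stands in here and the data are random stand-ins for pixel spectra:

    ```python
    import numpy as np

    def wavelet_kernel(X, Y, a=1.0):
        """Gram matrix of the translation-invariant wavelet kernel
        K(x, y) = prod_d cos(1.75 (x_d - y_d) / a) * exp(-(x_d - y_d)^2 / (2 a^2))."""
        diff = X[:, None, :] - Y[None, :, :]          # pairwise differences (n, m, d)
        per_dim = np.cos(1.75 * diff / a) * np.exp(-diff**2 / (2.0 * a**2))
        return per_dim.prod(axis=2)

    rng = np.random.default_rng(1)
    X = rng.normal(size=(10, 4))                      # 10 pixels, 4 spectral bands
    K = wavelet_kernel(X, X)
    ```

    The resulting Gram matrix can be passed to any SVM implementation that accepts precomputed kernels; the dilation parameter a plays the role of the kernel width.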

  18. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    PubMed

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

Most dimensionality reduction techniques are based on a single metric or kernel, so selecting an appropriate kernel is necessary for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has recently been proposed to learn a kernel from a set of base kernels, which are seen as different descriptions of the data. Because MKL-DR does not involve regularization, it can be ill-posed under some conditions, which hinders its application. This paper proposes a multiple kernel learning framework for dimensionality reduction based on a regularized trace ratio, termed MKL-TR. Our method aims at learning a transformation into a lower-dimensional space and a corresponding kernel from the given base kernels, some of which may not be suitable for the given data. Solutions for the proposed framework can be found via trace ratio maximization. The experimental results demonstrate its effectiveness on benchmark datasets, including text, image, and sound data, in supervised, unsupervised, and semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
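The trace ratio maximization at the core of such methods can be illustrated with the standard eigen-iteration: for symmetric matrices A and B, repeatedly take the top-d eigenvectors of A − λB and update λ to the achieved ratio. The sketch below uses stand-in matrices built from a fixed (hypothetical) mixture of two base kernels, not the paper's learned weights or exact scatter matrices:

```python
import numpy as np

def trace_ratio(A, B, d, iters=50):
    """Maximize Tr(W^T A W) / Tr(W^T B W) over orthonormal W (n x d)
    via the classic eigen-update on A - lam*B."""
    lam = 0.0
    for _ in range(iters):
        _, V = np.linalg.eigh(A - lam * B)
        W = V[:, -d:]                           # top-d eigenvectors
        lam, lam_prev = np.trace(W.T @ A @ W) / np.trace(W.T @ B @ W), lam
        if abs(lam - lam_prev) < 1e-10:
            break
    return W, lam

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))
K1 = X @ X.T                                            # linear base kernel
K2 = np.exp(-0.5 * np.square(X[:, None] - X[None, :]).sum(-1))  # RBF base kernel
K = 0.6 * K1 + 0.4 * K2        # fixed mixture weights, for illustration only
A = K @ K                      # stand-ins for the scatter-like matrices;
B = K + 1e-3 * np.eye(30)      # small ridge keeps the denominator well-posed
W, lam = trace_ratio(A, B, d=2)
print(round(lam, 3))
```

The small ridge on B mirrors the role of regularization in MKL-TR: it keeps the denominator positive definite so the problem stays well-posed.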

  19. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature

    PubMed Central

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

Automatic extraction of protein-protein interaction (PPI) pairs from the biomedical literature is a widely examined task in biological information extraction. Many kernel-based approaches, such as the linear kernel, tree kernel, graph kernel, and combinations of multiple kernels, have achieved promising results on the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction that exploits both syntactic (structural) information and semantic vectors, known as the Distributed Smoothed Tree Kernel (DSTK). The DSTK comprises distributed trees carrying syntactic information along with distributional semantic vectors representing the semantic information of the sentences or phrases. To generate a robust machine learning model, a feature-based kernel and the DSTK were combined using an ensemble support vector machine (SVM). Five corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used to evaluate the performance of our system. Experimental results show that our system achieves a better F-score on all five corpora than other state-of-the-art systems. PMID:29099838

  20. Hadamard Kernel SVM with applications for breast cancer outcome predictions.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong

    2017-12-21

Breast cancer is one of the leading causes of death for women. It is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome predictions. Kernel SVM, with its discriminative power in dealing with small-sample pattern recognition problems, has attracted a lot of attention. But how to select or construct an appropriate kernel for a specific problem still needs further investigation. Here we propose a novel kernel (the Hadamard Kernel) in conjunction with Support Vector Machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard Kernel outperforms the classical kernels and the correlation kernel in terms of area under the ROC curve (AUC) values on a number of real-world data sets adopted to test the performance of the different methods. The Hadamard Kernel SVM is effective for breast cancer prediction, in terms of both prognosis and diagnosis. It may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.
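Plugging a custom kernel such as the ones compared here into an SVM usually goes through a precomputed Gram matrix. A sketch using scikit-learn's precomputed-kernel interface, with the correlation kernel mentioned in the abstract as a stand-in (the Hadamard Kernel itself is not reconstructed here, and the data are a toy substitute for gene expression):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

def correlation_kernel(X, Y):
    """Pearson-correlation kernel between sample rows (a stand-in for
    the paper's Hadamard Kernel, whose definition is not reproduced)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    Yc = Y - Y.mean(axis=1, keepdims=True)
    den = np.linalg.norm(Xc, axis=1)[:, None] * np.linalg.norm(Yc, axis=1)[None, :]
    return (Xc @ Yc.T) / den

X, y = make_classification(n_samples=120, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = X[:90], X[90:], y[:90], y[90:]

# train on the n_train x n_train Gram matrix, score on n_test x n_train
clf = SVC(kernel="precomputed").fit(correlation_kernel(Xtr, Xtr), ytr)
acc = clf.score(correlation_kernel(Xte, Xtr), yte)
print("test accuracy: %.2f" % acc)
```

The same two-matrix pattern (square Gram matrix for fitting, rectangular test-versus-train matrix for prediction) works for any kernel that can be evaluated pairwise.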

  1. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature.

    PubMed

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

Automatic extraction of protein-protein interaction (PPI) pairs from the biomedical literature is a widely examined task in biological information extraction. Many kernel-based approaches, such as the linear kernel, tree kernel, graph kernel, and combinations of multiple kernels, have achieved promising results on the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction that exploits both syntactic (structural) information and semantic vectors, known as the Distributed Smoothed Tree Kernel (DSTK). The DSTK comprises distributed trees carrying syntactic information along with distributional semantic vectors representing the semantic information of the sentences or phrases. To generate a robust machine learning model, a feature-based kernel and the DSTK were combined using an ensemble support vector machine (SVM). Five corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used to evaluate the performance of our system. Experimental results show that our system achieves a better F-score on all five corpora than other state-of-the-art systems.

  2. Towards quantum superposition of a levitated nanodiamond with a NV center

    NASA Astrophysics Data System (ADS)

    Li, Tongcang

    2015-05-01

Creating large Schrödinger's cat states with massive objects is one of the most challenging goals in quantum mechanics. We have previously achieved an important step toward this goal by cooling the center-of-mass motion of a levitated microsphere from room temperature to millikelvin temperatures with feedback cooling. Generating spatial quantum superposition states with an optical cavity, however, requires a very strong quadratic coupling that is difficult to achieve. We proposed to optically trap a nanodiamond with a nitrogen-vacancy (NV) center in vacuum and to generate large spatial superposition states using the NV spin-optomechanical coupling in a strong magnetic gradient field. The large spatial superposition states can be used to study objective-collapse theories of quantum mechanics. We have optically trapped nanodiamonds in air and are working towards this goal.

  3. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    PubMed

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified by the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is symmetric and positive, always yields 1.0 for self-similarity, and can be used directly with Support Vector Machines (SVMs) in classification problems, in contrast to the normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques before it can be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that LZW code words might be a better basis for similarity measures than the local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, the hidden Markov model based SAM and Fisher kernels, and the protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed: three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. The LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
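The code-word idea can be sketched in a few lines: run one pass of LZW dictionary building over each sequence and compare the resulting code-word sets. The Jaccard overlap used below is only an illustrative score, not the paper's actual kernel weighting:

```python
def lzw_code_words(s):
    """Return the set of variable-length code words that one pass of
    LZW dictionary building produces for string s."""
    words = set(s)          # dictionary starts with the single symbols
    w = ""
    for c in s:
        if w + c in words:
            w += c          # extend the current phrase
        else:
            words.add(w + c)  # new code word; restart from c
            w = c
    return words

def lzw_similarity(a, b):
    """Illustrative code-word overlap score (Jaccard index); the real
    LZW-Kernel weights code words differently."""
    A, B = lzw_code_words(a), lzw_code_words(b)
    return len(A & B) / len(A | B)

print(lzw_similarity("MKVLAVLALL", "MKVLAVLALL"))  # 1.0 for self-similarity
print(lzw_similarity("MKVLAVLALL", "GGGSGGGSGG"))  # near 0 for unrelated strings
```

Because the dictionary is built in a single left-to-right pass, the cost is linear in the sequence length, which is the source of the speed advantage the abstract reports.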

  4. Multi-Scale Morphological Analysis of Conductance Signals in Vertical Upward Gas-Liquid Two-Phase Flow

    NASA Astrophysics Data System (ADS)

    Lian, Enyang; Ren, Yingyu; Han, Yunfeng; Liu, Weixin; Jin, Ningde; Zhao, Junying

    2016-11-01

Multi-scale analysis is an important method for studying nonlinear systems. In this study, we carry out experiments and measure fluctuation signals from a rotating electric field conductance sensor with eight electrodes. We first use a recurrence plot to recognize flow patterns in vertical upward gas-liquid two-phase pipe flow from the measured signals. We then apply a multi-scale morphological analysis based on the first-order difference scatter plot to investigate signals captured from the vertical upward gas-liquid two-phase flow loop test. We find that the invariant scaling exponent extracted from the multi-scale first-order difference scatter plot, with the bisector of the second-fourth quadrant as the reference line, is sensitive to the inhomogeneous distribution characteristics of the flow structure, and that the variation trend of the exponent helps in understanding the process of breakup and coalescence of the gas phase. In addition, we explore the dynamic mechanism influencing the inhomogeneous distribution of the gas phase in terms of adaptive optimal kernel time-frequency representation. The research indicates that the system energy is a factor influencing the distribution of the gas phase and that the multi-scale morphological analysis based on the first-order difference scatter plot is an effective method for indicating the inhomogeneous distribution of the gas phase in gas-liquid two-phase flow.
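A first-order difference scatter plot places consecutive increments (dᵢ, dᵢ₊₁) in the plane; the multi-scale version coarse-grains the signal first. The sketch below measures the spread of these points about the second-fourth-quadrant bisector y = −x and fits a scaling exponent across scales, as an illustrative stand-in for the paper's morphological descriptor (shown here on white noise, not conductance data):

```python
import numpy as np

def diff_scatter_spread(x, scale):
    """Coarse-grain x at the given scale, form first-order difference
    scatter points (d_i, d_{i+1}), and return their mean distance from
    the second-fourth-quadrant bisector y = -x (illustrative measure)."""
    n = len(x) // scale
    xs = x[: n * scale].reshape(n, scale).mean(axis=1)  # coarse-graining
    d = np.diff(xs)
    # distance of (d_i, d_{i+1}) from the line y = -x is |d_i + d_{i+1}|/sqrt(2)
    return np.abs(d[:-1] + d[1:]).mean() / np.sqrt(2)

rng = np.random.default_rng(0)
x = rng.normal(size=4096)                      # stand-in fluctuation signal
scales = np.arange(1, 9)
spread = [diff_scatter_spread(x, s) for s in scales]
# invariant scaling exponent: slope of log(spread) vs log(scale)
k, _ = np.polyfit(np.log(scales), np.log(spread), 1)
print(round(k, 2))
```

For uncorrelated noise the exponent sits near -0.5; signals with long-range structure, such as slug-flow conductance traces, would deviate from that value, which is what makes the exponent a useful flow-structure indicator.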

  5. A framework for optimal kernel-based manifold embedding of medical image data.

    PubMed

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating the most suitable manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.
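Choosing among candidate kernels for manifold embedding can be sketched with kernel PCA written directly in NumPy; the spectral-energy score below is an illustrative selection measure, not one of the paper's, and the candidate pool is hypothetical:

```python
import numpy as np

def center_kernel(K):
    """Double-center a Gram matrix, as kernel PCA requires."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def embedding_score(K, d=2):
    """Illustrative selection measure: share of spectral energy that the
    top-d components of the centered kernel capture."""
    w = np.clip(np.linalg.eigvalsh(center_kernel(K)), 0, None)
    return w[-d:].sum() / w.sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                 # stand-in image features
D2 = np.square(X[:, None] - X[None, :]).sum(-1)
candidates = {
    "linear": X @ X.T,
    "rbf(g=0.1)": np.exp(-0.1 * D2),
    "rbf(g=1.0)": np.exp(-1.0 * D2),
    "poly(deg=2)": (1 + X @ X.T) ** 2,
}
best = max(candidates, key=lambda name: embedding_score(candidates[name]))
print(best)
```

Swapping in a task-aware measure (for example one that respects spatial relationships, as the paper proposes) only changes `embedding_score`; the selection loop stays the same.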

  6. Evaluating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Wilton, Donald R.; Champagne, Nathan J.

    2008-01-01

    Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.

  7. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    PubMed Central

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
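A composite kernel is simply a (weighted) combination of the candidate kernel matrices, plugged into a kernel machine score-type statistic Q = rᵀKr on null-model residuals. A toy genotype example with an unweighted composite of a linear and an identity-by-state kernel; obtaining p-values would additionally require the null distribution or the perturbation procedures the paper develops:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
G = rng.integers(0, 3, size=(n, p)).astype(float)  # genotypes coded 0/1/2
beta = np.zeros(p)
beta[:3] = 0.4                                     # three causal variants
y = G @ beta + rng.normal(size=n)                  # continuous trait

def linear_kernel(G):
    return G @ G.T

def ibs_kernel(G):
    """Identity-by-state similarity: average allele sharing per SNP."""
    d = np.abs(G[:, None, :] - G[None, :, :]).sum(-1)
    return 1 - d / (2 * G.shape[1])

K_candidates = [linear_kernel(G), ibs_kernel(G)]
K_composite = sum(K_candidates) / len(K_candidates)  # unweighted composite

r = y - y.mean()            # residuals under the null (intercept-only model)
Q = r @ K_composite @ r     # kernel machine score-type statistic
print(Q > 0)
```

Averaging the candidates hedges against a poorly chosen kernel, which is exactly the trade-off the abstract quantifies: modest power loss versus the best single kernel, large gains versus the worst.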

  8. Combined multi-kernel head computed tomography images optimized for depicting both brain parenchyma and bone.

    PubMed

    Takagi, Satoshi; Nagase, Hiroyuki; Hayashi, Tatsuya; Kita, Tamotsu; Hayashi, Katsumi; Sanada, Shigeru; Koike, Masayuki

    2014-01-01

The hybrid convolution kernel technique for computed tomography (CT) is known to enable the depiction of an image set using different window settings. Our purpose was to decrease the number of artifacts in the hybrid convolution kernel technique for head CT and to determine whether our improved combined multi-kernel head CT images enable diagnosis as a substitute for both brain (low-pass kernel-reconstructed) and bone (high-pass kernel-reconstructed) images. Forty-four patients with nondisplaced skull fractures were included. Our improved multi-kernel images were generated so that pixels exceeding 100 Hounsfield units (HU) in both the brain and bone images were composed of the CT values of the bone images, while all other pixels were composed of the CT values of the brain images. Three radiologists compared the improved multi-kernel images with the bone images. The improved multi-kernel images and the brain images appeared identical on the brain window settings. All three radiologists agreed that the improved multi-kernel images on the bone window settings were sufficient for diagnosing skull fractures in all patients. This improved multi-kernel technique has a simple algorithm and is practical for clinical use. Thus, simplified head CT examinations and fewer stored images can be expected.
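The combination rule described above is a per-pixel mask-and-select and takes two NumPy lines. A sketch on simulated brain-kernel and bone-kernel images (the pixel values are hypothetical HU, not real reconstructions):

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)
brain = rng.uniform(0, 80, shape)         # low-pass (brain-kernel) image, HU
bone = brain + rng.uniform(-5, 5, shape)  # high-pass (bone-kernel) image, HU
brain[10:20, 10:20] += 900                # a bony region is bright in
bone[10:20, 10:20] += 900                 # both reconstructions

# combination rule from the paper: where BOTH reconstructions exceed
# 100 HU, take the bone-kernel value; elsewhere keep the brain-kernel value
mask = (brain > 100) & (bone > 100)
combined = np.where(mask, bone, brain)

print(np.array_equal(combined[10:20, 10:20], bone[10:20, 10:20]))  # True
print(np.array_equal(combined[40:, 40:], brain[40:, 40:]))         # True
```

Requiring the threshold in both images (rather than either one) is what suppresses artifacts from pixels that are bright in only one reconstruction.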

  9. Use of convolution/superposition-based treatment planning system for dose calculations in the kilovoltage energy range

    NASA Astrophysics Data System (ADS)

    Alaei, Parham

    2000-11-01

    A number of procedures in diagnostic radiology and cardiology make use of long exposures to x rays from fluoroscopy units. Adverse effects of these long exposure times on the patients' skin have been documented in recent years. These include epilation, erythema, and, in severe cases, moist desquamation and tissue necrosis. Potential biological effects from these exposures to other organs include radiation-induced cataracts and pneumonitis. Although there have been numerous studies to measure or calculate the dose to skin from these procedures, there have only been a handful of studies to determine the dose to other organs. Therefore, there is a need for accurate methods to measure the dose in tissues and organs other than the skin. This research was concentrated in devising a method to determine accurately the radiation dose to these tissues and organs. The work was performed in several stages: First, a three dimensional (3D) treatment planning system used in radiation oncology was modified and complemented to make it usable with the low energies of x rays used in diagnostic radiology. Using the system for low energies required generation of energy deposition kernels using Monte Carlo methods. These kernels were generated using the EGS4 Monte Carlo system of codes and added to the treatment planning system. Following modification, the treatment planning system was evaluated for its accuracy of calculations in low energies within homogeneous and heterogeneous media. A study of the effects of lungs and bones on the dose distribution was also performed. The next step was the calculation of dose distributions in humanoid phantoms using this modified system. The system was used to calculate organ doses in these phantoms and the results were compared to those obtained from other methods. These dose distributions can subsequently be used to create dose-volume histograms (DVHs) for internal organs irradiated by these beams. 
Using this data and the concept of normal tissue complication probability (NTCP) developed for radiation oncology, the risk of future complications in a particular organ can be estimated.

  10. Effects of turbulence on the collision rate of cloud droplets

    NASA Astrophysics Data System (ADS)

    Ayala, Orlando

This dissertation concerns the effects of air turbulence on the collision rate of atmospheric cloud droplets. This research was motivated by the speculation that air turbulence could enhance the collision rate and thereby help transform cloud droplets into rain droplets in the short time observed in nature. The air turbulence within clouds is assumed to be homogeneous and isotropic, and its small-scale motion (1 mm to 10 cm scales) is computationally generated by direct numerical integration of the full Navier-Stokes equations. Typical droplet and turbulence parameters of convective warm clouds are used to determine the Stokes numbers (St) and the nondimensional terminal velocities (Sv), which characterize droplet relative inertia and gravitational settling, respectively. A novel and efficient methodology for conducting direct numerical simulations (DNS) of hydrodynamically-interacting droplets in the context of cloud microphysics has been developed. This numerical approach solves the turbulent flow by the pseudo-spectral method with a large-scale forcing, and utilizes an improved superposition method to embed analytically the local, small-scale (10 μm to 1 mm) disturbance flows induced by the droplets. This hybrid representation of background turbulent air motion and the induced disturbance flows is then used to study the combined effects of hydrodynamic interactions and airflow turbulence on the motion and collisions of cloud droplets. Hybrid DNS results show that turbulence can increase the geometric collision kernel relative to the gravitational geometric kernel by as much as 42% due to enhanced radial relative motion and preferential concentration of droplets. The exact level of enhancement depends on the Taylor-microscale Reynolds number, turbulent dissipation rate, and droplet pair size ratio. 
One important finding is that turbulence has a relatively dominant effect on the collision process between droplets close in size as the gravitational collision mechanism diminishes. A theory was developed to predict the radial relative velocity between droplets at contact. The theory agrees with our DNS results to within 5% for cloud droplets with strong settling. In addition, an empirical model is developed to quantify the radial distribution function. (Abstract shortened by UMI.)
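The gravitational geometric collision kernel that turbulence enhances is K = π(r₁ + r₂)²|v_t(r₁) − v_t(r₂)| with Stokes terminal velocities; it vanishes for equal-size droplets, which is why the turbulent contribution dominates there. A quick check in SI units:

```python
import numpy as np

def terminal_velocity(r):
    """Stokes terminal velocity of a small water droplet of radius r [m]."""
    rho_w, g, mu_air = 1000.0, 9.81, 1.8e-5   # SI units
    return 2 * rho_w * g * r**2 / (9 * mu_air)

def gravitational_kernel(r1, r2):
    """Geometric gravitational collision kernel
    K = pi * (r1 + r2)^2 * |v_t(r1) - v_t(r2)|, in m^3/s."""
    return np.pi * (r1 + r2) ** 2 * abs(terminal_velocity(r1) - terminal_velocity(r2))

r1, r2 = 30e-6, 20e-6          # 30 and 20 micron droplets
K0 = gravitational_kernel(r1, r2)
print(K0 > 0)
print(gravitational_kernel(r1, r1))  # 0.0: equal-size pairs never collide gravitationally
```

Against this baseline, the DNS enhancement quoted above (up to about 42% from radial relative motion and preferential concentration) is applied multiplicatively, and it matters most precisely where K0 tends to zero.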

  11. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  12. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  13. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  14. New approximate orientation averaging of the water molecule interacting with the thermal neutron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Markovic, M.I.; Minic, D.M.; Rakic, A.D.

    1992-02-01

This paper reports that, in exactly describing thermal neutron collisions with water molecules, orientation averaging is performed by an exact method (EOA_k) and by four approximate methods (two well known and two less known). Expressions for the microscopic scattering kernel are developed. The two well-known approximate orientation averaging methods are Krieger-Nelkin (K-N) and Koppel-Young (K-Y). The results obtained by one of the two proposed approximate orientation averaging methods agree best with the corresponding results obtained by EOA_k. The largest discrepancies between the EOA_k results and the results of the approximate methods are obtained using the well-known K-N approximate orientation averaging method.

  15. graphkernels: R and Python packages for graph comparison

    PubMed Central

    Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-01-01

Summary: Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. Availability and implementation: The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact: mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary information: Supplementary data are available online at Bioinformatics. PMID:29028902

  16. Aflatoxin variability in pistachios.

    PubMed Central

    Mahoney, N E; Rodriguez, S B

    1996-01-01

    Pistachio fruit components, including hulls (mesocarps and epicarps), seed coats (testas), and kernels (seeds), all contribute to variable aflatoxin content in pistachios. Fresh pistachio kernels were individually inoculated with Aspergillus flavus and incubated 7 or 10 days. Hulled, shelled kernels were either left intact or wounded prior to inoculation. Wounded kernels, with or without the seed coat, were readily colonized by A. flavus and after 10 days of incubation contained 37 times more aflatoxin than similarly treated unwounded kernels. The aflatoxin levels in the individual wounded pistachios were highly variable. Neither fungal colonization nor aflatoxin was detected in intact kernels without seed coats. Intact kernels with seed coats had limited fungal colonization and low aflatoxin concentrations compared with their wounded counterparts. Despite substantial fungal colonization of wounded hulls, aflatoxin was not detected in hulls. Aflatoxin levels were significantly lower in wounded kernels with hulls than in kernels of hulled pistachios. Both the seed coat and a water-soluble extract of hulls suppressed aflatoxin production by A. flavus. PMID:8919781

  17. graphkernels: R and Python packages for graph comparison.

    PubMed

    Sugiyama, Mahito; Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-02-01

Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary data are available online at Bioinformatics. © The Author(s) 2017. Published by Oxford University Press.

  18. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    PubMed

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
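The weighted eigenfunction expansion at the heart of the method reads f_t = Σᵢ e^(−λᵢt) ⟨f, ψᵢ⟩ ψᵢ, with (λᵢ, ψᵢ) the Laplacian eigenpairs. A sketch on a 1-D ring-graph Laplacian standing in for the Laplace-Beltrami operator of a surface mesh (signal and diffusion time are hypothetical):

```python
import numpy as np

# ring-graph Laplacian as a stand-in for the Laplace-Beltrami operator
n = 128
L = 2 * np.eye(n) - np.roll(np.eye(n), 1, 0) - np.roll(np.eye(n), -1, 0)
lam, psi = np.linalg.eigh(L)                   # eigenvalues and eigenfunctions

x = np.linspace(0, 2 * np.pi, n, endpoint=False)
rng = np.random.default_rng(0)
f = np.sin(2 * x) + 0.5 * rng.normal(size=n)   # noisy "surface" signal

# heat-kernel smoothing: f_t = sum_i exp(-lam_i * t) <f, psi_i> psi_i
t = 2.0                                        # diffusion time = bandwidth
coeff = psi.T @ f                              # <f, psi_i>
f_smooth = psi @ (np.exp(-lam * t) * coeff)    # heat-kernel weighted expansion

err_raw = np.abs(f - np.sin(2 * x)).mean()
err_smooth = np.abs(f_smooth - np.sin(2 * x)).mean()
print(err_smooth < err_raw)
```

The exponential weights damp high-frequency eigenmodes while passing low-frequency structure, which is why the expansion is equivalent to isotropic heat diffusion and to kernel smoothing, as the abstract states.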

  19. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…
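Kernel equating continuizes each discrete score distribution before equating; with a Gaussian kernel the continuized CDF is F_h(x) = Σⱼ pⱼ Φ((x − a·xⱼ − (1 − a)μ) / (a·h)), where a² = σ²/(σ² + h²) preserves the distribution's mean and variance. A small sketch on a hypothetical 6-point score distribution:

```python
import math

def gaussian_continuize(scores, probs, h):
    """Gaussian-kernel continuization of a discrete score distribution
    (von Davier, Holland & Thayer): returns the continuized CDF F_h,
    with shrinkage a chosen so the mean and variance are preserved."""
    mu = sum(p * x for x, p in zip(scores, probs))
    var = sum(p * (x - mu) ** 2 for x, p in zip(scores, probs))
    a = math.sqrt(var / (var + h * h))
    Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
    def F(x):
        return sum(p * Phi((x - a * xj - (1 - a) * mu) / (a * h))
                   for xj, p in zip(scores, probs))
    return F

scores = list(range(6))                     # a hypothetical 0..5-point test
probs = [0.05, 0.15, 0.30, 0.25, 0.15, 0.10]
F = gaussian_continuize(scores, probs, h=0.6)
print(round(F(2.5), 3))                     # smooth CDF value between F(2) and F(3)
```

Equating then composes the two continuized distributions, e(x) = F_Y⁻¹(F_X(x)); replacing the Gaussian Φ with a logistic or uniform CDF gives the alternative kernels this report compares.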

  20. 7 CFR 810.204 - Grades and grade requirements for Six-rowed Malting barley and Six-rowed Blue Malting barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...— Damaged kernels 1 (percent) Foreign material (percent) Other grains (percent) Skinned and broken kernels....0 10.0 15.0 1 Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered against sound barley. Notes: Malting barley shall not be infested in accordance with...

  1. 7 CFR 51.1413 - Damage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...

  2. 7 CFR 51.1413 - Damage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...

  3. 7 CFR 810.205 - Grades and grade requirements for Two-rowed Malting barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... (percent) Maximum limits of— Wild oats (percent) Foreign material (percent) Skinned and broken kernels... Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered...

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Góźdź, A., E-mail: andrzej.gozdz@umcs.lublin.pl; Góźdź, M., E-mail: mgozdz@kft.umcs.lublin.pl

The theory of neutrino oscillations rests on the assumption that the interaction basis and the physical (mass) basis of neutrino states are different. Therefore a neutrino is produced in a certain well-defined superposition of three mass eigenstates, which propagate separately and may be detected as a different superposition. This is called flavor oscillation. It is, however, not clear why neutrinos behave this way, i.e., what underlying mechanism leads to the production of a superposition of physical states in a single reaction. In this paper we argue that one of the reasons may be connected with the temporal structure of the process. In order to discuss the role of time in processes on the quantum level, we use a special formulation of quantum mechanics based on projection time evolution. We arrive at the conclusion that, for short reaction times, the formation of a superposition of states of similar masses is natural.

  5. Transient Response of Shells of Revolution by Direct Integration and Modal Superposition Methods

    NASA Technical Reports Server (NTRS)

    Stephens, W. B.; Adelman, H. M.

    1974-01-01

The results of an analytical effort to obtain and evaluate transient response data for a cylindrical and a conical shell using two different approaches, direct integration and modal superposition, are described. The inclusion of nonlinear terms is more important than the inclusion of secondary linear effects (transverse shear deformation and rotary inertia), although there are thin-shell structures where these secondary effects matter. The advantages of the direct integration approach are that geometric nonlinear and secondary effects are easy to include and that high-frequency response may be calculated; in comparison with the modal superposition technique, the computer storage requirements are also smaller. The advantages of the modal superposition approach are that the solution is independent of the previous time history and that, once the modal data are obtained, the response for repeated cases may be computed efficiently. Also, any admissible set of initial conditions can be applied.

  6. Experimental superposition of orders of quantum gates

    PubMed Central

    Procopio, Lorenzo M.; Moqanaki, Amir; Araújo, Mateus; Costa, Fabio; Alonso Calafell, Irati; Dowd, Emma G.; Hamel, Deny R.; Rozema, Lee A.; Brukner, Časlav; Walther, Philip

    2015-01-01

    Quantum computers achieve a speed-up by placing quantum bits (qubits) in superpositions of different states. However, it has recently been appreciated that quantum mechanics also allows one to ‘superimpose different operations’. Furthermore, it has been shown that using a qubit to coherently control the gate order allows one to accomplish a task—determining if two gates commute or anti-commute—with fewer gate uses than any known quantum algorithm. Here we experimentally demonstrate this advantage, in a photonic context, using a second qubit to control the order in which two gates are applied to a first qubit. We create the required superposition of gate orders by using additional degrees of freedom of the photons encoding our qubits. The new resource we exploit can be interpreted as a superposition of causal orders, and could allow quantum algorithms to be implemented with an efficiency unlikely to be achieved on a fixed-gate-order quantum computer. PMID:26250107

  7. Detection of ochratoxin A contamination in stored wheat using near-infrared hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Senthilkumar, T.; Jayas, D. S.; White, N. D. G.; Fields, P. G.; Gräfenhan, T.

    2017-03-01

    A near-infrared (NIR) hyperspectral imaging system was used to detect five concentration levels of ochratoxin A (OTA) in contaminated wheat kernels. Wheat kernels artificially inoculated with two different OTA-producing Penicillium verrucosum strains, kernels inoculated with two different non-toxigenic P. verrucosum strains, and sterile control wheat kernels were subjected to NIR hyperspectral imaging. The acquired three-dimensional data were reshaped into readable two-dimensional data. Principal Component Analysis (PCA) was applied to the two-dimensional data to identify the key wavelengths most significant for detecting OTA contamination in wheat. Statistical and histogram features extracted at the key wavelengths were used in linear, quadratic and Mahalanobis statistical discriminant models to differentiate between the sterile controls, the five concentration levels of OTA contamination in wheat kernels, and the five infection levels of non-OTA-producing P. verrucosum-inoculated wheat kernels. The classification models differentiated sterile control samples from OTA-contaminated wheat kernels and non-OTA-producing P. verrucosum-inoculated wheat kernels with 100% accuracy. The classification models also differentiated between the five concentration levels of OTA-contaminated wheat kernels and between the five infection levels of non-OTA-producing P. verrucosum-inoculated wheat kernels with a correct classification rate of more than 98%. The non-OTA-producing P. verrucosum-inoculated wheat kernels and OTA-contaminated wheat kernels subjected to hyperspectral imaging produced different spectral patterns.
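The cube-reshaping and wavelength-selection steps can be sketched as follows (synthetic data; the cube dimensions, band count, and selection rule are assumptions for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for a hyperspectral cube: rows x cols x spectral bands
rng = np.random.default_rng(0)
cube = rng.random((32, 32, 60))          # 60 hypothetical NIR bands

# Reshape the 3-D cube into a 2-D (pixels x bands) matrix
X = cube.reshape(-1, cube.shape[-1])

pca = PCA(n_components=3)
pca.fit(X)

# Bands with the largest absolute loading on the first component are
# one plausible notion of "key wavelengths" (the paper's exact rule
# may differ)
key_bands = np.argsort(np.abs(pca.components_[0]))[::-1][:5]
```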

  8. Application of kernel method in fluorescence molecular tomography

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Baikejiang, Reheman; Li, Changqing

    2017-02-01

    Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem. Anatomical guidance can effectively improve FMT reconstruction. We have developed a kernel method that introduces anatomical guidance into FMT robustly and easily. The kernel method comes from machine learning for pattern analysis and is an efficient way to represent anatomical features. For finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. The fluorophore concentration at each node is then represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we obtain a new system matrix by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process; we convert the FMT reconstruction problem into a kernel coefficient reconstruction problem, and the desired fluorophore concentration at each node can be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance is obtained directly from the anatomical image and is included in the forward modeling; one advantage is that we do not need to segment the anatomical image into targets and background.
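The kernelized forward model described above can be sketched as follows (the Gaussian kernel over a scalar anatomical feature, the per-row sparsification, and all dimensions are assumptions for illustration):

```python
import numpy as np

def kernel_matrix(anat, sigma=0.5, k=8):
    # Radial kernel over a per-node anatomical feature (e.g. micro-CT
    # intensity); keeping only the k largest entries per row is one
    # common way to sparsify such a matrix.
    d2 = (anat[:, None] - anat[None, :]) ** 2
    Kfull = np.exp(-d2 / (2.0 * sigma**2))
    drop = np.argsort(Kfull, axis=1)[:, :-k]     # all but k largest per row
    Ksp = Kfull.copy()
    np.put_along_axis(Ksp, drop, 0.0, axis=1)
    return Ksp

rng = np.random.default_rng(1)
n_nodes, n_meas = 50, 80
anat = rng.random(n_nodes)                  # anatomical feature per node
A = rng.random((n_meas, n_nodes))           # sensitivity matrix (forward model)
K = kernel_matrix(anat)

# Kernelized forward model: y = (A K) alpha, with alpha the kernel
# coefficient vector; reconstruct alpha, then recover concentrations x.
AK = A @ K
y = rng.random(n_meas)                      # simulated measurements
alpha, *_ = np.linalg.lstsq(AK, y, rcond=None)
x = K @ alpha                               # nodal fluorophore concentration
```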

  9. Credit scoring analysis using kernel discriminant

    NASA Astrophysics Data System (ADS)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    A credit scoring model is an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, such as the normal, Epanechnikov, biweight, and triweight kernels. The models' accuracies were compared using data from a financial institution in Indonesia. The results show that the kernel discriminant can be an alternative method for determining who is eligible for a credit loan. For the data we use, the normal kernel is the relevant choice for credit scoring with the kernel discriminant model; sensitivity and specificity reach 0.5556 and 0.5488, respectively.
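A minimal nonparametric discriminant of this flavor can be sketched with class-wise kernel density estimates (synthetic score data and the Epanechnikov kernel; the bandwidth and class setup are assumptions, not the paper's fitted model):

```python
import numpy as np

def epanechnikov(u):
    # Epanechnikov kernel: 0.75 (1 - u^2) for |u| <= 1, else 0
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def kde(x, sample, h):
    # Univariate kernel density estimate at the points x
    u = (x[:, None] - sample[None, :]) / h
    return epanechnikov(u).mean(axis=1) / h

rng = np.random.default_rng(2)
good = rng.normal(600, 50, 200)    # hypothetical scores, good payers
bad = rng.normal(450, 60, 200)     # hypothetical scores, defaulters

def classify(x, h=30.0):
    # Assign each applicant to the class with the higher estimated density
    return np.where(kde(x, good, h) >= kde(x, bad, h), "good", "bad")

pred = classify(np.array([620.0, 400.0]))
```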

  10. Unified Heat Kernel Regression for Diffusion, Kernel Smoothing and Wavelets on Manifolds and Its Application to Mandible Growth Modeling in CT Images

    PubMed Central

    Chung, Moo K.; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K.

    2014-01-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel regression is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. Unlike many previous partial differential equation based approaches involving diffusion, our approach represents the solution of diffusion analytically, reducing numerical inaccuracy and slow convergence. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, we have applied the method in characterizing the localized growth pattern of mandible surfaces obtained in CT images from subjects between ages 0 and 20 years by regressing the length of displacement vectors with respect to the template surface. PMID:25791435
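The weighted eigenfunction expansion can be sketched on a discrete stand-in for the Laplace-Beltrami operator (a ring-graph Laplacian here; the surface eigenfunctions of the paper are replaced by graph eigenvectors):

```python
import numpy as np

# Graph Laplacian of a ring as a discrete stand-in for the
# Laplace-Beltrami operator on a closed surface
n = 100
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, -1] = L[-1, 0] = -1.0

lam, psi = np.linalg.eigh(L)            # eigenvalues / eigenfunctions

rng = np.random.default_rng(3)
signal = np.sin(2 * np.pi * np.arange(n) / n) + 0.3 * rng.standard_normal(n)

def heat_kernel_smooth(f, t):
    # Expansion coefficients damped by exp(-lambda t): analytically the
    # solution of isotropic heat diffusion after time t
    beta = psi.T @ f
    return psi @ (np.exp(-lam * t) * beta)

smoothed = heat_kernel_smooth(signal, t=2.0)
```

Because the diffusion is expressed analytically through the eigenvalues, no time-stepping of a PDE is needed, which is the numerical advantage the abstract emphasizes.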

  11. Correlation and classification of single kernel fluorescence hyperspectral data with aflatoxin concentration in corn kernels inoculated with Aspergillus flavus spores.

    PubMed

    Yao, H; Hruska, Z; Kincaid, R; Brown, R; Cleveland, T; Bhatnagar, D

    2010-05-01

    The objective of this study was to examine the relationship between fluorescence emissions of corn kernels inoculated with Aspergillus flavus and aflatoxin contamination levels within the kernels. Aflatoxin contamination in corn has been a long-standing problem plaguing the grain industry, with potentially devastating consequences for corn growers. In this study, aflatoxin-contaminated corn kernels were produced through artificial inoculation of corn ears in the field with toxigenic A. flavus spores. The kernel fluorescence emission data were taken with a fluorescence hyperspectral imaging system while corn kernels were excited with ultraviolet light. Raw fluorescence image data were preprocessed and regions of interest in each image were created for all kernels. The regions of interest were used to extract spectral signatures and statistical information. The aflatoxin contamination level of single corn kernels was then chemically measured using affinity column chromatography. A fluorescence peak shift phenomenon was noted among groups of kernels with different aflatoxin contamination levels. The fluorescence peak shifted toward longer wavelengths in the blue region for the highly contaminated kernels and toward shorter wavelengths for the clean kernels. Highly contaminated kernels also had a lower fluorescence peak magnitude than the less contaminated kernels. A general negative correlation was noted between measured aflatoxin and the fluorescence image bands in the blue and green regions. The coefficient of determination, r(2), was 0.72 for the multiple linear regression model. The multivariate analysis of variance found that the fluorescence means of four aflatoxin groups, <1, 1-20, 20-100, and ≥100 ng g(-1) (parts per billion), were significantly different from each other at the 0.01 level of alpha. Classification accuracy under a two-class schema ranged from 0.84 to 0.91 when a threshold of either 20 or 100 ng g(-1) was used. Overall, the results indicate that fluorescence hyperspectral imaging may be applicable to estimating aflatoxin content in individual corn kernels.

  12. Three-dimensional waveform sensitivity kernels

    NASA Astrophysics Data System (ADS)

    Marquering, Henk; Nolet, Guust; Dahlen, F. A.

    1998-03-01

    The sensitivity of intermediate-period (~10-100s) seismic waveforms to the lateral heterogeneity of the Earth is computed using an efficient technique based upon surface-wave mode coupling. This formulation yields a general, fully fledged 3-D relationship between data and model without imposing smoothness constraints on the lateral heterogeneity. The calculations are based upon the Born approximation, which yields a linear relation between data and model. The linear relation ensures fast forward calculations and makes the formulation suitable for inversion schemes; however, higher-order effects such as wave-front healing are neglected. By including up to 20 surface-wave modes, we obtain Fréchet, or sensitivity, kernels for waveforms in the time frame that starts at the S arrival and which includes direct and surface-reflected body waves. These 3-D sensitivity kernels provide new insights into seismic-wave propagation, and suggest that there may be stringent limitations on the validity of ray-theoretical interpretations. Even recently developed 2-D formulations, which ignore structure out of the source-receiver plane, differ substantially from our 3-D treatment. We infer that smoothness constraints on heterogeneity, required to justify the use of ray techniques, are unlikely to hold in realistic earth models. This puts the use of ray-theoretical techniques into question for the interpretation of intermediate-period seismic data. 
The computed 3-D sensitivity kernels display a number of phenomena that are counter-intuitive from a ray-geometrical point of view: (1) body waves exhibit significant sensitivity to structure up to 500km away from the source-receiver minor arc; (2) significant near-surface sensitivity above the two turning points of the SS wave is observed; (3) the later part of the SS wave packet is most sensitive to structure away from the source-receiver path; (4) the sensitivity of the higher-frequency part of the fundamental surface-wave mode is wider than for its faster, lower-frequency part; (5) delayed body waves may considerably influence fundamental Rayleigh and Love waveforms. The strong sensitivity of waveforms to crustal structure due to fundamental-mode-to-body-wave scattering precludes the use of phase-velocity filters to model body-wave arrivals. Results from the 3-D formulation suggest that the use of 2-D and 1-D techniques for the interpretation of intermediate-period waveforms should seriously be reconsidered.

  13. Classification of Phylogenetic Profiles for Protein Function Prediction: An SVM Approach

    NASA Astrophysics Data System (ADS)

    Kotaru, Appala Raju; Joshi, Ramesh C.

    Predicting the function of an uncharacterized protein is a major challenge in the post-genomic era due to the problem's complexity and scale. Knowledge of protein function is a crucial link in the development of new drugs, better crops, and even biochemicals such as biofuels. Recently numerous high-throughput experimental procedures have been invented to investigate the mechanisms leading to the accomplishment of a protein's function, and phylogenetic profiling is one of them. A phylogenetic profile is a representation of a protein that encodes the evolutionary history of the protein. In this paper we propose a method for the classification of phylogenetic profiles using a supervised machine learning method, support vector machine (SVM) classification with a radial basis function (RBF) kernel, for identifying functionally linked proteins. We experimentally evaluated the performance of the classifier with the linear and polynomial kernels and compared the results with the existing tree kernel. In our study we used proteins of the budding yeast Saccharomyces cerevisiae genome. We generated the phylogenetic profiles of 2465 yeast genes and used the functional annotations available in the MIPS database. Our experiments show that the performance of the radial basis kernel is similar to that of the polynomial kernel in some functional classes, that both are better than the linear and tree kernels, and that overall the radial basis kernel outperformed the polynomial, linear, and tree kernels. These results indicate that it is feasible to use an SVM classifier with an RBF kernel to predict gene function from phylogenetic profiles.
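A sketch of the classification setup (synthetic binary profiles and a toy labeling rule; scikit-learn's `SVC` stands in for whatever SVM implementation the authors used):

```python
import numpy as np
from sklearn.svm import SVC

# A phylogenetic profile is a binary vector: entry j is 1 if a homolog
# of the protein is present in genome j (the genome set here is synthetic)
rng = np.random.default_rng(4)
n_proteins, n_genomes = 500, 40
profiles = rng.integers(0, 2, size=(n_proteins, n_genomes)).astype(float)

# Toy functional label for illustration: membership depends on
# co-occurrence in the first five genomes
labels = (profiles[:, :5].sum(axis=1) > 2).astype(int)

clf = SVC(kernel="rbf", gamma="scale", C=10.0)
clf.fit(profiles[:400], labels[:400])
acc = clf.score(profiles[400:], labels[400:])
```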

  14. Intraear Compensation of Field Corn, Zea mays, from Simulated and Naturally Occurring Injury by Ear-Feeding Larvae.

    PubMed

    Steckel, S; Stewart, S D

    2015-06-01

    Ear-feeding larvae, such as corn earworm, Helicoverpa zea Boddie (Lepidoptera: Noctuidae), can be important insect pests of field corn, Zea mays L., by feeding on kernels. Recently introduced, stacked Bacillus thuringiensis (Bt) traits provide improved protection from ear-feeding larvae. Thus, our objective was to evaluate how injury to kernels in the ear tip might affect yield when this injury was inflicted at the blister and milk stages. In 2010, simulated corn earworm injury reduced total kernel weight (i.e., yield) at both the blister and milk stage. In 2011, injury to ear tips at the milk stage affected total kernel weight. No differences in total kernel weight were found in 2013, regardless of when or how much injury was inflicted. Our data suggested that kernels within the same ear could compensate for injury to ear tips by increasing in size, but this increase was not always statistically significant or sufficient to overcome high levels of kernel injury. For naturally occurring injury observed on multiple corn hybrids during 2011 and 2012, our analyses showed either no or a minimal relationship between number of kernels injured by ear-feeding larvae and the total number of kernels per ear, total kernel weight, or the size of individual kernels. The results indicate that intraear compensation for kernel injury to ear tips can occur under at least some conditions. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. A novel algorithm for the calculation of physical and biological irradiation quantities in scanned ion beam therapy: the beamlet superposition approach

    NASA Astrophysics Data System (ADS)

    Russo, G.; Attili, A.; Battistoni, G.; Bertrand, D.; Bourhaleb, F.; Cappucci, F.; Ciocca, M.; Mairani, A.; Milian, F. M.; Molinelli, S.; Morone, M. C.; Muraro, S.; Orts, T.; Patera, V.; Sala, P.; Schmitt, E.; Vivaldo, G.; Marchetto, F.

    2016-01-01

    The calculation algorithm of a modern treatment planning system for ion-beam radiotherapy should ideally be able to deal with different ion species (e.g. protons and carbon ions), to provide relative biological effectiveness (RBE) evaluations and to describe different beam lines. In this work we propose a new approach for computing ion irradiation outcomes, the beamlet superposition (BS) model, which satisfies these requirements. This model applies and extends the concepts of previous fluence-weighted pencil-beam algorithms to quantities of radiobiological interest other than dose, i.e. RBE- and LET-related quantities. It describes an ion beam through a beam-line-specific, weighted superposition of universal beamlets. The universal physical and radiobiological irradiation effect of the beamlets on a representative set of water-like tissues is evaluated once, coupling the per-track information derived from FLUKA Monte Carlo simulations with the radiobiological effectiveness provided by the microdosimetric kinetic model and the local effect model. Thanks to an extension of the superposition concept, the beamlet irradiation action superposition is applicable to the evaluation of dose, RBE and LET distributions. The weight function for the beamlet superposition is derived from the beam phase space density at the patient entrance. A general beam model commissioning procedure is proposed, which has been successfully tested on the CNAO beam line. The BS model provides the evaluation of different irradiation quantities for different ions, the adaptability permitted by weight functions and the evaluation speed of analytical approaches. Benchmark plans in simple geometries and clinical plans are shown to demonstrate the model's capabilities.

  16. Light scattering and absorption by space weathered planetary bodies: Novel numerical solution

    NASA Astrophysics Data System (ADS)

    Markkanen, Johannes; Väisänen, Timo; Penttilä, Antti; Muinonen, Karri

    2017-10-01

    Airless planetary bodies are exposed to space weathering, i.e., energetic electromagnetic and particle radiation, implantation and sputtering from solar wind particles, and micrometeorite bombardment. Space weathering is known to alter the physical and chemical composition of the surface of an airless body (C. Pieters et al., J. Geophys. Res. Planets, 121, 2016). From the light scattering perspective, one of the key effects is the production of nanophase iron (npFe0) near the exposed surfaces (B. Hapke, J. Geophys. Res., 106, E5, 2001). At visible and ultraviolet wavelengths these particles have a strong electromagnetic response, which has a major impact on scattering and absorption features. Thus, to interpret the spectroscopic observations of space-weathered asteroids, the model should treat the contributions of the npFe0 particles rigorously. Our numerical approach is based on hierarchical geometric optics (GO) and radiative transfer (RT). The modelled asteroid is assumed to consist of densely packed silicate grains with npFe0 inclusions. We employ our recently developed RT method for dense random media (K. Muinonen, et al., Radio Science, submitted, 2017) to compute the contributions of the npFe0 particles embedded in silicate grains. The dense-media RT method requires computing interactions of the npFe0 particles in the volume element, for which we use the exact fast superposition T-matrix method (J. Markkanen, and A.J. Yuffa, JQSRT 189, 2017). Reflections and refractions on the grain surface and propagation in the grain are addressed by the GO. Finally, the standard RT is applied to compute scattering by the entire asteroid. Our numerical method allows for a quantitative interpretation of the spectroscopic observations of space-weathered asteroids. In addition, it may be an important step towards a more rigorous thermophysical model of asteroids when coupled with radiative and conductive heat transfer techniques. Acknowledgments: Research supported by the European Research Council with Advanced Grant No. 320773 SAEMPL. Computational resources provided by CSC-IT Center for Science Ltd, Finland.

  17. Evidence-based Kernels: Fundamental Units of Behavioral Influence

    PubMed Central

    Biglan, Anthony

    2008-01-01

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior. PMID:18712600

  18. Integrating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Wilton, Donald R.

    2008-01-01

    A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. The approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different from those of the wire kernel itself, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the gradient of the wire kernel must be integrated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form

  19. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Dou, Yong

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves much faster training than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms. PMID:28293256

  20. Ranking Support Vector Machine with Kernel Approximation.

    PubMed

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves much faster training than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
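One of the two approximations named above, random Fourier features, can be sketched directly (the Rahimi-Recht feature map for an RBF kernel; the dimensions and gamma below are arbitrary illustrative choices):

```python
import numpy as np

def random_fourier_features(X, n_features=200, gamma=1.0, seed=0):
    # Approximate the RBF kernel k(x, y) = exp(-gamma |x - y|^2) with
    # z(x) = sqrt(2/D) cos(W x + b), W ~ N(0, 2 gamma I), b ~ U[0, 2 pi],
    # so that z(x) . z(y) ~= k(x, y) without forming the kernel matrix
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
Z = random_fourier_features(X, n_features=2000, gamma=0.5)

# Compare the explicit feature-map inner products to the exact kernel
K_exact = np.exp(-0.5 * ((X[:, None] - X[None, :]) ** 2).sum(-1))
K_approx = Z @ Z.T
err = np.abs(K_exact - K_approx).max()
```

A linear model trained on `Z` then stands in for the kernelized model, which is what makes the training fast.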

  1. The single scattering properties of soot aggregates with concentric core-shell spherical monomers

    NASA Astrophysics Data System (ADS)

    Wu, Yu; Cheng, Tianhai; Gu, Xingfa; Zheng, Lijuan; Chen, Hao; Xu, Hui

    2014-03-01

    Anthropogenic soot aerosols are shown to be complex, fractal-like aggregated structures with high light absorption efficiency. In the atmospheric environment, soot monomers may tend to acquire a weakly absorbing coating, such as an organic coating, which introduces further complexity to the optical properties of the aggregates. The single scattering properties of soot aggregates can be significantly influenced by the coating status of these kinds of aerosols. In this article, the monomers of fractal soot aggregates are modelled as semi-external mixtures (in physical contact) with a constant soot-core radius and variable coating sizes for specific soot volume fractions. The single scattering properties of these coated soot particles, such as the phase function, the extinction and absorption cross sections, the single scattering albedo (SSA) and the asymmetry parameter (ASY), are calculated using the numerically exact superposition T-matrix method. The random-orientation averaging results show that the single scattering properties of these coated soot aggregates differ significantly from those of the single volume-equivalent core-shell sphere approximation using the Mie theory and of homogeneous aggregates with uncoated monomers using effective medium theories, such as the Maxwell-Garnett and Bruggeman approximations, which overestimate the backscattering of coated soot. It is found that the SSA and the extinction and absorption cross sections increase for soot aggregates with a thicker weakly absorbing coating on the monomers. In particular, the SSA values of the simulated aggregates with smaller soot-core volume fractions are remarkably larger (~50% for a core volume fraction of 0.5, ~100% for a core volume fraction of 0.2, at 0.67 μm) than those of uncoated soot particles.
Moreover, the cross sections of extinction and absorption are underestimated by the computation of equivalent homogeneous fractal aggregate approximation (within 5% for the T-matrix method and 10-25% for the Rayleigh-Debye-Gans approximation due to different soot volume fractions). Further understanding of the optical properties of these coated soot aggregates would be helpful for both environment monitoring and climate studies.

  2. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  3. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  4. 21 CFR 182.40 - Natural extractives (solvent-free) used in conjunction with spices, seasonings, and flavorings.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... source Apricot kernel (persic oil) Prunus armeniaca L. Peach kernel (persic oil) Prunus persica Sieb. et Zucc. Peanut stearine Arachis hypogaea L. Persic oil (see apricot kernel and peach kernel) Quince seed...

  5. Wigner functions defined with Laplace transform kernels.

    PubMed

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function using Laplace transform kernels: the Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the properties of the Laplace transform, a broader range of signals can be represented in complex phase space. We show that the Laplace kernel Wigner function exhibits marginal properties similar to those of the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polaritons. © 2011 Optical Society of America
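In standard notation, the construction the abstract describes amounts to replacing the Fourier kernel of the usual Wigner function with a Laplace kernel (a notational sketch, not the paper's exact definition):

```latex
% Traditional Wigner function: Fourier kernel, real momentum variable k
W(x,k) = \int f^{*}\!\left(x - \tfrac{x'}{2}\right)
         f\!\left(x + \tfrac{x'}{2}\right) e^{-\mathrm{i} k x'}\,\mathrm{d}x'

% Laplace kernel Wigner function: the momentum-like variable s may be
% complex, admitting e.g. evanescent-wave signals
W_{L}(x,s) = \int f^{*}\!\left(x - \tfrac{x'}{2}\right)
             f\!\left(x + \tfrac{x'}{2}\right) e^{-s x'}\,\mathrm{d}x'
```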

  6. Online learning control using adaptive critic designs with sparse kernel machines.

    PubMed

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
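The approximate-linear-dependence (ALD) sparsification mentioned above can be sketched as follows (RBF kernel; the threshold, gamma, and data are illustrative assumptions): a sample enters the dictionary only if its feature-space representation cannot be approximated by the current dictionary within a tolerance.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def ald_dictionary(samples, nu=0.1, gamma=1.0):
    # ALD test: keep sample x only if the squared residual of its best
    # feature-space approximation by the dictionary exceeds nu
    D = [samples[0]]
    for x in samples[1:]:
        Kdd = np.array([[rbf(a, b, gamma) for b in D] for a in D])
        kdx = np.array([rbf(a, x, gamma) for a in D])
        c = np.linalg.solve(Kdd + 1e-8 * np.eye(len(D)), kdx)
        delta = rbf(x, x, gamma) - kdx @ c     # squared residual in the RKHS
        if delta > nu:
            D.append(x)
    return np.array(D)

rng = np.random.default_rng(5)
samples = rng.standard_normal((200, 2))
D = ald_dictionary(samples, nu=0.3)
```

The critic's kernel expansion is then carried over the dictionary `D` instead of all observed states, which is what keeps the kernel machine sparse.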

  7. Influence of wheat kernel physical properties on the pulverizing process.

    PubMed

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar level of protein content (11.2-12.8% w.b.), obtained from an organic farming system, were used for analysis. The kernels (moisture content 10% w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ·kg⁻¹ to 159 kJ·kg⁻¹. On the basis of the data obtained, many significant correlations (p < 0.05) were found between the physical properties of wheat kernels and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernels.
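
As a rough illustration of the kind of multiple linear regression proposed above, the sketch below fits average particle size to a few kernel properties with ordinary least squares. The choice of predictors (hardness index, vitreousness, ash content) and all numbers are invented for the example, not the study's data.

```python
import numpy as np

# Hypothetical data: columns are kernel hardness index, vitreousness (%),
# and ash content (%); y is average particle size of the pulverized kernel (mm).
X = np.array([
    [65, 90, 1.70], [58, 82, 1.60], [72, 95, 1.80],
    [50, 70, 1.50], [61, 85, 1.65], [68, 92, 1.75],
])
y = np.array([0.42, 0.38, 0.46, 0.33, 0.40, 0.44])

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print("coefficients:", np.round(coef, 4))
print("R^2:", round(r2, 4))
```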

  8. Relationship between processing score and kernel-fraction particle size in whole-plant corn silage.

    PubMed

    Dias Junior, G S; Ferraretto, L F; Salvati, G G S; de Resende, L C; Hoffman, P C; Pereira, M N; Shaver, R D

    2016-04-01

    Kernel processing increases starch digestibility in whole-plant corn silage (WPCS). Corn silage processing score (CSPS), the percentage of starch passing through a 4.75-mm sieve, is widely used to assess the degree of kernel breakage in WPCS. However, the geometric mean particle size (GMPS) of the kernel fraction that passes through the 4.75-mm sieve has not been well described. Therefore, the objectives of this study were (1) to evaluate the particle size distribution and digestibility of kernels cut to varied particle sizes; (2) to propose a method to measure GMPS in WPCS kernels; and (3) to evaluate the relationship between CSPS and GMPS of the kernel fraction in WPCS. Composite samples of unfermented, dried kernels from 110 corn hybrids commonly used for silage production were kept whole (WH) or manually cut into 2, 4, 8, 16, 32, or 64 pieces (2P, 4P, 8P, 16P, 32P, and 64P, respectively). Dry sieving to determine GMPS, surface area, and particle size distribution (using 9 sieves with nominal square apertures of 9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, and 0.59 mm and a pan), as well as ruminal in situ dry matter (DM) digestibility measurements, were performed for each kernel particle number treatment. Incubation times were 0, 3, 6, 12, and 24 h. The ruminal in situ DM disappearance of unfermented kernels increased with the reduction in particle size of the corn kernels. Kernels kept whole had the lowest ruminal DM disappearance at all time points, with a maximum DM disappearance of 6.9% at 24 h, and the greatest disappearance was observed for 64P, followed by 32P and 16P. Samples of WPCS (n=80) from 3 studies representing varied theoretical length of cut settings and processor types and settings were also evaluated. Each WPCS sample was divided in two and then dried at 60°C for 48 h. The CSPS was determined in duplicate on one of the split samples, whereas on the other split sample the kernel and stover fractions were separated using a hydrodynamic separation procedure. After separation, the kernel fraction was redried at 60°C for 48 h in a forced-air oven and dry sieved to determine GMPS and surface area. Linear relationships between CSPS from WPCS (n=80) and kernel-fraction GMPS, surface area, and the proportion passing through the 4.75-mm screen were poor. Strong quadratic relationships between the proportion of the kernel fraction passing through the 4.75-mm screen and kernel-fraction GMPS and surface area were observed. These findings suggest that hydrodynamic separation and dry sieving of the kernel fraction may provide a better assessment of kernel breakage in WPCS than CSPS. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
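
The GMPS computation from dry-sieving data can be sketched with the usual log-based formulation (in the spirit of ASAE/ASABE S319). The boundary assumptions for the topmost sieve and the pan, and the retained masses below, are illustrative guesses rather than values from the study.

```python
import math

def gmps_mm(openings_mm, retained_g):
    """Geometric mean particle size from dry sieving (log10 formulation):
    d_gw = 10 ** (sum(W_i * log10(dbar_i)) / sum(W_i)),
    where dbar_i is the geometric mean of adjacent sieve openings.
    `openings_mm` lists apertures from largest to smallest; `retained_g`
    has one mass per sieve plus a final entry for the pan. The top bound
    (1.41x the largest opening) and the pan bound (finest opening / 1.41)
    are illustrative assumptions."""
    uppers = [openings_mm[0] * 1.41] + list(openings_mm[:-1])
    # Geometric mean diameter for each sieve fraction, plus the pan fraction
    dbar = [math.sqrt(u, ) if False else math.sqrt(u * l)
            for u, l in zip(uppers, openings_mm)]
    dbar.append(math.sqrt(openings_mm[-1] * openings_mm[-1] / 1.41))
    total = sum(retained_g)
    log_d = sum(w * math.log10(d) for w, d in zip(retained_g, dbar)) / total
    return 10 ** log_d

openings = [9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, 0.59]  # mm, from the study
masses = [0.0, 1.2, 3.5, 8.0, 12.0, 9.0, 6.0, 4.0, 2.3]      # 8 sieves + pan (made up)
d_gw = gmps_mm(openings, masses)
print("GMPS:", round(d_gw, 3), "mm")
```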

  9. Classification of corn kernels contaminated with aflatoxins using fluorescence and reflectance hyperspectral images analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Fengle; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Brown, Robert; Bhatnagar, Deepak; Cleveland, Thomas

    2015-05-01

    Aflatoxins are secondary metabolites produced by certain fungal species of the Aspergillus genus. Aflatoxin contamination remains a problem in agricultural products due to its toxic and carcinogenic properties. Conventional chemical methods for aflatoxin detection are time-consuming and destructive. This study employed fluorescence and reflectance visible near-infrared (VNIR) hyperspectral images to classify aflatoxin-contaminated corn kernels rapidly and non-destructively. Corn ears were artificially inoculated in the field with toxigenic A. flavus spores at the early dough stage of kernel development. After harvest, a total of 300 kernels were collected from the inoculated ears. Fluorescence hyperspectral imagery with UV excitation and reflectance hyperspectral imagery with halogen illumination were acquired on both the endosperm and germ sides of the kernels. All kernels were then subjected to chemical analysis individually to determine aflatoxin concentrations. A region of interest (ROI) was created for each kernel to extract averaged spectra. Compared with healthy kernels, fluorescence spectral peaks for contaminated kernels shifted to longer wavelengths with lower intensity, and reflectance values for contaminated kernels were lower, with a different spectral shape in the 700-800 nm region. Principal component analysis was applied for data compression before classifying kernels as contaminated or healthy, based on a 20 ppb threshold, using the K-nearest neighbors algorithm. The best overall accuracy achieved was 92.67% for the germ side in the fluorescence data analysis. The germ side generally performed better than the endosperm side. Fluorescence and reflectance image data achieved similar accuracy.
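
A minimal sketch of the classification pipeline described above (PCA for compression followed by K-nearest neighbors) is given below, using synthetic stand-in spectra rather than the study's hyperspectral data; the mean shift between classes loosely mimics the reported spectral differences.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for averaged kernel spectra (rows = kernels, cols = bands)
healthy = rng.normal(0.0, 1.0, size=(60, 50))
contaminated = rng.normal(0.8, 1.0, size=(60, 50))
X = np.vstack([healthy, contaminated])
y = np.array([0] * 60 + [1] * 60)  # 1 = above the 20 ppb threshold

# PCA by SVD of the mean-centered data, keeping 5 components
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T

def knn_predict(Ztrain, ytrain, z, k=3):
    # Majority vote among the k nearest training points
    idx = np.argsort(np.linalg.norm(Ztrain - z, axis=1))[:k]
    return int(np.bincount(ytrain[idx]).argmax())

# Leave-one-out accuracy on the synthetic data
correct = sum(
    knn_predict(np.delete(Z, i, 0), np.delete(y, i), Z[i]) == y[i]
    for i in range(len(y))
)
acc = correct / len(y)
print("LOO accuracy:", acc)
```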

  10. Influence of Kernel Age on Fumonisin B1 Production in Maize by Fusarium moniliforme

    PubMed Central

    Warfield, Colleen Y.; Gilchrist, David G.

    1999-01-01

    Production of fumonisins by Fusarium moniliforme on naturally infected maize ears is an important food safety concern due to the toxic nature of this class of mycotoxins. Assessing the potential risk of fumonisin production in developing maize ears prior to harvest requires an understanding of the regulation of toxin biosynthesis during kernel maturation. We investigated the developmental-stage-dependent relationship between maize kernels and fumonisin B1 production by using kernels collected at the blister (R2), milk (R3), dough (R4), and dent (R5) stages following inoculation in culture at their respective field moisture contents with F. moniliforme. Highly significant differences (P ≤ 0.001) in fumonisin B1 production were found among kernels at the different developmental stages. The highest levels of fumonisin B1 were produced on the dent stage kernels, and the lowest levels were produced on the blister stage kernels. The differences in fumonisin B1 production among kernels at the different developmental stages remained significant (P ≤ 0.001) when the moisture contents of the kernels were adjusted to the same level prior to inoculation. We concluded that toxin production is affected by substrate composition as well as by moisture content. Our study also demonstrated that fumonisin B1 biosynthesis on maize kernels is influenced by factors which vary with the developmental age of the tissue. The risk of fumonisin contamination may begin early in maize ear development and increases as the kernels reach physiological maturity. PMID:10388675

  11. Decoherence as a way to measure extremely soft collisions with dark matter

    NASA Astrophysics Data System (ADS)

    Riedel, C. Jess; Yavin, Itay

    2017-07-01

    A new frontier in the search for dark matter (DM) is based on the idea of detecting the decoherence caused by DM scattering against a mesoscopic superposition of normal matter. Such superpositions are uniquely sensitive to very small momentum transfers from new particles and forces, especially DM with a mass below 100 MeV. Here we investigate what sorts of dark sectors are inaccessible with existing methods but would induce noticeable decoherence in the next generation of matter interferometers. We show that very soft but medium range (0.1 nm to 1 μm) elastic interactions between nuclei and DM are particularly suitable. We construct toy models for such interactions, discuss existing constraints, and delineate the expected sensitivity of forthcoming experiments. The first hints of DM in these devices would appear as small variations in the anomalous decoherence rate with a period of one sidereal day. This is a generic signature of interstellar sources of decoherence, clearly distinguishing it from terrestrial backgrounds. The OTIMA experiment under development in Vienna will begin to probe Earth-thermalizing DM once sidereal variations in the background decoherence rate are pushed below one part in a hundred for superposed 5-nm gold nanoparticles. The proposals by Bateman et al. and Geraci et al. could be similarly sensitive although they would require at least a month of data taking. DM that is absorbed or elastically reflected by the Earth, and so avoids a greenhouse density enhancement, would not be detectable by those three experiments. On the other hand, the aggressive proposals of the MAQRO collaboration and Pino et al. would immediately open up many orders of magnitude in DM mass, interaction range, and coupling strength, regardless of how DM behaves in bulk matter.

  12. Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning

    NASA Astrophysics Data System (ADS)

    Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.

    2008-02-01

    Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT), especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies, as well as the tracks of secondary electrons, are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation, and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and with the CyberKnife treatment planning system (TPS) for lung, head and neck, and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). Differences of more than 10% in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment, while differences are negligible for the head and neck and liver cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced by a factor of up to 62 (46 on average over 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.
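
The Russian roulette technique mentioned above can be sketched in a few lines: low-weight particles are either terminated or survive with a compensating weight boost, so the expected total weight is preserved. The parameter values here are illustrative, not those of the CyberKnife implementation.

```python
import numpy as np

def russian_roulette(weights, survival_prob=0.5, threshold=0.1, rng=None):
    """Terminate low-weight particles at random; survivors get their weight
    divided by survival_prob so the expected total weight is unchanged."""
    rng = rng or np.random.default_rng(0)
    out = []
    for w in weights:
        if w >= threshold:
            out.append(w)                  # heavy particle: always keep
        elif rng.random() < survival_prob:
            out.append(w / survival_prob)  # survivor: compensate its weight
        # else: particle terminated, weight discarded
    return out

w_in = np.full(100_000, 0.05)              # many low-weight scattered photons
w_out = russian_roulette(w_in)
print("particles kept:", len(w_out), "of", len(w_in))
print("total weight before:", w_in.sum(), "after:", round(sum(w_out), 1))
```

Roughly half the particles are killed, halving the transport cost, while the total weight is preserved in expectation.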

  13. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel-based techniques depends on the selection of kernel parameters, so suitable parameter selection is an important problem for many such techniques. This article presents a novel technique to learn the kernel parameters of the kernel Fukunaga-Koontz transform (KFKT) classifier. The proposed approach determines appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of KFKT. For this purpose we have utilized the differential evolution algorithm (DEA). The new technique overcomes some disadvantages of the traditional cross-validation method, such as its high time consumption, and it can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
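
A hedged sketch of differential-evolution-based kernel parameter selection follows. Since the KFKT discrimination objective is specific to the paper, kernel-target alignment is used here as a generic stand-in objective; the data and bounds are invented.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (40, 5)), rng.normal(2, 1, (40, 5))])
y = np.array([-1] * 40 + [1] * 40)

def rbf_matrix(X, gamma):
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def neg_alignment(params):
    # Kernel-target alignment as a stand-in discrimination objective;
    # the paper optimizes a KFKT-specific criterion instead.
    K = rbf_matrix(X, params[0])
    T = np.outer(y, y)
    return -np.sum(K * T) / (np.linalg.norm(K) * np.linalg.norm(T))

result = differential_evolution(neg_alignment, bounds=[(1e-3, 10.0)],
                                seed=0, maxiter=50)
print("best gamma:", round(result.x[0], 4),
      "alignment:", round(-result.fun, 4))
```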

  14. Design of a multiple kernel learning algorithm for LS-SVM by convex programming.

    PubMed

    Jian, Ling; Xia, Zhonghang; Liang, Xijun; Gao, Chuanhou

    2011-06-01

    As a kernel-based method, the performance of the least squares support vector machine (LS-SVM) depends on the selection of the kernel as well as the regularization parameter (Duan, Keerthi, & Poo, 2003). Cross-validation is efficient for selecting a single kernel and the regularization parameter; however, it suffers from heavy computational cost and is not flexible enough to deal with multiple kernels. In this paper, we address the issue of multiple kernel learning for LS-SVM by formulating it as semidefinite programming (SDP). Furthermore, we show that the regularization parameter can be optimized in a unified framework with the kernel, which leads to an automatic process for model selection. Extensive experimental validations are performed and analyzed. Copyright © 2011 Elsevier Ltd. All rights reserved.
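
The way a combined kernel enters LS-SVM training can be sketched with a fixed convex combination of two RBF kernels; the paper learns these combination weights (and the regularization parameter) via SDP, whereas here they are chosen by hand purely for illustration. The classifier comes from the standard LS-SVM KKT linear system.

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-1, 1, (30, 2)), rng.normal(1, 1, (30, 2))])
y = np.array([-1.0] * 30 + [1.0] * 30)

def rbf_K(A, B, gamma):
    d2 = (np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :]
          - 2 * A @ B.T)
    return np.exp(-gamma * d2)

# Fixed convex combination of two candidate kernels (illustrative weights)
w1, w2 = 0.7, 0.3
K = w1 * rbf_K(X, X, 0.5) + w2 * rbf_K(X, X, 2.0)

# LS-SVM classifier: solve the KKT linear system
#   [ 0     y^T          ] [b]     [0]
#   [ y  Omega + I/gamma ] [alpha] [1]
gamma_reg = 10.0
Omega = np.outer(y, y) * K
n = len(y)
A = np.block([[np.zeros((1, 1)), y[None, :]],
              [y[:, None], Omega + np.eye(n) / gamma_reg]])
rhs = np.concatenate([[0.0], np.ones(n)])
sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

# Decision function f(x) = sum_i alpha_i y_i K(x_i, x) + b; training accuracy
scores = (alpha * y) @ K + b
acc = np.mean(np.sign(scores) == y)
print("train accuracy:", acc)
```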

  15. Novel near-infrared sampling apparatus for single kernel analysis of oil content in maize.

    PubMed

    Janni, James; Weinstock, B André; Hagen, Lisa; Wright, Steve

    2008-04-01

    A method of rapid, nondestructive chemical and physical analysis of individual maize (Zea mays L.) kernels is needed for the development of high value food, feed, and fuel traits. Near-infrared (NIR) spectroscopy offers a robust nondestructive method of trait determination. However, traditional NIR bulk sampling techniques cannot be applied successfully to individual kernels. Obtaining optimized single kernel NIR spectra for applied chemometric predictive analysis requires a novel sampling technique that can account for the heterogeneous forms, morphologies, and opacities exhibited in individual maize kernels. In this study such a novel technique is described and compared to less effective means of single kernel NIR analysis. Results of the application of a partial least squares (PLS) derived model for predictive determination of percent oil content per individual kernel are shown.

  16. SUPERPOSITION OF POLYTROPES IN THE INNER HELIOSHEATH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livadiotis, G., E-mail: glivadiotis@swri.edu

    2016-03-15

    This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density-temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log-log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ∼ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubrovsky, V. G.; Topovsky, A. V.

    New exact solutions, nonstationary and stationary, of the Veselov-Novikov (VN) equation in the forms of simple nonlinear and linear superpositions of an arbitrary number N of exact special solutions u^(n), n = 1, ..., N are constructed via the Zakharov-Manakov ∂̄-dressing method. Simple nonlinear superpositions are represented up to a constant by the sums of solutions u^(n) and calculated by ∂̄-dressing on a nonzero energy level of the first auxiliary linear problem, i.e., the 2D stationary Schrödinger equation. It is remarkable that in the zero energy limit simple nonlinear superpositions convert to linear ones in the form of the sums of special solutions u^(n). It is shown that the sums u = u^(k_1) + ... + u^(k_m), 1 ≤ k_1 < k_2 < ... < k_m ≤ N of arbitrary subsets of these solutions are also exact solutions of the VN equation. The presented exact solutions include superpositions of special line solitons as well as superpositions of plane-wave-type singular periodic solutions. By construction these exact solutions also represent new exact transparent potentials of the 2D stationary Schrödinger equation and can serve as model potentials for electrons in planar structures of modern electronics.

  18. Superposition of Polytropes in the Inner Heliosheath

    NASA Astrophysics Data System (ADS)

    Livadiotis, G.

    2016-03-01

    This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density-temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log-log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ˜ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.
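
The key mathematical point above, that superposing polytropes with a Gaussian distribution of indices bends the log-log density-temperature relation from a straight line into a parabola, can be checked numerically. The index distribution parameters below are illustrative, not the fitted heliosheath values.

```python
import numpy as np

rng = np.random.default_rng(5)
# Gaussian ensemble of polytropic indices (mean and spread are illustrative)
nu = rng.normal(0.2, 0.3, 20_000)

log_rho = np.linspace(-1, 1, 50)       # log10 of density (arbitrary units)
rho = 10.0 ** log_rho
# Superposed relation: average the power laws T ~ rho**nu over the ensemble
T = np.array([np.mean(r ** nu) for r in rho])
log_T = np.log10(T)

# A single polytrope gives a straight line in log-log coordinates; the
# Gaussian superposition gives a parabola (the Gaussian moment-generating
# function is quadratic in log rho). Compare linear and quadratic fits.
lin_res = np.polyfit(log_rho, log_T, 1, full=True)[1][0]
quad_res = np.polyfit(log_rho, log_T, 2, full=True)[1][0]
print("linear-fit residual:", lin_res, " quadratic-fit residual:", quad_res)
```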

  19. On sufficient statistics of least-squares superposition of vector sets.

    PubMed

    Konagurthu, Arun S; Kasarapu, Parthan; Allison, Lloyd; Collier, James H; Lesk, Arthur M

    2015-06-01

    The problem of superposing two corresponding vector sets by minimizing their sum-of-squares error under orthogonal transformation is a fundamental task in many areas of science, notably structural molecular biology. This problem can be solved exactly using an algorithm whose time complexity grows linearly with the number of correspondences. This efficient solution has facilitated the widespread use of the superposition task, particularly in studies involving macromolecular structures. This article formally derives a set of sufficient statistics for the least-squares superposition problem. These statistics are additive. This permits a highly efficient (constant time) computation of the superposition (and sufficient statistics) of a vector set composed from constituent vector sets under addition or deletion operations, where the sufficient statistics of the constituent sets are already known (that is, the constituent vector sets have been previously superposed). This results in a drastic improvement in the run time of methods that commonly superpose vector sets under addition or deletion operations, where previously these operations were carried out ab initio (ignoring the sufficient statistics). We experimentally demonstrate the improvement our work offers in the context of protein structural alignment programs that assemble a reliable structural alignment from well-fitting (substructural) fragment pairs. A C++ library for this task is available online under an open-source license.
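
The additivity idea can be sketched as follows: counts, first moments, and second (cross) moments are accumulated per set, merged in constant time, and the optimal least-squares superposition RMSD is recovered from the merged statistics alone via the standard Kabsch/Umeyama SVD solution. The data are synthetic, and this is a sketch of the general technique, not the article's C++ library.

```python
import numpy as np

def suff_stats(X, Y):
    # Additive sufficient statistics for least-squares superposition
    return dict(n=len(X), sx=X.sum(0), sy=Y.sum(0), sxy=X.T @ Y,
                sxx=(X * X).sum(), syy=(Y * Y).sum())

def combine(a, b):
    # Statistics are additive, so merging two sets is constant time
    return {k: a[k] + b[k] for k in a}

def rmsd_from_stats(s):
    # Kabsch-style solution using only the sufficient statistics
    n = s["n"]
    xm, ym = s["sx"] / n, s["sy"] / n
    C = s["sxy"] - n * np.outer(xm, ym)      # centered cross-covariance
    U, S, Vt = np.linalg.svd(C)
    d = np.sign(np.linalg.det(U @ Vt))       # guard against reflections
    trace = S[0] + S[1] + d * S[2]
    E = (s["sxx"] - n * xm @ xm) + (s["syy"] - n * ym @ ym) - 2 * trace
    return np.sqrt(max(E, 0.0) / n)

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 3))
theta = 0.7                                   # Y: rotated, translated, noisy X
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
Y = X @ R.T + np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.01, (100, 3))

# Superpose the union of two halves by merging their statistics
s = combine(suff_stats(X[:50], Y[:50]), suff_stats(X[50:], Y[50:]))
rmsd = rmsd_from_stats(s)
print("RMSD:", round(rmsd, 4))
```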

  20. Understanding the large-distance behavior of transverse-momentum-dependent parton densities and the Collins-Soper evolution kernel

    DOE PAGES

    Collins, John; Rogers, Ted

    2015-04-01

    There is considerable controversy about the size and importance of non-perturbative contributions to the evolution of transverse momentum dependent (TMD) parton distribution functions. Standard fits to relatively high-energy Drell-Yan data give evolution that, when taken to lower Q, is too rapid to be consistent with recent data in semi-inclusive deeply inelastic scattering. Some authors provide very different forms for TMD evolution, even arguing that non-perturbative contributions at large transverse distance bT are not needed or are irrelevant. Here, we systematically analyze the issues, both perturbative and non-perturbative. We make a motivated proposal for the parameterization of the non-perturbative part of the TMD evolution kernel that could give consistency: with the variety of apparently conflicting data, with theoretical perturbative calculations where they are applicable, and with general theoretical non-perturbative constraints on correlation functions at large distances. We propose and use a scheme- and scale-independent function A(bT) that gives a tool to compare and diagnose different proposals for TMD evolution. We also advocate for phenomenological studies of A(bT) as a probe of TMD evolution. The results are important generally for applications of TMD factorization. In particular, they are important to making predictions for proposed polarized Drell-Yan experiments to measure the Sivers function.
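
Although A(bT) is this paper's own proposal, the standard b* prescription of the Collins-Soper-Sterman formalism illustrates how the perturbative and non-perturbative regions of transverse distance are commonly separated: b* tracks bT at small distances but saturates at bmax, beyond which the non-perturbative part of the kernel takes over. The value of bmax below is a conventional illustrative choice.

```python
import numpy as np

def b_star(bT, bmax=1.5):
    """Standard b* prescription: b* ~ bT for bT << bmax (perturbative
    region) and b* -> bmax for bT >> bmax, so the non-perturbative
    contribution dominates at large transverse distance.
    bmax = 1.5 GeV^-1 is an illustrative conventional choice."""
    return bT / np.sqrt(1.0 + (bT / bmax) ** 2)

bT = np.array([0.1, 0.5, 1.5, 5.0, 50.0])   # GeV^-1
print(np.round(b_star(bT), 3))               # saturates below bmax
```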
