GPU implementation of prior image constrained compressed sensing (PICCS)
NASA Astrophysics Data System (ADS)
Nett, Brian E.; Tang, Jie; Chen, Guang-Hong
2010-04-01
The Prior Image Constrained Compressed Sensing (PICCS) algorithm (Med. Phys. 35, pg. 660, 2008) has been applied to several computed tomography applications with both standard CT systems and flat-panel based systems designed for guiding interventional procedures and radiation therapy treatment delivery. The PICCS algorithm typically utilizes a prior image which is reconstructed via the standard Filtered Backprojection (FBP) reconstruction algorithm. The algorithm then iteratively solves for the image volume that matches the measured data, while simultaneously assuring the image is similar to the prior image. The PICCS algorithm has demonstrated utility in several applications, including improved temporal resolution reconstruction, 4D respiratory phase-specific reconstructions for radiation therapy, and cardiac reconstruction from data acquired on an interventional C-arm. One disadvantage of the PICCS algorithm, as with other iterative algorithms, is the long computation time typically associated with reconstruction. For an algorithm to gain clinical acceptance, reconstruction must be achievable in minutes rather than hours. In this work the PICCS algorithm has been implemented on the GPU in order to significantly reduce the reconstruction time. The Compute Unified Device Architecture (CUDA) was used in this implementation.
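To make the reconstruction objective concrete, here is a minimal NumPy sketch of one descent step on an unconstrained PICCS-style objective, alpha*TV(x - x_prior) + (1 - alpha)*TV(x) plus a least-squares data-fidelity term. This is an illustration only, not the authors' CUDA implementation; the projection matrix A, weights, and step sizes are placeholder assumptions.

```python
import numpy as np

def tv_grad(img, eps=1e-8):
    """Gradient of a smoothed isotropic total-variation term for a 2D image."""
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    mag = np.sqrt(gx**2 + gy**2 + eps)
    dx, dy = gx / mag, gy / mag
    # Negative divergence of the normalized gradient field.
    return -(dx - np.roll(dx, 1, axis=0)) - (dy - np.roll(dy, 1, axis=1))

def piccs_step(x, x_prior, A, b, alpha=0.5, lam=0.1, step=1e-3):
    """One descent step on lam*(alpha*TV(x - x_prior) + (1-alpha)*TV(x))
    plus 0.5*||A x - b||^2, where A is a (rays x pixels) system matrix."""
    data_grad = (A.T @ (A @ x.ravel() - b)).reshape(x.shape)
    reg_grad = alpha * tv_grad(x - x_prior) + (1 - alpha) * tv_grad(x)
    return x - step * (data_grad + lam * reg_grad)
```

In the paper's setting, the expensive pieces (forward projection A and backprojection A.T) are the operations mapped onto the GPU.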
Monte Carlo Simulations of Radiative and Neutrino Transport under Astrophysical Conditions
NASA Astrophysics Data System (ADS)
Krivosheyev, Yu. M.; Bisnovatyi-Kogan, G. S.
2018-05-01
Monte Carlo simulations are utilized to model radiative and neutrino transfer in astrophysics. An algorithm that can be used to study radiative transport in astrophysical plasma, based on simulations of photon trajectories in a medium, is described. Formation of the hard X-ray spectrum of the Galactic microquasar SS 433 is considered in detail as an example. Specific requirements for applying such simulations to neutrino transport in a dense medium, and the algorithmic differences compared to photon transport, are discussed.
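As an illustrative aside, the core of such trajectory sampling can be shown in a few lines: sample an exponential free path, then scatter or absorb. This toy slab model (homogeneous medium, isotropic scattering) is only a sketch of the general technique, not the authors' SS 433 code.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate_photon(sigma_total, albedo, slab_depth):
    """Follow one photon through a homogeneous slab. Returns 'transmitted',
    'reflected', or 'absorbed'. sigma_total is the total interaction
    coefficient [1/cm]; albedo is the scattering probability per interaction."""
    z, mu = 0.0, 1.0  # depth and direction cosine; photon enters at z = 0
    while True:
        z += mu * rng.exponential(1.0 / sigma_total)  # sampled free path
        if z >= slab_depth:
            return "transmitted"
        if z < 0.0:
            return "reflected"
        if rng.random() > albedo:
            return "absorbed"
        mu = rng.uniform(-1.0, 1.0)  # isotropic scattering (toy choice)

counts = {"transmitted": 0, "reflected": 0, "absorbed": 0}
for _ in range(10_000):
    counts[propagate_photon(sigma_total=2.0, albedo=0.9, slab_depth=1.0)] += 1
print(counts)
```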
Cloud Properties and Radiative Heating Rates for TWP
Comstock, Jennifer
2013-11-07
A cloud properties and radiative heating rates dataset is presented in which cloud properties retrieved from lidar and radar observations are input into a radiative transfer model to compute radiative fluxes and heating rates at three ARM sites located in the Tropical Western Pacific (TWP) region. The cloud properties retrieval is a conditional retrieval that applies different techniques depending on the available data, that is, whether lidar, radar, or both instruments detect cloud. This Combined Remote Sensor Retrieval Algorithm (CombRet) produces vertical profiles of liquid or ice water content (LWC or IWC), droplet effective radius (re), ice crystal generalized effective size (Dge), cloud phase, and cloud boundaries. The algorithm was compared with three other independent algorithms to help estimate the uncertainty in the cloud properties, fluxes, and heating rates (Comstock et al. 2013). The dataset is provided at 2-min temporal and 90-m vertical resolution. The current dataset is applied to time periods when the MMCR (Millimeter Cloud Radar) version of the ARSCL (Active Remotely-Sensed Cloud Locations) Value Added Product (VAP) is available. The MERGESONDE VAP is utilized where temperature and humidity profiles are required. Future additions to this dataset will utilize the new KAZR instrument and its associated VAPs.
Satellite Imagery Analysis for Nighttime Temperature Inversion Clouds
NASA Technical Reports Server (NTRS)
Kawamoto, K.; Minnis, P.; Arduini, R.; Smith, W., Jr.
2001-01-01
Clouds play important roles in the climate system. Their optical and microphysical properties, which largely determine their radiative properties, need to be investigated. Among several measurement means, satellite remote sensing appears to be the most promising. Since most of the cloud algorithms proposed so far are for daytime use and rely on solar radiation, Minnis et al. (1998) developed a nighttime algorithm using the 3.7-, 11- and 12-micron channels. Their algorithm, however, has the drawback that it is not able to treat temperature inversion cases. We update their algorithm, incorporating a new parameterization by Arduini et al. (1999) which is valid for temperature inversion cases. This updated algorithm has been applied to GOES satellite data and reasonable retrieval results were obtained.
SMMR Simulator radiative transfer calibration model. 2: Algorithm development
NASA Technical Reports Server (NTRS)
Link, S.; Calhoon, C.; Krupp, B.
1980-01-01
Passive microwave measurements performed from Earth orbit can be used to provide global data on a wide range of geophysical and meteorological phenomena. A Scanning Multichannel Microwave Radiometer (SMMR) is being flown on the Nimbus-G satellite. The SMMR Simulator duplicates the frequency bands utilized in the spacecraft instruments through an amalgam of radiometer systems. The algorithm developed utilizes data from the fall 1978 NASA CV-990 Nimbus-G underflight test series and subsequent laboratory testing.
Radiation detector device for rejecting and excluding incomplete charge collection events
Bolotnikov, Aleksey E.; De Geronimo, Gianluigi; Vernon, Emerson; Yang, Ge; Camarda, Giuseppe; Cui, Yonggang; Hossain, Anwar; Kim, Ki Hyun; James, Ralph B.
2016-05-10
A radiation detector device is provided that is capable of distinguishing between full charge collection (FCC) events and incomplete charge collection (ICC) events based upon a correlation value comparison algorithm that compares correlation values calculated for individually sensed radiation detection events with a calibrated FCC event correlation function. The calibrated FCC event correlation function serves as a reference curve utilized by a correlation value comparison algorithm to determine whether a sensed radiation detection event fits the profile of the FCC event correlation function within the noise tolerances of the radiation detector device. If the radiation detection event is determined to be an ICC event, then the spectrum for the ICC event is rejected and excluded from inclusion in the radiation detector device spectral analyses. The radiation detector device also can calculate a performance factor to determine the efficacy of distinguishing between FCC and ICC events.
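A toy sketch of the acceptance test the abstract describes follows: an event is kept only if its measured value lies within the detector's noise tolerance of a calibrated FCC correlation curve. The curve fit, noise model, and event tuple here are invented for illustration; the patent does not publish its specific functional forms.

```python
import numpy as np

def is_fcc_event(amplitude, drift_time, fcc_curve, noise_sigma, n_sigma=3.0):
    """Accept an event as full charge collection (FCC) when its amplitude
    lies within n_sigma noise tolerances of the calibrated FCC correlation
    curve evaluated at the event's drift time. Names are illustrative."""
    expected = fcc_curve(drift_time)
    return abs(amplitude - expected) <= n_sigma * noise_sigma

# Toy calibration: FCC amplitude falls slowly with drift time.
fcc_curve = np.poly1d(np.polyfit([0, 1, 2], [1.00, 0.98, 0.95], deg=2))
events = [(0.99, 0.5), (0.80, 0.5)]          # (amplitude, drift_time)
kept = [e for e in events if is_fcc_event(*e, fcc_curve, noise_sigma=0.01)]
print(kept)   # the 0.80 event is rejected as incomplete charge collection
```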
NASA Technical Reports Server (NTRS)
Zhou, Yaping; Kratz, David P.; Wilber, Anne C.; Gupta, Shashi K.; Cess, Robert D.
2006-01-01
Retrieving surface longwave radiation from space has been a difficult task since the surface downwelling longwave radiation (SDLW) is an integration of radiation emitted by the entire atmosphere, and radiation emitted from the upper atmosphere is absorbed before reaching the surface. It is particularly problematic when thick clouds are present, since thick clouds virtually block all longwave radiation from above, while satellites observe atmospheric emissions mostly from above the clouds. Zhou and Cess developed an algorithm for retrieving SDLW based upon detailed studies using radiative transfer model calculations and surface radiometric measurements. Their algorithm linked clear-sky SDLW with surface upwelling longwave flux and column precipitable water vapor. For cloudy-sky cases, they used cloud liquid water path as an additional parameter to account for the effects of clouds. Despite the simplicity of their algorithm, it performed very well for most geographical regions except for those where the atmospheric conditions near the surface tend to be extremely cold and dry. Systematic errors were also found for areas that were covered with ice clouds. An improved version of the algorithm was developed that prevents the large errors in the SDLW at low water vapor amounts. The new algorithm also utilizes cloud fraction and cloud liquid and ice water paths measured from the Clouds and the Earth's Radiant Energy System (CERES) satellites to separately compute the clear and cloudy portions of the fluxes. The new algorithm has been validated against surface measurements at 29 stations around the globe for the Terra and Aqua satellites. The results show significant improvement over the original version. The revised Zhou-Cess algorithm is also slightly better than or comparable to more sophisticated algorithms currently implemented in the CERES processing. It will be incorporated in the CERES project as one of the empirical surface radiation algorithms.
NASA Technical Reports Server (NTRS)
Zhou, Yaping; Kratz, David P.; Wilber, Anne C.; Gupta, Shashi K.; Cess, Robert D.
2007-01-01
Zhou and Cess [2001] developed an algorithm for retrieving surface downwelling longwave radiation (SDLW) based upon detailed studies using radiative transfer model calculations and surface radiometric measurements. Their algorithm linked clear-sky SDLW with surface upwelling longwave flux and column precipitable water vapor. For cloudy-sky cases, they used cloud liquid water path as an additional parameter to account for the effects of clouds. Despite the simplicity of their algorithm, it performed very well for most geographical regions except for those where the atmospheric conditions near the surface tend to be extremely cold and dry. Systematic errors were also found for scenes that were covered with ice clouds. An improved version of the algorithm prevents the large errors in the SDLW at low water vapor amounts by taking into account that under such conditions the SDLW and water vapor amount are nearly linear in their relationship. The new algorithm also utilizes cloud fraction and cloud liquid and ice water paths available from the Clouds and the Earth's Radiant Energy System (CERES) single scanner footprint (SSF) product to separately compute the clear and cloudy portions of the fluxes. The new algorithm has been validated against surface measurements at 29 stations around the globe for the Terra and Aqua satellites. The results show significant improvement over the original version. The revised Zhou-Cess algorithm is also slightly better than or comparable to more sophisticated algorithms currently implemented in the CERES processing and will be incorporated as one of the CERES empirical surface radiation algorithms.
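To illustrate the structure of such a parameterization (clear-sky regression on upwelling flux and water vapor, plus a cloud term weighted by cloud fraction), here is a hedged sketch. The coefficients A0, A1, A2, B0 are placeholders, not the published Zhou-Cess fits; only the functional shape is intended to echo the abstract.

```python
import numpy as np

# Illustrative coefficients only -- the published Zhou-Cess fits differ.
A0, A1, A2 = -40.0, 0.85, 30.0     # clear-sky regression placeholders
B0 = 0.08                          # cloud water-path sensitivity placeholder

def sdlw(flux_up, pwv, lwp, iwp, cloud_frac):
    """Surface downwelling LW flux [W/m^2] in the spirit of the revised
    scheme: a clear-sky regression on surface upwelling flux and column
    water vapor, plus a cloudy-sky term weighted by cloud fraction.
    pwv in cm, water paths in g/m^2."""
    clear = A0 + A1 * flux_up + A2 * np.log1p(pwv)  # near-linear at low pwv
    cloudy = clear + B0 * (lwp + iwp) * (1.0 - np.exp(-pwv))
    return (1.0 - cloud_frac) * clear + cloud_frac * cloudy

print(sdlw(flux_up=400.0, pwv=2.5, lwp=80.0, iwp=10.0, cloud_frac=0.6))
```

The log1p form keeps the clear-sky term nearly linear at low water vapor amounts, which is exactly the regime the revised algorithm was built to handle.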
NASA Astrophysics Data System (ADS)
Shang, J. S.; Andrienko, D. A.; Huang, P. G.; Surzhikov, S. T.
2014-06-01
An efficient computational capability for nonequilibrium radiation simulation via the ray tracing technique has been accomplished. The radiative rate equation is iteratively coupled with the aerodynamic conservation laws, including nonequilibrium chemical and chemical-physical kinetic models. The spectral properties along tracing rays are determined by a space partition algorithm of the nearest-neighbor search process, and the numerical accuracy is further enhanced by a local resolution refinement using the Gauss-Lobatto polynomial. The interdisciplinary governing equations are solved by an implicit delta formulation through the diminishing residual approach. The axisymmetric radiating flow fields over the reentry RAM-CII probe have been simulated and verified with flight data and previous solutions by traditional methods. A computational efficiency gain of nearly forty times is realized over that of the existing simulation procedures.
NASA Technical Reports Server (NTRS)
Yost, Christopher R.; Minnis, Patrick; Trepte, Qing Z.; Palikonda, Rabindra; Ayers, Jeffrey K.; Spangenberg, Douglas A.
2012-01-01
With geostationary satellite data it is possible to have a continuous record of the diurnal cycles of cloud properties for a large portion of the globe. Daytime cloud property retrieval algorithms are typically superior to nighttime algorithms because daytime methods utilize measurements of reflected solar radiation. However, reflected solar radiation is difficult to model accurately at high solar zenith angles, where the amount of incident radiation is small. Clear and cloudy scenes can exhibit very small differences in reflected radiation, and threshold-based cloud detection methods have more difficulty setting the proper thresholds for accurate cloud detection. Because top-of-atmosphere radiances are typically more accurately modeled outside the terminator region, information from previous scans can help guide cloud detection near the terminator. This paper presents an algorithm that uses cloud fraction and clear and cloudy infrared brightness temperatures from previous satellite scan times to improve the performance of a threshold-based cloud mask near the terminator. Comparisons of daytime, nighttime, and terminator cloud fraction derived from Geostationary Operational Environmental Satellite (GOES) radiance measurements show that the algorithm greatly reduces the number of false cloud detections and smooths the transition from the daytime to the nighttime cloud detection algorithm. Comparisons with Geoscience Laser Altimeter System (GLAS) data show that using this algorithm decreases the number of false detections by approximately 20 percentage points.
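A minimal sketch of the idea follows: classify terminator pixels from infrared brightness temperature alone, with the threshold anchored to clear and cloudy temperatures remembered from earlier scans and biased by the prior cloud fraction. The threshold logic and constants are invented for illustration and are not the paper's actual mask.

```python
import numpy as np

def terminator_cloud_mask(bt11, prior_clear_bt, prior_cloud_bt,
                          prior_cloud_frac, k=0.5):
    """Classify pixels near the terminator from 11-um brightness
    temperature, guided by clear/cloudy temperatures from previous scans.
    Where the prior cloud fraction was high, the threshold relaxes toward
    the cloudy side. Purely illustrative logic."""
    thresh = prior_clear_bt - k * (prior_clear_bt - prior_cloud_bt)
    thresh = thresh + 2.0 * prior_cloud_frac      # bias toward cloudy [K]
    return bt11 < thresh                          # True = cloudy

bt11 = np.array([285.0, 270.0, 292.0])
mask = terminator_cloud_mask(bt11, prior_clear_bt=290.0,
                             prior_cloud_bt=255.0,
                             prior_cloud_frac=np.array([0.1, 0.8, 0.0]))
print(mask)   # -> [False  True False]
```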
Chapter 13. Exploring Use of the Reserved Core
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holmen, John; Humphrey, Alan; Berzins, Martin
2015-07-29
In this chapter, we illustrate the benefits of thinking in terms of thread management techniques when using a centralized scheduler model along with interoperability of MPI and PThreads. This is facilitated through an exploration of thread placement strategies for an algorithm modeling radiative heat transfer, with special attention to the 61st core. This algorithm plays a key role within the Uintah Computational Framework (UCF) and current efforts taking place at the University of Utah to model next-generation, large-scale clean coal boilers. In such simulations, this algorithm models the dominant form of heat transfer and consumes a large portion of compute time. Exemplified by a real-world example, this chapter presents our early efforts in porting a key portion of a scalability-centric codebase to the Intel Xeon Phi coprocessor. Specifically, this chapter presents results from our experiments profiling the native execution of a reverse Monte-Carlo ray tracing-based radiation model on a single coprocessor. These results demonstrate that our fastest run configurations utilized the 61st core and that performance was not profoundly impacted when explicitly oversubscribing the coprocessor operating system thread. Additionally, this chapter presents a portion of the radiation model source code, a MIC-centric UCF cross-compilation example, and a less conventional thread management technique for developers utilizing the PThreads threading model.
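The chapter's code is C++/PThreads within Uintah; as a language-neutral sketch of explicit thread placement, the following Python pins each worker thread to a chosen core, including the last ("reserved") core. It assumes Linux, where `os.sched_setaffinity` with pid 0 applies to the calling thread; the core list and workload are placeholders.

```python
import os
import threading

def pinned_worker(core_id, work):
    """Pin the calling thread to one core, then run its share of the work.
    On Linux, pid 0 in sched_setaffinity refers to the calling thread."""
    os.sched_setaffinity(0, {core_id})
    work(core_id)

def work(core_id):
    s = sum(i * i for i in range(10**6))   # stand-in for ray-tracing work
    print(f"core {core_id}: done ({s % 97})")

ncores = os.cpu_count()
threads = [threading.Thread(target=pinned_worker, args=(c, work))
           for c in range(ncores)]          # includes the last, 'reserved' core
for t in threads:
    t.start()
for t in threads:
    t.join()
```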
A new scanning device in CT with dose reduction potential
NASA Astrophysics Data System (ADS)
Tischenko, Oleg; Xu, Yuan; Hoeschen, Christoph
2006-03-01
The amount of x-ray radiation currently applied in CT practice is not utilized optimally. A portion of the radiation traversing the patient is either not detected at all or is used ineffectively. The reason lies partly in the reconstruction algorithms and partly in the geometry of the CT scanners designed specifically for these algorithms. In fact, the reconstruction methods widely used in CT are intended to invert data that correspond to ideal straight lines. However, the collection of such data is often not accurate due to the likely movement of the source/detector system of the scanner in the time interval during which all the detectors are read. In this paper, a new design of the scanner geometry is proposed that is immune to the movement of the CT system and will collect all radiation traversing the patient. The proposed scanning design has the potential to reduce the patient dose by a factor of two. Furthermore, it can be used with existing reconstruction algorithms and is particularly suitable for OPED, a new robust reconstruction algorithm.
Introductory Tools for Radiative Transfer Models
NASA Astrophysics Data System (ADS)
Feldman, D.; Kuai, L.; Natraj, V.; Yung, Y.
2006-12-01
Satellite data are currently so voluminous that, despite their unprecedented quality and potential for scientific application, only a small fraction is analyzed, due to two factors: researchers' computational constraints and the relatively small number of researchers actively utilizing the data. Ultimately it is hoped that the terabytes of unanalyzed data being archived can receive scientific scrutiny, but this will require a popularization of the methods associated with the analysis. Since a large portion of the complexity is associated with the proper implementation of the radiative transfer model, it is reasonable and appropriate to make the model as accessible as possible to general audiences. Unfortunately, the algorithmic and conceptual details that are necessary for state-of-the-art analysis also tend to frustrate accessibility for those new to remote sensing. Several efforts have been made to offer web-based radiative transfer calculations, and these are useful for limited calculations, but analysis of more than a few spectra requires the utilization of home- or server-based computing resources. We present a system that is designed to allow easier access to radiative transfer models, with implementation on a home computing platform, in the hope that this system can be utilized and expanded upon in advanced high school and introductory college settings. This learning-by-doing process is aided through the use of several powerful tools. The first is a Wikipedia-style introduction to the salient features of radiative transfer that references the seminal works in the field and refers to more complicated calculations and algorithms sparingly. The second feature is a technical forum, commonly referred to as a tiki-wiki, that addresses technical and conceptual questions through public postings, private messages, and a ranked searching routine. Together, these tools may be able to facilitate greater interest in the field of remote sensing.
Detection of alpha radiation in a beta radiation field
Mohagheghi, Amir H.; Reese, Robert P.
2001-01-01
An apparatus and method for detecting alpha particles in the presence of high activities of beta particles utilizing an alpha spectrometer. The apparatus of the present invention utilizes a magnetic field applied around the sample in an alpha spectrometer to deflect the beta particles from the sample prior to reaching the detector, thus permitting detection of low concentrations of alpha particles. In the method of the invention, the strength of the magnetic field required to adequately deflect the beta particles and permit alpha particle detection is given by an algorithm that controls the field strength as a function of sample beta energy and the distance of the sample to the detector.
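The patent does not publish its algorithm, but a textbook gyroradius criterion conveys the physics: require the beta particle's radius of curvature in the field to be at most half the sample-to-detector distance so it curls away before arrival. The following sketch uses that assumed criterion only.

```python
import math

M_E = 0.511                      # electron rest energy [MeV]
C = 299_792_458.0                # speed of light [m/s]
E_CHARGE = 1.602176634e-19       # elementary charge [C]
MEV_TO_J = 1.602176634e-13

def b_field_to_deflect(beta_energy_mev, distance_m):
    """Illustrative field-strength criterion (not the patented algorithm):
    make the beta gyroradius at most half the sample-to-detector distance.
    Returns the minimum field in tesla."""
    # Relativistic momentum: p*c = sqrt(T^2 + 2*T*m*c^2), in MeV.
    pc_mev = math.sqrt(beta_energy_mev**2 + 2.0 * beta_energy_mev * M_E)
    p_si = pc_mev * MEV_TO_J / C          # momentum in kg*m/s
    r_max = distance_m / 2.0
    return p_si / (E_CHARGE * r_max)      # B = p / (q*r)

print(f"{b_field_to_deflect(2.0, 0.05):.3f} T")  # ~0.33 T for 2-MeV betas at 5 cm
```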
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata; Rao, Nageswara S; Wu, Qishi
There have been increasingly large deployments of radiation detection networks that require computationally fast algorithms to produce prompt results over ad-hoc sub-networks of mobile devices, such as smart phones. These algorithms are in sharp contrast to complex network algorithms that necessitate all measurements being sent to powerful central servers. In this work, at individual sensors, we employ Wald-statistic-based detection algorithms, which are computationally very fast and are implemented as one of three Z-tests and four chi-square tests. At the fusion center, we apply K-out-of-N fusion to combine the sensors' hard decisions. We characterize the performance of the detection methods by deriving analytical expressions for the distributions of the underlying test statistics, and by analyzing the fusion performance in terms of K, N, and the false-alarm rates of the individual detectors. We experimentally validate our methods using measurements from indoor and outdoor characterization tests of the Intelligence Radiation Sensors Systems (IRSS) program. In particular, utilizing the outdoor measurements, we construct two important real-life scenarios, boundary surveillance and portal monitoring, and present the results of our algorithms.
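A minimal sketch of the two-stage scheme, assuming a simple one-sided Z-test at each sensor and i.i.d. per-sensor false-alarm rates at the fusion center (the paper's actual statistics and thresholds differ):

```python
from math import comb, sqrt

def z_test_decision(counts, bkg_mean, bkg_var, z_alpha=3.0):
    """One-sided Z-test at a single sensor: declare 'source present' when
    the measured count exceeds the background mean by z_alpha sigmas."""
    return (counts - bkg_mean) / sqrt(bkg_var) > z_alpha

def k_of_n_fusion(decisions, k):
    """Fusion center: alarm if at least k of the n hard decisions are 1."""
    return sum(decisions) >= k

def system_false_alarm(p_fa, n, k):
    """False-alarm rate of K-out-of-N fusion for i.i.d. per-sensor p_fa."""
    return sum(comb(n, i) * p_fa**i * (1 - p_fa)**(n - i)
               for i in range(k, n + 1))

decisions = [z_test_decision(c, 100.0, 100.0) for c in (112, 131, 150, 118)]
print(k_of_n_fusion(decisions, k=2), system_false_alarm(0.01, n=4, k=2))
```

The binomial tail makes the K-vs-N trade-off explicit: raising K suppresses network false alarms at the cost of detection sensitivity.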
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Dean J.; Harding, Lee T.
Isotope identification algorithms that are contained in the Gamma Detector Response and Analysis Software (GADRAS) can be used for real-time stationary measurement and search applications on platforms operating under Linux or Android operating systems. Since the background radiation can vary considerably due to variations in naturally-occurring radioactive materials (NORM), spectral algorithms can be substantially more sensitive to threat materials than search algorithms based strictly on count rate. Specific isotopes of interest can be designated for the search algorithm, which permits suppression of alarms for non-threatening sources, such as medical radionuclides. The same isotope identification algorithms that are used for search applications can also be used to process static measurements. The isotope identification algorithms follow the same protocols as those used by the Windows version of GADRAS, so files that are created under the Windows interface can be copied directly to processors on fielded sensors. The analysis algorithms contain provisions for gain adjustment and energy linearization, which enables direct processing of spectra as they are recorded by multichannel analyzers. Gain compensation is performed by utilizing photopeaks in background spectra. Incorporation of this energy calibration task into the analysis algorithm also eliminates one of the more difficult challenges associated with the development of radiation detection equipment.
ERBE Geographic Scene and Monthly Snow Data
NASA Technical Reports Server (NTRS)
Coleman, Lisa H.; Flug, Beth T.; Gupta, Shalini; Kizer, Edward A.; Robbins, John L.
1997-01-01
The Earth Radiation Budget Experiment (ERBE) is a multisatellite system designed to measure the Earth's radiation budget. The ERBE data processing system consists of several software packages or subsystems, each designed to perform a particular task. The primary task of the Inversion Subsystem is to reduce satellite-altitude radiances to fluxes at the top of the Earth's atmosphere. To accomplish this, angular distribution models (ADMs) are required. These ADMs are a function of viewing and solar geometry and of the scene type as determined by the ERBE scene identification algorithm, which is part of the Inversion Subsystem. The Inversion Subsystem utilizes 12 scene types, each found by combining the most probable cloud cover, determined statistically by the scene identification algorithm, with the underlying geographic scene type. This Contractor Report describes how the geographic scene type is determined on a monthly basis.
Use of MODIS Sensor Images Combined with Reanalysis Products to Retrieve Net Radiation in Amazonia
de Oliveira, Gabriel; Brunsell, Nathaniel A.; Moraes, Elisabete C.; Bertani, Gabriel; dos Santos, Thiago V.; Shimabukuro, Yosio E.; Aragão, Luiz E. O. C.
2016-01-01
In the Amazon region, the estimation of radiation fluxes through remote sensing techniques is hindered by the lack of ground measurements required as input to the models, as well as the difficulty of obtaining cloud-free images. Here, we assess an approach to estimate net radiation (Rn) and its components under all-sky conditions for the Amazon region through the Surface Energy Balance Algorithm for Land (SEBAL) model, utilizing only remote sensing and reanalysis data. The study period comprised six years, between January 2001 and December 2006, and images from the MODIS sensor aboard the Terra satellite and GLDAS reanalysis products were utilized. The estimates were evaluated with flux tower measurements within the Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) project. Comparison between estimates obtained by the proposed method and observations from LBA towers showed errors between 12.5% and 16.4% for instantaneous Rn and between 11.3% and 15.9% for daily Rn. Our approach was adequate to minimize the problem of strong cloudiness over the region and allowed consistent mapping of the spatial distribution of net radiation components in Amazonia. We conclude that the integration of reanalysis products and satellite data, eliminating the need for surface measurements as model input, is a useful proposition for the spatialization of radiation fluxes in the Amazon region, which may serve as input to algorithms that aim to determine evapotranspiration, the most important component of the Amazon hydrological balance.
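The surface radiation balance that SEBAL-type models evaluate can be written in a few lines. The following sketch uses the standard form Rn = (1 - albedo)*Rs_down + RL_down - RL_up - (1 - emissivity)*RL_down with Stefan-Boltzmann longwave terms; the numeric inputs are placeholders standing in for MODIS retrievals and GLDAS reanalysis fields.

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]

def net_radiation(rs_down, albedo, t_air_k, t_surf_k, emis_air, emis_surf):
    """Instantaneous net radiation as balanced in SEBAL-type models:
    Rn = (1 - albedo)*Rs_down + RL_down - RL_up - (1 - emis_surf)*RL_down.
    Longwave terms follow Stefan-Boltzmann."""
    rl_down = emis_air * SIGMA * t_air_k**4
    rl_up = emis_surf * SIGMA * t_surf_k**4
    return ((1.0 - albedo) * rs_down + rl_down - rl_up
            - (1.0 - emis_surf) * rl_down)

print(f"{net_radiation(800.0, 0.15, 298.0, 303.0, 0.85, 0.98):.1f} W/m^2")
```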
NASA Astrophysics Data System (ADS)
Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.
2016-12-01
Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images, leading to high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reducing reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for the reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative performance assessment of a synthetic head phantom and a femoral cortical bone sample, imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source, demonstrates that the proposed algorithm is superior to existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time, which improves in vivo imaging protocols.
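As a simplified stand-in for the authors' Douglas-Rachford-plus-wavelet-shrinkage scheme, the following proximal-gradient sketch alternates a gradient step on the data-fidelity term with soft-thresholding of one-level Haar wavelet coefficients. The Haar transform, step sizes, and threshold are illustrative assumptions (image dimensions must be even).

```python
import numpy as np

def haar2(x):
    """One-level 2D orthonormal Haar transform (even-sized images)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    def cols(m):
        return ((m[:, 0::2] + m[:, 1::2]) / np.sqrt(2),
                (m[:, 0::2] - m[:, 1::2]) / np.sqrt(2))
    (aa, ad), (da, dd) = cols(a), cols(d)
    return np.block([[aa, ad], [da, dd]])

def ihaar2(c):
    """Inverse of haar2."""
    h, w = (s // 2 for s in c.shape)
    aa, ad, da, dd = c[:h, :w], c[:h, w:], c[h:, :w], c[h:, w:]
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = (aa + ad) / np.sqrt(2), (aa - ad) / np.sqrt(2)
    d[:, 0::2], d[:, 1::2] = (da + dd) / np.sqrt(2), (da - dd) / np.sqrt(2)
    x = np.empty((2 * h, 2 * w))
    x[0::2], x[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return x

def soft(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def reconstruct(A, b, shape, n_iter=100, step=1e-3, thresh=0.01):
    """Sparse-view reconstruction sketch: gradient step on ||Ax - b||^2,
    then shrink wavelet coefficients as the denoising step."""
    x = np.zeros(shape)
    for _ in range(n_iter):
        x -= step * (A.T @ (A @ x.ravel() - b)).reshape(shape)
        x = ihaar2(soft(haar2(x), thresh))
    return x
```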
Arbuthnot, Mary; Mooney, David P
2017-01-01
It is crucial to identify cervical spine injuries while minimizing ionizing radiation. This study analyzes the sensitivity and negative predictive value of a pediatric cervical spine clearance algorithm. We performed a retrospective review of all children <21 years old who were admitted following blunt trauma and underwent cervical spine clearance utilizing our institution's cervical spine clearance algorithm over a 10-year period. Age, gender, International Classification of Diseases 9th Edition diagnosis codes, presence or absence of cervical collar on arrival, Injury Severity Score, and type of cervical spine imaging obtained were extracted from the trauma registry and electronic medical record. Descriptive statistics were used, and the sensitivity and negative predictive value of the algorithm were calculated. Approximately 125,000 children were evaluated in the Emergency Department and 11,331 were admitted. Of the admitted children, 1023 patients arrived in a cervical collar without advanced cervical spine imaging and were evaluated using the cervical spine clearance algorithm. Algorithm sensitivity was 94.4% and the negative predictive value was 99.9%. There was one missed injury, a spinous process tip fracture in a teenager maintained in a collar. Our algorithm was associated with a low missed injury rate and a low CT utilization rate, even in children <3 years old. Level of evidence: IV.
NASA Astrophysics Data System (ADS)
Han, Xiao; Pearson, Erik; Pelizzari, Charles; Al-Hallaq, Hania; Sidky, Emil Y.; Bian, Junguo; Pan, Xiaochuan
2015-06-01
A kilovoltage (kV) cone-beam computed tomography (CBCT) unit mounted onto a linear accelerator treatment system, often referred to as an on-board imager (OBI), plays an increasingly important role in image-guided radiation therapy. While the FDK algorithm is currently used for reconstructing images from clinical OBI data, optimization-based reconstruction has also been investigated for OBI CBCT. An optimization-based reconstruction involves numerous parameters, which can significantly impact reconstruction properties (or utility). The success of an optimization-based reconstruction for a particular class of practical applications thus relies strongly on appropriate selection of parameter values. In this work, we focus on tailoring constrained-TV-minimization-based reconstruction, an optimization-based reconstruction previously shown to have potential for CBCT imaging conditions of practical interest, to OBI imaging through appropriate selection of parameter values. In particular, for real data of phantoms and a patient collected with OBI CBCT, we first devise utility metrics specific to OBI quality-assurance tasks and then apply them to guide the selection of parameter values in constrained-TV-minimization-based reconstruction. The study results show that the reconstructions improve upon clinical FDK reconstruction in both visual and quantitative assessments in terms of the devised utility metrics.
Farahmand, Farid; Khadivi, Kevin O.; Rodrigues, Joel J. P. C.
2009-01-01
The utility of a novel, high-precision, non-intrusive, wireless, accelerometer-based patient orientation monitoring system (APOMS) in determining orientation change in patients undergoing radiation treatment is reported here. Using this system, a small wireless accelerometer sensor is placed on a patient's skin, broadcasting its orientation to the receiving station connected to a PC in the control area. A threshold-based algorithm is developed to identify the exact amount of the patient's head orientation change. Through real-time measurements, an audible alarm can alert the radiation therapist if the user-defined orientation threshold is violated. Our results indicate that, in spite of its low cost and simplicity, the APOMS is highly sensitive and offers accurate measurements. Furthermore, the APOMS is patient friendly, vendor neutral, and requires minimal user training. The versatile architecture of the APOMS makes it potentially suitable for a variety of applications, including the study of correlation between external and internal markers during Image-Guided Radiation Therapy (IGRT), with no major changes in hardware setup or algorithm.
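The threshold rule can be sketched directly: capture a reference gravity vector at patient setup, then alarm when the angle between the current and reference accelerometer readings exceeds the user-defined threshold. The vectors and threshold below are illustrative, not values from the paper.

```python
import math

def angle_between_deg(v1, v2):
    """Angle between two acceleration (gravity) vectors, in degrees."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def check_orientation(reference, current, threshold_deg=2.0):
    """Threshold rule in the spirit of the APOMS: alarm when the head
    orientation, inferred from the accelerometer's gravity vector,
    drifts more than threshold_deg from the setup reference."""
    drift = angle_between_deg(reference, current)
    return drift, drift > threshold_deg

reference = (0.02, -0.01, 1.00)          # captured at patient setup [g]
drift, alarm = check_orientation(reference, (0.05, 0.01, 0.996))
print(f"drift = {drift:.2f} deg, alarm = {alarm}")
```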
Computing NLTE Opacities -- Node Level Parallel Calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holladay, Daniel
Presentation. The goal: to produce a robust library capable of computing reasonably accurate opacities in-line, with the assumption of LTE relaxed (non-LTE). Near term: demonstrate acceleration of non-LTE opacity computation. Far term (if funded): connect to application codes with in-line capability and compute opacities; study science problems. Use efficient algorithms that expose many levels of parallelism and utilize good memory access patterns for use on advanced architectures. Portability to multiple types of hardware, including multicore processors, manycore processors such as KNL, GPUs, etc. Easily coupled to radiation hydrodynamics and thermal radiative transfer codes.
Detection with Enhanced Energy Windowing Phase I Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bass, David A.; Enders, Alexander L.
2016-12-01
This document reviews the progress of Phase I of the Detection with Enhanced Energy Windowing (DEEW) project. The DEEW project is the implementation of software incorporating an algorithm which reviews data generated by radiation portal monitors and utilizes advanced and novel techniques for detecting radiological and fissile material while not alarming on Naturally Occurring Radioactive Material. Independent testing indicated that the Enhanced Energy Windowing algorithm showed promise at reducing the probability of alarm in the stream of commerce compared to existing algorithms and other developmental algorithms, while still maintaining adequate sensitivity to threats. This document contains a brief description of the project, instructions for setting up and running the applications, and guidance to help make reviewing the output files and source code easier.
New methodology for adjusting rotating shadowband irradiometer measurements
NASA Astrophysics Data System (ADS)
Vignola, Frank; Peterson, Josh; Wilbert, Stefan; Blanc, Philippe; Geuder, Norbert; Kern, Chris
2017-06-01
A new method is developed for correcting systematic errors found in rotating shadowband irradiometer (RSI) measurements. Since the responsivity of the photodiode-based pyranometers typically utilized in RSI sensors depends on the wavelength of the incident radiation, and the spectral distribution of the incident radiation differs between the Direct Normal Irradiance and the Diffuse Horizontal Irradiance, spectral effects have to be considered. These cause the most problematic errors when applying currently available correction functions to RSI measurements. Hence, direct normal and diffuse contributions are analyzed and modeled separately. An additional advantage of this methodology is that it provides a prescription for how to modify the adjustment algorithms for locations with atmospheric characteristics different from those of the location where the calibration and adjustment algorithms were developed. A summary of results and areas for future efforts are then discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
KL Gaustad; DD Turner
2007-09-30
This report provides a short description of the Atmospheric Radiation Measurement (ARM) microwave radiometer (MWR) RETrieval (MWRRET) Value-Added Product (VAP) algorithm. This algorithm utilizes complementary physical and statistical retrieval methods and applies brightness temperature offsets to reduce spurious liquid water path (LWP) bias in clear skies, resulting in significantly improved precipitable water vapor (PWV) and LWP retrievals. We present a general overview of the technique, input parameters, and output products, and describe data quality checks. A more complete discussion of the theory and results is given in Turner et al. (2007b).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krzyżanowska, A.; Deptuch, G. W.; Maj, P.
This paper presents the detailed characterization of a single photon counting chip, named CHASE Jr., built in a CMOS 40-nm process, operating with synchrotron radiation. The chip utilizes an on-chip implementation of the C8P1 algorithm. The algorithm eliminates the charge-sharing-related uncertainties, namely, the dependence of the number of registered photons on the discriminator's threshold, set for monochromatic irradiation, and errors in the assignment of an event to a certain pixel. The article presents a short description of the algorithm as well as the architecture of the CHASE Jr. chip. The analog and digital functionalities allowing for proper operation of the C8P1 algorithm are described, namely, an offset correction for two discriminators independently, two-stage gain correction, and different operation modes of the digital blocks. The results of tests of C8P1 operation are presented for the chip bump-bonded to a silicon sensor and exposed to the 3.5-μm-wide pencil beam of 8-keV photons of synchrotron radiation. The sensitivity of the algorithm's performance to the chip settings, as well as to the uniformity of the parameters of the analog front-end blocks, was studied. The presented results prove that the C8P1 algorithm enables counting all photons hitting the detector between readout channels and retrieving the actual photon energy.
Long Term Cloud Property Datasets From MODIS and AVHRR Using the CERES Cloud Algorithm
NASA Technical Reports Server (NTRS)
Minnis, Patrick; Bedka, Kristopher M.; Doelling, David R.; Sun-Mack, Sunny; Yost, Christopher R.; Trepte, Qing Z.; Bedka, Sarah T.; Palikonda, Rabindra; Scarino, Benjamin R.; Chen, Yan;
2015-01-01
Cloud properties play a critical role in climate change. Monitoring cloud properties over long time periods is needed to detect changes and to validate and constrain models. The Clouds and the Earth's Radiant Energy System (CERES) project has developed several cloud datasets from Aqua and Terra MODIS data to better interpret broadband radiation measurements and improve understanding of the role of clouds in the radiation budget. The algorithms applied to MODIS data have been adapted to utilize various combinations of channels on the Advanced Very High Resolution Radiometer (AVHRR) on the long-term time series of NOAA and MetOp satellites to provide a new cloud climate data record. These datasets can be useful for a variety of studies. This paper presents results of the MODIS and AVHRR analyses covering the period from 1980-2014. Validation and comparisons with other datasets are also given.
Jin, Shuo; Li, Dengwang; Wang, Hongjun; Yin, Yong
2013-01-07
Accurate registration of 18F-FDG PET (positron emission tomography) and CT (computed tomography) images has important clinical significance in radiation oncology. PET and CT images are acquired from an 18F-FDG PET/CT scanner, but the two acquisition processes are separate and take a long time. As a result, there are global position errors and local deformable errors caused by respiratory movement or organ peristalsis. The purpose of this work was to implement and validate a deformable CT-to-PET image registration method in esophageal cancer, to eventually facilitate accurately positioning the tumor target on CT and improve the accuracy of radiation therapy. Global registration was first utilized to preprocess position errors between PET and CT images, achieving the purpose of aligning the two images on the whole. The demons algorithm, based on the optical flow field, has the features of fast processing speed and high accuracy, and the gradient of mutual information-based demons (GMI demons) algorithm adds an additional external force based on the gradient of mutual information (GMI) between the two images, which is suitable for multimodality image registration. In this paper, the GMI demons algorithm was used to achieve local deformable registration of PET and CT images, which can effectively reduce errors between internal organs. In addition, to speed up the registration process, maintain its robustness, and avoid local extrema, a multiresolution image pyramid structure was used before deformable registration. By quantitatively and qualitatively analyzing cases with esophageal cancer, the registration scheme proposed in this paper can improve registration accuracy and speed, which is helpful for precisely positioning the tumor target and developing the radiation treatment plan in clinical radiation therapy applications.
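For orientation, here is the classic Thirion demons displacement update on which such schemes build; the paper's GMI demons adds an external force from the gradient of mutual information, which is omitted in this sketch. The toy images and the absence of Gaussian smoothing are simplifications.

```python
import numpy as np

def demons_update(fixed, moving):
    """One Thirion demons iteration (without the paper's GMI term):
    displacement driven by the intensity difference along the fixed
    image's gradient, u = (m - f) * grad(f) / (|grad f|^2 + (m - f)^2)."""
    gy, gx = np.gradient(fixed)            # axis-0 ("y") and axis-1 ("x")
    diff = moving - fixed
    denom = gx**2 + gy**2 + diff**2
    denom[denom == 0] = 1.0                # avoid division by zero
    ux, uy = diff * gx / denom, diff * gy / denom
    return ux, uy                          # smooth with a Gaussian in practice

fixed = np.zeros((8, 8)); fixed[2:6, 2:6] = 1.0
moving = np.roll(fixed, 1, axis=1)         # small horizontal misalignment
ux, uy = demons_update(fixed, moving)
print(ux.max())                            # nonzero pull along the edges
```

In a full implementation this update is iterated at each level of the multiresolution pyramid mentioned above, warping the moving image after each step.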
Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.
2016-10-21
In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity of a fixed-source pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.
Development and deployment of the Collimated Directional Radiation Detection System
NASA Astrophysics Data System (ADS)
Guckes, Amber L.; Barzilov, Alexander
2017-09-01
The Collimated Directional Radiation Detection System (CDRDS) is capable of imaging radioactive sources in two dimensions (as a directional detector). The detection medium of the CDRDS is a single Cs2LiYCl6:Ce3+ scintillator cell enriched in 7Li (CLYC-7). The CLYC-7 is surrounded by a heterogeneous high-density polyethylene (HDPE) and lead (Pb) collimator. These materials make up a coded aperture inlaid in the collimator. The collimator is rotated 360° by a stepper motor, which enables time-encoded imaging of a radioactive source. The CDRDS is capable of spectroscopy and pulse shape discrimination (PSD) of photons and fast neutrons. The measurements of a radioactive source are carried out in discrete time steps that correlate to the angular rotation of the collimator. The measurement results are processed using a maximum-likelihood expectation maximization (MLEM) algorithm to create an image of the measured radiation. This collimator design allows for the directional detection of photons and fast neutrons simultaneously, utilizing only one CLYC-7 scintillator. Directional detection of thermal neutrons can also be performed by utilizing another suitable scintillator. Moreover, the CDRDS is portable, robust, and user friendly. The unit is capable of utilizing wireless data transfer for possible radiation mapping and network-centric applications. The CDRDS was tested by performing laboratory measurements with various gamma-ray and neutron sources.
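The MLEM update used in such time-encoded imaging is standard and compact: x <- x * A^T(y / Ax) / A^T 1, where A maps source directions to detector counts at each collimator rotation step. The 3x2 toy system below is an invented stand-in for the CDRDS response.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood expectation-maximization image reconstruction.
    A: (steps x source pixels) system response; y: measured counts."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x
        proj[proj == 0] = 1e-12               # guard against division by zero
        x *= (A.T @ (y / proj)) / sens
    return x

# Toy system: 3 rotation steps viewing 2 source pixels.
A = np.array([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]])
true_x = np.array([10.0, 2.0])
print(mlem(A, A @ true_x).round(2))           # recovers ~[10, 2]
```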
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, David D.; Clough, Shepard A.; Liljegren, James C.
2007-11-01
Ground-based two-channel microwave radiometers have been used for over 15 years by the Atmospheric Radiation Measurement (ARM) program to provide observations of downwelling emitted radiance from which precipitable water vapor (PWV) and liquid water path (LWP) – two geophysical parameters critical for many areas of atmospheric research – are retrieved. An algorithm that utilizes two advanced retrieval techniques, a computationally expensive physical-iterative approach and an efficient statistical method, has been developed to retrieve these parameters. An important component of this Microwave Retrieval (MWRRET) algorithm is the determination of small (< 1 K) offsets that are subtracted from the observed brightness temperatures before the retrievals are performed. Accounting for these offsets removes systematic biases from the observations and/or the model spectroscopy necessary for the retrieval, significantly reducing the systematic biases in the retrieved LWP. The MWRRET algorithm provides significantly more accurate retrievals than the original ARM statistical retrieval, which uses monthly retrieval coefficients. By combining the two retrieval methods with the application of brightness temperature offsets to reduce the spurious LWP bias in clear skies, the MWRRET algorithm provides significantly better retrievals of PWV and LWP from the ARM two-channel microwave radiometers compared to the original ARM product.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yeoh, Kheng-Wei; Mikhaeel, N. George, E-mail: George.Mikhaeel@gstt.nhs.uk
2013-01-01
Fluorine-18 fluorodeoxyglucose (FDG)-positron emission tomography (PET)/computed tomography (CT) has become indispensable for the clinical management of lymphomas. With consistent evidence that it is more accurate than anatomic imaging in the staging and response assessment of many lymphoma subtypes, its utility continues to increase. There have therefore been efforts to incorporate PET/CT data into radiation therapy decision making and into the planning process. Further, there have also been studies investigating target volume definition for radiation therapy using PET/CT data. This article will critically review the literature and ongoing studies on the above topics, examining the value and methods of adding PET/CT data to the radiation therapy treatment algorithm. We will also discuss the various challenges and the areas where more evidence is required.
NASA Technical Reports Server (NTRS)
Huffman, George J.; Adler, Robert F.; Bolvin, David T.; Curtis, Scott; Einaudi, Franco (Technical Monitor)
2001-01-01
Multi-purpose remote-sensing products from various satellites have proved crucial in developing global estimates of precipitation. Examples of these products include low-earth-orbit and geosynchronous-orbit infrared (leo- and geo-IR), Outgoing Longwave Radiation (OLR), Television Infrared Operational Satellite (TIROS) Operational Vertical Sounder (TOVS) data, and passive microwave data such as that from the Special Sensor Microwave/ Imager (SSM/I). Each of these datasets has served as the basis for at least one useful quasi-global precipitation estimation algorithm; however, the quality of estimates varies tremendously among the algorithms for the different climatic regions around the globe.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaustad, KL; Turner, DD
2009-05-30
This report provides a short description of the Atmospheric Radiation Measurement (ARM) Climate Research Facility (ACRF) microwave radiometer (MWR) RETrieval (MWRRET) value-added product (VAP) algorithm. This algorithm utilizes a complementary physical retrieval method and applies brightness temperature offsets to reduce spurious liquid water path (LWP) bias in clear skies, resulting in significantly improved precipitable water vapor (PWV) and LWP retrievals. We present a general overview of the technique, input parameters, and output products, and describe data quality checks. A more complete discussion of the theory and results is given in Turner et al. (2007b).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaustad, KL; Turner, DD; McFarlane, SA
2011-07-25
This report provides a short description of the Atmospheric Radiation Measurement (ARM) Climate Research Facility microwave radiometer (MWR) Retrieval (MWRRET) value-added product (VAP) algorithm. This algorithm utilizes a complementary physical retrieval method and applies brightness temperature offsets to reduce spurious liquid water path (LWP) bias in clear skies, resulting in significantly improved precipitable water vapor (PWV) and LWP retrievals. We present a general overview of the technique, input parameters, and output products, and describe data quality checks. A more complete discussion of the theory and results is given in Turner et al. (2007b).
NASA Astrophysics Data System (ADS)
Haworth, Daniel
2013-11-01
The importance of explicitly accounting for the effects of unresolved turbulent fluctuations in Reynolds-averaged and large-eddy simulations of chemically reacting turbulent flows is increasingly recognized. Transported probability density function (PDF) methods have emerged as one of the most promising modeling approaches for this purpose. In particular, PDF methods provide an elegant and effective resolution to the closure problems that arise from averaging or filtering terms that correspond to nonlinear point processes, including chemical reaction source terms and radiative emission. PDF methods traditionally have been associated with studies of turbulence-chemistry interactions in laboratory-scale, atmospheric-pressure, nonluminous, statistically stationary nonpremixed turbulent flames; and Lagrangian particle-based Monte Carlo numerical algorithms have been the predominant method for solving modeled PDF transport equations. Recent advances and trends in PDF methods are reviewed and discussed. These include advances in particle-based algorithms, alternatives to particle-based algorithms (e.g., Eulerian field methods), treatment of combustion regimes beyond low-to-moderate-Damköhler-number nonpremixed systems (e.g., premixed flamelets), extensions to include radiation heat transfer and multiphase systems (e.g., soot and fuel sprays), and the use of PDF methods as the basis for subfilter-scale modeling in large-eddy simulation. Examples are provided that illustrate the utility and effectiveness of PDF methods for physics discovery and for applications to practical combustion systems. These include comparisons of results obtained using the PDF method with those from models that neglect unresolved turbulent fluctuations in composition and temperature in the averaged or filtered chemical source terms and/or the radiation heat transfer source terms. In this way, the effects of turbulence-chemistry-radiation interactions can be isolated and quantified.
Use and limitations of ASHRAE solar algorithms in solar energy utilization studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sowell, E.F.
1978-01-01
Algorithms for computer calculation of solar radiation based on cloud cover data, recommended by the ASHRAE Task Group on Energy Requirements for Buildings, are examined for applicability in solar utilization studies. The implementation is patterned after a well-known computer program, NBSLD. The results of these algorithms, including horizontal and tilted-surface insolation and useful energy collectable, are compared to observations and to results obtainable by the Liu and Jordan method. For purposes of comparison, data for Riverside, CA from 1960 through 1963 are examined. It is shown that horizontal values so predicted are frequently less than 10% and always less than 23% in error when compared to averages of hourly measurements during important collection hours in 1962. Average daily errors range from -14 to 9% over the year. When averaged on an hourly basis over four years, there is a 21% maximum discrepancy compared to the Liu and Jordan method. Corresponding tilted-surface discrepancies are slightly higher, as are those for useful energy collected. Possible sources of these discrepancies and errors are discussed. Limitations of the algorithms and various implementations are examined, and it is suggested that certain assumptions acceptable for building loads analysis may not be acceptable for solar utilization studies. In particular, it is shown that the method of separating diffuse and direct components in the presence of clouds requires careful consideration in order to achieve accuracy and efficiency in any implementation.
High throughput dual-wavelength temperature distribution imaging via compressive imaging
NASA Astrophysics Data System (ADS)
Yao, Xu-Ri; Lan, Ruo-Ming; Liu, Xue-Feng; Zhu, Ge; Zheng, Fu; Yu, Wen-Kai; Zhai, Guang-Jie
2018-03-01
Thermal imaging is an essential tool in a wide variety of research areas. In this work we demonstrate high-throughput dual-wavelength temperature distribution imaging using a modified single-pixel camera without the requirement of a beam splitter (BS). A digital micro-mirror device (DMD) is utilized to display binary masks and split the incident radiation, which eliminates the necessity of a BS. Because the spatial resolution is dictated by the DMD, this thermal imaging system has the advantage of perfect spatial registration between the two images, which eliminates the need for pixel registration and fine adjustments. Two bucket detectors, which measure the total light intensity reflected from the DMD, are employed in this system and yield an improvement in the detection efficiency of the narrow-band radiation. A compressive imaging algorithm is utilized to achieve under-sampling recovery. A proof-of-principle experiment is presented to demonstrate the feasibility of this architecture.
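The single-pixel measurement model and an under-sampled recovery can be sketched in a few lines: each DMD mask yields one bucket reading y = Phi x, and a sparse scene is recovered by iterative soft-thresholding (ISTA). This is a generic compressive-sensing illustration, not the paper's algorithm; the +/-1 patterns used here would in practice be realized as pairs of complementary 0/1 DMD masks.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 32                       # pixels, measurements (undersampled)

x_true = np.zeros(n)
x_true[[5, 20, 47]] = [1.0, 0.6, 0.8]            # sparse scene
Phi = rng.choice([-1.0, 1.0], size=(m, n))       # +/-1 measurement patterns
y = Phi @ x_true                    # bucket-detector readings, one per mask

# Iterative soft-thresholding (ISTA): a basic compressive-sensing recovery.
L = np.linalg.norm(Phi, 2) ** 2     # Lipschitz constant of the gradient
x, lam = np.zeros(n), 0.05
for _ in range(500):
    z = x - (Phi.T @ (Phi @ x - y)) / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print(np.flatnonzero(x > 0.1))      # -> [ 5 20 47], the bright pixels
```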
Patient-specific CT dosimetry calculation: a feasibility study.
Fearon, Thomas; Xie, Huchen; Cheng, Jason Y; Ning, Holly; Zhuge, Ying; Miller, Robert W
2011-11-15
Current estimation of radiation dose from computed tomography (CT) scans on patients has relied on the measurement of the Computed Tomography Dose Index (CTDI) in standard cylindrical phantoms, and on calculations based on mathematical representations of a "standard man". Radiation dose to both adult and pediatric patients from a CT scan has been a concern, as noted in recent reports. The purpose of this study was to investigate the feasibility of adapting a radiation treatment planning system (RTPS) to provide patient-specific CT dosimetry. A radiation treatment planning system was modified to calculate patient-specific CT dose distributions, which can be represented by dose at specific points within an organ of interest, as well as organ dose volumes (after image segmentation), for a GE Light Speed Ultra Plus CT scanner. The RTPS calculation algorithm is a semi-empirical, measured-correction-based algorithm, which has been well established in the radiotherapy community. Digital representations of the physical phantoms (virtual phantoms) were acquired with the GE CT scanner in axial mode. Thermoluminescent dosimeter (TLD) measurements in pediatric anthropomorphic phantoms were utilized to validate the dose at specific points within organs of interest relative to RTPS calculations and Monte Carlo simulations of the same virtual phantoms. Congruence of the calculated and measured point doses for the same physical anthropomorphic phantom geometry was used to verify the feasibility of the method. The RTPS algorithm can be extended to calculate organ dose by calculating a dose distribution point-by-point for a designated volume. Electron Gamma Shower (EGSnrc) codes for radiation transport calculations, developed by the National Research Council of Canada (NRCC), were utilized to perform the Monte Carlo (MC) simulations. In general, the RTPS and MC dose calculations are within 10% of the TLD measurements for the infant and child chest scans. With respect to the dose comparisons for the head, the RTPS dose calculations are slightly higher (10%-20%) than the TLD measurements, while the MC results were within 10% of the TLD measurements. The advantage of the algebraic dose calculation engine of the RTPS is a substantially reduced computation time (minutes vs. days) relative to Monte Carlo calculations, as well as providing patient-specific dose estimation. It also provides the basis for more elaborate reporting of dosimetric results, such as patient-specific organ dose volumes after image segmentation.
Morrison, John L.; Stephens, Alan G.; Grover, S. Blaine
2001-11-20
An improved nuclear diagnostic method identifies a contained target material by measuring on-axis, mono-energetic, uncollided particle radiation transmitted through the target material at two penetrating-radiation beam energies. Specially developed algorithms are then applied to estimate a ratio of macroscopic neutron cross-sections for the uncollided particle radiation at the two energies, where the penetrating radiation is a neutron beam, or a ratio of linear attenuation coefficients at the two energies, where the penetrating radiation is a gamma-ray beam. Alternatively, the measurements are used to derive a minimization formula based on the macroscopic neutron cross-sections at the two neutron beam energies, or on the linear attenuation coefficients at the two gamma-ray beam energies. A candidate target material database, including known macroscopic neutron cross-sections or linear attenuation coefficients for target materials at the selected neutron or gamma-ray beam energies, is used to approximate the estimated ratio or to solve the minimization formula, such that the identity of the contained target material is discovered.
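For the gamma-ray case, the core idea is that uncollided transmission follows I = I0 exp(-mu t), so the unknown thickness t cancels in the ratio of log-attenuations at the two beam energies. A minimal sketch, with placeholder attenuation coefficients rather than values from the patent's database:

```python
import numpy as np

# Uncollided transmission obeys I = I0 * exp(-mu * t), so for two beam
# energies the (unknown) thickness t cancels in the log-attenuation ratio:
#   ln(I0_1/I_1) / ln(I0_2/I_2) = mu_1 / mu_2
def attenuation_ratio(I0_1, I_1, I0_2, I_2):
    return np.log(I0_1 / I_1) / np.log(I0_2 / I_2)

# Illustrative candidate database: linear attenuation coefficients (1/cm)
# at the two chosen gamma-ray energies (values are placeholders).
db = {"water": (0.206, 0.096), "iron": (1.27, 0.60), "lead": (3.03, 1.18)}

def identify(meas_ratio, db):
    # Pick the candidate whose mu ratio best approximates the measurement.
    return min(db, key=lambda m: abs(db[m][0] / db[m][1] - meas_ratio))

t = 2.5                                     # unknown thickness (cm)
mu1, mu2 = db["iron"]
r = attenuation_ratio(1.0, np.exp(-mu1 * t), 1.0, np.exp(-mu2 * t))
print(identify(r, db))                      # -> "iron"
```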
A Test Suite for 3D Radiative Hydrodynamics Simulations of Protoplanetary Disks
NASA Astrophysics Data System (ADS)
Boley, Aaron C.; Durisen, R. H.; Nordlund, A.; Lord, J.
2006-12-01
Radiative hydrodynamics simulations of protoplanetary disks with different treatments for radiative cooling demonstrate disparate evolutions (see Durisen et al. 2006, PPV chapter). Some of these differences include the effects of convection and metallicity on disk cooling and the susceptibility of the disk to fragmentation. Because a principal reason for these differences may be the treatment of radiative cooling, the accuracy of cooling algorithms must be evaluated. In this paper we describe a radiative transport test suite, and we challenge all researchers who use radiative hydrodynamics to study protoplanetary disk evolution to evaluate their algorithms with these tests. The test suite can be used to demonstrate an algorithm's accuracy in transporting the correct flux through an atmosphere and in reaching the correct temperature structure, to test the algorithm's dependence on resolution, and to determine whether the algorithm permits or inhibits convection when expected. In addition, we use this test suite to demonstrate the accuracy of a newly developed radiative cooling algorithm that combines vertical rays with flux-limited diffusion. This research was supported in part by a Graduate Student Researchers Program fellowship.
Satellite estimation of surface spectral ultraviolet irradiance using OMI data in East Asia
NASA Astrophysics Data System (ADS)
Lee, H.; Kim, J.; Jeong, U.
2017-12-01
Because of its strong influence on human health and ecosystems, continuous monitoring of surface ultraviolet (UV) irradiance is important. The amount of UVA (320-400 nm) and UVB (290-320 nm) radiation at the Earth's surface depends on the extent of Rayleigh scattering by atmospheric gas molecules, radiative absorption by ozone, radiative scattering by clouds, and both absorption and scattering by airborne aerosols. Careful treatment of these factors is therefore essential to any UV irradiance estimation process. The UV index (UVI) is a simple parameter expressing the strength of surface UV irradiance, and it has been widely utilized for UV monitoring. In this study, we estimate surface UV irradiance over East Asia using realistic inputs based on OMI total ozone and reflectivity, and validate the estimates against UV irradiance from the World Ozone and Ultraviolet Radiation Data Centre (WOUDC). We also develop our own retrieval algorithm for improved estimation of surface irradiance. We use the Vector Linearized Discrete Ordinate Radiative Transfer (VLIDORT) model version 2.6 for the UV irradiance calculation. The inputs to the VLIDORT radiative transfer calculations are the total ozone column (TOMS V7 climatology), the surface albedo (Herman and Celarier, 1997), and the cloud optical depth. From these, the UV irradiance is calculated using a look-up table (LUT) approach. To correct for absorbing aerosols, the UV irradiance algorithm incorporates climatological aerosol information (Arola et al., 2009). In further work, we will carry out a comprehensive uncertainty analysis based on the LUT and all input parameters.
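As a concrete illustration of the LUT approach, the sketch below interpolates a small precomputed table of erythemally weighted irradiance over total ozone and solar zenith angle, then converts the result to a UV index using the standard definition UVI = 40 m^2/W times the erythemal irradiance. The table values and the simple multiplicative cloud factor are placeholders, not the authors' LUT:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Illustrative LUT of erythemally weighted surface irradiance (W/m^2),
# precomputed offline with a radiative transfer model (e.g., VLIDORT)
# on a grid of total ozone column (DU) and solar zenith angle (deg).
ozone = np.array([250.0, 300.0, 350.0, 400.0])
sza = np.array([0.0, 20.0, 40.0, 60.0, 80.0])
lut = np.array([[0.32, 0.28, 0.18, 0.07, 0.01],
                [0.28, 0.25, 0.16, 0.06, 0.01],
                [0.25, 0.22, 0.14, 0.05, 0.01],
                [0.22, 0.20, 0.13, 0.05, 0.01]])   # placeholder values

interp = RegularGridInterpolator((ozone, sza), lut)

def uv_index(o3_du, sza_deg, cloud_factor=1.0):
    """UVI = 40 m^2/W x erythemal irradiance, scaled by a cloud factor."""
    e_clear = interp([[o3_du, sza_deg]]).item()
    return 40.0 * e_clear * cloud_factor

print(uv_index(300.0, 30.0, cloud_factor=0.7))
```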
Correcting Satellite Image Derived Surface Model for Atmospheric Effects
NASA Technical Reports Server (NTRS)
Emery, William; Baldwin, Daniel
1998-01-01
This project was a continuation of the project entitled "Resolution Earth Surface Features from Repeat Moderate Resolution Satellite Imagery". In the previous study, a Bayesian Maximum Posterior Estimate (BMPE) algorithm was used to obtain a composite series of repeat imagery from the Advanced Very High Resolution Radiometer (AVHRR). The spatial resolution of the resulting composite was significantly greater than the 1 km resolution of the individual AVHRR images. The BMPE algorithm utilized a simple, no-atmosphere geometrical model for the short-wave radiation budget at the Earth's surface. A necessary assumption of the algorithm is that all non-geometrical parameters remain static over the compositing period. This assumption is of course violated by temporal variations in both the surface albedo and the atmospheric medium. The effect of the albedo variations is expected to be minimal, since the variations are on a fairly long time scale compared to the compositing period; however, the atmospheric variability occurs on a relatively short time scale and can be expected to cause significant errors in the surface reconstruction. The current project proposed to incorporate an atmospheric correction into the BMPE algorithm for the purpose of investigating the effects of a variable atmosphere on the surface reconstructions. Once the atmospheric effects were determined, the investigation could be extended to include corrections for various cloud effects, including short-wave radiation through thin cirrus clouds. The original proposal was written for a three-year project, funded one year at a time. The first year of the project focused on developing an understanding of atmospheric corrections and choosing an appropriate correction model. Several models were considered and the list was narrowed to the two best suited. These were the 5S and 6S shortwave radiation models developed at NASA/Goddard and tested extensively with data from the AVHRR instrument. Although the 6S model was a successor to the 5S and slightly more advanced, the 5S was selected because outputs from the individual components comprising the short-wave radiation budget were more easily separated. The separation was necessary since neither the 5S nor the 6S included geometrical corrections for terrain, a fundamental constituent of the BMPE algorithm. The 5S correction code was incorporated into the BMPE algorithm and many sensitivity studies were performed.
Estimating solar radiation for plant simulation models
NASA Technical Reports Server (NTRS)
Hodges, T.; French, V.; Leduc, S.
1985-01-01
Five algorithms producing daily solar radiation surrogates from daily temperatures and rainfall were evaluated using measured solar radiation data for seven U.S. locations. The algorithms were compared both in terms of accuracy of the daily solar radiation estimates and in terms of response when used in a plant growth simulation model (CERES-wheat). Requirements for accuracy of solar radiation for plant growth simulation models are discussed. One algorithm is recommended as being best suited for use in these models when neither measured nor satellite-estimated solar radiation values are available.
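The abstract does not name the five algorithms here, but a representative member of this family is a Bristow-Campbell-type estimator, which scales extraterrestrial radiation by a function of the daily temperature range. A sketch under that assumption (Ra from the standard FAO-56 formulas; A, B, C are site-tuned coefficients):

```python
import numpy as np

def extraterrestrial_radiation(doy, lat_deg):
    """Daily extraterrestrial radiation Ra (MJ m^-2 d^-1), FAO-56 formulas."""
    lat = np.radians(lat_deg)
    dr = 1.0 + 0.033 * np.cos(2.0 * np.pi * doy / 365.0)       # earth-sun distance
    dec = 0.409 * np.sin(2.0 * np.pi * doy / 365.0 - 1.39)     # solar declination
    ws = np.arccos(np.clip(-np.tan(lat) * np.tan(dec), -1.0, 1.0))  # sunset angle
    return (24.0 * 60.0 / np.pi) * 0.0820 * dr * (
        ws * np.sin(lat) * np.sin(dec) + np.cos(lat) * np.cos(dec) * np.sin(ws))

def solar_surrogate(tmax, tmin, doy, lat_deg, A=0.7, B=0.01, C=2.0):
    """Bristow-Campbell-type estimate: Rs = Ra * A * (1 - exp(-B * dT^C))."""
    ra = extraterrestrial_radiation(doy, lat_deg)
    return ra * A * (1.0 - np.exp(-B * (tmax - tmin) ** C))

print(solar_surrogate(tmax=28.0, tmin=14.0, doy=180, lat_deg=38.0))
```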
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zarepisheh, M; Li, R; Xing, L
Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent, and fast optimization algorithm by integrating three advanced optimization techniques: column generation, the gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. We then apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles along the gradient. The algorithm continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean dose, brainstem max dose, spinal cord max dose, and mandible mean dose are reduced by 10%, 7%, 24%, and 12%, respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of the column generation, gradient search, and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves the quality of the resultant treatment plans as compared with conventional VMAT or IMRT treatments.
NASA Astrophysics Data System (ADS)
Kwon, Hyuk Ju; Yeon, Sang Hun; Lee, Keum Ho; Lee, Kwang Ho
2018-02-01
As studies focusing on building energy saving have continued, studies utilizing renewable energy sources instead of fossil fuels are needed. In particular, studies regarding solar energy are being carried out in the field of building science; to utilize solar energy effectively, the solar radiation entering the indoor space should be admitted or blocked appropriately. Blinds are a typical solar radiation control device capable of controlling indoor thermal and light environments. However, slat-type blinds are manually controlled, which works against building energy saving. Accordingly, studies on the automatic control of slat-type blinds have been carried out over the last couple of decades. This study therefore aims to provide preliminary data for optimal control research by controlling the slat angle of slat-type blinds while comprehensively considering various input variables. The window-to-wall ratio and orientation were selected as input variables. It was found that the optimal control algorithm differed with window-to-wall ratio and window orientation. In addition, by applying the developed algorithms in simulations and comparing the building energy saving performance for each condition, energy savings of up to 20.7% were shown in the cooling period and up to 12.3% in the heating period. The energy saving effect was greater as the window-to-wall ratio increased for a given orientation, and the effect of the window-to-wall ratio was larger in the cooling period than in the heating period.
Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem
Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; ...
2016-12-12
In this paper, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particle swarm, and a genetic algorithm, which rely solely on objective function evaluations; that is, they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently utilized as the initial value by the deterministic implicit filtering method, which is able to find local extrema in non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, combining global optimization and implicit filtering, address difficulties associated with the non-smooth response, and their performance is shown to significantly decrease the computational time relative to the global optimization methods alone. To quantify uncertainties associated with the source location and intensity, we employ the delayed rejection adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Finally, marginal densities of the source properties are obtained, and the means of the chains compare accurately with the estimates produced by the hybrid algorithms.
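The two-stage structure can be sketched as follows: a Poisson negative log-likelihood built from an inverse-square response model, a coarse global search standing in for the simulated annealing/particle swarm/genetic stage, and a derivative-free local refinement (Nelder-Mead here) standing in for implicit filtering. The response model omits the paper's building attenuation and is illustrative only:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
det = np.column_stack([rng.uniform(0, 250, 20),    # detector positions (m)
                       rng.uniform(0, 180, 20)])
true = np.array([120.0, 90.0, 1.0e6])              # x, y, intensity

def expected_counts(p):
    # Illustrative inverse-square response with a small background term.
    r2 = np.sum((det - p[:2]) ** 2, axis=1) + 1.0
    return p[2] / (4.0 * np.pi * r2) + 0.1

data = rng.poisson(expected_counts(true))

def neg_log_like(p):
    lam = expected_counts(p)
    return np.sum(lam - data * np.log(lam))         # Poisson NLL (no d! term)

# Stage 1: coarse global search over a grid (stand-in for SA/PSO/GA).
grid = [(x, y) for x in np.linspace(0, 250, 26) for y in np.linspace(0, 180, 19)]
p0 = min((np.array([x, y, 1.0e6]) for x, y in grid), key=neg_log_like)

# Stage 2: derivative-free local refinement (stand-in for implicit filtering).
res = minimize(neg_log_like, p0, method="Nelder-Mead")
print(res.x)
```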
GPU-BASED MONTE CARLO DUST RADIATIVE TRANSFER SCHEME APPLIED TO ACTIVE GALACTIC NUCLEI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heymann, Frank; Siebenmorgen, Ralf, E-mail: fheymann@pa.uky.edu
2012-05-20
A three-dimensional parallel Monte Carlo (MC) dust radiative transfer code is presented. To overcome the huge computing-time requirements of MC treatments, the computational power of vectorized hardware is used, utilizing either multi-core computer power or graphics processing units. The approach is a self-consistent way to solve the radiative transfer equation in arbitrary dust configurations. The code calculates the equilibrium temperatures of two populations of large grains and stochastically heated polycyclic aromatic hydrocarbons. Anisotropic scattering is treated applying the Henyey-Greenstein phase function. The spectral energy distribution (SED) of the object is derived at low spatial resolution by a photon counting procedure and at high spatial resolution by a vectorized ray tracer. The latter allows computation of high signal-to-noise images of the objects at any frequency and arbitrary viewing angles. We test the robustness of our approach against other radiative transfer codes. The SED and dust temperatures of one- and two-dimensional benchmarks are reproduced at high precision. The parallelization capability of various MC algorithms is analyzed and included in our treatment. We utilize the Lucy algorithm for the optically thin case where the Poisson noise is high, the iteration-free Bjorkman and Wood method to reduce the calculation time, and the Fleck and Canfield diffusion approximation for extremely optically thick cells. The code is applied to model the appearance of active galactic nuclei (AGNs) at optical and infrared wavelengths. The AGN torus is clumpy and includes fluffy composite grains of various sizes made up of silicates and carbon. The dependence of the SED on the number of clumps in the torus and the viewing angle is studied. The appearance of the 10 μm silicate features in absorption or emission is discussed. The SED of the radio-loud quasar 3C 249.1 is fit by the AGN model and a cirrus component to account for the far-infrared emission.
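Sampling the Henyey-Greenstein phase function is one of the per-photon kernels such a code vectorizes. The standard inverse-CDF draw for the scattering-angle cosine is sketched below (this is the textbook formula, not code from the paper):

```python
import numpy as np

def sample_hg_costheta(g, xi):
    """Inverse-CDF sample of the scattering-angle cosine for the
    Henyey-Greenstein phase function; xi is uniform on [0, 1)."""
    if abs(g) < 1e-6:                      # isotropic limit
        return 2.0 * xi - 1.0
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)

rng = np.random.default_rng(2)
g = 0.6                                    # forward-scattering asymmetry
samples = [sample_hg_costheta(g, rng.random()) for _ in range(100_000)]
print(np.mean(samples))                    # mean cosine should approach g
```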
NASA Technical Reports Server (NTRS)
Dubovik, O; Herman, M.; Holdak, A.; Lapyonok, T.; Taure, D.; Deuze, J. L.; Ducos, F.; Sinyuk, A.
2011-01-01
The proposed development is an attempt to enhance aerosol retrieval by emphasizing statistical optimization in the inversion of advanced satellite observations. This optimization concept improves retrieval accuracy by relying on knowledge of the measurement error distribution. Efficient application of such optimization requires pronounced data redundancy (an excess of the number of measurements over the number of unknowns), which is not common in satellite observations. The POLDER imager on board the PARASOL microsatellite registers spectral polarimetric characteristics of the reflected atmospheric radiation at up to 16 viewing directions over each observed pixel. The completeness of such observations is notably higher than for most currently operating passive satellite aerosol sensors. This provides an opportunity for thorough utilization of statistical optimization principles in satellite data inversion. The proposed retrieval scheme is designed as a statistically optimized multi-variable fitting of all available angular observations obtained by the POLDER sensor in the window spectral channels where absorption by gases is minimal. The total number of such observations by PARASOL always exceeds a hundred over each pixel, and the statistical optimization concept promises to be efficient even if the algorithm retrieves several tens of aerosol parameters. Based on this idea, the proposed algorithm uses a large number of unknowns and is aimed at retrieving an extended set of parameters affecting the measured radiation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comstock, Jennifer M.; Protat, Alain; McFarlane, Sally A.
2013-05-22
Ground-based radar and lidar observations obtained at the Department of Energy's Atmospheric Radiation Measurement Program's Tropical Western Pacific site located in Darwin, Australia are used to retrieve ice cloud properties in anvil and cirrus clouds. Cloud microphysical properties derived from four different retrieval algorithms (two radar-lidar and two radar-only algorithms) are compared by examining mean profiles and probability density functions of effective radius (Re), ice water content (IWC), extinction, ice number concentration, ice crystal fall speed, and vertical air velocity. Retrieval algorithm uncertainty is quantified using radiative flux closure exercises. The effect of uncertainty in retrieved quantities on the cloud radiative effect and radiative heating rates is presented. Our analysis shows that IWC compares well among algorithms, but Re shows significant discrepancies, which is attributed primarily to assumptions of particle shape. Uncertainty in Re and IWC translates into sometimes large differences in cloud radiative effect (CRE), though the majority of cases have a CRE difference of roughly 10 W m-2 on average. These differences, which we believe are primarily driven by the uncertainty in Re, can cause up to a 2 K/day difference in the radiative heating rates between algorithms.
Equivalent radiation source of 3D package for electromagnetic characteristics analysis
NASA Astrophysics Data System (ADS)
Li, Jun; Wei, Xingchang; Shu, Yufei
2017-10-01
An equivalent radiation source method is proposed in this paper to characterize the electromagnetic emission and interference of complex three-dimensional integrated circuits (ICs). The method utilizes amplitude-only near-field scanning data to reconstruct an equivalent magnetic dipole array, and a differential evolution optimization algorithm is used to extract the locations, orientations, and moments of those dipoles. By importing the equivalent dipole model into a 3D full-wave simulator together with the victim circuit model, electromagnetic interference issues in mixed RF/digital systems can be predicted well. A commercial IC is used to validate the accuracy and efficiency of the proposed method. The coupled power at the victim antenna port calculated from the equivalent radiation source is compared with measured data; good consistency is obtained, which confirms the validity and efficiency of the method. Project supported by the National Natural Science Foundation of China (No. 61274110).
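A minimal sketch of the differential evolution loop used for this kind of parameter extraction is shown below; the objective here is a toy stand-in for the actual amplitude-only near-field mismatch, and all population settings are illustrative:

```python
import numpy as np

def differential_evolution(obj, bounds, np_=30, F=0.7, CR=0.9, iters=200, seed=3):
    """Minimal DE/rand/1/bin loop; obj maps a parameter vector to a scalar
    cost (here, that would be the mismatch between the scanned and the
    dipole-modeled near-field amplitudes)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(np_, len(bounds)))
    cost = np.array([obj(p) for p in pop])
    for _ in range(iters):
        for i in range(np_):
            a, b, c = pop[rng.choice([j for j in range(np_) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)       # mutation
            cross = rng.random(len(bounds)) < CR            # binomial crossover
            cross[rng.integers(len(bounds))] = True         # force >= 1 gene
            trial = np.where(cross, mutant, pop[i])
            if (tc := obj(trial)) < cost[i]:                # greedy selection
                pop[i], cost[i] = trial, tc
    return pop[np.argmin(cost)], cost.min()

# Toy stand-in objective: recover a dipole "location" (x, y) and moment m.
target = np.array([0.01, -0.02, 1.5])
best, c = differential_evolution(lambda p: np.sum((p - target) ** 2),
                                 bounds=[(-0.05, 0.05), (-0.05, 0.05), (0.0, 3.0)])
print(best, c)
```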
NASA Technical Reports Server (NTRS)
Tsay, Si-Chee; Stamnes, Knut; Wiscombe, Warren; Laszlo, Istvan; Einaudi, Franco (Technical Monitor)
2000-01-01
This update reports a state-of-the-art discrete ordinate algorithm for monochromatic unpolarized radiative transfer in non-isothermal, vertically inhomogeneous, but horizontally homogeneous media. The physical processes included are Planckian thermal emission, scattering with arbitrary phase function, absorption, and surface bidirectional reflection. The system may be driven by parallel or isotropic diffuse radiation incident at the top boundary, as well as by internal thermal sources and thermal emission from the boundaries. Radiances, fluxes, and mean intensities are returned at user-specified angles and levels. DISORT has enjoyed considerable popularity in the atmospheric science and other communities since its introduction in 1988. Several new DISORT features are described in this update: intensity correction algorithms designed to compensate for the delta-M forward-peak scaling and obtain accurate intensities even in low orders of approximation; a more general surface bidirectional reflection option; and an exponential-linear approximation of the Planck function allowing more accurate solutions in the presence of large temperature gradients. DISORT has been designed to be an exemplar of good scientific software as well as a program of intrinsic utility. An extraordinary effort has been made to make it numerically well-conditioned, error-resistant, and user-friendly, and to take advantage of robust existing software tools. A thorough test suite is provided to verify the program both against published results, and for consistency where there are no published results. This careful attention to software design has been just as important in DISORT's popularity as its powerful algorithmic content.
Simultaneous beam sampling and aperture shape optimization for SPORT.
Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei
2015-02-01
Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization. It also adds new stations during the course of the algorithm when beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm that explores the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, the subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head-and-neck case and a prostate case. It significantly improved target conformality and, at the same time, critical structure sparing compared with conventional intensity modulated radiation therapy (IMRT). In the head-and-neck case, for example, the average PTV coverage D99% for two PTVs, the cord and brainstem max doses, and the right parotid gland mean dose were improved, respectively, by about 7%, 37%, 12%, and 16%. The proposed method automatically determines the number of stations required to generate a satisfactory plan and optimizes the involved station parameters simultaneously, leading to improved quality of the resultant treatment plans as compared with conventional IMRT plans.
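Of the three techniques, pattern search is the easiest to sketch in isolation: poll a mesh of points around the incumbent, accept improvements, and shrink the mesh when none is found. The following generic compass-search sketch illustrates the idea; it is not the paper's SPORT implementation:

```python
import numpy as np

def pattern_search(obj, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=1000):
    """Minimal compass-style pattern search: poll +/- each coordinate
    direction; shrink the mesh when no poll point improves."""
    x, fx = np.asarray(x0, float), obj(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                trial = x.copy(); trial[i] += s
                ft = obj(trial)
                if ft < fx:                # accept an improving poll point
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink                 # refine the mesh
            if step < tol:
                break
    return x, fx

x, f = pattern_search(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0])
print(x, f)
```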
A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.
Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle
2016-03-08
On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using a MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparison with the manual segmentation using the Dice coefficient and the target registration error (TRE), measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to its reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (< 1 ms) with satisfying accuracy (Dice = 0.95). When the image contrast was low, the VR-TPDS method produced the best automatic contour. Results suggest an image-quality determination procedure before segmentation and a combination of different methods for optimal segmentation with the on-board MR-IGRT system.
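The two evaluation metrics are simple to state in code. A minimal sketch of the Dice coefficient and the centroid-based TRE on binary masks (array shapes and pixel spacing are illustrative):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def centroid_tre(a, b, spacing=(1.0, 1.0)):
    """Target registration error: distance between mask centroids,
    scaled by the in-plane pixel spacing (e.g., mm/pixel)."""
    ca = np.array(np.nonzero(a)).mean(axis=1) * np.asarray(spacing)
    cb = np.array(np.nonzero(b)).mean(axis=1) * np.asarray(spacing)
    return np.linalg.norm(ca - cb)

manual = np.zeros((64, 64), bool); manual[20:40, 22:41] = True
auto = np.zeros((64, 64), bool);   auto[22:41, 24:42] = True
print(dice(manual, auto), centroid_tre(manual, auto, spacing=(3.5, 3.5)))
```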
NASA's Black Marble Nighttime Lights Product Suite
NASA Technical Reports Server (NTRS)
Wang, Zhuosen; Sun, Qingsong; Seto, Karen C.; Oda, Tomohiro; Wolfe, Robert E.; Sarkar, Sudipta; Stevens, Joshua; Ramos Gonzalez, Olga M.; Detres, Yasmin; Esch, Thomas;
2018-01-01
NASA's Black Marble nighttime lights product suite (VNP46) is available at 500 meters resolution since January 2012 with data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) onboard the Suomi National Polar-orbiting Platform (SNPP). The retrieval algorithm, developed and implemented for routine global processing at NASA's Land Science Investigator-led Processing System (SIPS), utilizes all high-quality, cloud-free, atmospheric-, terrain-, vegetation-, snow-, lunar-, and stray light-corrected radiances to estimate daily nighttime lights (NTL) and other intrinsic surface optical properties. Key algorithm enhancements include: (1) lunar irradiance modeling to resolve non-linear changes in phase and libration; (2) vector radiative transfer and lunar bidirectional surface anisotropic reflectance modeling to correct for atmospheric and BRDF (Bidirectional Reflectance Distribution Function) effects; (3) geometric-optical and canopy radiative transfer modeling to account for seasonal variations in NTL; and (4) temporal gap-filling to reduce persistent data gaps. Extensive benchmark tests at representative spatial and temporal scales were conducted on the VNP46 time series record to characterize the uncertainties stemming from upstream data sources. Initial validation results are presented together with example case studies illustrating the scientific utility of the products. This includes an evaluation of temporal patterns of NTL dynamics associated with urbanization, socioeconomic variability, cultural characteristics, and displaced populations affected by conflict. Current and planned activities under the Group on Earth Observations (GEO) Human Planet Initiative are aimed at evaluating the products at different geographic locations and time periods representing the full range of retrieval conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolly, S; Mutic, S; Anastasio, M
Purpose: Traditionally, image quality in radiation therapy is assessed subjectively or by utilizing physically-based metrics. Some model observers exist for task-based medical image quality assessment, but almost exclusively for diagnostic imaging tasks. As opposed to disease diagnosis, the task for image observers in radiation therapy is to utilize the available images to design and deliver a radiation dose which maximizes patient disease control while minimizing normal tissue damage. The purpose of this study was to design and implement a new computer simulation model observer to enable task-based image quality assessment in radiation therapy. Methods: A modular computer simulation framework was developed to resemble the radiotherapy observer by simulating an end-to-end radiation therapy treatment. Given images and the ground-truth organ boundaries from a numerical phantom as inputs, the framework simulates an external beam radiation therapy treatment and quantifies patient treatment outcomes using the previously defined therapeutic operating characteristic (TOC) curve. As a preliminary demonstration, TOC curves were calculated for various CT acquisition and reconstruction parameters, with the goal of assessing and optimizing simulation CT image quality for radiation therapy. Sources of randomness and bias within the system were analyzed. Results: The relationship between CT imaging dose and patient treatment outcome was objectively quantified in terms of a single value, the area under the TOC (AUTOC) curve. The AUTOC decreases more rapidly for low-dose imaging protocols. AUTOC variation introduced by the dose optimization algorithm was approximately 0.02% at the 95% confidence interval. Conclusion: A model observer has been developed and implemented to assess image quality based on radiation therapy treatment efficacy. It enables objective determination of appropriate imaging parameter values (e.g., imaging dose). The framework's flexibility allows for incorporation of additional modules to include any aspect of the treatment process, and it therefore has great potential for both assessment and optimization within radiation therapy.
Shaker, S B; Dirksen, A; Laursen, L C; Maltbaek, N; Christensen, L; Sander, U; Seersholm, N; Skovgaard, L T; Nielsen, L; Kok-Jensen, A
2004-07-01
To study the short-term reproducibility of lung density measurements by multi-slice computed tomography (CT) using three different radiation doses and three reconstruction algorithms, twenty-five patients with smoker's emphysema and 25 patients with alpha1-antitrypsin deficiency underwent 3 scans at 2-week intervals. A low-dose protocol was applied, and images were reconstructed with bone, detail, and soft algorithms. Total lung volume (TLV), 15th percentile density (PD-15), and relative area at -910 Hounsfield units (RA-910) were obtained from the images using Pulmo-CMS software. The reproducibility of PD-15 and RA-910 and the influence of radiation dose, reconstruction algorithm, and type of emphysema were then analysed. The overall coefficient of variation of volume-adjusted PD-15 for all combinations of radiation dose and reconstruction algorithm was 3.7%. The overall standard deviation of volume-adjusted RA-910 was 1.7% (corresponding to a coefficient of variation of 6.8%). Radiation dose, reconstruction algorithm, and type of emphysema had no significant influence on the reproducibility of PD-15 and RA-910. However, the bone algorithm and very low radiation dose result in overestimation of the extent of emphysema. Lung density measurement by CT is a sensitive marker for quantifying both subtypes of emphysema. A CT protocol with radiation dose down to 16 mAs and a soft or detail reconstruction algorithm is recommended.
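Both densitometric indices are simple functionals of the lung HU histogram. A minimal sketch (without the study's volume adjustment, which additionally corrects for TLV differences between scans):

```python
import numpy as np

def pd15(hu, percentile=15.0):
    """15th percentile density: HU value below which 15% of lung voxels lie."""
    return np.percentile(hu, percentile)

def ra910(hu, threshold=-910.0):
    """Relative area: percentage of lung voxels below the HU threshold."""
    return 100.0 * np.mean(hu < threshold)

rng = np.random.default_rng(4)
lung_hu = rng.normal(-860.0, 40.0, size=200_000)   # toy voxel histogram
print(pd15(lung_hu), ra910(lung_hu))
```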
Snowfall Rate Retrieval using NPP ATMS Passive Microwave Measurements
NASA Technical Reports Server (NTRS)
Meng, Huan; Ferraro, Ralph; Kongoli, Cezar; Wang, Nai-Yu; Dong, Jun; Zavodsky, Bradley; Yan, Banghua; Zhao, Limin
2014-01-01
Passive microwave measurements at certain high frequencies are sensitive to the scattering effect of snow particles and can be utilized to retrieve snowfall properties. Some of the microwave sensors with snowfall-sensitive channels are the Advanced Microwave Sounding Unit (AMSU), the Microwave Humidity Sounder (MHS), and the Advanced Technology Microwave Sounder (ATMS). ATMS is the follow-on sensor to AMSU and MHS. Currently, an AMSU- and MHS-based land snowfall rate (SFR) product runs operationally at NOAA/NESDIS. Based on the AMSU/MHS SFR, an ATMS SFR algorithm has been developed recently. The algorithm performs retrieval in three steps: snowfall detection, retrieval of cloud properties, and estimation of snow particle terminal velocity and snowfall rate. The snowfall detection component utilizes principal component analysis and a logistic regression model. The model employs a combination of temperature and water vapor sounding channels to detect the scattering signal from falling snow and derive the probability of snowfall (Kongoli et al., 2014). In addition, a set of NWP model-based filters is employed to improve the accuracy of snowfall detection. Cloud properties are retrieved using an inversion method with an iteration algorithm and a two-stream radiative transfer model (Yan et al., 2008). A method developed by Heymsfield and Westbrook (2010) is adopted to calculate snow particle terminal velocity. Finally, snowfall rate is computed by numerically solving a complex integral. The ATMS SFR product is validated against radar and gauge snowfall data; the validation shows that the ATMS algorithm outperforms the AMSU/MHS SFR.
Significant Advances in the AIRS Science Team Version-6 Retrieval Algorithm
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena; Molnar, Gyula
2012-01-01
AIRS/AMSU is the state-of-the-art infrared and microwave atmospheric sounding system flying aboard EOS Aqua. The Goddard DISC has analyzed AIRS/AMSU observations, covering the period September 2002 until the present, using the AIRS Science Team Version-5 retrieval algorithm. These products have been used by many researchers to make significant advances in both climate and weather applications. The AIRS Science Team Version-6 retrieval, which will become operational in mid-2012, contains many significant theoretical and practical improvements compared to Version-5 which should further enhance the utility of AIRS products for both climate and weather applications. In particular, major changes have been made with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the retrieval procedure; 3) compute Outgoing Longwave Radiation; and 4) determine Quality Control. This paper describes these advances found in the AIRS Version-6 retrieval algorithm and demonstrates the improvement of AIRS Version-6 products compared to those obtained using Version-5.
Using ACIS on the Chandra X-ray Observatory as a Particle Radiation Monitor II
NASA Technical Reports Server (NTRS)
Grant, C. E.; Ford, P. G.; Bautz, M. W.; ODell, S. L.
2012-01-01
The Advanced CCD Imaging Spectrometer is an instrument on the Chandra X-ray Observatory. CCDs are vulnerable to radiation damage, particularly by soft protons in the radiation belts and solar storms. The Chandra team has implemented procedures to protect ACIS during high-radiation events including autonomous protection triggered by an on-board radiation monitor. Elevated temperatures have reduced the effectiveness of the on-board monitor. The ACIS team has developed an algorithm which uses data from the CCDs themselves to detect periods of high radiation and a flight software patch to apply this algorithm is currently active on-board the instrument. In this paper, we explore the ACIS response to particle radiation through comparisons to a number of external measures of the radiation environment. We hope to better understand the efficiency of the algorithm as a function of the flux and spectrum of the particles and the time-profile of the radiation event.
A Predictor Analysis Framework for Surface Radiation Budget Reprocessing Using Design of Experiments
NASA Astrophysics Data System (ADS)
Quigley, Patricia Allison
Earth's Radiation Budget (ERB) is an accounting of all incoming energy from the sun and outgoing energy reflected and radiated to space by earth's surface and atmosphere. The National Aeronautics and Space Administration (NASA)/Global Energy and Water Cycle Experiment (GEWEX) Surface Radiation Budget (SRB) project produces and archives long-term datasets representative of this energy exchange system on a global scale. The data are comprised of the longwave and shortwave radiative components of the system and are algorithmically derived from satellite and atmospheric assimilation products and acquired atmospheric data. The data are stored as 3-hourly, daily, monthly/3-hourly, and monthly averages of 1° x 1° grid cells. Input parameters used by the algorithms are a key source of variability in the resulting output data sets. Sensitivity studies have been conducted to estimate the effects this variability has on the output data sets using linear techniques. This entails varying one input parameter at a time while keeping all others constant, or increasing all input parameters by equal random percentages, in effect changing input values for every cell, for every three-hour period, and for every day in each month. This equates to almost 11 million independent changes without ever taking into consideration the interactions or dependencies among the input parameters. A more comprehensive method is proposed here for evaluating the shortwave algorithm to identify both the input parameters and the parameter interactions that most significantly affect the output data. This research utilized designed experiments that systematically and simultaneously varied all of the input parameters of the shortwave algorithm. A D-optimal design of experiments (DOE) was chosen to accommodate the 14 types of atmospheric properties computed by the algorithm and to reduce the number of trials required by a full factorial study from millions to 128. A modified version of the algorithm was made available for testing such that the algorithm's global calculations were tuned to accept information for a single temporal and spatial point and for one month of averaged data. The points were drawn from each of four atmospherically distinct regions: the Amazon Rainforest, the Sahara Desert, the Indian Ocean, and Mt. Everest. The same design was used for all of the regions. Least-squares multiple regression analysis of the results of the modified algorithm identified those parameters and parameter interactions that most significantly affected the output products. It was found that the cosine of the solar zenith angle was the strongest influence on the output data in all four regions. The interaction of cosine solar zenith angle and cloud fraction had the strongest influence on the output data in the Amazon, Sahara Desert, and Mt. Everest regions, while the interaction of cloud fraction and cloudy shortwave radiance most significantly affected output data in the Indian Ocean region. Second-order response models were built using the resulting regression coefficients. A Monte Carlo simulation of each model extended the probability distribution beyond the initial design trials to quantify variability in the modeled output data.
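The analysis pipeline (fit a second-order response model to the designed trials, then Monte Carlo the fitted surface) can be sketched generically. The factors, design size, and toy response below are illustrative, not the study's 14 atmospheric properties or its D-optimal design:

```python
import numpy as np

rng = np.random.default_rng(7)
k, n_trials = 3, 32                          # 3 illustrative factors, 32 runs
X = rng.choice([-1.0, 1.0], size=(n_trials, k))   # coded factor levels

def design_matrix(X):
    """Second-order model columns: intercept, main effects, two-factor
    interactions, and pure quadratics."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

# Toy "algorithm output" with one main effect and one strong interaction.
y = 3.0 + 2.0 * X[:, 0] + 1.5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.1, n_trials)

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

# Monte Carlo the fitted response surface over the coded factor space.
Xmc = rng.uniform(-1.0, 1.0, size=(100_000, k))
ymc = design_matrix(Xmc) @ beta
print(beta.round(2), ymc.mean(), ymc.std())
```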
Optimization-based image reconstruction from sparse-view data in offset-detector CBCT
NASA Astrophysics Data System (ADS)
Bian, Junguo; Wang, Jiong; Han, Xiao; Sidky, Emil Y.; Shao, Lingxiong; Pan, Xiaochuan
2013-01-01
The field of view (FOV) of a cone-beam computed tomography (CBCT) unit in a single-photon emission computed tomography (SPECT)/CBCT system can be increased by offsetting the CBCT detector. Analytic-based algorithms have been developed for image reconstruction from data collected at a large number of densely sampled views in offset-detector CBCT. However, the radiation dose involved in a large number of projections can be of health concern to the imaged subject. CBCT imaging dose can be reduced by lowering the number of projections. As analytic-based algorithms are unlikely to reconstruct accurate images from sparse-view data, we investigate and characterize in this work optimization-based algorithms, including an adaptive steepest descent-weighted projection onto convex sets (ASD-WPOCS) algorithm, for image reconstruction from sparse-view data collected in offset-detector CBCT. Using simulated data and real data collected from a physical pelvis phantom and a patient, we verify and characterize properties of the algorithms under study. Results of our study suggest that optimization-based algorithms such as ASD-WPOCS may be developed to yield images of potential utility from a number of projections substantially smaller than that used currently in clinical SPECT/CBCT imaging, thus leading to a dose reduction in CBCT imaging.
Aerosol physical properties from satellite horizon inversion
NASA Technical Reports Server (NTRS)
Gray, C. R.; Malchow, H. L.; Merritt, D. C.; Var, R. E.; Whitney, C. K.
1973-01-01
The feasibility of determining the physical properties of aerosols globally in the altitude region of 10 to 100 km from a satellite horizon-scanning experiment is investigated. The investigation utilizes a horizon inversion technique previously developed and extended. Aerosol physical properties such as number density, size distribution, and the real and imaginary components of the index of refraction are demonstrated to be invertible in the aerosol size ranges (0.01-0.1 microns), (0.1-1.0 microns), and (1.0-10 microns). Extensions of previously developed radiative transfer models and recursive inversion algorithms are displayed.
Microwave-based medical diagnosis using particle swarm optimization algorithm
NASA Astrophysics Data System (ADS)
Modiri, Arezoo
This dissertation proposes and investigates a novel architecture intended for microwave-based medical diagnosis (MBMD). Furthermore, this investigation proposes novel modifications of the particle swarm optimization algorithm for achieving enhanced convergence performance. MBMD has been investigated through a variety of innovative techniques in the literature since the 1990s and has shown significant promise in early detection of some specific health threats. In comparison to X-ray- and gamma-ray-based diagnostic tools, MBMD does not expose patients to ionizing radiation; and due to the maturity of microwave technology, it lends itself to miniaturization of the supporting systems. This modality has been shown to be effective in detecting breast malignancy, and hence, this study focuses on the same modality. A novel radiator device and detection technique is proposed and investigated in this dissertation. As expected, hardware design and implementation are of paramount importance in such a study, and a good deal of research, analysis, and evaluation has been done in this regard, which is reported in the ensuing chapters of this dissertation. An important element of any detection system is the algorithm used for extracting signatures. Herein, the strong intrinsic potential of swarm-intelligence-based algorithms in solving complicated electromagnetic problems is brought to bear. This task is accomplished by addressing both mathematical and electromagnetic problems. These problems are called benchmark problems throughout this dissertation, since they have known answers. After evaluating the performance of the algorithm on the chosen benchmark problems, the algorithm is applied to the MBMD tumor detection problem. The chosen benchmark problems have already been tackled by solution techniques other than the particle swarm optimization (PSO) algorithm, the results of which can be found in the literature. However, due to the relatively high level of complexity and randomness inherent in the selection of electromagnetic benchmark problems, the literature has tended to resort to oversimplification in order to arrive at reasonable solutions when utilizing analytical techniques. Here, an attempt has been made to avoid oversimplification when using the proposed swarm-based optimization algorithms.
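For reference, the canonical global-best PSO update that such modifications build on is sketched below; the dissertation's specific enhancements are not reproduced here, and all swarm settings are illustrative:

```python
import numpy as np

def pso(obj, bounds, n=30, w=0.72, c1=1.49, c2=1.49, iters=200, seed=5):
    """Canonical global-best PSO with inertia weight w and cognitive/social
    coefficients c1/c2."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n, len(bounds)))          # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest, pcost = x.copy(), np.array([obj(p) for p in x])
    g = pbest[np.argmin(pcost)]                        # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([obj(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[np.argmin(pcost)]
    return g, pcost.min()

best, c = pso(lambda p: np.sum(p ** 2), bounds=[(-5, 5)] * 4)
print(best, c)
```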
Shi, Yunzhou; Manco, Megan; Moyal, Dominique; Huppert, Gil; Araki, Hitoshi; Banks, Anthony; Joshi, Hemant; McKenzie, Richard; Seewald, Alex; Griffin, Guy; Sen-Gupta, Ellora; Wright, Donald; Bastien, Philippe; Valceschini, Florent; Seité, Sophie; Wright, John A; Ghaffari, Roozbeh; Rogers, John; Balooch, Guive; Pielak, Rafal M
2018-01-01
Excessive ultraviolet (UV) radiation induces acute and chronic effects on the skin, eye and immune system. Personalized monitoring of UV radiation is thus paramount to measure the extent of personal sun exposure, which could vary with environment, lifestyle, and sunscreen use. Here, we demonstrate an ultralow modulus, stretchable, skin-mounted UV patch that measures personal UV doses. The patch contains functional layers of ultrathin stretchable electronics and a photosensitive patterned dye that reacts to UV radiation. Color changes in the photosensitive dyes correspond to UV radiation intensity and are analyzed with a smartphone camera. A software application has feature recognition, lighting condition correction, and quantification algorithms that detect and quantify changes in color. These color changes are then correlated with corresponding shifts in UV dose, and compared to existing UV dose risk levels. The soft mechanics of the UV patch allow for multi-day wear in the presence of sunscreen and water. Two evaluation studies serve to demonstrate the utility of the UV patch during daily activities with and without sunscreen application.
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Choi, Sang H.; Chrisman, Dan A., Jr.; Samms, Richard W.
1987-01-01
Dynamic models and computer simulations were developed for the radiometric sensors utilized in the Earth Radiation Budget Experiment (ERBE). The models were developed to understand performance, improve measurement accuracy by updating model parameters and provide the constants needed for the count conversion algorithms. Model simulations were compared with the sensor's actual responses demonstrated in the ground and inflight calibrations. The models consider thermal and radiative exchange effects, surface specularity, spectral dependence of a filter, radiative interactions among an enclosure's nodes, partial specular and diffuse enclosure surface characteristics and steady-state and transient sensor responses. Relatively few sensor nodes were chosen for the models since there is an accuracy tradeoff between increasing the number of nodes and approximating parameters such as the sensor's size, material properties, geometry, and enclosure surface characteristics. Given that the temperature gradients within a node and between nodes are small enough, approximating with only a few nodes does not jeopardize the accuracy required to perform the parameter estimates and error analyses.
NASA Astrophysics Data System (ADS)
Mladenova, I. E.; Jackson, T. J.; Bindlish, R.; Njoku, E. G.; Chan, S.; Cosh, M. H.
2012-12-01
We are currently evaluating potential improvements to the standard NASA global soil moisture product derived using observations acquired from the Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E). A major component of this effort is a thorough review of the theoretical basis of available passive-based soil moisture retrieval algorithms suitable for operational implementation. Several agencies provide routine soil moisture products. Our research focuses on five well-established techniques that are capable of carrying out global retrieval using the same AMSR-E data set as the NASA approach (i.e., X-band brightness temperature data). In general, most passive-based algorithms include two major components: radiative transfer modeling, which provides the smooth-surface reflectivity properties of the soil surface, and a complex dielectric constant model of the soil-water mixture. These two components are related through the Fresnel reflectivity equations. Furthermore, the land surface temperature, vegetation, roughness, and soil properties need to be adequately accounted for in the radiative transfer and dielectric modeling. All of the approaches we have examined follow the general data processing flow described above; however, the actual solutions as well as the final products can be very different. This is primarily a result of the assumptions, the number of sensor variables utilized, and the selected ancillary data sets and approaches used to account for the effect of the additional geophysical variables impacting the measured signal. The operational NASA AMSR-E-based retrievals have been shown to have a dampened temporal response and sensitivity range. Two possible approaches to addressing these issues are being evaluated: enhancing the theoretical basis of the existing algorithm, if feasible, or directly adjusting the dynamic range of the final soil moisture product. Both of these aspects are being actively investigated and will be discussed in our talk. Improving the quality and reliability of the global soil moisture product would result in greater acceptance and utilization in the related applications. USDA is an equal opportunity provider and employer.
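The Fresnel component common to these algorithms is easy to sketch: given a complex soil dielectric constant and the sensor incidence angle, compute the smooth-surface reflectivity at both polarizations. The dielectric values below are illustrative placeholders rather than outputs of a specific soil dielectric model:

```python
import numpy as np

def fresnel_reflectivity(eps, theta_deg):
    """Smooth-surface power reflectivity at H and V polarization for a
    half-space of complex relative permittivity eps."""
    th = np.radians(theta_deg)
    c, root = np.cos(th), np.sqrt(eps - np.sin(th) ** 2)
    r_h = (c - root) / (c + root)
    r_v = (eps * c - root) / (eps * c + root)
    return abs(r_h) ** 2, abs(r_v) ** 2

# Wetter soil -> larger dielectric constant -> stronger reflection,
# i.e., lower emissivity e = 1 - reflectivity for a smooth surface.
for eps in (4 + 0.5j, 15 + 2j):            # dry vs. moist soil (illustrative)
    print(eps, fresnel_reflectivity(eps, 55.0))   # AMSR-E views near 55 deg
```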
Habitat Design Optimization and Analysis
NASA Technical Reports Server (NTRS)
SanSoucie, Michael P.; Hull, Patrick V.; Tinker, Michael L.
2006-01-01
Long-duration surface missions to the Moon and Mars will require habitats for the astronauts. The materials chosen for the habitat walls play a direct role in the protection against the harsh environments found on the surface. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Advanced optimization techniques are necessary for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat design optimization tool utilizing genetic algorithms has been developed. Genetic algorithms use a "survival of the fittest" philosophy, where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multi-objective formulation of structural analysis, heat loss, radiation protection, and meteoroid protection. This paper presents the research and development of this tool.
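A minimal genetic-algorithm loop of the kind described might look like the sketch below. The fitness function is a toy stand-in for the tool's multi-objective structural, thermal, radiation, and meteoroid models, and all parameters here are hypothetical:

```python
import random

def fitness(layers):
    # Toy trade-off: thicker walls protect better but cost launch mass.
    # The real tool evaluates structural, thermal, radiation, and
    # meteoroid objectives instead of this placeholder.
    mass = sum(layers)
    protection = sum(t ** 0.5 for t in layers)
    return protection - 0.3 * mass

def evolve(pop_size=50, n_layers=4, generations=100):
    pop = [[random.uniform(0.1, 5.0) for _ in range(n_layers)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Tournament selection: fitter individuals reproduce more often.
            a, b = random.sample(pop, 2)
            return a if fitness(a) > fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = pick(), pick()
            cut = random.randrange(1, n_layers)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.1:                # random mutation
                i = random.randrange(n_layers)
                child[i] = random.uniform(0.1, 5.0)
            children.append(child)
        pop = children
    return max(pop, key=fitness)

print(evolve())
```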
Algorithmic vs. finite difference Jacobians for infrared atmospheric radiative transfer
NASA Astrophysics Data System (ADS)
Schreier, Franz; Gimeno García, Sebastián; Vasquez, Mayte; Xu, Jian
2015-10-01
Jacobians, i.e. partial derivatives of the radiance and transmission spectrum with respect to the atmospheric state parameters to be retrieved from remote sensing observations, are important for the iterative solution of the nonlinear inverse problem. Finite difference Jacobians are easy to implement, but computationally expensive and possibly of dubious quality; on the other hand, analytical Jacobians are accurate and efficient, but the implementation can be quite demanding. GARLIC, our "Generic Atmospheric Radiation Line-by-line Infrared Code", utilizes algorithmic differentiation (AD) techniques to implement derivatives w.r.t. atmospheric temperature and molecular concentrations. In this paper, we describe our approach for differentiation of the high resolution infrared and microwave spectra and provide an in-depth assessment of finite difference approximations using "exact" AD Jacobians as a reference. The results indicate that the "standard" two-point finite differences with 1 K and 1% perturbation for temperature and volume mixing ratio, respectively, can exhibit substantial errors, and central differences are significantly better. However, these deviations do not transfer into the truncated singular value decomposition solution of a least squares problem. Nevertheless, AD Jacobians are clearly recommended because of the superior speed and accuracy.
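The reported behavior of finite differences can be reproduced on a toy Beer-Lambert transmission, whose analytic derivative plays the role of the "exact" AD Jacobian; the function and step sizes are illustrative, not GARLIC internals:

```python
import numpy as np

# Toy monochromatic transmission T(q) = exp(-k*q), with exact Jacobian
# dT/dq = -k*exp(-k*q) standing in for the AD reference.
k, q = 2.0, 1.0
T = lambda q: np.exp(-k * q)
exact = -k * np.exp(-k * q)

for rel in (0.1, 0.01):                    # 10% and 1% perturbations
    h = rel * q
    one_sided = (T(q + h) - T(q)) / h      # "standard" two-point difference
    central = (T(q + h) - T(q - h)) / (2 * h)
    print(rel,
          abs(one_sided - exact) / abs(exact),   # O(h) error
          abs(central - exact) / abs(exact))     # O(h^2) error
```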
Nagata, Koichi; Pethel, Timothy D
2017-07-01
Although the anisotropic analytical algorithm (AAA) and Acuros XB (AXB) are both radiation dose calculation algorithms that take into account the heterogeneity within the radiation field, Acuros XB is inherently more accurate. The purpose of this retrospective method comparison study was to compare them and evaluate the dose discrepancy within the planning target volume (PTV). Radiation therapy (RT) plans of 11 dogs with intranasal tumors treated by radiation therapy at the University of Georgia were evaluated. All dogs were planned for intensity-modulated radiation therapy using nine equally spaced coplanar X-ray beams, and the dose was calculated with the anisotropic analytical algorithm. The same plan with the same monitor units was then recalculated using Acuros XB for comparison. Each dog's planning target volume was separated into air, bone, and tissue and evaluated. The mean dose to the planning target volume estimated by Acuros XB was 1.3% lower. It was 1.4% higher for air, 3.7% lower for bone, and 0.9% lower for tissue. The volume of the planning target volume covered by the prescribed dose decreased by 21% when Acuros XB was used, due to increased dose heterogeneity within the planning target volume. The anisotropic analytical algorithm relatively underestimates the dose heterogeneity and relatively overestimates the dose to the bone and tissue within the planning target volume for radiation therapy planning of canine intranasal tumors. This can be clinically significant, especially if tumor cells are present within the bone, because it may result in relative underdosing of the tumor. © 2017 American College of Veterinary Radiology.
The Langley Parameterized Shortwave Algorithm (LPSA) for Surface Radiation Budget Studies. 1.0
NASA Technical Reports Server (NTRS)
Gupta, Shashi K.; Kratz, David P.; Stackhouse, Paul W., Jr.; Wilber, Anne C.
2001-01-01
An efficient algorithm was developed during the late 1980s and early 1990s by W. F. Staylor at NASA/LaRC for the purpose of deriving shortwave surface radiation budget parameters on a global scale. While the algorithm produced results in good agreement with observations, the lack of proper documentation resulted in weak acceptance by the science community. The primary purpose of this report is to develop detailed documentation of the algorithm. In the process, the algorithm was modified whenever discrepancies were found between the algorithm and its referenced literature sources. In some instances, assumptions made in the algorithm could not be justified and were replaced with those that were justifiable. The algorithm uses satellite and operational meteorological data for inputs. Most of the original data sources have been replaced by more recent, higher quality data sources, and fluxes are now computed on a higher spatial resolution. Many more changes to the basic radiation scheme and meteorological inputs have been proposed to improve the algorithm and make the product more useful for new research projects. Because of the many changes already in place and more planned for the future, the algorithm has been renamed the Langley Parameterized Shortwave Algorithm (LPSA).
Turbulent Flow past High Temperature Surfaces
NASA Astrophysics Data System (ADS)
Mehmedagic, Igbal; Thangam, Siva; Carlucci, Pasquale; Buckley, Liam; Carlucci, Donald
2014-11-01
Flow over high-temperature surfaces subject to wall heating is analyzed with applications to projectile design. In this study, computations are performed using an anisotropic Reynolds-stress model to study flow past surfaces that are subject to radiative flux. The model utilizes a phenomenological treatment of the energy spectrum and the diffusivities of momentum and heat to include the effects of wall heat transfer and radiative exchange. The radiative transport is modeled using the Eddington approximation, including the weighted effect of the non-grayness of the fluid. The time-averaged equations of motion and energy are solved using the modeled form of the transport equations for turbulence kinetic energy and the scalar form of turbulence dissipation with an efficient finite-volume algorithm. The model is applied to available test cases to validate its predictive capabilities for capturing the effects of wall heat transfer. Computational results are compared with experimental data available in the literature. Applications involving the design of projectiles are summarized. Funded in part by U.S. Army, ARDEC.
NASA Astrophysics Data System (ADS)
Deshpande, Ruchi; DeMarco, John; Liu, Brent J.
2015-03-01
We have developed a comprehensive DICOM RT specific database of retrospective treatment planning data for radiation therapy of head and neck cancer. Further, we have designed and built an imaging informatics module that utilizes this database to perform data mining. The end-goal of this data mining system is to provide radiation therapy decision support for incoming head and neck cancer patients, by identifying best practices from previous patients who had the most similar tumor geometries. Since the performance of such systems often depends on the size and quality of the retrospective database, we have also placed an emphasis on developing infrastructure and strategies to encourage data sharing and participation from multiple institutions. The infrastructure and decision support algorithm have both been tested and evaluated with 51 sets of retrospective treatment planning data of head and neck cancer patients. We will present the overall design and architecture of our system, an overview of our decision support mechanism as well as the results of our evaluation.
NASA Technical Reports Server (NTRS)
Xiang, Xuwu; Smith, Eric A.; Tripoli, Gregory J.
1992-01-01
A hybrid statistical-physical retrieval scheme is explored which combines a statistical approach with an approach based on the development of cloud-radiation models designed to simulate precipitating atmospheres. The algorithm employs the detailed microphysical information from a cloud model as input to a radiative transfer model which generates a cloud-radiation model database. Statistical procedures are then invoked to objectively generate an initial guess composite profile data set from the database. The retrieval algorithm has been tested for a tropical typhoon case using Special Sensor Microwave/Imager (SSM/I) data and has shown satisfactory results.
NASA Technical Reports Server (NTRS)
Menzel, Paul; Prins, Elaine
1995-01-01
This study attempts to assess the extent of burning and associated aerosol transport regimes in South America and the South Atlantic using geostationary satellite observations, in order to explore the possible roles of biomass burning in climate change and more directly in atmospheric chemistry and radiative transfer processes. Modeling and analysis efforts have suggested that the direct and indirect radiative effects of aerosols from biomass burning may play a major role in the radiative balance of the earth and are an important factor in climate change calculations. One of the most active regions of biomass burning is located in South America, associated with deforestation in the selva (forest), grassland management, and other agricultural practices. As part of the NASA Aerosol Interdisciplinary Program, we are utilizing GOES-7 (1988) and GOES-8 (1995) visible and multispectral infrared data (4, 11, and 12 microns) to document daily biomass burning activity in South America and to distinguish smoke/aerosols from other multi-level clouds and low-level moisture. This study catalogues the areal extent and transport of smoke/aerosols throughout the region and over the Atlantic Ocean for the 1988 (July-September) and 1995 (June-October) biomass burning seasons. The smoke/haze cover estimates are compared to the locations of fires to determine the source and verify that the haze is actually associated with biomass burning activities. The temporal resolution of the GOES data (half-hourly in South America) makes it possible to determine the prevailing circulation and transport of aerosols by considering a series of visible and infrared images and tracking the motion of smoke, haze, and adjacent clouds. The study area extends from 40 to 70 deg W and 0 to 40 deg S, with aerosol coverage extending over the Atlantic Ocean when necessary. Fire activity is estimated with the GOES Automated Biomass Burning Algorithm (ABBA). To date, our efforts have focused on GOES-7 and GOES-8 ABBA development, algorithm development for aerosol monitoring, data acquisition and archiving, and participation in the SCAR-C and SCAR-B field programs, which have provided valuable information for algorithm testing and validation. Implementation of the initial version of the GOES-8 ABBA on case studies in North, Central, and South America has demonstrated the improved capability for monitoring diurnal fire activity and smoke/aerosol transport with the GOES-8 throughout the Western Hemisphere.
Determination of cloud parameters from infrared sounder data
NASA Technical Reports Server (NTRS)
Yeh, H.-Y. M.
1984-01-01
The World Climate Research Programme (WCRP) plan is concerned with the need to develop a uniform global cloud climatology as part of a broad research program on climate processes. The International Satellite Cloud Climatology Project (ISCCP) has been approved as the first project of the WCRP. The ISCCP has the basic objective to collect and analyze satellite radiance data to infer the global distribution of cloud radiative properties in order to improve the modeling of cloud effects on climate. Research is conducted to explore an algorithm for retrieving cloud properties by utilizing the available infrared sounder data from polar-orbiting satellites. A numerical method is developed for computing cloud top heights, amount, and emissivity on the basis of a parameterized infrared radiative transfer equation for cloudy atmospheres. Theoretical studies were carried out by considering a synthetic atmosphere.
NASA Astrophysics Data System (ADS)
Hashimoto, H.; Wang, W.; Ganguly, S.; Li, S.; Michaelis, A.; Higuchi, A.; Takenaka, H.; Nemani, R. R.
2017-12-01
New geostationary sensors such as the AHI (Advanced Himawari Imager on Himawari-8) and the ABI (Advanced Baseline Imager on GOES-16) have the potential to advance ecosystem modeling, particularly of diurnally varying phenomena, through frequent observations. These sensors have channels similar to those of MODIS (MODerate resolution Imaging Spectroradiometer), allowing us to utilize the knowledge and experience gained in MODIS data processing. Here, we developed a sub-hourly Gross Primary Production (GPP) algorithm, leveraging the MOD17 GPP algorithm. We ran the model at 1-km resolution over Japan and Australia using geo-corrected AHI data. Solar radiation was calculated directly from AHI using a neural network technique. The other necessary climate data were derived from weather stations and other satellite data. The sub-hourly estimates of GPP were first compared with ground-measured GPP at various Fluxnet sites. We also compared the AHI GPP with MOD17 GPP, and analyzed the differences in spatial patterns and the effect of diurnal changes in climate forcing. The sub-hourly GPP products require massive storage and strong computational power; we use the NEX (NASA Earth Exchange) facility to produce them. This GPP algorithm can be applied to other geostationary satellites, including GOES-16, in the future.
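For orientation, a light-use-efficiency GPP calculation in the spirit of MOD17 can be sketched as below. The maximum efficiency and the temperature and VPD ramp bounds are placeholder values; in MOD17 they are biome-specific lookup-table entries, and the abstract does not state which values the AHI algorithm adopts:

```python
def ramp(x, lo, hi):
    """Linear ramp from 0 at lo to 1 at hi, clamped to [0, 1]."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def gpp_lue(sw_rad_mj, fpar, tmin_c, vpd_pa,
            lue_max=0.001165,            # kg C per MJ APAR (illustrative)
            tmin_lo=-8.0, tmin_hi=11.4,  # deg C ramp bounds (illustrative)
            vpd_lo=650.0, vpd_hi=3100.0):
    """MOD17-style GPP per time step (kg C m-2):
    GPP = LUEmax * f(Tmin) * f(VPD) * FPAR * PAR,
    with PAR taken as 45% of incident shortwave radiation (MJ m-2)."""
    par = 0.45 * sw_rad_mj
    f_tmin = ramp(tmin_c, tmin_lo, tmin_hi)       # cold temperatures scale GPP down
    f_vpd = 1.0 - ramp(vpd_pa, vpd_lo, vpd_hi)    # high VPD scales GPP down
    return lue_max * f_tmin * f_vpd * fpar * par

print(gpp_lue(sw_rad_mj=1.2, fpar=0.7, tmin_c=15.0, vpd_pa=900.0))
```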
Guan, Fada; Johns, Jesse M; Vasudevan, Latha; Zhang, Guoqing; Tang, Xiaobin; Poston, John W; Braby, Leslie A
2015-06-01
Coincident counts can be observed in experimental radiation spectroscopy. Accurate quantification of the radiation source requires the detection efficiency of the spectrometer, which is often experimentally determined. However, Monte Carlo analysis can be used to supplement experimental approaches to determine the detection efficiency a priori. The traditional Monte Carlo method overestimates the detection efficiency as a result of omitting coincident counts caused mainly by multiple cascade source particles. In this study, a novel "multi-primary coincident counting" algorithm was developed using the Geant4 Monte Carlo simulation toolkit. A high-purity Germanium detector for ⁶⁰Co gamma-ray spectroscopy problems was accurately modeled to validate the developed algorithm. The simulated pulse height spectrum agreed well qualitatively with the measured spectrum obtained using the high-purity Germanium detector. The developed algorithm can be extended to other applications, with a particular emphasis on challenging radiation fields, such as counting multiple types of coincident radiations released from nuclear fission or used nuclear fuel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S; Sen, Satyabrata; Berry, M. L.
The Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) program supported the development of networks of commercial-off-the-shelf (COTS) radiation counters for detecting, localizing, and identifying low-level radiation sources. Under this program, a series of indoor and outdoor tests were conducted with multiple source strengths and types, different background profiles, and various types of source and detector movements. Following the tests, network algorithms were replayed in various re-constructed scenarios using sub-networks. These measurements and algorithm traces together provide a rich collection of highly valuable datasets for testing current and next-generation radiation network algorithms, including those (to be) developed by broader R&D communities such as distributed detection, information fusion, and sensor networks. From this multi-terabyte IRSS database, we distilled and packaged the first batch of canonical datasets for public release. They include measurements from ten indoor and two outdoor tests which represent increasingly challenging baseline scenarios for robustly testing radiation network algorithms.
Estimation of UV index in the clear-sky using OMI PROFOZ and AERONET data
NASA Astrophysics Data System (ADS)
Lee, H.; Kim, J.; Jeong, U.
2016-12-01
Because of its strong influence on human health and ecosystems, continuous monitoring of surface-level ultraviolet (UV) radiation is important. The UV index (UVI) is a simple parameter expressing the strength of surface UV radiation, and it has therefore been widely used for UV monitoring. In this work, we develop our own retrieval algorithm for improved UVI estimation. The amount of UVA (320-400 nm) and UVB (290-320 nm) radiation at the Earth's surface depends on the extent of Rayleigh scattering by atmospheric gas molecules, radiative absorption by ozone, radiative scattering by clouds, and both absorption and scattering by airborne aerosols. Careful treatment of these factors is therefore essential to the UVI estimation process. In this study, we first estimate the UVI at Seoul in a clear-sky atmosphere and then validate it against UVI from Brewer spectrophotometer measurements at Yonsei University in Seoul. We use the Vector Linearized Discrete Ordinate Radiative Transfer (VLIDORT) model version 2.6 for the UVI calculation. To account for ozone and aerosol influences in a realistic setting, we input ozone and temperature profiles from the Ozone Monitoring Instrument (OMI) Aura vertical profile ozone (PROFOZ) data, and aerosol properties from the AErosol RObotic NETwork (AERONET) measurements at Seoul. The inter-comparison, performed for 2011, 2012, and 2014, yields a high correlation coefficient (R=0.95) under clear-sky conditions, but the retrieval slightly overestimates the Brewer UVI under high-AOD conditions. Because our UVI algorithm does not account for absorbing aerosols near the surface, it systematically overestimates surface UV irradiances. We therefore also investigate the effect of absorbing aerosols on the amount of clear-sky UV irradiance over East Asia.
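Once a radiative transfer run has produced surface spectral irradiance, the final UVI step is a weighted integral. The sketch below assumes the commonly cited McKinlay-Diffey (CIE) erythemal weighting, which the abstract does not explicitly name:

```python
import numpy as np

def erythemal_weight(lam):
    """McKinlay-Diffey (CIE) erythemal action spectrum; lam in nm."""
    if lam <= 298.0:
        return 1.0
    if lam <= 328.0:
        return 10.0 ** (0.094 * (298.0 - lam))
    return 10.0 ** (0.015 * (139.0 - lam))

def uv_index(wavelengths_nm, irradiance_w_m2_nm):
    """UVI = 40 m^2/W times the erythemally weighted spectral
    irradiance integrated over the UV band (here 290-400 nm)."""
    w = np.array([erythemal_weight(l) for l in wavelengths_nm])
    weighted = np.trapz(w * irradiance_w_m2_nm, wavelengths_nm)
    return 40.0 * weighted

# In practice the irradiance would come from the VLIDORT run; this
# flat toy spectrum only demonstrates the call.
lam = np.arange(290.0, 400.5, 0.5)
print(uv_index(lam, np.full(lam.size, 0.5)))
```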
King, David A.; Bachelet, Dominique M.; Symstad, Amy J.; Ferschweiler, Ken; Hobbins, Michael
2014-01-01
The potential evapotranspiration (PET) that would occur with unlimited plant access to water is a central driver of simulated plant growth in many ecological models. PET is influenced by solar and longwave radiation, temperature, wind speed, and humidity, but it is often modeled as a function of temperature alone. This approach can cause biases in projections of future climate impacts, in part because it confounds the effects of warming due to increased greenhouse gases with those that would be caused by increased radiation from the sun. We developed an algorithm for linking PET to extraterrestrial solar radiation (incoming top-of-atmosphere solar radiation), as well as temperature and atmospheric water vapor pressure, and incorporated this algorithm into the dynamic global vegetation model MC1. We tested the new algorithm for the Northern Great Plains, USA, whose remaining grasslands are threatened by continuing woody encroachment. Both the new and the standard temperature-dependent MC1 algorithm adequately simulated current PET, as compared to the more rigorous PenPan model of Rotstayn et al. (2006). However, compared to the standard algorithm, the new algorithm projected a much more gradual increase in PET over the 21st century for three contrasting future climates. This difference led to lower simulated drought effects and hence greater woody encroachment with the new algorithm, illustrating the importance of more rigorous calculations of PET in ecological models dealing with climate change.
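Extraterrestrial solar radiation itself is a closed-form function of latitude and day of year. The sketch below uses the standard FAO-56 formulation purely for orientation; it is an assumed stand-in, not necessarily the form adopted inside MC1:

```python
import numpy as np

def extraterrestrial_radiation(lat_deg, doy):
    """Daily top-of-atmosphere solar radiation (MJ m-2 d-1),
    standard FAO-56 formulation."""
    gsc = 0.0820                                  # solar constant, MJ m-2 min-1
    phi = np.radians(lat_deg)
    dr = 1 + 0.033 * np.cos(2 * np.pi * doy / 365)        # earth-sun distance factor
    delta = 0.409 * np.sin(2 * np.pi * doy / 365 - 1.39)  # solar declination
    ws = np.arccos(-np.tan(phi) * np.tan(delta))          # sunset hour angle
    return (24 * 60 / np.pi) * gsc * dr * (
        ws * np.sin(phi) * np.sin(delta)
        + np.cos(phi) * np.cos(delta) * np.sin(ws))

print(extraterrestrial_radiation(44.0, 180))  # Northern Great Plains, midsummer
```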
Temporally separating Cherenkov radiation in a scintillator probe exposed to a pulsed X-ray beam.
Archer, James; Madden, Levi; Li, Enbang; Carolan, Martin; Petasecca, Marco; Metcalfe, Peter; Rosenfeld, Anatoly
2017-10-01
Cherenkov radiation is generated in optical systems exposed to ionising radiation. In water or plastic devices, if the incident radiation has components with high enough energy (for example, electrons or positrons with energy greater than 175 keV), Cherenkov radiation will be generated. A scintillator dosimeter that collects optical light guided by optical fibre will have Cherenkov radiation generated throughout the length of fibre exposed to the radiation field, which compromises the signal. We present a novel algorithm to separate the Cherenkov radiation signal that requires only a single probe, provided the radiation source is pulsed, such as a linear accelerator in external beam radiation therapy. We use a slow scintillator (BC-444) that, in a constant beam of radiation, reaches peak light output after 1 microsecond, while the Cherenkov signal is detected nearly instantly. This allows our algorithm to separate the scintillator signal from the Cherenkov signal. The relative beam profile and depth dose of a linear accelerator 6 MV X-ray field were reconstructed using the algorithm. The optimisation method improved the fit to the ionisation chamber data and improved the reliability of the measurements. The algorithm was able to remove 74% of the Cherenkov light at the expense of only 1.5% of the scintillation light. Further characterisation of the Cherenkov radiation signal has the potential to improve the results and allow this method to be used as a simpler optical fibre dosimeter for quality assurance in external beam therapy. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
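One way to realize such a temporal separation, assuming the prompt (Cherenkov-like) and slow (scintillation-like) pulse shapes have been characterized beforehand, is a two-template least-squares fit. This is a sketch of the idea with invented templates and noise, not the published algorithm:

```python
import numpy as np

def separate(signal, prompt_tpl, scint_tpl):
    """Least-squares split of a measured pulse into a prompt
    (Cherenkov-like) and a slow (scintillation-like) component;
    returns the two amplitudes."""
    A = np.column_stack([prompt_tpl, scint_tpl])
    (a, b), *_ = np.linalg.lstsq(A, signal, rcond=None)
    return a, b

# Toy templates on a 4 us window sampled every 10 ns: Cherenkov tracks
# the linac pulse gate, scintillation rises on a ~1 us time constant
# during the pulse and decays afterward.
t = np.arange(0, 4e-6, 1e-8)
gate = (t < 3e-6).astype(float)
prompt = gate.copy()
scint = np.where(t < 3e-6,
                 1 - np.exp(-t / 1e-6),
                 (1 - np.exp(-3.0)) * np.exp(-(t - 3e-6) / 1e-6))
meas = 0.2 * prompt + 0.8 * scint + 0.01 * np.random.randn(t.size)
print(separate(meas, prompt, scint))
```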
A comparative study of automatic image segmentation algorithms for target tracking in MR-IGRT.
Feng, Yuan; Kawrakow, Iwan; Olsen, Jeff; Parikh, Parag J; Noel, Camille; Wooten, Omar; Du, Dongsu; Mutic, Sasa; Hu, Yanle
2016-03-01
On-board magnetic resonance (MR) image guidance during radiation therapy offers the potential for more accurate treatment delivery. To utilize the real-time image information, a crucial prerequisite is the ability to successfully segment and track regions of interest (ROI). The purpose of this work is to evaluate the performance of different segmentation algorithms using motion images (4 frames per second) acquired using an MR image-guided radiotherapy (MR-IGRT) system. Manual contours of the kidney, bladder, duodenum, and a liver tumor by an experienced radiation oncologist were used as the ground truth for performance evaluation. Besides the manual segmentation, images were automatically segmented using thresholding, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE) algorithms, as well as the tissue tracking algorithm provided by the ViewRay treatment planning and delivery system (VR-TPDS). The performance of the five algorithms was evaluated quantitatively by comparing with the manual segmentation using the Dice coefficient and target registration error (TRE) measured as the distance between the centroid of the manual ROI and the centroid of the automatically segmented ROI. All methods were able to successfully segment the bladder and the kidney, but only FKM, KHM, and VR-TPDS were able to segment the liver tumor and the duodenum. The performance of the thresholding, FKM, KHM, and RD-LSE algorithms degraded as the local image contrast decreased, whereas the performance of the VR-TPDS method was nearly independent of local image contrast due to the reference registration algorithm. For segmenting high-contrast images (i.e., kidney), the thresholding method provided the best speed (<1 ms) with a satisfying accuracy (Dice=0.95). When the image contrast was low, the VR-TPDS method had the best automatic contour. Results suggest an image quality determination procedure before segmentation and a combination of different methods for optimal segmentation with the on-board MR-IGRT system. PACS number(s): 87.57.nm, 87.57.N-, 87.61.Tg. © 2016 The Authors.
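Both evaluation metrics are straightforward to compute from binary masks; a sketch (pixel spacing values here are illustrative, not the system's actual resolution):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient between two boolean segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def target_registration_error(mask_manual, mask_auto, spacing=(1.0, 1.0)):
    """TRE as the distance (in mm, given pixel spacing) between the
    centroids of the manual and automatic ROIs."""
    c_man = np.array(np.nonzero(mask_manual)).mean(axis=1)
    c_aut = np.array(np.nonzero(mask_auto)).mean(axis=1)
    return np.linalg.norm((c_man - c_aut) * np.array(spacing))

a = np.zeros((64, 64), bool); a[20:40, 20:40] = True   # manual ROI
b = np.zeros((64, 64), bool); b[22:42, 21:41] = True   # automatic ROI
print(dice(a, b), target_registration_error(a, b, spacing=(3.5, 3.5)))
```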
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacFarlane, Joseph J.; Golovkin, I. E.; Woodruff, P. R.
2009-08-07
This Final Report summarizes work performed under DOE STTR Phase II Grant No. DE-FG02-05ER86258 during the project period from August 2006 to August 2009. The project, “Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments,” was led by Prism Computational Sciences (Madison, WI), and involved collaboration with subcontractors University of Nevada-Reno and Voss Scientific (Albuquerque, NM). In this project, we have: Developed and implemented a multi-dimensional, multi-frequency radiation transport model in the LSP hybrid fluid-PIC (particle-in-cell) code [1,2]. Updated the LSP code to support the use of accurate equation-of-state (EOS) tables generated by Prism’s PROPACEOS [3] code to compute more accurate temperatures in high energy density physics (HEDP) plasmas. Updated LSP to support the use of Prism’s multi-frequency opacity tables. Generated equation of state and opacity data for LSP simulations for several materials being used in plasma jet experimental studies. Developed and implemented parallel processing techniques for the radiation physics algorithms in LSP. Benchmarked the new radiation transport and radiation physics algorithms in LSP and compared simulation results with analytic solutions and results from numerical radiation-hydrodynamics calculations. Performed simulations using Prism radiation physics codes to address issues related to radiative cooling and ionization dynamics in plasma jet experiments. Performed simulations to study the effects of radiation transport and radiation losses due to electrode contaminants in plasma jet experiments. Updated the LSP code to generate output using NetCDF to provide a better, more flexible interface to SPECT3D [4] in order to post-process LSP output. Updated the SPECT3D code to better support the post-processing of large-scale 2-D and 3-D datasets generated by simulation codes such as LSP. Updated atomic physics modeling to provide for more comprehensive and accurate atomic databases that feed into the radiation physics modeling (spectral simulations and opacity tables). Developed polarization spectroscopy modeling techniques suitable for diagnosing energetic particle characteristics in HEDP experiments. A description of these items is provided in this report. The above efforts lay the groundwork for utilizing the LSP and SPECT3D codes in providing simulation support for DOE-sponsored HEDP experiments, such as plasma jet and fast ignition physics experiments. We believe that taken together, the LSP and SPECT3D codes have unique capabilities for advancing our understanding of the physics of these HEDP plasmas. Based on conversations early in this project with our DOE program manager, Dr. Francis Thio, our efforts emphasized developing radiation physics and atomic modeling capabilities that can be utilized in the LSP PIC code, and performing radiation physics studies for plasma jets. A relatively minor component focused on the development of methods to diagnose energetic particle characteristics in short-pulse laser experiments related to fast ignition physics. The period of performance for the grant was extended by one year to August 2009 with a one-year no-cost extension, at the request of subcontractor University of Nevada-Reno.
Radio Frequency Interference Detection for Passive Remote Sensing Using Eigenvalue Analysis
NASA Technical Reports Server (NTRS)
Schoenwald, Adam J.; Kim, Seung-Jun; Mohammed, Priscilla N.
2017-01-01
Radio frequency interference (RFI) can corrupt passive remote sensing measurements taken with microwave radiometers. With the increasingly utilized spectrum and the push for larger bandwidth radiometers, the likelihood of RFI contamination has grown significantly. In this work, an eigenvalue-based algorithm is developed to detect the presence of RFI and provide estimates of RFI-free radiation levels. Simulated tests show that the proposed detector outperforms conventional kurtosis-based RFI detectors in the low-to-medium interference-to-noise-power-ratio (INR) regime under continuous wave (CW) and quadrature phase shift keying (QPSK) RFIs.
MO-FG-202-01: A Fast Yet Sensitive EPID-Based Real-Time Treatment Verification System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmad, M; Nourzadeh, H; Neal, B
2016-06-15
Purpose: To create a real-time EPID-based treatment verification system which robustly detects treatment delivery and patient attenuation variations. Methods: Treatment plan DICOM files sent to the record-and-verify system are captured and utilized to predict EPID images for each planned control point using a modified GPU-based digitally reconstructed radiograph algorithm which accounts for the patient attenuation, source energy fluence, source size effects, and MLC attenuation. The DICOM and predicted images are utilized by our C++ treatment verification software, which compares them with EPID frames (1024×768 resolution) acquired at ∼8.5 Hz from a Varian TrueBeam™ system. To maximize detection sensitivity, image comparisons determine (1) if radiation exists outside of the desired treatment field; (2) if radiation is lacking inside the treatment field; (3) if translations, rotations, and magnifications of the image are within tolerance. Acquisition was tested with known test fields and prior patient fields. Error detection was tested in real time and utilizing images acquired during treatment with another system. Results: The computational time of the prediction algorithms, for a patient plan with 350 control points and a 60×60×42 cm³ CT volume, is 2–3 minutes on CPU and <27 seconds on GPU for 1024×768 images. The verification software requires a maximum of ∼9 ms and ∼19 ms for 512×384 and 1024×768 resolution images, respectively, to perform image analysis and dosimetric validations. Typical variations in geometric parameters between the reference and measured images are 0.32° for gantry rotation, 1.006 for scaling factor, and 0.67 mm for translation. For excess out-of-field/missing in-field fluence, with masks extending 1 mm (at isocenter) from the detected aperture edge, the average total in-field area missing EPID fluence was 1.5 mm² and the out-of-field excess EPID fluence was 8 mm², both below error tolerances. Conclusion: A real-time verification software, with an EPID image prediction algorithm, was developed. The system is capable of performing verifications between frame acquisitions and identifying the source(s) of any out-of-tolerance variations. This work was supported in part by Varian Medical Systems.
Radiative Transfer Modeling and Retrievals for Advanced Hyperspectral Sensors
NASA Technical Reports Server (NTRS)
Liu, Xu; Zhou, Daniel K.; Larar, Allen M.; Smith, William L., Sr.; Mango, Stephen A.
2009-01-01
A novel radiative transfer model and a physical inversion algorithm based on principal component analysis will be presented. Instead of dealing with channel radiances, the new approach fits principal component scores of these quantities. Compared to channel-based radiative transfer models, the new approach compresses radiances into a much smaller dimension making both forward modeling and inversion algorithm more efficient.
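The compression idea can be sketched as follows: radiances are projected onto the leading eigenvectors of their covariance, and the forward model and retrieval then work with the much shorter score vectors. The SVD-based implementation below is a generic illustration with invented dimensions, not the authors' code:

```python
import numpy as np

def pca_compress(radiances, n_pc):
    """Project channel radiances onto leading principal components.
    `radiances` is (n_spectra, n_channels); returns PC scores, the
    eigenvector basis, and the mean needed for reconstruction."""
    mean = radiances.mean(axis=0)
    centered = radiances - mean
    # Eigenvectors of the channel covariance via SVD of the data matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_pc]                    # (n_pc, n_channels)
    scores = centered @ basis.T          # compressed representation
    return scores, basis, mean

def pca_reconstruct(scores, basis, mean):
    return scores @ basis + mean

# Toy demonstration: 500 "channels" compressed to 20 scores.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(200, 500))
scores, basis, mean = pca_compress(spectra, n_pc=20)
approx = pca_reconstruct(scores, basis, mean)
```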
Optimal aperture synthesis radar imaging
NASA Astrophysics Data System (ADS)
Hysell, D. L.; Chau, J. L.
2006-03-01
Aperture synthesis radar imaging has been used to investigate coherent backscatter from ionospheric plasma irregularities at Jicamarca and elsewhere for several years. Phenomena of interest include equatorial spread F, 150-km echoes, the equatorial electrojet, range-spread meteor trails, and mesospheric echoes. The sought-after images are related to spaced-receiver data mathematically through an integral transform, but direct inversion is generally impractical or suboptimal. We instead turn to statistical inverse theory, endeavoring to utilize fully all available information in the data inversion. The imaging algorithm used at Jicamarca is based on an implementation of the MaxEnt method developed for radio astronomy. Its strategy is to limit the space of candidate images to those that are positive definite, consistent with data to the degree required by experimental confidence limits; smooth (in some sense); and most representative of the class of possible solutions. The algorithm was improved recently by (1) incorporating the antenna radiation pattern in the prior probability and (2) estimating and including the full error covariance matrix in the constraints. The revised algorithm is evaluated using new 28-baseline electrojet data from Jicamarca.
SU-F-J-23: Field-Of-View Expansion in Cone-Beam CT Reconstruction by Use of Prior Information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haga, A; Magome, T; Nakano, M
Purpose: Cone-beam CT (CBCT) has become an integral part of online patient setup in image-guided radiation therapy (IGRT). In addition, the utility of CBCT for dose calculation has been actively investigated. However, the limited size of the field-of-view (FOV), and the resulting CBCT image lacking the peripheral area of the patient body, limit the reliability of dose calculation. In this study, we aim to develop an FOV-expanded CBCT in an IGRT system to allow dose calculation. Methods: Three lung cancer patients were selected for this study. We collected the cone-beam projection images in the CBCT-based IGRT system (X-ray volume imaging unit, ELEKTA), where the FOV size of the CBCT provided with these projections was 410 × 410 mm² (normal FOV). Using these projections, a CBCT with a size of 728 × 728 mm² was reconstructed by a posteriori estimation algorithm including prior image constrained compressed sensing (PICCS). The treatment planning CT was used as the prior image. To assess the effectiveness of FOV expansion, a dose calculation was performed on the expanded CBCT image with a region-of-interest (ROI) density mapping method, and it was compared with that of the treatment planning CT as well as that of a CBCT reconstructed by the filtered back projection (FBP) algorithm. Results: The a posteriori estimation algorithm with PICCS clearly visualized the area outside the normal FOV, whereas the FBP algorithm yielded severe streak artifacts outside the normal FOV due to under-sampling. The dose calculation result using the expanded CBCT agreed very well with that using the treatment planning CT; the maximum dose difference was 1.3% for gross tumor volumes. Conclusion: With a posteriori estimation algorithm, the FOV in CBCT can be expanded. The dose comparison results suggest that the use of expanded CBCTs is acceptable for dose calculation in adaptive radiation therapy. This study has been supported by KAKENHI (15K08691).
Hybrid dose calculation: a dose calculation algorithm for microbeam radiation therapy
NASA Astrophysics Data System (ADS)
Donzelli, Mattia; Bräuer-Krisch, Elke; Oelfke, Uwe; Wilkens, Jan J.; Bartzsch, Stefan
2018-02-01
Microbeam radiation therapy (MRT) is still a preclinical approach in radiation oncology that uses planar micrometre-wide beamlets with extremely high peak doses, separated by a few hundred micrometre wide low dose regions. Abundant preclinical evidence demonstrates that MRT spares normal tissue more effectively than conventional radiation therapy, at equivalent tumour control. In order to launch the first clinical trials, accurate and efficient dose calculation methods are an inevitable prerequisite. In this work a hybrid dose calculation approach is presented that is based on a combination of Monte Carlo and kernel based dose calculation. In various examples the performance of the algorithm is compared to purely Monte Carlo and purely kernel based dose calculations. The accuracy of the developed algorithm is comparable to conventional pure Monte Carlo calculations. In particular for inhomogeneous materials the hybrid dose calculation algorithm outperforms purely convolution based dose calculation approaches. It is demonstrated that the hybrid algorithm can efficiently calculate even complicated pencil beam and cross firing beam geometries. The required calculation times are substantially lower than for pure Monte Carlo calculations.
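The kernel half of such a hybrid can be illustrated in one dimension: the locally released energy is convolved with a dose-spread kernel, while strongly inhomogeneous regions would be handed to Monte Carlo instead. The grid, beam width, and exponential kernel below are toy assumptions, not the paper's kernels:

```python
import numpy as np

def kernel_dose_1d(terma, kernel):
    """Kernel-based part of a dose calculation: convolve the energy
    released per unit mass (TERMA) with a dose-spread kernel."""
    return np.convolve(terma, kernel, mode="same")

# Toy 25 um grid: a 50 um microbeam and a short-range exponential kernel.
x = np.arange(-500.0, 500.0, 25.0)
terma = (np.abs(x) < 25.0).astype(float)
kernel = np.exp(-np.abs(np.arange(-250.0, 251.0, 25.0)) / 50.0)
kernel /= kernel.sum()                 # normalize so energy is conserved
print(kernel_dose_1d(terma, kernel).max())
```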
Radionuclide identification algorithm for organic scintillator-based radiation portal monitor
NASA Astrophysics Data System (ADS)
Paff, Marc Gerrit; Di Fulvio, Angela; Clarke, Shaun D.; Pozzi, Sara A.
2017-03-01
We have developed an algorithm for on-the-fly radionuclide identification for radiation portal monitors using organic scintillation detectors. The algorithm was demonstrated on experimental data acquired with our pedestrian portal monitor on moving special nuclear material and industrial sources at a purpose-built radiation portal monitor testing facility. The experimental data also included common medical isotopes. The algorithm takes the power spectral density of the cumulative distribution function of the measured pulse height distributions and matches these to reference spectra using a spectral angle mapper. F-score analysis showed that the new algorithm exhibited significant performance improvements over previously implemented radionuclide identification algorithms for organic scintillators. Reliable on-the-fly radionuclide identification would help portal monitor operators more effectively screen out the hundreds of thousands of nuisance alarms they encounter annually due to recent nuclear-medicine patients and cargo containing naturally occurring radioactive material. Portal monitor operators could instead focus on the rare but potentially high impact incidents of nuclear and radiological material smuggling detection for which portal monitors are intended.
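The described pipeline, the power spectral density of the CDF of the pulse height distribution matched by spectral angle, can be sketched as follows. The bin count, energy range, and the Gaussian toy spectra are assumptions of this sketch, not parameters from the paper:

```python
import numpy as np

def signature(pulse_heights, n_bins=256, e_range=(0.0, 3000.0)):
    """Feature vector: power spectral density of the cumulative
    distribution function of the pulse height distribution, on a
    common (assumed) energy axis, normalized to unit length."""
    hist, _ = np.histogram(pulse_heights, bins=n_bins, range=e_range)
    cdf = np.cumsum(hist) / hist.sum()
    psd = np.abs(np.fft.rfft(cdf)) ** 2
    return psd / np.linalg.norm(psd)

def spectral_angle(sig, ref):
    """Spectral angle mapper: angle between unit feature vectors;
    a smaller angle means a better match."""
    return np.arccos(np.clip(np.dot(sig, ref), -1.0, 1.0))

# Toy reference library and an unknown measurement.
rng = np.random.default_rng(1)
cs = signature(rng.normal(662, 30, 10_000))     # Cs-137-like peak
co = signature(rng.normal(1250, 60, 10_000))    # Co-60-like peak
unknown = signature(rng.normal(662, 30, 5_000))
print(spectral_angle(unknown, cs) < spectral_angle(unknown, co))  # expect True
```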
NASA Astrophysics Data System (ADS)
Premaratne, Pavithra Dhanuka
Disruption and fragmentation of an asteroid using nuclear explosive devices (NEDs) is a highly complex yet practical solution to mitigating the impact threat of asteroids with short warning time. A Hypervelocity Asteroid Intercept Vehicle (HAIV) concept, developed at the Asteroid Deflection Research Center (ADRC), consists of a primary vehicle that acts as a kinetic impactor and a secondary vehicle that houses NEDs. The kinetic impactor (lead vehicle) strikes the asteroid, creating a crater. The secondary vehicle immediately enters the crater and detonates its nuclear payload, creating a blast wave powerful enough to fragment the asteroid. Modeling and hydrodynamically simulating the nuclear subsurface explosion has been a challenging research goal, one that yields an array of mission-critical information. A mesh-free hydrodynamic simulation method, Smoothed Particle Hydrodynamics (SPH), was utilized to obtain both qualitative and quantitative solutions for explosion efficiency. Commercial fluid dynamics packages such as AUTODYN, along with in-house GPU-accelerated SPH algorithms, were used to validate and optimize high-energy explosion dynamics for a variety of test cases. Energy coupling from the NED to the target body was also examined to determine the effectiveness of nuclear subsurface explosions. Success of a disruption mission also depends on the survivability of the nuclear payload when the secondary vehicle approaches the newly formed crater at a velocity of 10 km/s or higher. The vehicle may come into contact with debris ejected from the crater, which motivated the conceptual development of a Whipple shield. As the vehicle closes on the crater, its skin may also experience extreme temperatures due to heat radiated from the crater bottom. To address this thermal problem, a simple metallic thermal shield design was implemented utilizing a radiative heat transfer algorithm and nodal solutions obtained from hydrodynamic simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, Zhiling; Wei, Wei; Turlapaty, Anish
2012-07-01
At the United States Army's test sites, fired penetrators made of Depleted Uranium (DU) have been buried under ground and become hazardous waste. Previously, we developed techniques for detecting buried radioactive targets. We also developed approaches for locating buried paramagnetic metal objects by utilizing electromagnetic induction (EMI) sensor data. In this paper, we apply data fusion techniques to combine results from both the radiation detection and the EMI detection, so that we can further distinguish among DU penetrators, DU oxide, and non-DU metal debris. We develop a two-step fusion approach based on majority voting and a set of decision rules; with it, we fuse results from radiation detection based on the RX algorithm and EMI detection based on a 3-step analysis. The approach has been tested successfully with survey data collected on simulation targets. In the future, we will need to further verify its effectiveness with field data. (authors)
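A toy version of rule-based fusion of the two detectors' outputs is sketched below; the rule set is invented for illustration and is far simpler than the decision rules in the paper:

```python
def fuse(radiation_hit, emi_hit, emi_compact_metal):
    """Toy fusion of radiation (RX) and EMI detections into a target
    class.  The third flag stands in for whatever EMI features would
    distinguish an intact penetrator from dispersed material."""
    if radiation_hit and emi_hit:
        # Radioactive and metallic: intact penetrator if the EMI
        # response looks like a compact buried metal object.
        return "DU penetrator" if emi_compact_metal else "DU oxide"
    if radiation_hit:
        return "DU oxide"             # radioactive, weak metal response
    if emi_hit:
        return "non-DU metal debris"  # metallic, not radioactive
    return "no target"

print(fuse(radiation_hit=True, emi_hit=True, emi_compact_metal=True))
```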
Goodarzi, M; Safaei, M R; Oztop, Hakan F; Karimipour, A; Sadeghinezhad, E; Dahari, M; Kazi, S N; Jomhari, N
2014-01-01
The effect of radiation on laminar and turbulent mixed convection heat transfer of a semitransparent medium in a square enclosure was studied numerically using the Finite Volume Method. A structured mesh and the SIMPLE algorithm were utilized to model the governing equations. Turbulence and radiation were modeled with the RNG k-ε model and Discrete Ordinates (DO) model, respectively. For Richardson numbers ranging from 0.1 to 10, simulations were performed for Rayleigh numbers in laminar flow (10⁴) and turbulent flow (10⁸). The model predictions were validated against previous numerical studies and good agreement was observed. The simulated results indicate that for laminar and turbulent motion states, computing the radiation heat transfer significantly enhanced the Nusselt number (Nu) as well as the heat transfer coefficient. Higher Richardson numbers did not noticeably affect the average Nusselt number and corresponding heat transfer rate. Besides, as expected, the heat transfer rate for the turbulent flow regime surpassed that in the laminar regime. The simulations additionally demonstrated that for a constant Richardson number, computing the radiation heat transfer majorly affected the heat transfer structure in the enclosure; however, its impact on the fluid flow structure was negligible.
Satellite Monitoring of Long-Range Transport of Asian Dust Storms from Sources to Sinks
NASA Astrophysics Data System (ADS)
Hsu, N.; Tsay, S.; Jeong, M.; King, M.; Holben, B.
2007-05-01
Among the many components that contribute to air pollution, airborne mineral dust plays an important role due to its biogeochemical impact on the ecosystem and its radiative-forcing effect on the climate system. In East Asia, dust storms frequently accompany the cold and dry air masses that occur as part of spring-time cold front systems. China's capital, Beijing, and other large cities are on the primary pathway of these dust storm plumes, and their passage over such population centers causes flight delays, pushes grit through windows and doors, and forces people indoors. Furthermore, during the spring these anthropogenic and natural air pollutants, once generated over the source regions, can be transported out of the boundary layer into the free troposphere and can travel thousands of kilometers across the Pacific into the United States and beyond. In this paper, we will demonstrate the capability of a new satellite algorithm to retrieve aerosol optical thickness and single scattering albedo over bright-reflecting surfaces such as urban areas and deserts. Such retrievals have been difficult to perform using previously available algorithms that use wavelengths from the mid-visible to the near IR, because they have trouble separating the aerosol signal from the contribution due to the bright surface reflectance. The new algorithm, called Deep Blue, utilizes blue-wavelength measurements from instruments such as SeaWiFS and MODIS to infer the properties of aerosols, since the surface reflectance over land in the blue part of the spectrum is much lower than for longer wavelength channels. The Deep Blue algorithm has recently been integrated into the MODIS processing stream and has begun to provide aerosol products over land as part of the operational MYD04 products. In this talk, we will show comparisons of the MODIS Deep Blue products with data from AERONET sunphotometers on a global basis. The results indicate reasonable agreement between the two. These new satellite products will allow scientists to determine quantitatively the aerosol properties near sources and their evolution along transport pathways using high spatial resolution measurements from SeaWiFS and MODIS-like instruments. We will also utilize the multiyear satellite measurements from MODIS and SeaWiFS to investigate the interannual variability of source strength, pathway, and radiative forcing associated with these dust outbreaks in East Asia.
Holographic imaging of natural-fiber-containing materials
Bunch, Kyle J [Richland, WA]; Tucker, Brian J [Pasco, WA]; Severtsen, Ronald H [Richland, WA]; Hall, Thomas E [Kennewick, WA]; McMakin, Douglas L [Richland, WA]; Lechelt, Wayne M [West Richland, WA]; Griffin, Jeffrey W [Kennewick, WA]; Sheen, David M [Richland, WA]
2010-12-21
The present invention includes methods and apparatuses for imaging material properties in natural-fiber-containing materials. In particular, the images can provide quantified measures of localized moisture content. Embodiments of the invention utilize an array of antennas and at least one transceiver to collect amplitude and phase data from radiation interacting with the natural-fiber-containing materials. The antennas and the transceivers are configured to transmit and receive electromagnetic radiation at one or more frequencies, which are between 50 MHz and 1 THz. A conveyance system passes the natural-fiber-containing materials through a field of view of the array of antennas. A computing device is configured to apply a synthetic imaging algorithm to construct a three-dimensional image of the natural-fiber-containing materials that provides a quantified measure of localized moisture content. The image and the quantified measure are both based on the amplitude data, the phase data, or both.
Holographic imaging based on time-domain data of natural-fiber-containing materials
Bunch, Kyle J.; McMakin, Douglas L.
2012-09-04
Methods and apparatuses for imaging material properties in natural-fiber-containing materials can utilize time-domain data. In particular, images can be constructed that provide quantified measures of localized moisture content. For example, one or more antennas and at least one transceiver can be configured to collect time-domain data from radiation interacting with the natural-fiber-containing materials. The antennas and the transceivers are configured to transmit and receive electromagnetic radiation at one or more frequencies, which are between 50 MHz and 1 THz, according to a time-domain impulse function. A computing device is configured to transform the time-domain data to frequency-domain data, to apply a synthetic imaging algorithm for constructing a three-dimensional image of the natural-fiber-containing materials, and to provide a quantified measure of localized moisture content based on a pre-determined correlation of moisture content to frequency-domain data.
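The transform step can be sketched with a discrete Fourier transform, keeping only the band the antennas operate in. The function name, sampling interval, and band mask below are illustrative, not taken from the patent:

```python
import numpy as np

def to_frequency_domain(waveform, dt):
    """Transform a received time-domain impulse response into
    frequency-domain amplitude and phase, the quantities a synthetic
    imaging algorithm operates on."""
    spectrum = np.fft.rfft(waveform)
    freqs = np.fft.rfftfreq(waveform.size, d=dt)
    keep = (freqs >= 50e6) & (freqs <= 1e12)   # 50 MHz - 1 THz band
    return freqs[keep], np.abs(spectrum[keep]), np.angle(spectrum[keep])

# Toy record: a Gaussian echo in a 1 us window sampled every 0.1 ns.
t = np.arange(0, 1e-6, 1e-10)
wave = np.exp(-((t - 3e-7) / 2e-9) ** 2)
f, amp, phase = to_frequency_domain(wave, dt=1e-10)
```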
Piezoceramic Actuator Placement for Acoustic Control of Panels
NASA Technical Reports Server (NTRS)
Bevan, Jeffrey S.; Turner, Travis L. (Technical Monitor)
2001-01-01
Optimum placement of multiple traditional piezoceramic actuators is determined for active structural acoustic control of flat panels. The structural acoustic response is determined using acoustic radiation filters and structural surface vibration characteristics. Linear Quadratic Regulator (LQR) control is utilized to determine the optimum state feedback gain for active structural acoustic control. The optimum actuator location is determined by minimizing the structural acoustic radiated noise using a modified genetic algorithm. Experimental tests are conducted and compared to analytical results. Anisotropic piezoceramic actuators exhibit enhanced performance when compared to traditional isotropic piezoceramic actuators. As a result of their inherent anisotropy, these advanced actuators develop strain along the principal material axis. The effect of anisotropic actuator orientation on structural vibration and acoustic control of curved and flat panels is investigated. A fully coupled shallow shell finite element formulation is developed to include anisotropic piezoceramic actuators for shell structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swesty, F. Douglas; Myra, Eric S.
It is now generally agreed that multidimensional, multigroup, neutrino-radiation hydrodynamics (RHD) is an indispensable element of any realistic model of stellar-core collapse, core-collapse supernovae, and proto-neutron star instabilities. We have developed a new, two-dimensional, multigroup algorithm that can model neutrino-RHD flows in core-collapse supernovae. Our algorithm uses an approach similar to the ZEUS family of algorithms, originally developed by Stone and Norman. However, this completely new implementation extends that previous work in three significant ways: first, we incorporate multispecies, multigroup RHD in a flux-limited-diffusion approximation. Our approach is capable of modeling pair-coupled neutrino-RHD, and includes effects of Pauli blocking in the collision integrals. Blocking gives rise to nonlinearities in the discretized radiation-transport equations, which we evolve implicitly in time. We employ parallelized Newton-Krylov methods to obtain a solution of these nonlinear, implicit equations. Our second major extension to the ZEUS algorithm is the inclusion of an electron conservation equation that describes the evolution of electron-number density in the hydrodynamic flow. This permits calculating deleptonization of a stellar core. Our third extension modifies the hydrodynamics algorithm to accommodate realistic, complex equations of state, including those having nonconvex behavior. In this paper, we present a description of our complete algorithm, giving sufficient details to allow others to implement, reproduce, and extend our work. Finite-differencing details are presented in appendices. We also discuss implementation of this algorithm on state-of-the-art, parallel-computing architectures. Finally, we present results of verification tests that demonstrate the numerical accuracy of this algorithm on diverse hydrodynamic, gravitational, radiation-transport, and RHD sample problems. We believe our methods to be of general use in a variety of model settings where radiation transport or RHD is important. Extension of this work to three spatial dimensions is straightforward.
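Implicit, nonlinear updates of this kind are the natural habitat of Jacobian-free Newton-Krylov solvers. A minimal sketch on a toy stiff relaxation problem (assuming SciPy is available; the quartic coupling is a stand-in for matter-radiation exchange, not the paper's collision integrals):

```python
import numpy as np
from scipy.optimize import newton_krylov

# Backward-Euler update u_new - u_old = dt * rhs(u_new), solved as a
# nonlinear root-finding problem F(u_new) = 0 without forming a Jacobian.
dt = 0.1
u_old = np.linspace(1.0, 2.0, 64)

def residual(u_new):
    rhs = -(u_new ** 4 - u_old.mean() ** 4)   # toy quartic relaxation term
    return u_new - u_old - dt * rhs

u_new = newton_krylov(residual, u_old, f_tol=1e-10)
print(np.abs(residual(u_new)).max())          # should be ~1e-10 or smaller
```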
Overview of CERES Cloud Properties Derived From VIRS AND MODIS DATA
NASA Technical Reports Server (NTRS)
Minnis, Patrick; Geier, Erika; Wielicki, Bruce A.; Sun-Mack, Sunny; Chen, Yan; Trepte, Qing Z.; Dong, Xiquan; Doelling, David R.; Ayers, J. Kirk; Khaiyer, Mandana M.
2006-01-01
Simultaneous measurement of radiation and cloud fields on a global basis is recognized as a key component in understanding and modeling the interaction between clouds and radiation at the top of the atmosphere, at the surface, and within the atmosphere. The NASA Clouds and the Earth's Radiant Energy System (CERES) Project (Wielicki et al., 1998) began addressing this issue in 1998 with its first broadband shortwave and longwave scanner on the Tropical Rainfall Measuring Mission (TRMM). This was followed by the launch of two CERES scanners each on Terra and Aqua during late 1999 and early 2002, respectively. When combined, these satellites should provide the most comprehensive global characterization of clouds and radiation to date. Unfortunately, the TRMM scanner failed during late 1998. The Terra and Aqua scanners continue to operate, however, providing measurements at a minimum of 4 local times each day. CERES was designed to scan in tandem with high-resolution imagers so that the cloud conditions could be evaluated for every CERES measurement. The cloud properties are essential for converting CERES radiances to the shortwave albedo and longwave fluxes needed to define the Earth radiation budget (ERB). They are also needed to unravel the impact of clouds on the ERB. The 5-channel, 2-km Visible Infrared Scanner (VIRS) on the TRMM and the 36-channel, 1-km Moderate Resolution Imaging Spectroradiometer (MODIS) on Terra and Aqua are analyzed to define the cloud properties for each CERES footprint. To minimize inter-satellite differences and aid the development of useful climate-scale measurements, it was necessary to ensure that each satellite imager is calibrated in a fashion consistent with its counterpart on the other CERES satellites (Minnis et al., 2006) and that the algorithms are as similar as possible for all of the imagers. Thus, a set of cloud detection and retrieval algorithms was developed that could be applied to all three imagers, utilizing as few channels as possible while producing stable and accurate cloud properties. This paper discusses the algorithms and results of applying those techniques to more than 5 years of Terra MODIS, 3 years of Aqua MODIS, and 4 years of TRMM VIRS data.
NASA Astrophysics Data System (ADS)
Takenaka, H.; Teruyuki, N.; Nakajima, T. Y.; Higurashi, A.; Hashimoto, M.; Suzuki, K.; Uchida, J.; Nagao, T. M.; Shi, C.; Inoue, T.
2017-12-01
It is important to estimate the Earth's radiation budget accurately in order to understand climate. Clouds can cool the Earth by reflecting solar radiation, but they also maintain warmth by absorbing and emitting terrestrial radiation. Similarly, aerosols affect the radiation budget through absorption and scattering of solar radiation. In this study, we developed a fast and accurate algorithm for the shortwave (SW) radiation budget and applied it to geostationary satellite data for rapid analysis. It enabled highly accurate monitoring of solar radiation and photovoltaic (PV) power generation. As a next step, we are updating the algorithm to retrieve aerosols and clouds, which will provide the accurate atmospheric parameters needed for estimating solar radiation. (This research was supported in part by CREST/EMS.)
NASA Astrophysics Data System (ADS)
Boley, Aaron C.; Durisen, Richard H.; Nordlund, Åke; Lord, Jesse
2007-08-01
Recent three-dimensional radiative hydrodynamics simulations of protoplanetary disks report disparate disk behaviors, and these differences involve the importance of convection to disk cooling, the dependence of disk cooling on metallicity, and the stability of disks against fragmentation and clump formation. To guarantee trustworthy results, a radiative physics algorithm must demonstrate the capability to handle both the high and low optical depth regimes. We develop a test suite that can be used to demonstrate an algorithm's ability to relax to known analytic flux and temperature distributions, to follow a contracting slab, and to inhibit or permit convection appropriately. We then show that the radiative algorithm employed by Mejía and Boley et al. and the algorithm employed by Cai et al. pass these tests with reasonable accuracy. In addition, we discuss a new algorithm that couples flux-limited diffusion with vertical rays, we apply the test suite, and we discuss the results of evolving the Boley et al. disk with this new routine. Although the outcome is significantly different in detail with the new algorithm, we obtain the same qualitative answers. Our disk does not cool rapidly by convection, and it is stable against fragmentation. We find an effective α ~ 10^-2. In addition, transport is dominated by low-order modes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gomez-Cardona, Daniel; Nagle, Scott K.; Department of Radiology, University of Wisconsin-Madison School of Medicine and Public Health, 600 Highland Avenue, Madison, Wisconsin 53792
Purpose: Wall thickness (WT) is an airway feature of great interest for the assessment of morphological changes in the lung parenchyma. Multidetector computed tomography (MDCT) has recently been used to evaluate airway WT, but the potential risk of radiation-induced carcinogenesis—particularly in younger patients—might limit a wider use of this imaging method in clinical practice. The recent commercial implementation of the statistical model-based iterative reconstruction (MBIR) algorithm, instead of the conventional filtered back projection (FBP) algorithm, has enabled considerable radiation dose reduction in many other clinical applications of MDCT. The purpose of this work was to study the impact of radiation dose and MBIR in the MDCT assessment of airway WT. Methods: An airway phantom was scanned using a clinical MDCT system (Discovery CT750 HD, GE Healthcare) at 4 kV levels and 5 mAs levels. Both FBP and a commercial implementation of MBIR (Veo™, GE Healthcare) were used to reconstruct CT images of the airways. For each kV–mAs combination and each reconstruction algorithm, the contrast-to-noise ratio (CNR) of the airways was measured, and the WT of each airway was measured and compared with the nominal value; the relative bias and the angular standard deviation in the measured WT were calculated. For each airway and reconstruction algorithm, the overall performance of WT quantification across all of the 20 kV–mAs combinations was quantified by the sum of squares (SSQ) of the difference between the measured and nominal WT values. Finally, the particular kV–mAs combination and reconstruction algorithm that minimized radiation dose while still achieving a reference WT quantification accuracy level was chosen as the optimal acquisition and reconstruction settings. Results: The wall thicknesses of seven airways of different sizes were analyzed in the study. Compared with FBP, MBIR improved the CNR of the airways, particularly at low radiation dose levels. For FBP, the relative bias and the angular standard deviation of the measured WT increased steeply with decreasing radiation dose. Except for the smallest airway, MBIR enabled significant reduction in both the relative bias and angular standard deviation of the WT, particularly at low radiation dose levels; the SSQ was reduced by 50%–96% by using MBIR. The optimal reconstruction algorithm was found to be MBIR for the seven airways being assessed, and the combined use of MBIR and optimal kV–mAs selection resulted in a radiation dose reduction of 37%–83% compared with a reference scan protocol with a dose level of 1 mGy. Conclusions: The quantification accuracy of airway WT is strongly influenced by radiation dose and reconstruction algorithm. The MBIR algorithm potentially allows the desired WT quantification accuracy to be achieved with reduced radiation dose, which may enable a wider clinical use of MDCT for the assessment of airway WT, particularly for younger patients who may be more sensitive to exposures with ionizing radiation.
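For orientation, the sum-of-squares figure of merit described above reduces to a one-line computation per airway and reconstruction algorithm; the sketch below uses synthetic wall-thickness values, not the phantom measurements.

```python
# Hedged sketch of the SSQ metric: squared differences between measured and
# nominal wall thickness, summed over all kV-mAs combinations.
import numpy as np

def ssq(measured_wt, nominal_wt):
    measured_wt = np.asarray(measured_wt, dtype=float)
    return float(np.sum((measured_wt - nominal_wt) ** 2))

rng = np.random.default_rng(0)
wt_20_combos = 1.5 + 0.1 * rng.standard_normal(20)  # toy WT values, mm
print(ssq(wt_20_combos, nominal_wt=1.5))
```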
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, B; Georgia Institute of Technology, Atlanta, GA; Wang, C
Purpose: To correlate the damage produced by particles of different types and qualities to cell survival on the basis of nanodosimetric analysis and advanced DNA structures in the cell nucleus. Methods: A Monte Carlo code was developed to simulate subnuclear DNA chromatin fibers (CFs) of 30 nm utilizing a mean-free-path approach common to radiation transport. The cell nucleus was modeled as a spherical region containing 6000 chromatin-dense domains (CDs) of 400 nm diameter, with additional CFs modeled in a sparser interchromatin region. The Geant4-DNA code was utilized to produce a particle track database representing various particles at different energies and dose quantities. These tracks were used to stochastically position the DNA structures based on their mean free path to interaction with CFs. Excitation and ionization events intersecting CFs were analyzed using the DBSCAN clustering algorithm for assessment of the likelihood of producing DSBs. Simulated DSBs were then assessed based on their proximity to one another for a probability of inducing cell death. Results: Variations in energy deposition to chromatin fibers match expectations based on differences in particle track structure. The quality of damage to CFs based on different particle types indicates more severe damage by high-LET radiation than by low-LET radiation of identical particles. In addition, the model indicates more severe damage by protons than by alpha particles of the same LET, which is consistent with differences in their track structure. Cell survival curves have been produced showing the linear-quadratic (LQ) behavior of sparsely ionizing radiation. Conclusion: Initial results indicate the feasibility of producing cell survival curves based on the Monte Carlo cell nucleus method. Accurate correlation of simulated DNA damage with cell survival on the basis of nanodosimetric analysis can provide insight into the biological responses to various radiation types. Current efforts are directed at producing cell survival curves for high-LET radiation.
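The clustering step lends itself to a brief sketch: given the 3D coordinates of excitation and ionization events that intersect chromatin fibers, DBSCAN groups nearby events into candidate damage sites. The eps and min_samples values below are illustrative assumptions, not the study's calibrated parameters.

```python
# Minimal sketch using scikit-learn's DBSCAN to cluster energy-deposition
# events (toy coordinates in nm) into candidate double-strand-break sites.
import numpy as np
from sklearn.cluster import DBSCAN

events = np.random.default_rng(1).uniform(0, 400, size=(500, 3))  # toy track

labels = DBSCAN(eps=10.0, min_samples=2).fit_predict(events)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)  # -1 marks noise
print(f"{n_clusters} candidate damage clusters")
```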
NASA Astrophysics Data System (ADS)
Braiek, A.; Adili, A.; Albouchi, F.; Karkri, M.; Ben Nasrallah, S.
2016-06-01
The aim of this work is to simultaneously identify the conductive and radiative parameters of a semitransparent sample using a photothermal method associated with an inverse problem. The identification of the conductive and radiative properties is performed by minimizing an objective function that represents the errors between the calculated temperature and the measured signal. The calculated temperature is obtained from a theoretical model built with the thermal quadrupole formalism. Measurements are obtained on the rear face of the sample, whose front face is excited by a crenel of heat flux. For the identification procedure, a genetic algorithm is developed and used. The genetic algorithm is a useful tool in the simultaneous estimation of correlated or nearly correlated parameters, which can be a limiting factor for gradient-based methods. The results of the identification procedure show the efficiency and stability of the genetic algorithm in simultaneously estimating the conductive and radiative properties of clear glass.
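A minimal sketch of the estimation loop, under stated assumptions: the forward model below is a placeholder exponential rather than the thermal-quadrupole solution, and the genetic-algorithm operators are the simplest workable choices.

```python
# Toy genetic algorithm minimizing the residual between a forward model and a
# measured signal; truncation selection, blend crossover, Gaussian mutation.
import numpy as np

rng = np.random.default_rng(0)
true_params = np.array([1.2, 0.8])             # e.g. conductive, radiative parameter
model = lambda p, t: p[0] * np.exp(-p[1] * t)  # hypothetical forward model
t = np.linspace(0, 5, 100)
measured = model(true_params, t) + 0.01 * rng.standard_normal(t.size)

def objective(p):
    return np.sum((model(p, t) - measured) ** 2)

pop = rng.uniform(0.1, 3.0, size=(50, 2))
for _ in range(200):
    fit = np.array([objective(p) for p in pop])
    parents = pop[np.argsort(fit)[:25]]                 # keep the best half
    kids = 0.5 * (parents[rng.integers(0, 25, 25)]
                  + parents[rng.integers(0, 25, 25)])   # blend crossover
    kids += 0.05 * rng.standard_normal(kids.shape)      # mutation
    pop = np.vstack([parents, kids])

print(pop[np.argmin([objective(p) for p in pop])])  # approaches true_params
```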
A data-driven multi-model methodology with deep feature selection for short-term wind forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Cong; Cui, Mingjian; Hodge, Bri-Mathias
With the growing wind penetration into the power system worldwide, improving wind power forecasting accuracy is becoming increasingly important to ensure continued economic and reliable power system operations. In this paper, a data-driven multi-model wind forecasting methodology is developed with a two-layer ensemble machine learning technique. The first layer is composed of multiple machine learning models that generate individual forecasts. A deep feature selection framework is developed to determine the most suitable inputs to the first-layer machine learning models. Then, a blending algorithm is applied in the second layer to create an ensemble of the forecasts produced by the first-layer models and generate both deterministic and probabilistic forecasts. This two-layer model seeks to utilize the statistically different characteristics of each machine learning algorithm. A number of machine learning algorithms are selected and compared in both layers. The developed multi-model wind forecasting methodology is compared to several benchmarks. The effectiveness of the proposed methodology is evaluated for 1-hour-ahead wind speed forecasting at seven locations of the Surface Radiation network. Numerical results show that, compared to single-algorithm models, the developed multi-model framework with the deep feature selection procedure improves forecasting accuracy by up to 30%.
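The two-layer idea can be sketched compactly with scikit-learn: several first-layer regressors are trained, and their out-of-sample forecasts become the inputs of a second-layer blender. Features, models, and data below are stand-ins, not the paper's configuration.

```python
# Hedged sketch of a two-layer (stacked) ensemble for wind speed forecasting.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))  # toy lagged wind-speed features
y = 2 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.normal(size=2000)
X_l1, X_l2, y_l1, y_l2 = train_test_split(X, y, test_size=0.5, random_state=0)

layer1 = [RandomForestRegressor(n_estimators=100, random_state=0),
          GradientBoostingRegressor(random_state=0),
          SVR()]
for m in layer1:
    m.fit(X_l1, y_l1)

# second layer: blend the first-layer forecasts
Z = np.column_stack([m.predict(X_l2) for m in layer1])
blender = Ridge().fit(Z, y_l2)
print(blender.coef_)  # weight assigned to each first-layer model
```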
MOD3D: a model for incorporating MODTRAN radiative transfer into 3D simulations
NASA Astrophysics Data System (ADS)
Berk, Alexander; Anderson, Gail P.; Gossage, Brett N.
2001-08-01
MOD3D, a rapid and accurate radiative transport algorithm, is being developed for application to 3D simulations. MOD3D couples to optical property databases generated by the MODTRAN4 Correlated-k (CK) band model algorithm. The Beer's Law dependence of the CK algorithm provides for proper coupling of illumination and line-of-sight paths. Full 3D spatial effects are modeled by scaling and interpolating optical data to local conditions. A C++ version of MOD3D has been integrated into JMASS for calculation of path transmittances, thermal emission and single scatter solar radiation. Results from initial validation efforts are presented.
Radiation dose reduction in medical x-ray CT via Fourier-based iterative reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fahimian, Benjamin P.; Zhao Yunzhe; Huang Zhifeng
Purpose: A Fourier-based iterative reconstruction technique, termed Equally Sloped Tomography (EST), is developed in conjunction with advanced mathematical regularization to investigate radiation dose reduction in x-ray CT. The method is experimentally implemented on fan-beam CT and evaluated as a function of imaging dose on a series of image quality phantoms and anonymous pediatric patient data sets. Numerical simulation experiments are also performed to explore the extension of EST to helical cone-beam geometry. Methods: EST is a Fourier based iterative algorithm, which iterates back and forth between real and Fourier space utilizing the algebraically exact pseudopolar fast Fourier transform (PPFFT). In each iteration, physical constraints and mathematical regularization are applied in real space, while the measured data are enforced in Fourier space. The algorithm is automatically terminated when a proposed termination criterion is met. Experimentally, fan-beam projections were acquired by the Siemens z-flying focal spot technology, and subsequently interleaved and rebinned to a pseudopolar grid. Image quality phantoms were scanned at systematically varied mAs settings, reconstructed by EST and conventional reconstruction methods such as filtered back projection (FBP), and quantified using metrics including resolution, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs). Pediatric data sets were reconstructed at their original acquisition settings and additionally simulated to lower dose settings for comparison and evaluation of the potential for radiation dose reduction. Numerical experiments were conducted to quantify EST and other iterative methods in terms of image quality and computation time. The extension of EST to helical cone-beam CT was implemented by using the advanced single-slice rebinning (ASSR) method. Results: Based on the phantom and pediatric patient fan-beam CT data, it is demonstrated that EST reconstructions with the lowest scanner flux setting of 39 mAs produce comparable image quality, resolution, and contrast relative to FBP with the 140 mAs flux setting. Compared to the algebraic reconstruction technique and the expectation maximization statistical reconstruction algorithm, a significant reduction in computation time is achieved with EST. Finally, numerical experiments on helical cone-beam CT data suggest that the combination of EST and ASSR produces reconstructions with higher image quality and lower noise than the Feldkamp-Davis-Kress (FDK) method and the conventional ASSR approach. Conclusions: A Fourier-based iterative method has been applied to the reconstruction of fan-beam CT data with reduced x-ray fluence. This method incorporates advantageous features in both real and Fourier space iterative schemes: using a fast and algebraically exact method to calculate forward projection, enforcing the measured data in Fourier space, and applying physical constraints and flexible regularization in real space. Our results suggest that EST can be utilized for radiation dose reduction in x-ray CT via the readily implementable technique of lowering mAs settings. Numerical experiments further indicate that EST requires less computation time than several other iterative algorithms and can, in principle, be extended to helical cone-beam geometry in combination with the ASSR method.
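The core real/Fourier alternation can be illustrated in a few lines. The toy below uses a plain Cartesian FFT rather than the pseudopolar transform the method actually relies on, and a bare positivity constraint in place of the full regularization; it is an orientation aid, not the EST implementation.

```python
# Toy EST-style loop: enforce measured Fourier samples, then apply a real-space
# positivity constraint, and iterate.
import numpy as np

rng = np.random.default_rng(0)
truth = np.clip(rng.normal(size=(64, 64)), 0, None)
F_true = np.fft.fft2(truth)
measured = rng.random((64, 64)) < 0.3  # which Fourier samples were acquired

img = np.zeros_like(truth)
for _ in range(200):
    F = np.fft.fft2(img)
    F[measured] = F_true[measured]       # Fourier space: enforce the data
    img = np.real(np.fft.ifft2(F))
    img = np.clip(img, 0, None)          # real space: physical constraint

print(np.linalg.norm(img - truth) / np.linalg.norm(truth))  # relative error
```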
Surface Downward Longwave Radiation Retrieval Algorithm for GEO-KOMPSAT-2A/AMI
NASA Astrophysics Data System (ADS)
Ahn, Seo-Hee; Lee, Kyu-Tae; Rim, Se-Hun; Zo, Il-Sung; Kim, Bu-Yo
2018-05-01
This study contributes to the development of an algorithm to retrieve the Earth's surface downward longwave radiation (DLR) for the 2nd Geostationary Earth Orbit KOrea Multi-Purpose SATellite (GEO-KOMPSAT-2A; GK-2A)/Advanced Meteorological Imager (AMI). For the simulation data used in algorithm development, we referred to Clouds and the Earth's Radiant Energy System (CERES) data and the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis data. The clear-sky DLR calculations were in good agreement with the Gangneung-Wonju National University (GWNU) Line-By-Line (LBL) model. Compared with CERES data, the Root Mean Square Error (RMSE) was 10.14 W m-2. In the case of cloudy-sky DLR, we estimated the cloud base temperature empirically by utilizing cloud liquid water content (LWC) according to the cloud type. As a result, the correlation coefficients with CERES all-sky DLRs were greater than 0.99. However, the RMSE between the calculated DLR and CERES data was about 16.67 W m-2, due to ice clouds and problems of mismatched spatial and temporal resolutions for the input data. This error may be reduced when GK-2A is launched and its products can be used as input data. Accordingly, further study is needed to improve the accuracy of the DLR calculation by using high-resolution input data. In addition, when the retrieved all-sky DLR was compared with BSRN surface-based observational data, the correlation coefficient was 0.86 and the RMSE was 31.55 W m-2, which indicates relatively high accuracy. It is expected that increasing the number of experimental cases will reduce the error.
Akberov, R F; Gorshkov, A N
1997-01-01
The X-ray endoscopic semiotics of precancerous gastric mucosal changes (epithelial dysplasia, intestinal epithelial rearrangement) was examined based on the results of 1574 gastric examinations. A diagnostic algorithm was developed for radiation studies in the diagnosis of the above pathology.
Three numerical algorithms were compared to provide a solution of a radiative transfer equation (RTE) for plane albedo (hemispherical reflectance) in a semi-infinite one-dimensional plane-parallel layer. The algorithms were based on the invariant imbedding method and two different var...
NASA Astrophysics Data System (ADS)
Yeh, Peter C. Y.; Lee, C. C.; Chao, T. C.; Tung, C. J.
2017-11-01
Intensity-modulated radiation therapy is an effective treatment modality for nasopharyngeal carcinoma. One important aspect of this cancer treatment is the need for an accurate dose algorithm that can deal with the complex air/bone/tissue interfaces in the head-neck region in order to achieve cure without radiation-induced toxicities. The Acuros XB algorithm explicitly solves the linear Boltzmann transport equation in voxelized volumes to account for tissue heterogeneities such as lungs, bone, air, and soft tissues in the treatment field receiving radiotherapy. With a single-beam setup in phantoms, this algorithm has already been demonstrated to achieve accuracy comparable with Monte Carlo simulations. In the present study, five nasopharyngeal carcinoma patients treated with intensity-modulated radiation therapy were examined for their dose distributions calculated using the Acuros XB in the planning target volume and the organs at risk. Corresponding results of Monte Carlo simulations were computed from the electronic portal image data and the BEAMnrc/DOSXYZnrc code. Analysis of the dose distributions in terms of clinical indices indicated that the Acuros XB achieved accuracy comparable with Monte Carlo simulations and better than the anisotropic analytical algorithm for dose calculations in real patients.
Algorithm for Surface of Translation Attached Radiators (A-STAR). Volume 2. Users manual
NASA Astrophysics Data System (ADS)
Medgyesimitschang, L. N.; Putnam, J. M.
1982-05-01
A hierarchy of computer programs implementing the method of moments for bodies of translation (MM/BOT) is described. The algorithm treats the far-field radiation from off-surface and aperture antennas on finite-length open or closed bodies of arbitrary cross section. The near fields and antenna coupling on such bodies are computed. The theoretical development underlying the algorithm is described in Volume 1 of this report.
Algorithms imaging tests comparison following the first febrile urinary tract infection in children.
Tombesi, María M; Alconcher, Laura F; Lucarelli, Lucas; Ciccioli, Agustina
2017-08-01
To compare the diagnostic sensitivity, costs, and radiation doses of the imaging test algorithms developed by the Argentine Society of Pediatrics in 2003 and 2015 against British and American guidelines after the first febrile urinary tract infection (UTI). Inclusion criteria: children ≤ 2 years old with their first febrile UTI and normal ultrasound, voiding cystourethrography, and dimercaptosuccinic acid scintigraphy, according to the algorithm established by the Argentine Society of Pediatrics in 2003, treated between 2003 and 2010. The comparisons between algorithms were carried out through retrospective simulation. Eighty (80) patients met the inclusion criteria; 51 (63%) had vesicoureteral reflux (VUR); 6% of the cases were severe. Renal scarring was observed in 6 patients (7.5%). Cost: ARS 404,000. Radiation: 160 millisieverts. With the Argentine Society of Pediatrics' 2015 algorithm, the diagnosis of 4 VURs and 2 cases of renal scarring would have been missed, at a cost of ARS 301,800 and 124 millisieverts of radiation. British and American guidelines would have missed the diagnosis of all VURs and all cases of renal scarring, with related costs of ARS 23,000 and ARS 40,000, respectively, and no radiation. Intensive protocols are highly sensitive for VUR and renal scarring, but they imply high costs and radiation doses, and result in questionable benefits.
Hyperspectrally-Resolved Surface Emissivity Derived Under Optically Thin Clouds
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping
2010-01-01
Surface spectral emissivity derived from current and future satellites can and will reveal critical information about the Earth's ecosystem and land surface type properties, which can be utilized as a means of long-term monitoring of global environment and climate change. Hyperspectrally-resolved surface emissivities are derived with an algorithm that utilizes a fast radiative transfer model (RTM) combining a molecular RTM and a cloud RTM, accounting for both atmospheric absorption and cloud absorption/scattering. Clouds are automatically detected, cloud microphysical parameters are retrieved, and emissivity is retrieved under clear and optically thin cloud conditions. This technique separates surface emissivity from skin temperature by representing the emissivity spectrum with eigenvectors derived from a laboratory-measured emissivity database; in other words, the constraint forces the emissivity to vary smoothly across atmospheric absorption lines. Here we present the emissivity derived under optically thin clouds in comparison with that under clear conditions.
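The eigenvector constraint can be sketched as follows: build an orthonormal basis from a (here synthetic) laboratory emissivity database via SVD, then represent any retrieved spectrum by its projection onto the leading eigenvectors, which suppresses unphysical line-by-line wiggles. All data below are assumptions for illustration.

```python
# Minimal sketch: truncated eigenvector representation of emissivity spectra.
import numpy as np

rng = np.random.default_rng(0)
wn = np.linspace(700, 1400, 400)  # wavenumber grid, cm^-1 (illustrative)
lab_db = 0.95 + 0.0015 * np.cumsum(rng.normal(size=(200, 400)), axis=1)

mean = lab_db.mean(axis=0)
_, _, Vt = np.linalg.svd(lab_db - mean, full_matrices=False)
basis = Vt[:5]  # leading eigenvectors of the laboratory database

def project(spectrum):
    """Least-squares fit of a spectrum onto the smooth eigenbasis."""
    coeffs = basis @ (spectrum - mean)
    return mean + coeffs @ basis

noisy = lab_db[0] + 0.02 * rng.normal(size=400)
smooth = project(noisy)  # varies smoothly across absorption lines
```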
Implementation of Adaptive Digital Controllers on Programmable Logic Devices
NASA Technical Reports Server (NTRS)
Gwaltney, David A.; King, Kenneth D.; Smith, Keary J.; Ormsby, John (Technical Monitor)
2002-01-01
Much has been made of the capabilities of FPGA's (Field Programmable Gate Arrays) in the hardware implementation of fast digital signal processing (DSP) functions. Such capability also makes an FPGA a suitable platform for the digital implementation of closed loop controllers. There are myriad advantages to utilizing an FPGA for discrete-time control functions, which include the capability for reconfiguration when SRAM-based FPGA's are employed, fast parallel implementation of multiple control loops, and implementations that can meet space level radiation tolerance requirements in a compact form-factor. Other researchers have presented the notion that a second order digital filter with proportional-integral-derivative (PID) control functionality can be implemented in an FPGA. At Marshall Space Flight Center, the Control Electronics Group has been studying adaptive discrete-time control of motor driven actuator systems using digital signal processor (DSP) devices. Our goal is to create a fully digital, flight ready controller design that utilizes an FPGA for implementation of signal conditioning for control feedback signals, generation of commands to the controlled system, and hardware insertion of adaptive control algorithm approaches. While small form factor, commercial DSP devices are now available with event capture, data conversion, pulse width modulated outputs and communication peripherals, these devices are not currently available in designs and packages which meet space level radiation requirements. Meeting our goals requires an alternative compact implementation of such functionality to withstand the harsh environment encountered on spacecraft. Radiation tolerant FPGA's are a feasible option for reaching these goals.
Zhou, Lu; Zhou, Linghong; Zhang, Shuxu; Zhen, Xin; Yu, Hui; Zhang, Guoqian; Wang, Ruihao
2014-01-01
Deformable image registration (DIR) is widely used in radiation therapy, for example in automatic contour generation, dose accumulation, and tumor growth or regression analysis. To achieve higher registration accuracy and faster convergence, an improved 'diffeomorphic demons' registration algorithm was proposed and validated. Based on Brox et al.'s gradient constancy assumption and Malis's efficient second-order minimization (ESM) algorithm, a grey value gradient similarity term and a transformation error term were added to the demons energy function, and a formula was derived to calculate the update of the transformation field. The limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm was used to optimize the energy function so that the iteration number could be determined automatically. The proposed algorithm was validated using mathematically deformed images and physically deformed phantom images. Compared with the original 'diffeomorphic demons' algorithm, the proposed registration method achieves higher precision and faster convergence. Because of the influence of different scanning conditions in fractionated radiotherapy, the density ranges of the treatment image and the planning image may differ. In such a case, the improved demons algorithm can achieve faster and more accurate registration for radiotherapy.
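For orientation, the classical demons displacement increment (which the improved algorithm extends with a gradient-constancy term, an ESM-derived update, and L-BFGS optimization, none of which are reproduced here) can be written in a few lines:

```python
# Sketch of one classical demons update step on 2D images (toy data).
import numpy as np

def demons_update(fixed, moving, eps=1e-6):
    diff = moving - fixed
    gy, gx = np.gradient(fixed)
    denom = gx**2 + gy**2 + diff**2 + eps          # demons normalization
    return -diff * gx / denom, -diff * gy / denom  # ux, uy increments

fixed = np.random.default_rng(0).random((64, 64))
moving = np.roll(fixed, 2, axis=1)
ux, uy = demons_update(fixed, moving)  # would be smoothed and composed
```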
NASA Astrophysics Data System (ADS)
Chen, Xiang; Li, Jingchao; Han, Hui; Ying, Yulong
2018-05-01
Because of the limitations of the traditional fractal box-counting dimension algorithm in extracting the subtle features of radiation source signals, a dual improved generalized fractal box-counting dimension eigenvector algorithm is proposed. First, the radiation source signal was preprocessed, and a Hilbert transform was performed to obtain the instantaneous amplitude of the signal. Then, the improved fractal box-counting dimension of the signal's instantaneous amplitude was extracted as the first eigenvector. At the same time, the improved fractal box-counting dimension of the signal without the Hilbert transform was extracted as the second eigenvector. Finally, the dual improved fractal box-counting dimension eigenvectors formed the multi-dimensional eigenvectors describing the signal's subtle features, which were used for radiation source signal recognition by the grey relation algorithm. The experimental results show that, compared with the traditional fractal box-counting dimension algorithm and the single improved fractal box-counting dimension algorithm, the proposed dual improved fractal box-counting dimension algorithm can better extract the signal's subtle distribution characteristics under different reconstruction phase spaces, and has a better recognition effect with good real-time performance.
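As a point of reference, the traditional box-counting dimension and the Hilbert instantaneous amplitude used above can be sketched as follows; the paper's improved generalized variant is not reproduced here, and the scales are illustrative.

```python
# Sketch: traditional box-counting dimension of a signal, plus the Hilbert
# instantaneous amplitude that feeds the first eigenvector.
import numpy as np
from scipy.signal import hilbert

def box_counting_dim(x, scales=(2, 4, 8, 16, 32, 64)):
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)
    counts = []
    for s in scales:
        segs = np.array_split(x, s)  # s boxes along the time axis
        counts.append(sum(int(np.ceil((g.max() - g.min()) * s)) + 1 for g in segs))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
sig = np.sin(np.linspace(0, 40 * np.pi, 4096)) + 0.2 * rng.standard_normal(4096)
amp = np.abs(hilbert(sig))                                 # instantaneous amplitude
features = [box_counting_dim(amp), box_counting_dim(sig)]  # dual features
```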
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niemkiewicz, J; Palmiotti, A; Miner, M
2014-06-01
Purpose: Metal in patients creates streak artifacts in CT images. When used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine if the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. Also, CT images of patients with internal metal objects using standard and MAR reconstruction algorithms were compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. Also, the MAR reconstruction algorithm showed significant improvement in maintaining the HUs of non-metallic regions in the images taken of the phantom with metal. HU gamma analysis (2%, 2 mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate, compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation treatment planning accuracy.
Zhang, Cheng; Zhang, Tao; Li, Ming; Peng, Chengtao; Liu, Zhaobang; Zheng, Jian
2016-06-18
In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic, since it provides the possibility of high-quality recovery from sparsely sampled data. Recently, an algorithm based on DL (dictionary learning) was developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with an L2-norm regularization term, which leads to deteriorating reconstruction quality as the sampling rate declines further. Therefore, it is essential to improve the DL method to meet the demand for further dose reduction. In this paper, we replaced the L2-norm regularization term with an L1-norm one. It is expected that the proposed L1-DL method could alleviate the over-smoothing effect of the L2-minimization and preserve more image details. The proposed algorithm solves the L1-minimization problem by a weighting strategy, solving a new weighted L2-minimization problem based on IRLS (iteratively reweighted least squares). Through numerical simulation, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and two other typical compressed sensing algorithms. It is revealed that the proposed algorithm is more accurate than the other algorithms, especially when further reducing the sampling rate or increasing the noise. The proposed L1-DL algorithm can utilize more prior information on image sparsity than ADSIR. By replacing the L2-norm regularization term of ADSIR with an L1-norm one and solving the L1-minimization problem by the IRLS strategy, L1-DL can reconstruct the image more exactly.
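The IRLS weighting strategy mentioned above has a compact generic form: each iteration replaces the L1 penalty with a weighted L2 penalty whose weights come from the previous iterate, giving a closed-form solve. The sketch below applies it to a plain sparse-recovery problem, not the full dictionary-learning reconstruction.

```python
# Sketch of IRLS for an L1-regularized least-squares problem.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(80, 200))                      # toy system matrix
x_true = np.zeros(200)
x_true[rng.choice(200, 10, replace=False)] = rng.normal(size=10)
b = A @ x_true

lam, eps = 1e-3, 1e-6
x = np.zeros(200)
for _ in range(50):
    w = 1.0 / (np.abs(x) + eps)                     # L1 -> weighted L2
    x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)

print(np.linalg.norm(x - x_true))                   # should be small
```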
3D Buried Utility Location Using A Marching-Cross-Section Algorithm for Multi-Sensor Data Fusion
Dou, Qingxu; Wei, Lijun; Magee, Derek R.; Atkins, Phil R.; Chapman, David N.; Curioni, Giulio; Goddard, Kevin F.; Hayati, Farzad; Jenks, Hugo; Metje, Nicole; Muggleton, Jennifer; Pennock, Steve R.; Rustighi, Emiliano; Swingler, Steven G.; Rogers, Christopher D. F.; Cohn, Anthony G.
2016-01-01
We address the problem of accurately locating buried utility segments by fusing data from multiple sensors using a novel Marching-Cross-Section (MCS) algorithm. Five types of sensors are used in this work: Ground Penetrating Radar (GPR), Passive Magnetic Fields (PMF), Magnetic Gradiometer (MG), Low Frequency Electromagnetic Fields (LFEM) and Vibro-Acoustics (VA). As part of the MCS algorithm, a novel formulation of the extended Kalman Filter (EKF) is proposed for marching existing utility tracks from a scan cross-section (scs) to the next one; novel rules for initializing utilities based on hypothesized detections on the first scs and for associating predicted utility tracks with hypothesized detections in the following scss are introduced. Algorithms are proposed for generating virtual scan lines based on given hypothesized detections when different sensors do not share common scan lines, or when only the coordinates of the hypothesized detections are provided without any information of the actual survey scan lines. The performance of the proposed system is evaluated with both synthetic data and real data. The experimental results in this work demonstrate that the proposed MCS algorithm can locate multiple buried utility segments simultaneously, including both straight and curved utilities, and can separate intersecting segments. By using the probabilities of a hypothesized detection being a pipe or a cable together with its 3D coordinates, the MCS algorithm is able to discriminate a pipe and a cable close to each other. The MCS algorithm can be used for both post- and on-site processing. When it is used on site, the detected tracks on the current scs can help to determine the location and direction of the next scan line. The proposed “multi-utility multi-sensor” system has no limit to the number of buried utilities or the number of sensors, and the more sensor data used, the more buried utility segments can be detected with more accurate location and orientation. PMID:27827836
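The marching step has a familiar predict/update structure. The sketch below shows the linear special case of the EKF used for marching a single track (transverse position and slope) from one scan cross-section to the next; spacing, noise levels, and detections are illustrative assumptions.

```python
# Toy Kalman marching of one utility track across scan cross-sections (scss).
import numpy as np

dx = 0.5                                  # spacing between scss, m (assumed)
F = np.array([[1.0, dx], [0.0, 1.0]])     # position advances by slope * dx
H = np.array([[1.0, 0.0]])                # detections measure position only
Q, R = np.diag([1e-3, 1e-4]), np.array([[4e-2]])

x, P = np.array([0.0, 0.1]), np.eye(2)    # initialized from the first scs
for z in [0.07, 0.12, 0.14, 0.21]:        # hypothesized detections
    x, P = F @ x, F @ P @ F.T + Q         # predict to the next scs
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([z]) - H @ x)   # update with the detection
    P = (np.eye(2) - K @ H) @ P

print(x)  # marched position and slope
```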
Application of Multivariate Modeling for Radiation Injury Assessment: A Proof of Concept
Bolduc, David L.; Villa, Vilmar; Sandgren, David J.; Ledney, G. David; Blakely, William F.; Bünger, Rolf
2014-01-01
Multivariate radiation injury estimation algorithms were formulated for estimating severe hematopoietic acute radiation syndrome (H-ARS) injury (i.e., response category three or RC3) in a rhesus monkey total-body irradiation (TBI) model. Classical CBC and serum chemistry blood parameters were examined prior to irradiation (d 0) and on d 7, 10, 14, 21, and 25 after irradiation involving 24 nonhuman primates (NHP) (Macaca mulatta) given 6.5-Gy 60Co γ-rays (0.4 Gy min−1) TBI. A correlation matrix was formulated with the RC3 severity level designated as the "dependent variable" and independent variables down-selected based on their radioresponsiveness and relatively low multicollinearity using stepwise-linear regression analyses. Final candidate independent variables included CBC counts (absolute numbers of neutrophils, lymphocytes, and platelets) in formulating the "CBC" RC3 estimation algorithm. Additionally, the formulation of a diagnostic CBC and serum chemistry "CBC-SCHEM" RC3 algorithm expanded upon the CBC algorithm model with the addition of hematocrit and the serum enzyme levels of aspartate aminotransferase, creatine kinase, and lactate dehydrogenase. Both algorithms estimated RC3 with over 90% predictive power. Only the CBC-SCHEM RC3 algorithm, however, met the critical three assumptions of linear least squares, demonstrating slightly greater precision for radiation injury estimation but with significantly decreased prediction error, indicating increased statistical robustness. PMID:25165485
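A toy version of the variable down-selection and linear fit, with synthetic stand-ins for the CBC data rather than the NHP measurements:

```python
# Sketch: correlation-based down-selection of blood parameters followed by a
# linear least-squares injury-severity fit (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))   # toy CBC/serum-chemistry columns
rc3 = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=120)

keep = [j for j in range(6) if abs(np.corrcoef(X[:, j], rc3)[0, 1]) > 0.3]
model = LinearRegression().fit(X[:, keep], rc3)
print(keep, model.score(X[:, keep], rc3))  # selected columns, R^2
```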
DOE Office of Scientific and Technical Information (OSTI.GOV)
Staschus, K.
1985-01-01
In this dissertation, efficient algorithms for electric-utility capacity expansion planning with renewable energy are developed. The algorithms include a deterministic phase that quickly finds a near-optimal expansion plan using derating and a linearized approximation to the time-dependent availability of nondispatchable energy sources. A probabilistic second phase needs comparatively few computer-time-consuming probabilistic simulation iterations to modify this solution towards the optimal expansion plan. For the deterministic first phase, two algorithms, based on a Lagrangian Dual decomposition and a Generalized Benders Decomposition, are developed. The probabilistic second phase uses a Generalized Benders Decomposition approach. Extensive computational tests of the algorithms are reported. Among the deterministic algorithms, the one based on Lagrangian Duality proves fastest. The two-phase approach is shown to save up to 80% in computing time as compared to a purely probabilistic algorithm. The algorithms are applied to determine the optimal expansion plan for the Tijuana-Mexicali subsystem of the Mexican electric utility system. A strong recommendation to push conservation programs in the desert city of Mexicali results from this implementation.
Hybrid protection algorithms based on game theory in multi-domain optical networks
NASA Astrophysics Data System (ADS)
Guo, Lei; Wu, Jingjing; Hou, Weigang; Liu, Yejun; Zhang, Lincong; Li, Hongming
2011-12-01
With increasing network size, the optical backbone is divided into multiple domains, each with its own network operator and management policy. At the same time, failures in an optical network may lead to huge data loss since each wavelength carries a large amount of traffic. Therefore, survivability in multi-domain optical networks is very important. However, existing survivable algorithms achieve only unilateral optimization of the profit of either users or network operators; they cannot find the double-win optimal solution that considers economic factors for both users and network operators. Thus, in this paper we develop a multi-domain network model involving multiple Quality of Service (QoS) parameters. After presenting a link evaluation approach based on fuzzy mathematics, we propose a game model to find the optimal solution that maximizes the user's utility, the network operator's utility, and the joint utility of user and network operator. Since the problem of finding the double-win optimal solution is NP-complete, we propose two new hybrid protection algorithms: the Intra-domain Sub-path Protection (ISP) algorithm and the Inter-domain End-to-end Protection (IEP) algorithm. In ISP and IEP, hybrid protection means that an intelligent algorithm based on Bacterial Colony Optimization (BCO) and a heuristic algorithm are used to solve survivability in intra-domain routing and inter-domain routing, respectively. Simulation results show that ISP and IEP have similar comprehensive utility. In addition, ISP has better resource utilization efficiency, lower blocking probability, and higher network operator's utility, while IEP has better user's utility.
NASA Astrophysics Data System (ADS)
Zhao, Lei; Lee, Xuhui; Liu, Shoudong
2013-09-01
Solar radiation at the Earth's surface is an important driver of meteorological and ecological processes. The objective of this study is to evaluate the accuracy of the reanalysis solar radiation produced by NARR (North American Regional Reanalysis) and MERRA (Modern-Era Retrospective Analysis for Research and Applications) against FLUXNET measurements in North America. We found that both assimilation systems systematically overestimated the surface solar radiation flux on the monthly and annual scales, with an average bias error of +37.2 W m-2 for NARR and of +20.2 W m-2 for MERRA. The bias errors were larger under cloudy skies than under clear skies. A postreanalysis algorithm consisting of empirical relationships between model bias, a clearness index, and site elevation was proposed to correct the model errors. Results show that the algorithm can remove the systematic bias errors for both FLUXNET calibration sites (sites used to establish the algorithm) and independent validation sites. After correction, the average annual mean bias errors were reduced to +1.3 W m-2 for NARR and +2.7 W m-2 for MERRA. Applying the correction algorithm to the global domain of MERRA brought the global mean surface incoming shortwave radiation down by 17.3 W m-2 to 175.5 W m-2. Under the constraint of the energy balance, other radiation and energy balance terms at the Earth's surface, estimated from independent global data products, also support the need for a downward adjustment of the MERRA surface solar radiation.
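The postreanalysis correction amounts to a small regression problem: fit the bias at calibration sites as a function of a clearness index and elevation, then subtract the predicted bias everywhere else. The sketch below uses synthetic data and an assumed linear form, not the paper's fitted relationships.

```python
# Sketch of an empirical bias-correction regression (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
clearness = rng.uniform(0.2, 0.8, 500)       # monthly clearness index
elev_km = rng.uniform(0.0, 3.0, 500)         # site elevation, km
bias = 60 * (1 - clearness) - 5 * elev_km + 3 * rng.normal(size=500)

X = np.column_stack([np.ones(500), clearness, elev_km])
coef, *_ = np.linalg.lstsq(X, bias, rcond=None)

def corrected(sw_model, clearness, elev_km):
    """Subtract the regression-predicted bias from the reanalysis flux."""
    return sw_model - (coef[0] + coef[1] * clearness + coef[2] * elev_km)
```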
Hosseini, Seyed Abolfazl; Esmaili Paeen Afrakoti, Iman
2018-01-17
The purpose of the present study was to reconstruct the energy spectrum of a poly-energetic neutron source using an algorithm developed based on an Adaptive Neuro-Fuzzy Inference System (ANFIS). ANFIS is a kind of artificial neural network based on the Takagi-Sugeno fuzzy inference system. The ANFIS algorithm combines the advantages of fuzzy inference systems and artificial neural networks to improve the effectiveness of algorithms in various applications such as modeling, control and classification. The neutron pulse height distributions used as input data in the training procedure for the ANFIS algorithm were obtained from simulations performed with the MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). Taking into account the normalization condition of each energy spectrum, 4300 neutron energy spectra were generated randomly (the value in each bin was generated randomly, and finally each generated energy spectrum was normalized). The randomly generated neutron energy spectra were used as output data of the developed ANFIS computational code in the training step. To calculate the neutron energy spectrum using conventional methods, an inverse problem with an approximately singular response matrix (the determinant of the matrix close to zero) must be solved. Solving the inverse problem with conventional methods unfolds the neutron energy spectrum with low accuracy. Application of iterative algorithms to such a problem, or of intelligent algorithms (in which there is no need to solve the inverse problem), is usually preferred for unfolding the energy spectrum. The main reason for developing intelligent algorithms like ANFIS for unfolding neutron energy spectra is therefore to avoid solving the inverse problem. In the present study, the unfolded neutron energy spectra of 252Cf and 241Am-9Be neutron sources obtained using the developed computational code were found to be in excellent agreement with the reference data. The unfolded energy spectra obtained using ANFIS were also more accurate than results reported from calculations performed using artificial neural networks in previously published papers. © The Author(s) 2018. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
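As a rough illustration of the unfolding idea (learning the inverse mapping so the near-singular response matrix is never inverted), the sketch below trains a network on synthetic pulse-height/spectrum pairs. An off-the-shelf MLP stands in for ANFIS, and all sizes and data are fabricated placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Fabricated stand-in for the MCNPX-ESUT training data: each row pairs a
# pulse-height distribution (input) with the normalized neutron energy
# spectrum that produced it (output). All sizes are illustrative.
n_samples, n_channels, n_bins = 500, 64, 32
spectra = rng.random((n_samples, n_bins))
spectra /= spectra.sum(axis=1, keepdims=True)   # normalization condition
response = rng.random((n_channels, n_bins))     # fake detector response matrix
pulse_heights = spectra @ response.T            # forward model

# The MLP learns the inverse mapping directly, so the near-singular
# response matrix is never inverted.
model = MLPRegressor(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
model.fit(pulse_heights, spectra)
unfolded = model.predict(pulse_heights[:1])     # unfolded spectrum estimate
```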
Unstructured Polyhedral Mesh Thermal Radiation Diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmer, T.S.; Zika, M.R.; Madsen, N.K.
2000-07-27
Unstructured mesh particle transport and diffusion methods are gaining wider acceptance as mesh generation, scientific visualization and linear solvers improve. This paper describes an algorithm that is currently being used in the KULL code at Lawrence Livermore National Laboratory to solve the radiative transfer equations. The algorithm employs a point-centered diffusion discretization on arbitrary polyhedral meshes in 3D. We present the results of a few test problems to illustrate the capabilities of the radiation diffusion module.
NPP ATMS Snowfall Rate Product
NASA Technical Reports Server (NTRS)
Meng, Huan; Ferraro, Ralph; Kongoli, Cezar; Wang, Nai-Yu; Dong, Jun; Zavodsky, Bradley; Yan, Banghua
2015-01-01
Passive microwave measurements at certain high frequencies are sensitive to the scattering effect of snow particles and can be utilized to retrieve snowfall properties. Some of the microwave sensors with snowfall-sensitive channels are the Advanced Microwave Sounding Unit (AMSU), the Microwave Humidity Sounder (MHS) and the Advanced Technology Microwave Sounder (ATMS). ATMS is the follow-on sensor to AMSU and MHS. Currently, an AMSU- and MHS-based land snowfall rate (SFR) product runs operationally at NOAA/NESDIS. Based on the AMSU/MHS SFR, an ATMS SFR algorithm has recently been developed. The algorithm performs retrieval in three steps: snowfall detection, retrieval of cloud properties, and estimation of snow particle terminal velocity and snowfall rate. The snowfall detection component utilizes principal component analysis and a logistic regression model. The model employs a combination of temperature and water vapor sounding channels to detect the scattering signal from falling snow and derive the probability of snowfall (Kongoli et al., 2015). In addition, a set of NWP model based filters is also employed to improve the accuracy of snowfall detection. Cloud properties are retrieved using an inversion method with an iteration algorithm and a two-stream radiative transfer model (Yan et al., 2008). A method developed by Heymsfield and Westbrook (2010) is adopted to calculate snow particle terminal velocity. Finally, snowfall rate is computed by numerically solving a complex integral. NCEP CMORPH analysis has shown that integration of the ATMS SFR has improved the performance of CMORPH-Snow. The ATMS SFR product is also being assessed at several NWS Weather Forecast Offices for its usefulness in weather forecasting.
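The detection step can be sketched as a PCA-plus-logistic-regression pipeline, as described above. Everything in the snippet (channel count, fake brightness temperatures, fake truth labels) is an illustrative assumption, not the operational model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Fake brightness temperatures for a handful of sounding channels and a
# synthetic snowfall truth label; the operational channel set differs.
X = rng.normal(size=(1000, 9))
y = (X[:, :3].mean(axis=1) < -0.2).astype(int)   # 1 = snowfall (synthetic)

# PCA compresses the correlated channels; logistic regression then yields
# a probability of snowfall, mirroring the detection step described above.
detector = make_pipeline(PCA(n_components=4), LogisticRegression())
detector.fit(X, y)
prob_snow = detector.predict_proba(X[:5])[:, 1]
```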
Genetic algorithm driven spectral shaping of supercontinuum radiation in a photonic crystal fiber
NASA Astrophysics Data System (ADS)
Michaeli, Linor; Bahabad, Alon
2018-05-01
We employ a genetic algorithm to control a pulse-shaping system pumping a nonlinear photonic crystal with ultrashort pulses. With this system, we are able to modify the spectrum of the generated supercontinuum (SC) radiation to yield narrow Gaussian-like features around pre-selected wavelengths over the whole SC spectrum.
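A minimal version of the evolutionary loop might look like the following, where a toy surrogate stands in for the pulse-shaper/fiber physics and the fitness rewards similarity to a narrow Gaussian target; all details are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
wl = np.linspace(500, 1300, 200)                  # wavelength grid, nm
target = np.exp(-0.5 * ((wl - 900) / 15.0) ** 2)  # desired narrow feature

def sc_spectrum(genes):
    """Toy surrogate for the SC spectrum produced by one shaper setting."""
    return np.interp(wl, np.linspace(500, 1300, genes.size), genes)

def fitness(genes):
    return -np.sum((sc_spectrum(genes) - target) ** 2)

pop = rng.random((40, 32))                        # 32 control genes each
for _ in range(100):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-20:]]       # selection
    moms = parents[rng.integers(0, 20, 20)]
    dads = parents[rng.integers(0, 20, 20)]
    children = 0.5 * (moms + dads)                # blend crossover
    children += rng.normal(0, 0.05, children.shape)  # mutation
    pop = np.vstack([parents, children])
best_setting = pop[np.argmax([fitness(g) for g in pop])]
```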
NASA Astrophysics Data System (ADS)
Zhao, Jin; Han-Ming, Zhang; Bin, Yan; Lei, Li; Lin-Yuan, Wang; Ai-Long, Cai
2016-03-01
Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work, along with advanced total variation (TV) regularization, for fan-beam sparse-view CT. The proposition of a selective matrix contributes to improved reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image space. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation. Project supported by the National High Technology Research and Development Program of China (Grant No. 2012AA011603) and the National Natural Science Foundation of China (Grant No. 61372172).
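The back-and-forth iteration can be sketched as alternating a Fourier-domain data-consistency step with an image-domain TV gradient step. In the toy below, a uniform FFT and a random binary mask stand in for the NUFFT/adjoint pair and the selective matrix; both substitutions are assumptions, not the paper's operators.

```python
import numpy as np

rng = np.random.default_rng(3)
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0   # simple phantom
mask = rng.random((64, 64)) < 0.25                  # sparse Fourier samples
data = np.fft.fft2(img) * mask                      # measured (masked) data

def tv_grad(x, eps=1e-8):
    """Gradient of (smoothed) total variation with periodic boundaries."""
    gx = np.roll(x, -1, axis=0) - x
    gy = np.roll(x, -1, axis=1) - x
    mag = np.sqrt(gx**2 + gy**2 + eps)
    return -((gx / mag - np.roll(gx / mag, 1, axis=0))
             + (gy / mag - np.roll(gy / mag, 1, axis=1)))

x = np.zeros_like(img)
for _ in range(100):
    resid = np.fft.fft2(x) * mask - data
    x -= 0.9 * np.real(np.fft.ifft2(resid))   # Fourier-space consistency step
    x -= 0.02 * tv_grad(x)                    # image-space TV step
```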
Radiation anomaly detection algorithms for field-acquired gamma energy spectra
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ron; Guss, Paul; Mitchell, Stephen
2015-08-01
The Remote Sensing Laboratory (RSL) is developing a tactical, networked radiation detection system that will be agile, reconfigurable, and capable of rapid threat assessment with a high degree of fidelity and certainty. Our design is driven by the needs of users such as law enforcement personnel who must make decisions by evaluating threat signatures in urban settings. The most efficient tool available to identify the nature of the threat object is real-time gamma spectroscopic analysis, as it is fast and has a very low probability of producing false positive alarm conditions. Urban radiological searches are inherently challenged by the rapid and large spatial variation of background gamma radiation, the presence of benign naturally occurring radioactive materials (NORM), and shielded and/or masked threat sources. Multiple spectral anomaly detection algorithms have been developed by national laboratories and commercial vendors. For example, the Gamma Detector Response and Analysis Software (GADRAS), a one-dimensional deterministic radiation transport code capable of calculating gamma-ray spectra using physics-based detector response functions, was developed at Sandia National Laboratories. The nuisance-rejection spectral comparison ratio anomaly detection algorithm (NSCRAD), developed at Pacific Northwest National Laboratory, uses spectral comparison ratios to detect deviations from benign medical and NORM radiation sources and can work despite a strong presence of NORM and/or medical sources. RSL has developed its own wavelet-based gamma energy spectral anomaly detection algorithm called WAVRAD. Test results and relative merits of these different algorithms will be discussed and demonstrated.
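The spectral-comparison-ratio idea behind algorithms like NSCRAD can be illustrated with a few energy windows: ratios of window counts are compared against background ratios, and large deviations are flagged. The windows, counts, and threshold rule below are invented for illustration and are far simpler than the real statistics.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative energy windows (channel ranges) over a 1024-channel spectrum
# and a Poisson background; real NSCRAD windows/statistics are more refined.
windows = [(0, 50), (50, 120), (120, 300), (300, 600)]
background = rng.poisson(100.0, size=1024).astype(float)

def window_counts(spec):
    return np.array([spec[a:b].sum() for a, b in windows])

def comparison_ratios(spec):
    c = window_counts(spec)
    return c[1:] / c[0]        # ratios suppress overall-intensity changes

bg_ratios = comparison_ratios(background)

def is_anomalous(spec, k=5.0):
    """Flag spectra whose window ratios deviate strongly from background."""
    c = window_counts(spec)
    sigma = np.sqrt(c[1:]) / c[0]           # crude Poisson uncertainty
    return bool(np.any(np.abs(comparison_ratios(spec) - bg_ratios) > k * sigma))

print(is_anomalous(background + rng.poisson(5.0, 1024)))
```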
Kim, Hyungjin; Park, Chang Min; Song, Yong Sub; Lee, Sang Min; Goo, Jin Mo
2014-05-01
To evaluate the influence of radiation dose settings and reconstruction algorithms on the measurement accuracy and reproducibility of semi-automated pulmonary nodule volumetry. CT scans were performed on a chest phantom containing various nodules (10 and 12mm; +100, -630 and -800HU) at 120kVp with tube current-time settings of 10, 20, 50, and 100mAs. Each CT was reconstructed using filtered back projection (FBP), iDose(4) and iterative model reconstruction (IMR). Semi-automated volumetry was performed by two radiologists using commercial volumetry software for nodules at each CT dataset. Noise, contrast-to-noise ratio and signal-to-noise ratio of CT images were also obtained. The absolute percentage measurement errors and differences were then calculated for volume and mass. The influence of radiation dose and reconstruction algorithm on measurement accuracy, reproducibility and objective image quality metrics was analyzed using generalized estimating equations. Measurement accuracy and reproducibility of nodule volume and mass were not significantly associated with CT radiation dose settings or reconstruction algorithms (p>0.05). Objective image quality metrics of CT images were superior in IMR than in FBP or iDose(4) at all radiation dose settings (p<0.05). Semi-automated nodule volumetry can be applied to low- or ultralow-dose chest CT with usage of a novel iterative reconstruction algorithm without losing measurement accuracy and reproducibility. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Solomon, Justin; Mileto, Achille; Nelson, Rendon C; Roy Choudhury, Kingshuk; Samei, Ehsan
2016-04-01
To determine if radiation dose and reconstruction algorithm affect the computer-based extraction and analysis of quantitative imaging features in lung nodules, liver lesions, and renal stones at multi-detector row computed tomography (CT). Retrospective analysis of data from a prospective, multicenter, HIPAA-compliant, institutional review board-approved clinical trial was performed by extracting 23 quantitative imaging features (size, shape, attenuation, edge sharpness, pixel value distribution, and texture) of lesions on multi-detector row CT images of 20 adult patients (14 men, six women; mean age, 63 years; range, 38-72 years) referred for known or suspected focal liver lesions, lung nodules, or kidney stones. Data were acquired between September 2011 and April 2012. All multi-detector row CT scans were performed at two different radiation dose levels; images were reconstructed with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction (MBIR) algorithms. A linear mixed-effects model was used to assess the effect of radiation dose and reconstruction algorithm on extracted features. Among the 23 imaging features assessed, radiation dose had a significant effect on five, three, and four of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). Adaptive statistical iterative reconstruction had a significant effect on three, one, and one of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). MBIR reconstruction had a significant effect on nine, 11, and 15 of the features for liver lesions, lung nodules, and renal stones, respectively (P < .002 for all comparisons). Of note, the measured size of lung nodules and renal stones with MBIR was significantly different than those for the other two algorithms (P < .002 for all comparisons). Although lesion texture was significantly affected by the reconstruction algorithm used (average of 3.33 features affected by MBIR throughout lesion types; P < .002, for all comparisons), no significant effect of the radiation dose setting was observed for all but one of the texture features (P = .002-.998). Radiation dose settings and reconstruction algorithms affect the extraction and analysis of quantitative imaging features in lesions at multi-detector row CT.
Lunar Habitat Optimization Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
SanScoucie, M. P.; Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Long-duration surface missions to the Moon and Mars will require bases to accommodate habitats for the astronauts. Transporting the materials and equipment required to build the necessary habitats is costly and difficult. The materials chosen for the habitat walls play a direct role in protection against hazards such as meteoroid impacts and radiation. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Clearly, an optimization method is warranted for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat wall design tool utilizing genetic algorithms (GAs) has been developed. GAs use a "survival of the fittest" philosophy in which the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multiobjective formulation of up-mass, heat loss, structural analysis, meteoroid impact protection, and radiation protection. This Technical Publication presents the research and development of this tool as well as a technique for finding the optimal GA search parameters.
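A bare-bones version of such a GA might encode layer thicknesses as genes and score them with a weighted multiobjective fitness; the material properties, weights, and GA settings below are placeholders, not the tool's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical per-material rows: (areal density, shielding merit,
# insulation merit). Values are placeholders, not engineering data.
materials = np.array([[1.0, 0.8, 0.3],
                      [2.7, 1.5, 0.1],
                      [0.3, 0.2, 0.9]])

def fitness(thickness, w=(1.0, 1.0, 1.0)):
    """Weighted multiobjective score: penalize up-mass, reward protection
    (radiation/meteoroid) and insulation (low heat loss)."""
    mass, shield, insul = materials.T @ thickness
    return -w[0] * mass + w[1] * shield + w[2] * insul

pop = rng.random((30, 3)) * 0.2                   # layer thicknesses, m
for _ in range(200):
    scores = np.array([fitness(t) for t in pop])
    parents = pop[np.argsort(scores)[-15:]]       # survival of the fittest
    kids = 0.5 * (parents[rng.integers(0, 15, 15)]
                  + parents[rng.integers(0, 15, 15)])
    kids = np.clip(kids + rng.normal(0, 0.01, kids.shape), 0.0, None)
    pop = np.vstack([parents, kids])
best_wall = pop[np.argmax([fitness(t) for t in pop])]
```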
Study of high speed complex number algorithms. [for determining antenna for field radiation patterns
NASA Technical Reports Server (NTRS)
Heisler, R.
1981-01-01
A method of evaluating the radiation integral on the curved surface of a reflecting antenna is presented. A three-dimensional Fourier transform approach is used to generate a two-dimensional radiation cross-section along a planar cut at any angle phi through the far-field pattern. Salient to the method is an algorithm for evaluating a subset of the total three-dimensional discrete Fourier transform results. The subset elements are selectively evaluated to yield data along a geometric plane of constant phi. The algorithm is extremely efficient, so that computation of the induced surface currents via the physical optics approximation dominates the computer time required to compute a radiation pattern. Application to paraboloid reflectors with off-focus feeds is presented, but the method is easily extended to offset antenna systems and reflectors of arbitrary shape. Numerical results were computed for both gain and phase and are compared with other published work.
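The core trick (evaluating only a subset of Fourier-transform outputs directly rather than computing the full transform) can be shown in 2D: samples along a cut at angle phi through the frequency plane are evaluated by direct summation, at a cost proportional to the number of requested samples. The aperture data here is a random placeholder, and the reduction from 3D to 2D is a simplification for illustration.

```python
import numpy as np

N = 128
rng = np.random.default_rng(6)
field = rng.random((N, N))        # placeholder for sampled surface currents
phi = np.deg2rad(30.0)            # cut angle through the pattern
n = np.arange(N)

def dft_sample(u, v):
    """One 2D DFT output, evaluated by direct (separable) summation; u and
    v need not be integers, so samples can lie exactly on the phi cut."""
    ex = np.exp(-2j * np.pi * u * n / N)
    ey = np.exp(-2j * np.pi * v * n / N)
    return ex @ field @ ey

# Evaluate only the samples along the constant-phi cut: O(N_k * N^2) work
# instead of a full transform over all N^2 outputs.
k = np.arange(N // 2)
cut = np.array([dft_sample(kk * np.cos(phi), kk * np.sin(phi)) for kk in k])
```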
A New Fast Algorithm to Completely Account for Non-Lambertian Surface Reflection of The Earth
NASA Technical Reports Server (NTRS)
Qin, Wen-Han; Herman, Jay R.; Ahmad, Ziauddin; Einaudi, Franco (Technical Monitor)
2000-01-01
Surface bidirectional reflectance distribution function (BRDF) influences not only the radiance just above the surface, but also that emerging from the top of the atmosphere (TOA). In this study we propose a new, fast and accurate algorithm, CASBIR (correction for anisotropic surface bidirectional reflection), to account for such influences on radiance measured at the TOA. This new algorithm is based on a 4-stream theory that separates the radiation field into direct and diffuse components in both upwelling and downwelling directions. This is important because the direct component accounts for a substantial portion of incident radiation under a clear sky, and the BRDF effect is strongest in the reflection of the direct radiation reaching the surface. The model is validated by comparison with a full-scale, vector radiative transfer model for the atmosphere-surface system. The result demonstrates that CASBIR performs very well (with an overall relative difference of less than one percent) for all solar and viewing zenith and azimuth angles considered, at wavelengths from ultraviolet to near-infrared, over three typical but very different surface types. Applications of this algorithm include both accounting for non-Lambertian surface scattering in the emergent radiation above the TOA and a potential approach for surface BRDF retrieval from satellite-measured radiance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardcastle, Nicholas, E-mail: nick.hardcastle@gmail.com; Centre for Medical Radiation Physics, University of Wollongong, Wollongong; Hofman, Michael S.
2015-09-01
Purpose: Measuring changes in lung perfusion resulting from radiation therapy dose requires registration of the functional imaging to the radiation therapy treatment planning scan. This study investigates registration accuracy and utility for positron emission tomography (PET)/computed tomography (CT) perfusion imaging in radiation therapy for non–small cell lung cancer. Methods: 68Ga 4-dimensional PET/CT ventilation-perfusion imaging was performed before, during, and after radiation therapy for 5 patients. Rigid registration and deformable image registration (DIR) using B-splines and Demons algorithms were performed with the CT data to obtain a deformation map between the functional images and planning CT. Contour propagation accuracy and correspondence of anatomic features were used to assess registration accuracy. The Wilcoxon signed-rank test was used to determine statistical significance. Changes in lung perfusion resulting from radiation therapy dose were calculated for each registration method for each patient and averaged over all patients. Results: With B-splines/Demons DIR, median distance to agreement between lung contours reduced modestly by 0.9/1.1 mm, 1.3/1.6 mm, and 1.3/1.6 mm for pretreatment, midtreatment, and posttreatment (P<.01 for all), and median Dice score between lung contours improved by 0.04/0.04, 0.05/0.05, and 0.05/0.05 for pretreatment, midtreatment, and posttreatment (P<.001 for all). Distance between anatomic features reduced with DIR by a median of 2.5 mm and 2.8 mm for the pretreatment and midtreatment time points, respectively (P=.001), and 1.4 mm for posttreatment (P>.2). Poorer posttreatment results were likely caused by posttreatment pneumonitis and tumor regression. Up to 80% standardized uptake value loss in perfusion scans was observed. There was limited change in the loss in lung perfusion between registration methods; however, Demons resulted in larger interpatient variation compared with rigid and B-splines registration. Conclusions: DIR accuracy in the data sets studied was variable depending on anatomic changes resulting from radiation therapy; caution must be exercised when using DIR in regions of low contrast or radiation pneumonitis. Lung perfusion reduces with increasing radiation therapy dose; however, DIR did not translate into significant changes in dose–response assessment.
Review of TRMM/GPM Rainfall Algorithm Validation
NASA Technical Reports Server (NTRS)
Smith, Eric A.
2004-01-01
A review is presented concerning current progress on evaluation and validation of the standard Tropical Rainfall Measuring Mission (TRMM) precipitation retrieval algorithms and the prospects for implementing an improved validation research program for the next-generation Global Precipitation Measurement (GPM) Mission. All standard TRMM algorithms are physical in design, and are thus based on fundamental principles of microwave radiative transfer and its interaction with semi-detailed cloud microphysical constituents. They are evaluated for consistency and degree of equivalence with one another, as well as intercompared to radar-retrieved rainfall at TRMM's four main ground validation sites. Similarities and differences are interpreted in the context of the radiative and microphysical assumptions underpinning the algorithms. Results indicate that the current accuracies of the TRMM Version 6 algorithms are approximately 15% at zonal-averaged/monthly scales, with precisions of approximately 25% for full-resolution/instantaneous rain rate estimates (i.e., level 2 retrievals). Strengths and weaknesses of the TRMM validation approach are summarized. Because the degree of convergence of level 2 TRMM algorithms is being used as a guide for setting validation requirements for the GPM mission, it is important that the GPM algorithm validation program be improved to ensure concomitant improvement in the standard GPM retrieval algorithms. An overview of the GPM Mission's validation plan is provided, including a description of a new type of physical validation model using an analytic 3-dimensional radiative transfer model.
Forward-looking Assimilation of MODIS-derived Snow Covered Area into a Land Surface Model
NASA Technical Reports Server (NTRS)
Zaitchik, Benjamin F.; Rodell, Matthew
2008-01-01
Snow cover over land has a significant impact on the surface radiation budget, turbulent energy fluxes to the atmosphere, and local hydrological fluxes. For this reason, inaccuracies in the representation of snow covered area (SCA) within a land surface model (LSM) can lead to substantial errors in both offline and coupled simulations. Data assimilation algorithms have the potential to address this problem. However, the assimilation of SCA observations is complicated by an information deficit in the observation (SCA indicates only the presence or absence of snow, not snow volume) and by the fact that assimilated SCA observations can introduce inconsistencies with atmospheric forcing data, leading to non-physical artifacts in the local water balance. In this paper we present a novel assimilation algorithm that introduces MODIS SCA observations to the Noah LSM in global, uncoupled simulations. The algorithm utilizes observations from up to 72 hours ahead of the model simulation in order to correct against emerging errors in the simulation of snow cover while preserving the local hydrologic balance. This is accomplished by using future snow observations to adjust air temperature and, when necessary, precipitation within the LSM. In global, offline integrations, this new assimilation algorithm provided improved simulation of SCA and snow water equivalent relative to open loop integrations and integrations that used an earlier SCA assimilation algorithm. These improvements, in turn, influenced the simulation of surface water and energy fluxes both during the snow season and, in some regions, on into the following spring.
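The forward-looking correction can be caricatured as a nudging rule: where observations up to 72 hours ahead disagree with the modeled snow state, the forcing temperature is shifted so the model's own physics grows or melts snow. The increments below are illustrative assumptions, not the paper's tuned values, and the precipitation adjustment is omitted.

```python
import numpy as np

def nudge_forcing(t_air, model_sca, future_obs_sca,
                  dt_cool=-2.0, dt_warm=+2.0):
    """Shift forcing air temperature (K) wherever the forward-looking
    observations disagree with the modeled snow state, so the LSM's own
    physics grows or melts snow. Increments are illustrative only."""
    t_air = np.asarray(t_air, dtype=float).copy()
    for i, (mod, obs) in enumerate(zip(model_sca, future_obs_sca)):
        if obs and not mod:        # obs ahead show snow: cool the air
            t_air[i] += dt_cool
        elif mod and not obs:      # obs ahead show bare ground: warm it
            t_air[i] += dt_warm
    return t_air

adjusted = nudge_forcing([274.0, 271.5, 268.0],
                         model_sca=[False, True, True],
                         future_obs_sca=[True, True, False])
```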
UV Reconstruction Algorithm And Diurnal Cycle Variability
NASA Astrophysics Data System (ADS)
Curylo, Aleksander; Litynska, Zenobia; Krzyscin, Janusz; Bogdanska, Barbara
2009-03-01
UV reconstruction is a method for estimating surface UV from available actinometric and aerological measurements. UV reconstruction is necessary for the study of long-term UV change: a typical series of UV measurements is no longer than 15 years, which is too short for trend estimation. The essential problem in the reconstruction algorithm is a good parameterization of clouds. In our previous algorithm we used an empirical relation between the Cloud Modification Factor (CMF) in global radiation and the CMF in UV. The CMF is defined as the ratio between measured and modelled irradiances. Clear-sky irradiance was calculated with a solar radiative transfer model. In the proposed algorithm, the time variability of global radiation during the diurnal cycle is used as an additional source of information. To develop the improved reconstruction algorithm, relevant data from Legionowo [52.4 N, 21.0 E, 96 m a.s.l.], Poland were collected with the following instruments: NILU-UV multi-channel radiometer, Kipp&Zonen pyranometer, and radiosonde profiles of ozone, humidity and temperature. The proposed algorithm has been used for reconstruction of UV at four Polish sites: Mikolajki, Kolobrzeg, Warszawa-Bielany and Zakopane since the early 1960s. Krzyscin's reconstruction of total ozone has been used in the calculations.
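The CMF bookkeeping is simple enough to sketch directly; the linear global-to-UV relation below is a placeholder for the empirical fit, not the published coefficients.

```python
def cmf(measured, modeled_clear_sky):
    """Cloud Modification Factor: measured over modelled clear-sky irradiance."""
    return measured / modeled_clear_sky

def cmf_uv_from_global(cmf_g, a=0.4, b=0.6):
    """Placeholder for the empirical global-to-UV CMF relation; the
    coefficients are illustrative, not the fitted values."""
    return a + b * cmf_g

uv_clear = 120.0                              # modelled clear-sky UV (a.u.)
cmf_g = cmf(measured=450.0, modeled_clear_sky=700.0)
uv_reconstructed = cmf_uv_from_global(cmf_g) * uv_clear
```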
Comparison of optimization algorithms in intensity-modulated radiation therapy planning
NASA Astrophysics Data System (ADS)
Kendrick, Rachel
Intensity-modulated radiation therapy is used to better conform the radiation dose to the target while avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam and therefore create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly used algorithms for one 5-beam plan. Algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian EclipseTM, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated annealing hybrid were also separately compared using different prescription doses. The results of each dose-volume histogram as well as the visual dose color wash were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but avoidance of the organ-at-risk was rivaled by other programs. Hybrids of quadratic programming with some of these algorithms suggest the possibility of better planning programs, as shown by the improved quadratic/simulated annealing plan compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.
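The quadratic-programming formulation addressed by such optimizers can be sketched as a bound-constrained least-squares problem over beamlet fluences; the influence matrix and prescription below are random placeholders, not clinical data.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(7)

# Fabricated influence matrix D (dose per unit beamlet fluence at each
# voxel) and uniform prescription d; sizes and values are placeholders.
n_voxels, n_beamlets = 200, 40
D = rng.random((n_voxels, n_beamlets))
d = np.full(n_voxels, 180.0)                 # prescribed dose, cGy

# Quadratic-programming form of fluence optimization:
#   minimize ||D w - d||^2   subject to   w >= 0
result = lsq_linear(D, d, bounds=(0.0, np.inf))
fluence = result.x
```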
The Fire Locating and Modeling of Burning Emissions (FLAMBE) Project
NASA Astrophysics Data System (ADS)
Reid, J. S.; Prins, E. M.; Westphal, D.; Richardson, K.; Christopher, S.; Schmidt, C.; Theisen, M.; Eck, T.; Reid, E. A.
2001-12-01
The Fire Locating and Modeling of Burning Emissions (FLAMBE) project was initiated by NASA, the US Navy and NOAA to monitor biomass burning and burning emissions on a global scale. The idea behind the mission is to integrate remote sensing data with global and regional transport models in real time for the purpose of providing the scientific community with smoke and fire products for planning and research purposes. FLAMBE currently utilizes real-time satellite data from the GOES satellites; fire products based on the Wildfire Automated Biomass Burning Algorithm (WF_ABBA) are generated for the Western Hemisphere every 30 minutes with only a 90-minute processing delay. We are currently collaborating with other investigators to gain global coverage. Once generated, the fire products are used to input smoke fluxes into the NRL Aerosol Analysis and Prediction System, where advection forecasts are performed for up to 6 days. Subsequent radiative transfer calculations are used to estimate top-of-atmosphere and surface radiative forcing as well as surface-layer visibility. Near-real-time validation is performed using field data collected by Aerosol Robotic Network (AERONET) Sun photometers. In this paper we fully describe the FLAMBE project and data availability. Preliminary results from the previous year will also be presented, with an emphasis on the development of algorithms to determine smoke emission fluxes from individual fire products. Comparisons to AERONET Sun photometer data will be made.
NASA Astrophysics Data System (ADS)
Cheng, Guanghui; Yang, Xiaofeng; Wu, Ning; Xu, Zhijian; Zhao, Hongfu; Wang, Yuefeng; Liu, Tian
2013-02-01
Xerostomia (dry mouth), resulting from radiation damage to the parotid glands, is one of the most common and distressing side effects of head-and-neck cancer radiotherapy. Recent MRI studies have demonstrated that the volume reduction of parotid glands is an important indicator for radiation damage and xerostomia. In the clinic, parotid-volume evaluation is exclusively based on physicians' manual contours. However, manual contouring is time-consuming and prone to inter-observer and intra-observer variability. Here, we report a fully automated multi-atlas-based registration method for parotid-gland delineation in 3D head-and-neck MR images. The multi-atlas segmentation utilizes a hybrid deformable image registration to map the target subject to multiple patients' images, applies the transformation to the corresponding segmented parotid glands, and subsequently uses the multiple patient-specific pairs (head-and-neck MR image and transformed parotid-gland mask) to train support vector machine (SVM) to reach consensus to segment the parotid gland of the target subject. This segmentation algorithm was tested with head-and-neck MRIs of 5 patients following radiotherapy for the nasopharyngeal cancer. The average parotid-gland volume overlapped 85% between the automatic segmentations and the physicians' manual contours. In conclusion, we have demonstrated the feasibility of an automatic multi-atlas based segmentation algorithm to segment parotid glands in head-and-neck MR images.
An Incremental High-Utility Mining Algorithm with Transaction Insertion
Gan, Wensheng; Zhang, Binbin
2015-01-01
Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It considers only the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable in real-world applications since the purchased items from a customer may have various factors, such as profit or quantity. High-utility mining was designed to solve the limitations of association-rule mining by considering both the quantity and profit measures. Most algorithms of high-utility mining are designed to handle a static database. Few studies handle dynamic high-utility mining with transaction insertion, which otherwise requires database rescans and suffers the combinatorial explosion of the pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computations without candidate generation, based on utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computations. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns. PMID:25811038
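The utility measure at the heart of high-utility mining can be shown in a few lines: itemset utility is quantity times unit profit, summed over the transactions containing the whole itemset, and itemsets are kept when they clear a minimum-utility threshold. (The efficient utility-list and enumeration-tree machinery of the proposed algorithm is not reproduced here; data and threshold are illustrative.)

```python
# Minimal sketch of the utility measure behind high-utility mining.
profit = {"a": 3, "b": 1, "c": 5}          # unit profit per item
transactions = [
    {"a": 2, "b": 4},                      # item -> purchased quantity
    {"a": 1, "c": 2},
    {"b": 3, "c": 1},
]

def utility(itemset, db):
    """Quantity x profit, summed over transactions containing the itemset."""
    total = 0
    for t in db:
        if all(i in t for i in itemset):
            total += sum(t[i] * profit[i] for i in itemset)
    return total

min_util = 10
candidates = [("a",), ("c",), ("a", "c")]
high_utility = [s for s in candidates if utility(s, transactions) >= min_util]
```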
Administrative Data Algorithms Can Describe Ambulatory Physician Utilization
Shah, Baiju R; Hux, Janet E; Laupacis, Andreas; Zinman, Bernard; Cauch-Dudek, Karen; Booth, Gillian L
2007-01-01
Objective To validate algorithms using administrative data that characterize ambulatory physician care for patients with a chronic disease. Data Sources Seven hundred and eighty-one people with diabetes were recruited mostly from community pharmacies to complete a written questionnaire about their physician utilization in 2002. These data were linked with administrative databases detailing health service utilization. Study Design An administrative data algorithm was defined that identified whether or not patients received specialist care, and it was tested for agreement with self-report. Other algorithms, which assigned each patient to a primary care and specialist physician, were tested for concordance with self-reported regular providers of care. Principal Findings The algorithm to identify whether participants received specialist care had 80.4 percent agreement with questionnaire responses (κ = 0.59). Compared with self-report, administrative data had a sensitivity of 68.9 percent and specificity 88.3 percent for identifying specialist care. The best administrative data algorithm to assign each participant's regular primary care and specialist providers was concordant with self-report in 82.6 and 78.2 percent of cases, respectively. Conclusions Administrative data algorithms can accurately match self-reported ambulatory physician utilization. PMID:17610448
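The validation arithmetic reported above (percent agreement, kappa, sensitivity, specificity) follows from a 2x2 table of algorithm flag versus self-report; the counts below are invented for illustration, not the study's data.

```python
# Illustrative 2x2 table: administrative-data flag vs. self-reported
# specialist care (counts are invented).
tp, fp, fn, tn = 310, 55, 140, 276
n = tp + fp + fn + tn

agreement = (tp + tn) / n
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

# Cohen's kappa: observed agreement corrected for chance agreement.
p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
kappa = (agreement - p_chance) / (1 - p_chance)
```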
NASA Astrophysics Data System (ADS)
Tornga, Shawn R.
The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile, truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, 5x5x2 in3 each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24x2.5x3 in3, called the detection array (DA). The CA array acts as both a coded aperture mask and a scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton-scattered events and coded aperture events. In this thesis, the coded aperture, Compton and hybrid imaging algorithms that were developed are described along with their performance. It is shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as Global Positioning System (GPS) and Inertial Navigation System (INS) data, must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Algorithms were also developed in parallel with detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data. Results of image reconstruction algorithms at various speeds and distances are presented, as well as localization capability. Utilizing imaging information shows signal-to-noise gains over spectroscopic algorithms alone.
NASA Technical Reports Server (NTRS)
Platnick, Steven; Wind, Galina; Zhang, Zhibo; Ackerman, Steven A.; Maddux, Brent
2012-01-01
The optical and microphysical structure of warm boundary layer marine clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS (Moderate Resolution Imaging Spectroradiometer) on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1 km retrievals of cloud optical thickness and effective particle size are provided, as well as the derived water path. In addition, the cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate effective radii results using the 1.6, 2.1, and 3.7 μm spectral channels. Cloud retrieval statistics are highly sensitive to how a pixel identified as being "not clear" by a cloud mask (e.g., the MOD35/MYD35 product) is determined to be useful for an optical retrieval based on a 1-D cloud model. The Collection 5 MODIS retrieval algorithm removed pixels associated with cloud edges as well as ocean pixels with partly cloudy elements in the 250 m MODIS cloud mask - part of the so-called Clear Sky Restoral (CSR) algorithm. Collection 6 attempts retrievals for those two pixel populations, but allows a user to isolate or filter out the populations via CSR pixel-level Quality Assessment (QA) assignments. In this paper, using the preliminary Collection 6 MOD06 product, we present global and regional statistical results of marine warm cloud retrieval sensitivities to the cloud edge and 250 m partly cloudy pixel populations. As expected, retrievals for these pixels are generally consistent with a breakdown of the 1-D cloud model. While optical thickness for these suspect pixel populations may have some utility for radiative studies, the retrievals should be used with extreme caution for process and microphysical studies.
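Filtering by CSR QA assignment reduces to masking pixels whose flag falls in the suspect categories; the flag codes below are hypothetical stand-ins, not the actual Collection 6 assignments.

```python
import numpy as np

# Hypothetical CSR QA codes; the actual Collection 6 assignments may differ.
CSR_EDGE, CSR_PARTLY_CLOUDY = 1, 3

rng = np.random.default_rng(8)
csr_flag = rng.integers(0, 4, size=10000)        # per-pixel CSR QA values
tau = rng.gamma(2.0, 5.0, size=10000)            # retrieved optical thickness

# Isolate or exclude the suspect pixel populations with a simple mask.
keep = ~np.isin(csr_flag, [CSR_EDGE, CSR_PARTLY_CLOUDY])
tau_filtered = tau[keep]
```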
Exploiting passive polarimetric imagery for remote sensing applications
NASA Astrophysics Data System (ADS)
Vimal Thilak Krishna, Thilakam
Polarization is a property of light or electromagnetic radiation that conveys information about the orientation of the transverse electric and magnetic fields. The polarization of reflected light complements other electromagnetic radiation attributes such as intensity, frequency, or spectral characteristics. A passive polarization-based imaging system records the polarization state of light reflected by objects that are illuminated with an unpolarized and generally uncontrolled source. The polarization due to surface reflections from such objects contains information about the targets that can be exploited in remote sensing applications such as target detection, target classification, object recognition and shape extraction/recognition. In recent years, there has been renewed interest in the use of passive polarization information in remote sensing applications. The goal of our research is to design image processing algorithms for remote sensing applications by utilizing physics-based models that describe the polarization imparted by optical scattering from an object. In this dissertation, we present a method to estimate the complex index of refraction and reflection angle from multiple polarization measurements. This method employs a polarimetric bidirectional reflectance distribution function (pBRDF) that accounts for polarization due to specular scattering. The parameters of interest are derived using a nonlinear least squares estimation algorithm, and computer simulation results show that the estimation accuracy generally improves with an increasing number of source position measurements. Furthermore, laboratory results indicate that the proposed method is effective for recovering the reflection angle and that the estimated index of refraction provides a feature vector that is robust to the reflection angle. We also study the use of the extracted index of refraction as a feature vector in two important image processing applications, namely image segmentation and material classification, so that the resulting systems are largely invariant to illumination source location. This is in contrast to most passive polarization-based image processing algorithms proposed in the literature, which employ quantities such as Stokes vectors and the degree of polarization that are not robust to changes in illumination conditions. The estimated index of refraction, on the other hand, is invariant to illumination conditions and hence can be used as an input to image processing algorithms. The proposed estimation framework is also extended to the case where the position of the observer (camera) moves between measurements while that of the source remains fixed. Finally, we briefly explore the topic of parameter estimation for a generalized model that accounts for both specular and volumetric scattering. A combination of simulation and experimental results is provided to evaluate the effectiveness of the above methods.
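A stripped-down version of the inversion can be built around Fresnel reflection alone: fit a (here purely real) refractive index and reflection angle to degree-of-linear-polarization measurements taken at several known source offsets. The full method uses a complex index and a complete pBRDF; this is only a toy under those simplifying assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def fresnel_dolp(theta, n):
    """Degree of linear polarization of unpolarized light after Fresnel
    reflection from a dielectric with (real) refractive index n."""
    ct, st2 = np.cos(theta), np.sin(theta) ** 2
    root = np.sqrt(n**2 - st2)
    Rs = ((ct - root) / (ct + root)) ** 2
    Rp = ((n**2 * ct - root) / (n**2 * ct + root)) ** 2
    return (Rs - Rp) / (Rs + Rp)

rng = np.random.default_rng(9)
offsets = np.deg2rad([-6.0, -3.0, 0.0, 3.0, 6.0])   # known source moves
true_n, true_theta = 1.5, np.deg2rad(55.0)
meas = fresnel_dolp(true_theta + offsets, true_n) + rng.normal(0, 0.005, 5)

def residuals(p):
    n, theta0 = p
    return fresnel_dolp(theta0 + offsets, n) - meas

fit = least_squares(residuals, x0=[1.3, np.deg2rad(45.0)],
                    bounds=([1.01, 0.1], [3.0, 1.4]))
n_est, theta_est = fit.x
```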
Dose reduction potential of iterative reconstruction algorithms in neck CTA-a simulation study.
Ellmann, Stephan; Kammerer, Ferdinand; Allmendinger, Thomas; Brand, Michael; Janka, Rolf; Hammon, Matthias; Lell, Michael M; Uder, Michael; Kramer, Manuel
2016-10-01
This study aimed to determine the degree of radiation dose reduction in neck CT angiography (CTA) achievable with Sinogram-affirmed iterative reconstruction (SAFIRE) algorithms. 10 consecutive patients scheduled for neck CTA were included in this study. CTA images of the external carotid arteries either were reconstructed with filtered back projection (FBP) at full radiation dose level or underwent simulated dose reduction by proprietary reconstruction software. The dose-reduced images were reconstructed using either SAFIRE 3 or SAFIRE 5 and compared with full-dose FBP images in terms of vessel definition. 5 observers performed a total of 3000 pairwise comparisons. SAFIRE allowed substantial radiation dose reductions in neck CTA while maintaining vessel definition. The possible levels of radiation dose reduction ranged from approximately 34 to approximately 90% and depended on the SAFIRE algorithm strength and the size of the vessel of interest. In general, larger vessels permitted higher degrees of radiation dose reduction, especially with higher SAFIRE strength levels. With small vessels, the superiority of SAFIRE 5 over SAFIRE 3 was lost. Neck CTA can be performed with substantially less radiation dose when SAFIRE is applied. The exact degree of radiation dose reduction should be adapted to the clinical question, in particular to the smallest vessel needing excellent definition.
Whitney, Gina; Daves, Suanne; Hughes, Alex; Watkins, Scott; Woods, Marcella; Kreger, Michael; Marincola, Paula; Chocron, Isaac; Donahue, Brian
2013-07-01
The goal of this project is to measure the impact of standardization of transfusion practice on blood product utilization and postoperative bleeding in pediatric cardiac surgery patients. Transfusion is common following cardiopulmonary bypass (CPB) in children and is associated with increased mortality, infection, and duration of mechanical ventilation. Transfusion in pediatric cardiac surgery is often based on clinical judgment rather than objective data. Although objective transfusion algorithms have demonstrated efficacy for reducing transfusion in adult cardiac surgery, such algorithms have not been applied in the pediatric setting. This quality improvement effort was designed to reduce blood product utilization in pediatric cardiac surgery using a blood product transfusion algorithm. We implemented an evidence-based transfusion protocol in January 2011 and monitored the impact of this algorithm on blood product utilization, chest tube output during the first 12 h of intensive care unit (ICU) admission, and predischarge mortality. When compared with the 12 months preceding implementation, blood utilization per case in the operating room (OR) for the 11 months following implementation decreased by 66% for red cells (P = 0.001) and 86% for cryoprecipitate (P < 0.001). Blood utilization during the first 12 h in the ICU did not increase during this time and actually decreased 56% for plasma (P = 0.006) and 41% for red cells (P = 0.031), indicating that the decrease in OR transfusion did not shift the transfusion burden to the ICU. Postoperative bleeding, as measured by chest tube output in the first 12 ICU hours, did not increase following implementation of the algorithm. Monthly surgical volume did not change significantly following implementation of the algorithm (P = 0.477). In a logistic regression model for predischarge mortality among the nontransplant patients, after accounting for surgical severity and duration of CPB, use of the transfusion algorithm was associated with a 0.247 relative risk of mortality (P = 0.013). These results indicate that introduction of an objective transfusion algorithm in pediatric cardiac surgery significantly reduces perioperative blood product utilization and mortality, without increasing postoperative chest tube losses. © 2013 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Hayat, Tasawar; Qayyum, Sajid; Alsaedi, Ahmed; Ahmad, Bashir
2018-05-01
The main objective of the present analysis is to study the magnetohydrodynamic (MHD) nonlinear convective flow of a thixotropic nanofluid. The flow is due to a nonlinear stretching surface with variable thickness. Nonlinear thermal radiation and heat generation/absorption are incorporated in the energy expression. Convective conditions and zero mass flux at the sheet are considered. The intention of the present analysis is to develop a model for the nanomaterial comprising Brownian motion and thermophoresis phenomena. Appropriate transformations are implemented to convert the partial differential systems into sets of ordinary differential equations. The transformed expressions have been scrutinized through a homotopic algorithm. The behavior of various sundry variables on velocity, temperature, nanoparticle concentration, skin friction coefficient and local Nusselt number is displayed through graphs. It is concluded that the qualitative behaviors of temperature and thermal layer thickness are similar for the radiation and temperature ratio variables. Moreover, an enhancement in heat generation/absorption gives rise to a stronger thermal field.
Feasibility study on low-dosage digital tomosynthesis (DTS) using a multislit collimation technique
NASA Astrophysics Data System (ADS)
Park, S. Y.; Kim, G. A.; Park, C. K.; Cho, H. S.; Seo, C. W.; Lee, D. Y.; Kang, S. Y.; Kim, K. S.; Lim, H. W.; Lee, H. W.; Park, J. E.; Kim, W. S.; Jeon, D. H.; Woo, T. H.
2018-04-01
In this study, we investigated an effective low-dose digital tomosynthesis (DTS) technique in which a multislit collimator placed between the X-ray tube and the patient oscillates during projection data acquisition, partially blocking the X-ray beam to the patient and thereby reducing the radiation dose. We performed a simulation using the proposed DTS with two sets of multislit collimators, both having a 50% duty cycle, and investigated the image characteristics to demonstrate the feasibility of the proposed approach. In the simulation, all projections were taken at a tomographic angle of θ = ± 50° and an angle step of Δθ = 2°. We utilized an iterative algorithm based on a compressed-sensing (CS) scheme for more accurate DTS reconstruction. Using the proposed DTS, we successfully obtained CS-reconstructed DTS images with no bright-band artifacts around the multislit edges of the collimator, thus maintaining image quality. Therefore, the use of multislit collimation in current real-world DTS systems can reduce the radiation dose to patients.
Infrared Algorithm Development for Ocean Observations with EOS/MODIS
NASA Technical Reports Server (NTRS)
Brown, Otis B.
1997-01-01
Efforts continue under this contract to develop algorithms for the computation of sea surface temperature (SST) from MODIS infrared measurements. This effort includes radiative transfer modeling, comparison of in situ and satellite observations, development and evaluation of processing and networking methodologies for algorithm computation and data accession, evaluation of surface validation approaches for IR radiances, development of experimental instrumentation, and participation in MODIS (project) related activities. Activities in this contract period have focused on radiative transfer modeling, evaluation of atmospheric correction methodologies, field campaigns, analysis of field data, and participation in MODIS meetings.
In-Space Radiator Shape Optimization using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Hull, Patrick V.; Kittredge, Ken; Tinker, Michael; SanSoucie, Michael
2006-01-01
Future space exploration missions will require the development of more advanced in-space radiators. These radiators should be highly efficient and lightweight, deployable heat rejection systems. Typical radiators for in-space heat mitigation commonly comprise a substantial portion of the total vehicle mass. A small mass savings of even 5-10% can greatly improve vehicle performance. The objective of this paper is to present the development of detailed tools for the analysis and design of in-space radiators using evolutionary computation techniques. The optimality criterion is defined as a two-dimensional radiator with a shape demonstrating the smallest mass for the greatest overall heat transfer; thus the end result is a set of highly functional radiator designs. This cross-disciplinary work combines topology optimization and thermal analysis design by means of a genetic algorithm. The proposed design tool consists of the following steps: design parameterization based on the exterior boundary of the radiator, objective function definition (mass minimization and heat loss maximization), objective function evaluation via finite element analysis (thermal radiation analysis), and optimization based on evolutionary algorithms. The radiator design problem is defined as follows: the input force is a driving temperature and the output reaction is heat loss. Appropriate modeling of the space environment is added to capture its effect on the radiator. The design parameters chosen for this radiator shape optimization problem fall into two classes: variable height along the width of the radiator, and a spline curve defining the material boundary of the radiator. The implementation of multiple design parameter schemes allows the user to have more confidence in the radiator optimization tool upon demonstration of convergence between the two design parameter schemes. This tool easily allows the user to manipulate the driving temperature regions, thus permitting detailed design of in-space radiators for unique situations. Preliminary results indicate an optimized shape following that of the temperature distribution regions in the "cooler" portions of the radiator. The results closely follow the expected radiator shape.
2014-01-01
Background Single-pass, contrast-enhanced whole body multidetector computed tomography (MDCT) emerged as the diagnostic standard for evaluating patients with major trauma. Modern iterative image algorithms showed high image quality at a much lower radiation dose in the non-trauma setting. This study aims at investigating whether the radiation dose can safely be reduced in trauma patients without compromising the diagnostic accuracy and image quality. Methods/Design Prospective observational study with two consecutive cohorts of patients. Setting: A high-volume, academic, supra-regional trauma centre in Germany. Study population: Consecutive male and female patients who 1. had been exposed to a high-velocity trauma mechanism, 2. present with clinical evidence or high suspicion of multiple trauma (predicted Injury Severity Score [ISS] ≥16) and 3. are scheduled for primary MDCT based on the decision of the trauma leader on call. Imaging protocols: In a before/after design, a consecutive series of 500 patients will undergo single-pass, whole-body 128-row multi-detector computed tomography (MDCT) with a standard, as low as possible radiation dose. This will be followed by a consecutive series of 500 patients undergoing an approved ultra-low dose MDCT protocol using an image processing algorithm. Data: Routine administrative data and electronic patient records, as well as digital images stored in a picture archiving and communications system will serve as the primary data source. The protocol was approved by the institutional review board. Main outcomes: (1) incidence of delayed diagnoses, (2) diagnostic accuracy, as correlated to the reference standard of a synopsis of all subsequent clinical, imaging, surgical and autopsy findings, (3) patients' safety, (4) radiation exposure (e.g. effective dose), (5) subjective image quality (assessed independently by radiologists and trauma surgeons on a 100-mm visual analogue scale), (6) objective image quality (e.g., contrast-to-noise ratio). Analysis: Multivariate regression will be employed to adjust and correct the findings for time and cohort effects. An exploratory interim analysis halfway after introduction of low-dose MDCT will be conducted to assess whether this protocol is clearly inferior or superior to the current standard. Discussion Although non-experimental, this study will generate first large-scale data on the utility of imaging-enhancing algorithms in whole-body MDCT for major blunt trauma. Trial registration Current Controlled Trials ISRCTN74557102. PMID:24589310
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.
2004-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating/drying profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and non-convective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud resolving model simulations, and from the Bayesian formulation itself. Synthetic rain rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in instantaneous rain rate estimates at 0.5 deg resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. These errors represent about 70-90% of the mean random deviation between collocated passive microwave and spaceborne radar rain rate estimates. The cumulative algorithm error in TMI estimates at monthly, 2.5 deg resolution is relatively small (less than 6% at 5 mm/day) compared to the random error due to infrequent satellite temporal sampling (8-35% at the same rain rate).
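The Bayesian compositing step can be sketched directly: database profiles are weighted by their radiative consistency with the observed brightness temperatures and then averaged. The database contents, channel count, and error sigma below are fabricated placeholders, not the algorithm's actual cloud-radiative model database.

```python
import numpy as np

rng = np.random.default_rng(10)

# Toy database of cloud-model profiles: simulated brightness temperatures
# (Tb) paired with surface rain rates. Values are illustrative.
db_tb = rng.normal(250.0, 15.0, size=(5000, 9))      # 9 channels, K
db_rain = rng.gamma(1.2, 2.0, size=5000)             # mm/h
sigma = 3.0                                          # assumed Tb error, K

def bayesian_estimate(obs_tb):
    """Composite database rain rates weighted by radiative consistency."""
    d2 = np.sum((db_tb - obs_tb) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / sigma**2)
    return np.sum(w * db_rain) / np.sum(w)

rain_hat = bayesian_estimate(db_tb[0] + rng.normal(0, sigma, 9))
```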
Deriving health utilities from the MacNew Heart Disease Quality of Life Questionnaire.
Chen, Gang; McKie, John; Khan, Munir A; Richardson, Jeff R
2015-10-01
Quality of life is included in the economic evaluation of health services by measuring the preference for health states, i.e. health state utilities. However, most intervention studies include a disease-specific, not a utility, instrument. Consequently, there has been increasing use of statistical mapping algorithms which permit utilities to be estimated from a disease-specific instrument. The present paper provides such algorithms between the MacNew Heart Disease Quality of Life Questionnaire (MacNew) instrument and six multi-attribute utility (MAU) instruments, the Euroqol (EQ-5D), the Short Form 6D (SF-6D), the Health Utilities Index (HUI) 3, the Quality of Wellbeing (QWB), the 15D (15 Dimension) and the Assessment of Quality of Life (AQoL-8D). Heart disease patients and members of the healthy public were recruited from six countries. Non-parametric rank tests were used to compare subgroup utilities and MacNew scores. Mapping algorithms were estimated using three separate statistical techniques. Mapping algorithms achieved a high degree of precision. Based on the mean absolute error and the intra class correlation the preferred mapping is MacNew into SF-6D or 15D. Using the R squared statistic the preferred mapping is MacNew into AQoL-8D. The algorithms reported in this paper enable MacNew data to be mapped into utilities predicted from any of six instruments. This permits studies which have included the MacNew to be used in cost utility analyses which, in turn, allows the comparison of services with interventions across the health system. © The European Society of Cardiology 2014.
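A mapping algorithm of this kind is, at its simplest, a regression from MacNew scores onto a utility scale; the sketch below uses ordinary least squares on fabricated data and is not a substitute for the published algorithms.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(11)

# Fabricated stand-in data: MacNew domain scores (emotional, physical,
# social, each on 1-7) and a utility such as SF-6D on 0-1.
macnew = rng.uniform(1, 7, size=(300, 3))
utility = np.clip(0.2 + 0.09 * macnew.mean(axis=1)
                  + rng.normal(0, 0.05, 300), 0.0, 1.0)

mapper = LinearRegression().fit(macnew, utility)
predicted_utility = mapper.predict([[5.2, 4.8, 6.0]])
```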
Skin dose for head and neck cancer patients treated with intensity-modulated radiation therapy(IMRT)
NASA Astrophysics Data System (ADS)
Fu, Hsiao-Ju; Li, Chi-Wei; Tsai, Wei-Ta; Chang, Chih-Chia; Tsang, Yuk-Wah
2017-11-01
The reliability of ultrathin thermoluminescent dosimeters (TLDs) and ISP Gafchromic EBT2 film for measuring the surface dose in a phantom and the skin dose in head-and-neck patients treated with the intensity-modulated radiation therapy (IMRT) technique is the focus of this research. Seven-field treatment plans with a prescribed dose of 180 cGy were generated on the Eclipse treatment planning system, which utilized the pencil beam calculation (PBC) algorithm. In calibration tests, the variance coefficient of the ultrathin TLDs was within 3%. The points on the calibration curve of the Gafchromic film were within 1% variation. Five measurements were taken on the phantom using ultrathin TLDs and EBT2 film, respectively. The measured mean surface doses from ultrathin TLDs and EBT2 film agreed within 5%. Skin doses of 6 patients were measured for the initial 5 fractions and the mean dose per fraction was calculated. If the extrapolated doses for 30 fractions were below 4000 cGy, the skin reaction observed was either grade 1 or grade 2 according to Radiation Therapy Oncology Group (RTOG) grading. If the surface dose exceeded 5000 cGy in 32 fractions, grade 3 skin reactions were observed.
Olive Crown Porosity Measurement Based on Radiation Transmittance: An Assessment of Pruning Effect.
Castillo-Ruiz, Francisco J; Castro-Garcia, Sergio; Blanco-Roldan, Gregorio L; Sola-Guirado, Rafael R; Gil-Ribes, Jesus A
2016-05-19
Crown porosity influences radiation interception, air movement through the fruit orchard, spray penetration, and harvesting operation in fruit crops. The aim of the present study was to develop an accurate and reliable methodology based on transmitted radiation measurements to assess the porosity of traditional olive trees under different pruning treatments. Transmitted radiation was employed as an indirect method to measure crown porosity in two olive orchards of the Picual and Hojiblanca cultivars. Additionally, three different pruning treatments were considered to determine if the pruning system influences crown porosity. This study evaluated the accuracy and repeatability of four algorithms in measuring crown porosity under different solar zenith angles. From a 14° to 30° solar zenith angle, the selected algorithm produced an absolute error of less than 5% and a repeatability higher than 0.9. The described method and selected algorithm proved satisfactory in field results, making it possible to measure crown porosity at different solar zenith angles. However, pruning fresh weight did not show any relationship with crown porosity due to the great differences between removed branches. A robust and accurate algorithm was selected for crown porosity measurements in traditional olive trees, making it possible to discern between different pruning treatments.
GLASS daytime all-wave net radiation product: Algorithm development and preliminary validation
Jiang, Bo; Liang, Shunlin; Ma, Han; ...
2016-03-09
Mapping surface all-wave net radiation (Rn) is critically needed for various applications. Several existing Rn products from numerical models and satellite observations have coarse spatial resolutions, and their accuracies may not meet the requirements of land applications. In this study, we develop the Global LAnd Surface Satellite (GLASS) daytime Rn product at a 5 km spatial resolution. Its algorithm for converting shortwave radiation to all-wave net radiation using the Multivariate Adaptive Regression Splines (MARS) model is determined after comparison with three other algorithms. The validation of the GLASS Rn product based on high-quality in situ measurements in the United States shows a coefficient of determination value of 0.879, an average root mean square error value of 31.61 W m-2, and an average bias of 17.59 W m-2. Furthermore, we also compare our product/algorithm with another satellite product (CERES-SYN) and two reanalysis products (MERRA and JRA55), and find that the accuracy of the much higher spatial resolution GLASS Rn product is satisfactory. The GLASS Rn product from 2000 to the present is operational and freely available to the public.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Norris, H; Rangaraj, D; Kim, S
Purpose: High-Z (metal) implants in CT scans cause significant streak-like artifacts in the reconstructed dataset. This results in both inaccurate CT Hounsfield units for the tissue as well as obscuration of the target and organs at risk (OARs) for radiation therapy planning. Herein we analyze two metal artifact reduction algorithms, GE's Smart MAR and a Metal Deletion Technique (MDT), for geometric and Hounsfield Unit (HU) accuracy. Methods: A CT-to-electron density phantom, with multiple inserts of various densities and a custom Cerrobend insert (Zeff=76.8), is utilized in this continuing study. The phantom is scanned without metal (baseline) and again with the metal insert. Using one set of projection data, reconstructed CT volumes are created with filtered-back-projection (FBP) and the MAR and MDT algorithms. Regions-of-interest (ROIs) are evaluated for each insert for HU accuracy; the metal insert's Full-Width-Half-Maximum (FWHM) is used to evaluate the geometric accuracy. Streak severity is quantified with an HU error metric over the phantom volume. Results: The original FBP reconstruction has a Root-Mean-Square-Error (RMSE) of 57.55 HU (STD=29.19, range=−145.8 to +79.2) compared to baseline. The MAR reconstruction has an RMSE of 20.98 HU (STD=13.92, range=−18.3 to +61.7). The MDT reconstruction has an RMSE of 10.05 HU (STD=10.5, range=−14.8 to +18.6). FWHM for baseline=162.05; FBP=161.84 (−0.13%); MAR=162.36 (+0.19%); MDT=162.99 (+0.58%). Streak severity metric for FBP=19.73 (22.659% bad pixels); MAR=8.743 (9.538% bad); MDT=4.899 (5.303% bad). Conclusion: Image quality, in terms of HU accuracy, in the presence of high-Z metal objects in CT scans is improved by metal artifact reduction reconstruction algorithms. The MDT algorithm had the highest HU accuracy (RMSE=10.05 HU) and the best streak severity metric, but scored the worst in terms of geometric accuracy. Qualitatively, the MAR and MDT algorithms increased detectability of inserts, although there is a loss of in-plane resolution near the metallic insert.
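The HU-error metrics reported above are straightforward to compute; a sketch (the "bad pixel" threshold is an assumption, since the abstract does not state the cutoff used):

    import numpy as np

    def artifact_metrics(img, baseline, bad_hu_threshold=50.0):
        """RMSE against the metal-free baseline plus a streak-severity fraction."""
        err = img - baseline                       # HU error per voxel
        rmse = np.sqrt(np.mean(err ** 2))          # Root-Mean-Square-Error (HU)
        bad = 100.0 * np.mean(np.abs(err) > bad_hu_threshold)  # % "bad" pixels
        return rmse, bad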
Efficient hiding of confidential high-utility itemsets with minimal side effects
NASA Astrophysics Data System (ADS)
Lin, Jerry Chun-Wei; Hong, Tzung-Pei; Fournier-Viger, Philippe; Liu, Qiankun; Wong, Jia-Wei; Zhan, Justin
2017-11-01
Privacy preserving data mining (PPDM) is an emerging research problem that has become critical in recent decades. PPDM consists of hiding sensitive information to ensure that it cannot be discovered by data mining algorithms. Several PPDM algorithms have been developed. Most of them are designed for hiding sensitive frequent itemsets or association rules. Hiding sensitive information in a database can have several side effects, such as hiding other non-sensitive information and introducing redundant information. Finding the set of itemsets or transactions to be sanitised that minimises side effects is an NP-hard problem. In this paper, a genetic algorithm (GA) using transaction deletion is designed to hide sensitive high-utility itemsets for privacy-preserving utility mining (PPUM). A flexible fitness function with three adjustable weights is used to evaluate the goodness of each chromosome for hiding sensitive high-utility itemsets. To speed up the evolution process, the pre-large concept is adopted in the designed algorithm. It reduces the number of database scans required for verifying the goodness of an evaluated chromosome. Substantial experiments are conducted to compare the performance of the designed GA approach (with/without the pre-large concept) with a GA-based approach relying on transaction insertion and a non-evolutionary algorithm, in terms of execution time, side effects, database integrity and utility integrity. Results demonstrate that the proposed algorithm hides sensitive high-utility itemsets with fewer side effects than previous studies, while preserving high database and utility integrity.
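A sketch of a three-weight fitness function of the kind described above (the side-effect definitions and weights here are illustrative assumptions, not the paper's exact formulation):

    def itemset_utility(itemset, db):
        """Total utility of an itemset over transactions containing all its items.
        db maps transaction id -> {item: utility}."""
        return sum(sum(tx[i] for i in itemset)
                   for tx in db.values() if itemset <= tx.keys())

    def fitness(deleted_tids, db, sensitive, nonsensitive, min_util,
                w1=0.5, w2=0.3, w3=0.2):
        """Lower is better: weighted sum of the three sanitization side effects."""
        kept = {t: tx for t, tx in db.items() if t not in deleted_tids}
        # Hiding failures: sensitive itemsets still high-utility after deletion
        hf = sum(itemset_utility(s, kept) >= min_util for s in sensitive)
        # Missing itemsets: non-sensitive itemsets lost through the deletion
        mc = sum(itemset_utility(s, db) >= min_util and
                 itemset_utility(s, kept) < min_util for s in nonsensitive)
        # Artificial itemsets: newly high-utility (stays 0 under pure deletion)
        ac = sum(itemset_utility(s, db) < min_util and
                 itemset_utility(s, kept) >= min_util for s in nonsensitive)
        return w1 * hf + w2 * mc + w3 * ac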
NASA Astrophysics Data System (ADS)
Matthews, Q.; Jirasek, A.; Lum, J. J.; Brolo, A. G.
2011-11-01
This work applies noninvasive single-cell Raman spectroscopy (RS) and principal component analysis (PCA) to analyze and correlate radiation-induced biochemical changes in a panel of human tumour cell lines that vary by tissue of origin, p53 status and intrinsic radiosensitivity. Six human tumour cell lines, derived from prostate (DU145, PC3 and LNCaP), breast (MDA-MB-231 and MCF7) and lung (H460), were irradiated in vitro with single fractions (15, 30 or 50 Gy) of 6 MV photons. Remaining live cells were harvested for RS analysis at 0, 24, 48 and 72 h post-irradiation, along with unirradiated controls. Single-cell Raman spectra were acquired from 20 cells per sample utilizing a 785 nm excitation laser. All spectra (200 per cell line) were individually post-processed using established methods and the total data set for each cell line was analyzed with PCA using standard algorithms. One radiation-induced PCA component was detected for each cell line by identification of statistically significant changes in the PCA score distributions for irradiated samples, as compared to unirradiated samples, in the first 24-72 h post-irradiation. These RS response signatures arise from radiation-induced changes in cellular concentrations of aromatic amino acids, conformational protein structures and certain nucleic acid and lipid functional groups. Correlation analysis between the radiation-induced PCA components separates the cell lines into three distinct RS response categories: R1 (H460 and MCF7), R2 (MDA-MB-231 and PC3) and R3 (DU145 and LNCaP). These RS categories partially segregate according to radiosensitivity, as the R1 and R2 cell lines are radioresistant (SF2 > 0.6) and the R3 cell lines are radiosensitive (SF2 < 0.5). The R1 and R2 cell lines further segregate according to p53 gene status, corroborated by cell cycle analysis post-irradiation. Potential radiation-induced biochemical response mechanisms underlying our RS observations are proposed, such as (1) the regulated synthesis and degradation of structured proteins and (2) the expression of anti-apoptosis factors or other survival signals. This study demonstrates the utility of RS for noninvasive radiobiological analysis of tumour cell radiation response, and indicates the potential for future RS studies designed to investigate, monitor or predict radiation response.
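A minimal sketch of the analysis pipeline described above (synthetic arrays stand in for the spectra; the Mann-Whitney test is my stand-in for the paper's significance criterion, which the abstract does not name):

    import numpy as np
    from sklearn.decomposition import PCA
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(0)
    spectra = rng.normal(size=(200, 1024))   # placeholder post-processed spectra
    irradiated = np.arange(200) >= 100       # placeholder sample labels

    scores = PCA(n_components=10).fit_transform(spectra)

    # Flag components whose score distributions shift after irradiation
    for k in range(scores.shape[1]):
        _, p = mannwhitneyu(scores[irradiated, k], scores[~irradiated, k])
        if p < 0.05:
            print(f"PC{k + 1}: radiation-induced component candidate (p={p:.3g})")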
Turbulent Radiation Effects in HSCT Combustor Rich Zone
NASA Technical Reports Server (NTRS)
Hall, Robert J.; Vranos, Alexander; Yu, Weiduo
1998-01-01
A joint UTRC-University of Connecticut theoretical program was based on describing coupled soot formation and radiation in turbulent flows using stretched flamelet theory. This effort involved using the model jet fuel kinetics mechanism to predict soot growth in flamelets at elevated pressure, incorporating an efficient model for turbulent thermal radiation into a discrete transfer radiation code, and coupling the soot growth, flowfield, and radiation algorithms. The soot calculations used a recently developed opposed jet code which couples the dynamical equations of size-class dependent particle growth with complex chemistry. Several of the tasks represent technical firsts; among these are the prediction of soot from a detailed jet fuel kinetics mechanism, the inclusion of pressure effects in the soot particle growth equations, and the inclusion of the efficient turbulent radiation algorithm in a combustor code.
Heavy quark radiation in NLO+PS POWHEG generators
NASA Astrophysics Data System (ADS)
Buonocore, Luca; Nason, Paolo; Tramontano, Francesco
2018-02-01
In this paper we deal with radiation from heavy quarks in the context of next-to-leading order calculations matched to parton shower generators. A new algorithm for radiation from massive quarks is presented that has considerable advantages over the one previously employed. We implement the algorithm in the framework of the POWHEG-BOX, and compare it with the previous one in the case of the hvq generator for bottom production in hadronic collisions, and in the case of the bb4l generator for top production and decay.
HECLIB. Volume 2: HECDSS Subroutines Programmer’s Manual
1991-05-01
…algorithm and hierarchical design for database accesses. This algorithm provides quick access to data sets and an efficient means of adding new data sets… Description of How DSS Works: DSS version 6 utilizes a modified hash algorithm based upon the pathname to store and retrieve data. This structure allows… balancing disk space and record access times. A variation in this algorithm is for "stable" files. In a stable file, a hash table is not utilized.
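The excerpt above does not preserve the hash itself; purely as an illustration of pathname-keyed record lookup, a sketch (the bin count and the hash function are hypothetical, not HECDSS's actual algorithm):

    def pathname_bin(pathname: str, n_bins: int = 1024) -> int:
        """Map a DSS-style pathname (e.g. '/A/B/C/D/E/F/') to a hash-table bin."""
        h = 0
        for ch in pathname.upper():          # treat pathnames case-insensitively
            h = (h * 31 + ord(ch)) % n_bins
        return h

    # Records whose pathnames land in the same bin are chained, which is the
    # disk-space vs. record-access-time balance described in the excerpt.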
Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David
2013-01-01
Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test; hence the reduction of radiation from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image quality sharper by preserving the edges or boundaries more accurately. In this work, the TV regularization problem is addressed by ADAL, which is a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational complexity of the algorithm is two fast Fourier transforms, two matrix-vector multiplications and a linear-time shrinkage operation. Comparison of experimental results indicates that the proposed algorithm is stable, efficient and competitive with the existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.
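The "linear-time shrinkage operation" above is the usual soft-thresholding proximal step; a sketch of that step and the TV difference operator it acts on (the variable names and the anisotropic 2D form are my assumptions):

    import numpy as np

    def soft_threshold(z, kappa):
        """prox of kappa*||.||_1 -- the linear-time shrinkage step in ADAL."""
        return np.sign(z) * np.maximum(np.abs(z) - kappa, 0.0)

    def grad2d(u):
        """Forward differences along each axis (the TV operator D)."""
        gx = np.diff(u, axis=0, append=u[-1:, :])
        gy = np.diff(u, axis=1, append=u[:, -1:])
        return gx, gy

    # One ADAL-style update of the auxiliary variable w ≈ D u:
    #   w <- soft_threshold(D u + multiplier / rho, lam / rho)
    u = np.random.default_rng(0).normal(size=(64, 64))
    gx, gy = grad2d(u)
    wx, wy = soft_threshold(gx, 0.1), soft_threshold(gy, 0.1)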
Assessment of the Broadleaf Crops Leaf Area Index Product from the Terra MODIS Instrument
NASA Technical Reports Server (NTRS)
Tan, Bin; Hu, Jiannan; Huang, Dong; Yang, Wenze; Zhang, Ping; Shabanov, Nikolay V.; Knyazikhin, Yuri; Nemani, Ramakrishna R.; Myneni, Ranga B.
2005-01-01
The first significant processing of Terra MODIS data, called Collection 3, covered the period from November 2000 to December 2002. The Collection 3 leaf area index (LAI) and fraction of photosynthetically active radiation absorbed by vegetation (FPAR) products for broadleaf crops exhibited three anomalies: (a) high LAI values during the peak growing season, (b) differences in LAI seasonality between the radiative transfer-based main algorithm and the vegetation index-based back-up algorithm, and (c) too few retrievals from the main algorithm during the summer period when the crops are at full flush. The cause of these anomalies is a mismatch between reflectances modeled by the algorithm and MODIS measurements. Therefore, the Look-Up-Tables accompanying the algorithm were revised and implemented in Collection 4 processing. The main algorithm with the revised Look-Up-Tables generated retrievals for over 80% of the pixels with valid data. Retrievals from the back-up algorithm, although few, should be used with caution as they are generated from surface reflectances with high uncertainties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit; Dooraghi, Mike
2018-03-20
Development of accurate transposition models to simulate plane-of-array (POA) irradiance from horizontal measurements or simulations is a complex process, mainly because of the anisotropic distribution of diffuse solar radiation in the atmosphere. The limited availability of reliable POA measurements at large temporal and spatial scales leads to difficulties in the comprehensive evaluation of transposition models. This paper proposes new algorithms to assess the uncertainty of transposition models using both surface-based observations and modeling tools. We reviewed the analytical derivation of POA irradiance and the approximation of isotropic diffuse radiation that simplifies the computation. Two transposition models are evaluated against the computation by the rigorous analytical solution. We proposed a new algorithm to evaluate transposition models using the clear-sky measurements at the National Renewable Energy Laboratory's (NREL's) Solar Radiation Research Laboratory (SRRL) and a radiative transfer model that integrates diffuse radiances of various sky-viewing angles. We found that the radiative transfer model and a transposition model based on empirical regressions are superior to the isotropic models when compared to measurements. We further compared the radiative transfer model to the transposition models under an extensive range of idealized conditions. Our results suggest that the empirical transposition model has slightly higher cloudy-sky POA irradiance than the radiative transfer model, but performs better than the isotropic models under clear-sky conditions. Significantly smaller POA irradiances computed by the transposition models are observed when the photovoltaics (PV) panel deviates from the azimuthal direction of the sun. The new algorithms developed in the current study have opened the door to a more comprehensive evaluation of transposition models for various atmospheric conditions and solar and PV orientations.
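For context, the isotropic-sky transposition the paper benchmarks against has a standard closed form; a sketch (the albedo value and the simple angle-of-incidence treatment are illustrative assumptions):

    import numpy as np

    def poa_isotropic(dni, dhi, ghi, aoi_deg, tilt_deg, albedo=0.2):
        """Plane-of-array irradiance under the isotropic diffuse-sky assumption."""
        aoi, tilt = np.radians(aoi_deg), np.radians(tilt_deg)
        beam = dni * np.maximum(np.cos(aoi), 0.0)            # direct component
        sky = dhi * (1.0 + np.cos(tilt)) / 2.0               # isotropic sky diffuse
        ground = ghi * albedo * (1.0 - np.cos(tilt)) / 2.0   # ground-reflected
        return beam + sky + ground

    # e.g. poa_isotropic(dni=700.0, dhi=100.0, ghi=550.0, aoi_deg=30.0, tilt_deg=35.0)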
The TROPOMI surface UV algorithm
NASA Astrophysics Data System (ADS)
Lindfors, Anders V.; Kujanpää, Jukka; Kalakoski, Niilo; Heikkilä, Anu; Lakkala, Kaisa; Mielonen, Tero; Sneep, Maarten; Krotkov, Nickolay A.; Arola, Antti; Tamminen, Johanna
2018-02-01
The TROPOspheric Monitoring Instrument (TROPOMI) is the only payload of the Sentinel-5 Precursor (S5P), which is a polar-orbiting satellite mission of the European Space Agency (ESA). TROPOMI is a nadir-viewing spectrometer measuring in the ultraviolet, visible, near-infrared, and the shortwave infrared that provides near-global daily coverage. Among other things, TROPOMI measurements will be used for calculating the UV radiation reaching the Earth's surface. Thus, the TROPOMI surface UV product will contribute to the monitoring of UV radiation by providing daily information on the prevailing UV conditions over the globe. The TROPOMI UV algorithm builds on the heritage of the Ozone Monitoring Instrument (OMI) and the Satellite Application Facility for Atmospheric Composition and UV Radiation (AC SAF) algorithms. This paper provides a description of the algorithm that will be used for estimating surface UV radiation from TROPOMI observations. The TROPOMI surface UV product includes the following UV quantities: the UV irradiance at 305, 310, 324, and 380 nm; the erythemally weighted UV; and the vitamin-D weighted UV. Each of these are available as (i) daily dose or daily accumulated irradiance, (ii) overpass dose rate or irradiance, and (iii) local noon dose rate or irradiance. In addition, all quantities are available corresponding to actual cloud conditions and as clear-sky values, which otherwise correspond to the same conditions but assume a cloud-free atmosphere. This yields 36 UV parameters altogether. The TROPOMI UV algorithm has been tested using input based on OMI and the Global Ozone Monitoring Experiment-2 (GOME-2) satellite measurements. These preliminary results indicate that the algorithm is functioning according to expectations.
Multidimensional, fully implicit, exactly conserving electromagnetic particle-in-cell simulations
NASA Astrophysics Data System (ADS)
Chacon, Luis
2015-09-01
We discuss a new, conservative, fully implicit 2D-3V particle-in-cell algorithm for non-radiative, electromagnetic kinetic plasma simulations, based on the Vlasov-Darwin model. Unlike earlier linearly implicit PIC schemes and standard explicit PIC schemes, fully implicit PIC algorithms are unconditionally stable and allow exact discrete energy and charge conservation. This has been demonstrated in 1D electrostatic and electromagnetic contexts. In this study, we build on these recent algorithms to develop an implicit, orbit-averaged, time-space-centered finite difference scheme for the Darwin field and particle orbit equations for multiple species in multiple dimensions. The Vlasov-Darwin model is very attractive for PIC simulations because it avoids radiative noise issues in non-radiative electromagnetic regimes. The algorithm conserves global energy, local charge, and particle canonical momentum exactly, even with grid packing. The nonlinear iteration is effectively accelerated with a fluid preconditioner, which allows efficient use of large timesteps, O(√(m_i/m_e) c/v_Te) larger than the explicit CFL limit. In this presentation, we will introduce the main algorithmic components of the approach, and demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 1D and 2D. This work was supported by the LANL LDRD program and the DOE-SC ASCR office.
A comprehensive formulation for volumetric modulated arc therapy planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Dan; Lyu, Qihui; Ruan, Dan
2016-07-15
Purpose: Volumetric modulated arc therapy (VMAT) is a widely employed radiation therapy technique, showing comparable dosimetry to static beam intensity modulated radiation therapy (IMRT) with reduced monitor units and treatment time. However, current VMAT optimization employs various greedy heuristics for an empirical solution, which jeopardizes plan consistency and quality. The authors introduce a novel direct aperture optimization method for VMAT to overcome these limitations. Methods: The comprehensive VMAT (comVMAT) planning was formulated as an optimization problem with an L2-norm fidelity term to penalize the difference between the optimized dose and the prescribed dose, as well as an anisotropic total variation term to promote piecewise continuity in the fluence maps, preparing it for direct aperture optimization. A level set function was used to describe the aperture shapes, and the difference between aperture shapes at adjacent angles was penalized to control MLC motion range. A proximal-class optimization solver was adopted to solve the large-scale optimization problem, and an alternating optimization strategy was implemented to solve the fluence intensity and aperture shapes simultaneously. Single arc comVMAT plans, utilizing 180 beams with 2° angular resolution, were generated for a glioblastoma multiforme case, a lung (LNG) case, and two head and neck cases, one with three PTVs (H&N3PTV) and one with four PTVs (H&N4PTV), to test the efficacy. The plans were compared against the clinical VMAT (clnVMAT) plans utilizing two overlapping coplanar arcs for treatment. Results: The optimization of the comVMAT plans converged within 600 iterations of the block minimization algorithm. comVMAT plans were able to consistently reduce the dose to all organs-at-risk (OARs) as compared to the clnVMAT plans. On average, comVMAT plans reduced the max and mean OAR dose by 6.59% and 7.45%, respectively, of the prescription dose. Reductions in max dose and mean dose were as high as 14.5 Gy in the LNG case and 15.3 Gy in the H&N3PTV case. PTV coverages measured by D95, D98, and D99 were within 0.25% of the prescription dose. By comprehensively optimizing all beams, the comVMAT optimizer gained the freedom to allow some selected beams to deliver higher intensities, yielding a dose distribution that resembles a static beam IMRT plan with beam orientation optimization. Conclusions: The novel nongreedy VMAT approach simultaneously optimizes all beams in an arc and then directly generates deliverable apertures. The single arc VMAT approach thus fully utilizes the digital Linac's capability in dose rate and gantry rotation speed modulation. In practice, the new single-arc VMAT algorithm generates plans superior to existing VMAT algorithms utilizing two arcs.
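In symbols, the objective described above plausibly takes the following form (my notation, not the paper's: A is the dose-deposition matrix, x the fluence, d the prescribed dose, and D_x, D_y first-difference operators across and along MLC leaf travel):

    \min_{x \ge 0} \; \tfrac{1}{2}\lVert Ax - d \rVert_2^2
        + \lambda_x \lVert D_x x \rVert_1 + \lambda_y \lVert D_y x \rVert_1

The L1-penalized differences are what drive the optimized fluence toward piecewise-constant maps that can be segmented directly into deliverable apertures.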
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, Shiyang; Song, Peng; Pei, Wenbing
2013-09-15
Based on the conjugate gradient method, a simple algorithm is presented for deconvolving the temporal response of photoelectric x-ray detectors (XRDs) to reconstruct the resolved time-dependent x-ray fluxes. With this algorithm, we have studied the impact of the temporal response of XRDs on the radiation diagnosis of a hohlraum heated by a short intense laser pulse. It is found that the limited temporal response of the XRD not only postpones the rising edge and peak position of x-ray pulses but also smoothes the possible fluctuations of radiation fluxes. Without a proper consideration of the temporal response of the XRD, the measured radiation flux can be largely misinterpreted for radiation pulses of a hohlraum heated by short or shaped laser pulses.
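A minimal sketch of such a CG deconvolution (the Gaussian detector response and the Tikhonov term alpha are my assumptions; the paper's regularization may differ):

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    # Assumed symmetric (Gaussian) detector impulse response
    t = np.linspace(-5.0, 5.0, 51)
    h = np.exp(-t ** 2 / 2.0)
    h /= h.sum()

    def H(x):                      # convolution with the detector response
        return np.convolve(x, h, mode="same")

    def normal_op(x, alpha=1e-3):  # H^T H x + alpha x (H is symmetric here)
        return H(H(x)) + alpha * x

    n = 400
    true_flux = np.exp(-((np.arange(n) - 150.0) / 20.0) ** 2)  # synthetic pulse
    measured = H(true_flux)        # smeared, delayed-looking signal

    A = LinearOperator((n, n), matvec=normal_op)
    recovered, info = cg(A, H(measured))  # regularized normal equations via CG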
An improved non-uniformity correction algorithm and its hardware implementation on FPGA
NASA Astrophysics Data System (ADS)
Rong, Shenghui; Zhou, Huixin; Wen, Zhigang; Qin, Hanlin; Qian, Kun; Cheng, Kuanhong
2017-09-01
The non-uniformity of infrared focal plane arrays (IRFPA) severely degrades infrared image quality, so an effective non-uniformity correction (NUC) algorithm is necessary for an IRFPA imaging and application system. However, traditional scene-based NUC algorithms suffer from image blurring and artificial ghosting, and few effective hardware platforms have been proposed to implement them. Thus, this paper proposes an improved neural-network-based NUC algorithm built on a guided image filter and a projection-based motion detection algorithm. First, the guided image filter is utilized to obtain an accurate desired image and thereby reduce artificial ghosting. Then the projection-based motion detection algorithm determines whether the correction coefficients should be updated or not; in this way the problem of image blurring can be overcome. Finally, an FPGA-based hardware design is introduced to realize the proposed NUC algorithm. Real and simulated infrared image sequences are utilized to verify the performance of the proposed algorithm. Experimental results indicate that the proposed NUC algorithm can effectively eliminate fixed-pattern noise with less image blurring and artificial ghosting. The proposed hardware design uses fewer logic elements in the FPGA and fewer clock cycles to process one frame of image.
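A sketch of the coefficient-update logic described above, in the classic LMS form used by neural-network NUC schemes (the learning rate and the exact update rule are assumptions; the guided-filter estimate and motion flag are treated as given):

    import numpy as np

    def nuc_update(frame, gain, offset, desired, moving, lr=0.05):
        """One NUC step: correct the frame, adapt coefficients only on motion.

        frame   : raw IRFPA frame               desired : guided-filter estimate
        gain/offset : per-pixel coefficients    moving  : projection-detector flag
        """
        corrected = gain * frame + offset
        if moving:                        # freeze coefficients on static scenes
            err = corrected - desired     # deviation from the desired image
            gain = gain - lr * err * frame    # steepest-descent (LMS) updates
            offset = offset - lr * err
        return corrected, gain, offset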
Torshabi, Ahmad Esmaili; Nankali, Saber
2016-01-01
In external beam radiotherapy, one of the most common and reliable methods for patient geometrical setup and/or predicting the tumor location is the use of external markers. In this study, the main challenge addressed is increasing the accuracy of patient setup by investigating external marker locations. Since the location of each external marker may yield different patient setup accuracy, it is important to assess different locations of external markers using appropriate selection algorithms. To do this, two commercially available algorithms, (a) canonical correlation analysis (CCA) and (b) principal component analysis (PCA), were proposed as input selection algorithms. They work on the basis of maximum correlation coefficient and minimum variance between given datasets. The proposed input selection algorithms work in combination with an adaptive neuro-fuzzy inference system (ANFIS) as a correlation model that gives patient positioning information as output, and they accurately provide the input file of the ANFIS correlation model. The required dataset for this study was prepared by means of a NURBS-based 4D XCAT anthropomorphic phantom that can model the shape and structure of complex organs in the human body along with motion information of dynamic organs. Moreover, a database of four real patients undergoing radiation therapy for lung cancer was utilized in this study for validation of the proposed strategy. The final results demonstrate that the input selection algorithms can reasonably select specific external markers from those areas of the thorax region where the root mean square error (RMSE) of the ANFIS model has minimum values. It is also found that the selected marker locations lie close to those areas where surface point motion has a large amplitude and a high correlation. PACS number(s): 87.55.km, 87.55.N PMID:27929479
A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT
Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.; Pan, Xiaochuan
2010-01-01
Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered-backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: They developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories. PMID:20175463
A voting-based star identification algorithm utilizing local and global distribution
NASA Astrophysics Data System (ADS)
Fan, Qiaoyun; Zhong, Xuyang; Sun, Junhua
2018-03-01
A novel star identification algorithm based on a voting scheme is presented in this paper. In the proposed algorithm, the global and local distributions of sensor stars are fully utilized, and a stratified voting scheme is adopted to obtain the candidates for sensor stars. Database optimization is employed to reduce the memory requirement and improve the robustness of the proposed algorithm. The simulation shows that the proposed algorithm exhibits a 99.81% identification rate with 2-pixel standard deviation positional noise and 0.322-Mv magnitude noise. Compared with two similar algorithms, the proposed algorithm is more robust towards noise, and its average identification time and required memory are smaller. Furthermore, a real sky test shows that the proposed algorithm performs well on real star images.
Algorithm-Based Fault Tolerance Integrated with Replication
NASA Technical Reports Server (NTRS)
Some, Raphael; Rennels, David
2008-01-01
In a proposed approach to programming and utilization of commercial off-the-shelf computing equipment, a combination of algorithm-based fault tolerance (ABFT) and replication would be utilized to obtain high degrees of fault tolerance without incurring excessive costs. The basic idea of the proposed approach is to integrate ABFT with replication such that the algorithmic portions of computations would be protected by ABFT, and the logical portions by replication. ABFT is an extremely efficient, inexpensive, high-coverage technique for detecting and mitigating faults in computer systems used for algorithmic computations, but does not protect against errors in logical operations surrounding algorithms.
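As a concrete illustration of the ABFT idea above, the textbook checksum-matrix scheme for matrix multiplication (a generic sketch, not the specific encoding proposed in the work):

    import numpy as np

    # C = A @ B with checksums carried through the product
    rng = np.random.default_rng(1)
    A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

    A_c = np.vstack([A, A.sum(axis=0)])                  # append checksum row
    B_r = np.hstack([B, B.sum(axis=1, keepdims=True)])   # append checksum column

    C_full = A_c @ B_r          # product inherits both checksums
    C = C_full[:-1, :-1]

    # A fault anywhere in the multiply breaks one of these identities
    ok = (np.allclose(C_full[-1, :-1], C.sum(axis=0)) and
          np.allclose(C_full[:-1, -1], C.sum(axis=1)))
    print("ABFT check passed:", ok)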
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rishel, Jeremy P.; Keillor, Martin E.; Arrigo, Leah M.
2016-01-01
Atmospheric dispersion theory can be used to predict ground deposition of particulates downwind of a radionuclide release. This paper utilizes standard formulations found in Gaussian plume models to inform the design of an experimental release of short-lived radioactive particles into the atmosphere. Specifically, a source depletion algorithm is used to determine the optimum particle size and release height that maximizes the near-field deposition while minimizing both the required source activity and the fraction of activity lost to long-distance transport. The purpose of the release is to provide a realistic deposition pattern that might be observed downwind of a small-scale vent from an underground nuclear explosion. The deposition field will be used, in part, to investigate several techniques of gamma radiation survey and spectrometry that could be utilized by an On-Site Inspection team under the verification regime of the Comprehensive Nuclear-Test-Ban Treaty.
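A sketch of the source-depletion calculation described above (the power-law dispersion coefficients and the parameter values are illustrative stand-ins for the stability-class fits a real assessment would use):

    import numpy as np

    def depleted_deposition(x, y, Q0=1.0, u=3.0, H=10.0, v_d=0.01):
        """Ground deposition flux at (x, y) m downwind, with source depletion.

        Q0 source strength (Bq/s), u wind speed (m/s), H release height (m),
        v_d deposition velocity (m/s); sigmas follow simple power laws.
        """
        sig_y, sig_z = 0.08 * x ** 0.9, 0.06 * x ** 0.9

        # Deplete the source strength by the activity deposited upwind of x
        xs = np.linspace(1.0, x, 200)
        sz = 0.06 * xs ** 0.9
        f = np.exp(-H ** 2 / (2.0 * sz ** 2)) / sz
        integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(xs))  # trapezoid rule
        Q = Q0 * np.exp(-np.sqrt(2.0 / np.pi) * (v_d / u) * integral)

        # Ground-level Gaussian plume concentration with ground reflection
        conc = (Q / (np.pi * u * sig_y * sig_z)
                * np.exp(-y ** 2 / (2.0 * sig_y ** 2))
                * np.exp(-H ** 2 / (2.0 * sig_z ** 2)))
        return v_d * conc   # deposition flux = deposition velocity x concentration

    # e.g. scan release heights H to trade near-field deposition against carry-over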
Lee, Ki Baek
2018-01-01
Objective: To describe the quantitative image quality and histogram-based evaluation of an iterative reconstruction (IR) algorithm in chest computed tomography (CT) scans at low-to-ultralow CT radiation dose levels. Materials and Methods: In an adult anthropomorphic phantom, chest CT scans were performed with 128-section dual-source CT at 70, 80, 100, 120, and 140 kVp, and at the reference (3.4 mGy in volume CT Dose Index [CTDIvol]), 30%-, 60%-, and 90%-reduced radiation dose levels (2.4, 1.4, and 0.3 mGy). The CT images were reconstructed using filtered back projection (FBP) and the IR algorithm with strengths 1, 3, and 5. Image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were statistically compared between different dose levels, tube voltages, and reconstruction algorithms. Moreover, histograms of subtraction images before and after standardization in the x- and y-axes were visually compared. Results: Compared with FBP images, IR images with strengths 1, 3, and 5 demonstrated image noise reduction up to 49.1%, SNR increase up to 100.7%, and CNR increase up to 67.3%. Noteworthy image quality degradations on IR images, including a 184.9% increase in image noise, a 63.0% decrease in SNR, and a 51.3% decrease in CNR, were shown between the 60%- and 90%-reduced levels of radiation dose (p < 0.0001). Subtraction histograms between FBP and IR images showed progressively increased dispersion with increased IR strength and increased dose reduction. After standardization, the histograms appeared deviated and ragged between FBP images and IR images with strength 3 or 5, but almost normally distributed between FBP images and IR images with strength 1. Conclusion: The IR algorithm may be used to save radiation dose without substantial image quality degradation in chest CT scanning of the adult anthropomorphic phantom, down to approximately 1.4 mGy in CTDIvol (60% reduced dose). PMID:29354008
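For reference, the SNR and CNR above follow the standard ROI-based definitions (with \bar S the mean ROI signal and \sigma the noise standard deviation; the study's specific ROI placements apply):

    \mathrm{SNR} = \frac{\bar S_{\mathrm{ROI}}}{\sigma_{\mathrm{ROI}}},
    \qquad
    \mathrm{CNR} = \frac{\lvert \bar S_{\mathrm{ROI}} - \bar S_{\mathrm{bg}} \rvert}{\sigma_{\mathrm{bg}}}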
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beltran, C; Kamal, H
Purpose: To provide a multicriteria optimization algorithm for intensity modulated radiation therapy using pencil proton beam scanning. Methods: Intensity modulated radiation therapy using pencil proton beam scanning requires efficient optimization algorithms to overcome the uncertainties in the Bragg peak locations. This work is focused on optimization algorithms that are based on Monte Carlo simulation of the treatment planning and use the weights and the dose volume histogram (DVH) control points to steer toward desired plans. The proton beam treatment planning process based on single objective optimization (representing a weighted sum of multiple objectives) usually leads to time-consuming iterations involving treatment planning team members. We provide a time-efficient multicriteria optimization algorithm developed to run on an NVIDIA GPU (Graphics Processing Unit) cluster. The multicriteria optimization algorithm's running time benefits from up-sampling of the CT voxel size of the calculations without loss of fidelity. Results: We will present preliminary results of multicriteria optimization for intensity modulated proton therapy based on DVH control points. The results will show optimization results for a phantom case and a brain tumor case. Conclusion: The multicriteria optimization of intensity modulated radiation therapy using pencil proton beam scanning provides a novel tool for treatment planning. Work supported by a grant from Varian Inc.
An Algorithm to Compress Line-transition Data for Radiative-transfer Calculations
NASA Astrophysics Data System (ADS)
Cubillos, Patricio E.
2017-11-01
Molecular line-transition lists are an essential ingredient for radiative-transfer calculations. With recent databases now surpassing the billion-line mark, handling them has become computationally prohibitive, due to both the required processing power and memory. Here I present a temperature-dependent algorithm to separate strong from weak line transitions, reformatting the large majority of the weaker lines into a cross-section data file, and retaining the detailed line-by-line information of the fewer strong lines. For any given molecule over the 0.3-30 μm range, this algorithm reduces the number of lines to a few million, enabling faster radiative-transfer computations without a significant loss of information. The final compression rate depends on how densely populated the spectrum is. I validate this algorithm by comparing Exomol’s HCN extinction-coefficient spectra between the complete (65 million line transitions) and compressed (7.7 million) line lists. Over the 0.6-33 μm range, the average difference between extinction-coefficient values is less than 1%. A Python/C implementation of this algorithm is open-source and available at https://github.com/pcubillos/repack. So far, this code handles the Exomol and HITRAN line-transition format.
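A sketch of the strong/weak partition at the heart of this approach (a simplified criterion: the actual repack code also accounts for neighboring-line contributions when deciding what may be moved into the cross-section file):

    import numpy as np

    K_B = 0.6950348  # Boltzmann constant in cm^-1 / K

    def partition_lines(gf, E_low, wn, T=1000.0, keep_fraction=1e-3):
        """Split a line list into strong (kept line-by-line) and weak (binned).

        gf, E_low (cm^-1), wn (cm^-1): per-line strength factor, lower-state
        energy, and wavenumber; T is the evaluation temperature.
        """
        strength = gf * np.exp(-E_low / (K_B * T))    # Boltzmann-weighted strength
        strong = strength >= keep_fraction * strength.max()

        # Weak lines: accumulate onto a coarse cross-section grid
        edges = np.linspace(wn.min(), wn.max(), 10_001)
        weak_xsec, _ = np.histogram(wn[~strong], bins=edges,
                                    weights=strength[~strong])
        return strong, edges, weak_xsec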
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Koch, Grady J.; Kavaya, Michael J.; Ray, Taylor J.
2013-01-01
Two versions of airborne wind profiling algorithms for the pulsed 2-micron coherent Doppler lidar system at NASA Langley Research Center in Virginia are presented. Each algorithm utilizes a different number of line-of-sight (LOS) lidar returns while compensating for the adverse effects of the different coordinate systems of the aircraft and the Earth. One of the two algorithms, APOLO (Airborne Wind Profiling Algorithm for Doppler Wind Lidar), estimates wind products using two LOSs; the other utilizes five LOSs. The airborne lidar data were acquired during NASA's Genesis and Rapid Intensification Processes (GRIP) campaign in 2010. The wind profile products from the two algorithms are compared with dropsonde data to validate their results.
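A sketch of the core two-LOS retrieval geometry (vertical wind neglected and aircraft attitude/motion corrections omitted, both of which the operational algorithms handle):

    import numpy as np

    def wind_from_two_los(v_r, az_deg, el_deg):
        """Horizontal wind (u east, v north) from two LOS radial velocities."""
        az, el = np.radians(az_deg), np.radians(el_deg)
        # Each radial velocity is the projection of (u, v) onto its LOS vector
        G = np.column_stack([np.sin(az) * np.cos(el),    # east components
                             np.cos(az) * np.cos(el)])   # north components
        return np.linalg.solve(G, v_r)                   # exactly-determined 2x2

    # e.g. wind_from_two_los(np.array([3.1, -1.4]), np.array([30.0, 120.0]),
    #                        np.array([-30.0, -30.0]))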
Validation of energy-weighted algorithm for radiation portal monitor using plastic scintillator.
Lee, Hyun Cheol; Shin, Wook-Geun; Park, Hyo Jun; Yoo, Do Hyun; Choi, Chang-Il; Park, Chang-Su; Kim, Hong-Suk; Min, Chul Hee
2016-01-01
To prevent illicit trafficking of radionuclides, radiation portal monitor (RPM) systems employing plastic scintillators have been used in ports and airports. However, their poor energy resolution makes the discrimination of radioactive material inaccurate. In this study, an energy-weighted algorithm was validated for identifying (133)Ba, (22)Na, (137)Cs, and (60)Co using a plastic scintillator. The Compton edges of the energy spectra were converted to peaks based on the algorithm; the peaks have a maximum error of 6% relative to the theoretical Compton edge. Copyright © 2015 Elsevier Ltd. All rights reserved.
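For reference, the theoretical Compton edge used as the benchmark above sits at the standard kinematic limit for a photon of energy E_gamma:

    E_{\mathrm{edge}} = \frac{2E_\gamma^2}{m_e c^2 + 2E_\gamma}

For the 662 keV line of (137)Cs, for example, this gives 2(662)^2/(511 + 1324) ≈ 478 keV, which is the spectral feature the energy weighting converts to a peak.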
NASA Astrophysics Data System (ADS)
Li, Dong-xia; Ye, Qian-wen
An out-of-band radiation suppression algorithm must be applied efficiently in a broadband aeronautical communication system so as not to interfere with the operation of existing systems in the aviation L-band. Following a brief introduction of the broadband aeronautical multi-carrier communication (B-AMC) system model, several sidelobe suppression techniques for orthogonal frequency-division multiplexing (OFDM) systems are presented and analyzed in this paper, in order to find a suitable algorithm for the B-AMC system. Simulation results show that raised-cosine windowing can suppress the out-of-band radiation of the B-AMC system effectively.
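A sketch of the raised-cosine edge windowing evaluated above (the roll-off length and the overlap handling are illustrative choices):

    import numpy as np

    def rc_window(n_sym, n_roll):
        """Unit window with raised-cosine ramps on both symbol edges."""
        k = np.arange(n_roll)
        rise = 0.5 * (1.0 - np.cos(np.pi * (k + 1) / (n_roll + 1)))
        w = np.ones(n_sym)
        w[:n_roll] = rise            # smooth turn-on
        w[-n_roll:] = rise[::-1]     # smooth turn-off
        return w

    # Applied to each cyclically extended OFDM symbol, the tapered edges replace
    # the rectangular transitions, so the spectral sidelobes decay much faster;
    # consecutive symbols overlap by n_roll samples to preserve throughput.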
NASA Astrophysics Data System (ADS)
L'Ecuyer, T.; McGarragh, G.; Ellis, T.; Stephens, G.; Olson, W.; Grecu, M.; Shie, C.; Jiang, X.; Waliser, D.; Li, J.; Tian, B.
2008-05-01
It is widely recognized that clouds and precipitation exert a profound influence on the propagation of radiation through the Earth's atmosphere. In fact, feedbacks between clouds, radiation, and precipitation represent one of the most important unresolved factors inhibiting our ability to predict the consequences of global climate change. Since its launch in late 1997, the Tropical Rainfall Measuring Mission (TRMM) has collected more than a decade of rainfall measurements that now form the gold standard of satellite-based precipitation estimates. Although not as widely advertised, the instruments aboard TRMM are also well-suited to the problem of characterizing the distribution of atmospheric heating in the tropics and a series of algorithms have recently been developed for estimating profiles of radiative and latent heating from these measurements. This presentation will describe a new multi-sensor tropical radiative heating product derived primarily from TRMM observations. Extensive evaluation of the products using a combination of ground and satellite-based observations is used to place the dataset in the context of existing techniques for quantifying atmospheric radiative heating. Highlights of several recent applications of the dataset will be presented that illustrate its utility for observation-based analysis of energy and water cycle variability on seasonal to inter-annual timescales and evaluating the representation of these processes in numerical models. Emphasis will be placed on the problem of understanding the impacts of clouds and precipitation on atmospheric heating on large spatial scales, one of the primary benefits of satellite observations like those provided by TRMM.
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Pandey, Dhirendra K.; Taylor, Deborah B.
1989-01-01
The Earth Radiation Budget Experiment (ERBE) is making high-absolute-accuracy measurements of the reflected solar and Earth-emitted radiation as well as the incoming solar radiation from three satellites: ERBS, NOAA-9, and NOAA-10. Each satellite has four Earth-looking nonscanning radiometers and three scanning radiometers. A fifth nonscanner, the solar monitor, measures the incoming solar radiation. The development of the ERBE sensor characterization procedures are described using the calibration data for each of the Earth-looking nonscanners and scanners. Sensor models for the ERBE radiometers are developed including the radiative exchange, conductive heat flow, and electronics processing for transient and steady state conditions. The steady state models are used to interpret the sensor outputs, resulting in the data reduction algorithms for the ERBE instruments. Both ground calibration and flight calibration procedures are treated and analyzed. The ground and flight calibration coefficients for the data reduction algorithms are presented.
NASA Astrophysics Data System (ADS)
Didari, Azadeh; Pinar Mengüç, M.
2017-08-01
Advances in nanotechnology and nanophotonics are inextricably linked with the need for reliable computational algorithms to be adapted as design tools for the development of new concepts in energy harvesting, radiative cooling, nanolithography and nano-scale manufacturing, among others. In this paper, we provide an outline for such a computational tool, named NF-RT-FDTD, to determine the near-field radiative transfer between structured surfaces using Finite Difference Time Domain method. NF-RT-FDTD is a direct and non-stochastic algorithm, which accounts for the statistical nature of the thermal radiation and is easily applicable to any arbitrary geometry at thermal equilibrium. We present a review of the fundamental relations for far- and near-field radiative transfer between different geometries with nano-scale surface and volumetric features and gaps, and then we discuss the details of the NF-RT-FDTD formulation, its application to sample geometries and outline its future expansion to more complex geometries. In addition, we briefly discuss some of the recent numerical works for direct and indirect calculations of near-field thermal radiation transfer, including Scattering Matrix method, Finite Difference Time Domain method (FDTD), Wiener Chaos Expansion, Fluctuating Surface Current (FSC), Fluctuating Volume Current (FVC) and Thermal Discrete Dipole Approximations (TDDA).
Improved compressed sensing-based cone-beam CT reconstruction using adaptive prior image constraints
NASA Astrophysics Data System (ADS)
Lee, Ho; Xing, Lei; Davidi, Ran; Li, Ruijiang; Qian, Jianguo; Lee, Rena
2012-04-01
Volumetric cone-beam CT (CBCT) images are acquired repeatedly during a course of radiation therapy and a natural question to ask is whether CBCT images obtained earlier in the process can be utilized as prior knowledge to reduce patient imaging dose in subsequent scans. The purpose of this work is to develop an adaptive prior image constrained compressed sensing (APICCS) method to solve this problem. Reconstructed images using full projections are taken on the first day of radiation therapy treatment and are used as prior images. The subsequent scans are acquired using a protocol of sparse projections. In the proposed APICCS algorithm, the prior images are utilized as an initial guess and are incorporated into the objective function in the compressed sensing (CS)-based iterative reconstruction process. Furthermore, the prior information is employed to detect any possible mismatched regions between the prior and current images for improved reconstruction. For this purpose, the prior images and the reconstructed images are classified into three anatomical regions: air, soft tissue and bone. Mismatched regions are identified by local differences of the corresponding groups in the two classified sets of images. A distance transformation is then introduced to convert the information into an adaptive voxel-dependent relaxation map. In constructing the relaxation map, the matched regions (unchanged anatomy) between the prior and current images are assigned with smaller weight values, which are translated into less influence on the CS iterative reconstruction process. On the other hand, the mismatched regions (changed anatomy) are associated with larger values and the regions are updated more by the new projection data, thus avoiding any possible adverse effects of prior images. The APICCS approach was systematically assessed by using patient data acquired under standard and low-dose protocols for qualitative and quantitative comparisons. The APICCS method provides an effective way for us to enhance the image quality at the matched regions between the prior and current images compared to the existing PICCS algorithm. Compared to the current CBCT imaging protocols, the APICCS algorithm allows an imaging dose reduction of 10-40 times due to the greatly reduced number of projections and lower x-ray tube current level coming from the low-dose protocol.
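A sketch of the relaxation-map construction described above (the HU cut points and the exponential distance-to-weight conversion are illustrative assumptions, not the paper's exact mapping):

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def relaxation_map(prior_hu, current_hu, d_scale=5.0):
        """Voxel-wise weights: ~1 near anatomy changes, ~0 in matched regions."""
        def classify(hu):
            # 0 = air, 1 = soft tissue, 2 = bone
            return np.digitize(hu, bins=[-300.0, 300.0])

        mismatch = classify(prior_hu) != classify(current_hu)
        # Euclidean distance (voxels) from each voxel to the nearest mismatch
        dist = distance_transform_edt(~mismatch)
        # Near mismatches: trust new projections; far away: trust the prior
        return np.exp(-dist / d_scale)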
An 'adding' algorithm for the Markov chain formalism for radiation transfer
NASA Technical Reports Server (NTRS)
Esposito, L. W.
1979-01-01
An adding algorithm is presented, that extends the Markov chain method and considers a preceding calculation as a single state of a new Markov chain. This method takes advantage of the description of the radiation transport as a stochastic process. Successive application of this procedure makes calculation possible for any optical depth without increasing the size of the linear system used. It is determined that the time required for the algorithm is comparable to that for a doubling calculation for homogeneous atmospheres. For an inhomogeneous atmosphere the new method is considerably faster than the standard adding routine. It is concluded that the algorithm is efficient, accurate, and suitable for smaller computers in calculating the diffuse intensity scattered by an inhomogeneous planetary atmosphere.
Xiao, Kai; Chen, Danny Z; Hu, X Sharon; Zhou, Bo
2012-12-01
The three-dimensional digital differential analyzer (3D-DDA) algorithm is a widely used ray traversal method, which is also at the core of many convolution/superposition (C/S) dose calculation approaches. However, porting existing C/S dose calculation methods onto the graphics processing unit (GPU) has brought challenges to retaining the efficiency of this algorithm. In particular, straightforward implementation of the original 3D-DDA algorithm inflicts a lot of branch divergence, which conflicts with the GPU programming model and leads to suboptimal performance. In this paper, an efficient GPU implementation of the 3D-DDA algorithm is proposed, which effectively reduces such branch divergence and improves performance of the C/S dose calculation programs running on GPU. The main idea of the proposed method is to convert a number of conditional statements in the original 3D-DDA algorithm into a set of simple operations (e.g., arithmetic, comparison, and logic) which are better supported by the GPU architecture. To verify and demonstrate the performance improvement, this ray traversal method was integrated into a GPU-based collapsed cone convolution/superposition (CCCS) dose calculation program. The proposed method has been tested using a water phantom and various clinical cases on an NVIDIA GTX570 GPU. The CCCS dose calculation program based on the efficient 3D-DDA ray traversal implementation runs 1.42-2.67× faster than the one based on the original 3D-DDA implementation, without losing any accuracy. The results show that the proposed method can effectively reduce branch divergence in the original 3D-DDA ray traversal algorithm and improve the performance of the CCCS program running on GPU. Considering the wide utilization of the 3D-DDA algorithm, various applications can benefit from this implementation method.
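A NumPy stand-in for the branch-elimination idea (the paper targets CUDA; the mask arithmetic below mirrors the predication that replaces the if/elif axis selection):

    import numpy as np

    def dda_step_branchless(t_next, idx, step, dt):
        """One branch-free 3D-DDA step.

        t_next : (3,) distance to the next x/y/z voxel boundary along the ray
        idx    : (3,) current voxel indices       step : (3,) +1/-1 per axis
        dt     : (3,) boundary-to-boundary spacing along the ray per axis
        """
        # 1.0 on the axis/axes whose boundary is hit first, 0.0 elsewhere
        mask = (t_next == t_next.min()).astype(t_next.dtype)
        t = t_next.min()                          # advance ray to that boundary
        idx = idx + (step * mask).astype(idx.dtype)   # step only the chosen axis
        t_next = t_next + dt * mask               # schedule that axis's next hit
        return t, t_next, idx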
Sandulache, Vlad C; Chen, Yunyun; Lee, Jaehyuk; Rubinstein, Ashley; Ramirez, Marc S; Skinner, Heath D; Walker, Christopher M; Williams, Michelle D; Tailor, Ramesh; Court, Laurence E; Bankson, James A; Lai, Stephen Y
2014-01-01
Ionizing radiation (IR) cytotoxicity is primarily mediated through reactive oxygen species (ROS). Since tumor cells neutralize ROS by utilizing reducing equivalents, we hypothesized that measurements of reducing potential using real-time hyperpolarized (HP) magnetic resonance spectroscopy (MRS) and spectroscopic imaging (MRSI) can serve as a surrogate marker of IR induced ROS. This hypothesis was tested in a pre-clinical model of anaplastic thyroid carcinoma (ATC), an aggressive head and neck malignancy. Human ATC cell lines were utilized to test IR effects on ROS and reducing potential in vitro and [1-¹³C] pyruvate HP-MRS/MRSI imaging of ATC orthotopic xenografts was used to study in vivo effects of IR. IR increased ATC intra-cellular ROS levels resulting in a corresponding decrease in reducing equivalent levels. Exogenous manipulation of cellular ROS and reducing equivalent levels altered ATC radiosensitivity in a predictable manner. Irradiation of ATC xenografts resulted in an acute drop in reducing potential measured using HP-MRS, reflecting the shunting of reducing equivalents towards ROS neutralization. Residual tumor tissue post irradiation demonstrated heterogeneous viability. We have adapted HP-MRS/MRSI to non-invasively measure IR mediated changes in tumor reducing potential in real time. Continued development of this technology could facilitate the development of an adaptive clinical algorithm based on real-time adjustments in IR dose and dose mapping.
NASA Astrophysics Data System (ADS)
Pramitasari, D. A.; Gondhowiardjo, S.; Nuranna, L.
2017-08-01
This study aimed to compare radiation-only and chemoradiation treatment of locally advanced cervical cancers by examining initial tumor response and acute side effects. An initial assessment employed value-based medicine (VBM) by obtaining utility values for both types of therapy. The incidences of acute lower gastrointestinal, genitourinary, and hematologic side effects in patients undergoing chemoradiation did not differ significantly from those undergoing radiation alone. Utility values for patients who underwent radiation alone were higher than for those who underwent chemoradiation. It was concluded that the complete response rate of patients who underwent chemoradiation did not differ significantly from that of patients who underwent radiation alone.
Madan, Jason; Khan, Kamran A; Petrou, Stavros; Lamb, Sarah E
2017-05-01
Mapping algorithms are increasingly being used to predict health-utility values based on responses or scores from non-preference-based measures, thereby informing economic evaluations. We explored whether predicted EuroQol 5-dimension 3-level instrument (EQ-5D-3L) health-utility gains from mapping algorithms might differ if estimated using differenced versus raw scores, using the Roland-Morris Disability Questionnaire (RMQ), a widely used health status measure for low back pain, as an example. We estimated algorithms mapping within-person changes in RMQ scores to changes in EQ-5D-3L health utilities using data from two clinical trials with repeated observations. We also used logistic regression models to estimate response mapping algorithms from these data to predict within-person changes in responses to each EQ-5D-3L dimension from changes in RMQ scores. Predicted health-utility gains from these mappings were compared with predictions based on raw RMQ data. Using differenced scores reduced the predicted health-utility gain from a unit decrease in RMQ score from 0.037 (standard error [SE] 0.001) to 0.020 (SE 0.002). Analysis of response mapping data suggests that the use of differenced data reduces the predicted impact of reducing RMQ scores across EQ-5D-3L dimensions and that patients can experience health-utility gains on the EQ-5D-3L 'usual activity' dimension independent from improvements captured by the RMQ. Mappings based on raw RMQ data overestimate the EQ-5D-3L health-utility gains from interventions that reduce RMQ scores. Where possible, mapping algorithms should reflect within-person changes in health outcome and be estimated from datasets containing repeated observations if they are to be used to estimate incremental health-utility gains.
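A toy illustration of why the two estimates differ, using synthetic data whose between-person and within-person effects are deliberately different; the numbers loosely echo the 0.037 versus 0.020 slopes reported above, and `slope` is a hypothetical helper, not the authors' estimation code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
rmq1 = rng.integers(0, 25, n).astype(float)          # baseline RMQ (0-24)
rmq2 = np.clip(rmq1 - rng.integers(0, 6, n), 0, 24)  # follow-up after improvement
rmq_bar = (rmq1 + rmq2) / 2                          # person-level severity
# Utility loads partly on the person (between) and partly on the visit (within),
# so cross-sectional and within-person slopes differ by construction.
eq5d1 = 0.95 - 0.017 * rmq_bar - 0.020 * rmq1 + rng.normal(0, 0.05, n)
eq5d2 = 0.95 - 0.017 * rmq_bar - 0.020 * rmq2 + rng.normal(0, 0.05, n)

def slope(x, y):
    """OLS slope of y on x with an intercept (hypothetical helper)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Raw pooled scores absorb the between-person association; differenced
# scores isolate the within-person change the abstract argues for.
raw = slope(np.concatenate([rmq1, rmq2]), np.concatenate([eq5d1, eq5d2]))
diff = slope(rmq2 - rmq1, eq5d2 - eq5d1)
print(f"raw-score slope {raw:.3f} vs differenced slope {diff:.3f}")
```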
Zhou, Wenliang; Yang, Xia; Deng, Lianbo
2014-01-01
The operating plan is not only the basis for organizing a marshalling station's operations; it is also used to analyze in detail the capacity utilization of each facility in the station. In this paper, a long-term operating plan is optimized mainly for capacity utilization analysis. First, a model is developed to minimize railcars' average staying time, subject to constraints including minimum time intervals and marshalling track capacity. Second, an algorithm is designed to solve this model based on a genetic algorithm (GA) and simulation. It divides the plan for the whole planning horizon into many subplans and optimizes them with the GA one by one, in order to obtain a satisfactory plan with less computing time. Finally, numerical examples are constructed to analyze (1) the convergence of the algorithm, (2) the effect of some algorithm parameters, and (3) the influence of arrival train flow on the algorithm. PMID:25525614
Silva, Leonardo W T; Barros, Vitor F; Silva, Sandro G
2014-08-18
In launching operations, Rocket Tracking Systems (RTS) process the trajectory data obtained by radar sensors. In order to improve functionality and maintenance, radars can be upgraded by replacing parabolic reflector (PR) antennas with phased arrays (PAs). These arrays enable electronic control of the radiation pattern by adjusting the signal supplied to each radiating element. However, in phased array radar (PAR) projects, the modeling of the problem is subject to various combinations of excitation signals, producing a complex optimization problem. In this case, solutions can be calculated with optimization methods such as genetic algorithms (GAs). To this end, the Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) method was developed to control the radiation pattern of PAs. The GA-MMC uses a reconfigurable algorithm with multiple objectives, differentiated coding, and a new crossover genetic operator. This operator differs from the conventional approach in that it crosses the fittest individuals with the least fit individuals in order to enhance genetic diversity. GA-MMC was successful in more than 90% of the tests for each application, increased the fitness of the final population by more than 20%, and reduced premature convergence.
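A sketch of the pairing behind the maximum-minimum crossover described above: after sorting by fitness, the k-th fittest individual is mated with the k-th least fit. The real-coded individuals, crossover rate, and stand-in fitness below are illustrative, not the GA-MMC implementation:

```python
import random

def mmc_crossover(population, fitness, rate=0.9):
    """Max-min pairing sketch: mate the fittest individuals with the least
    fit ones to boost genetic diversity (single-point crossover)."""
    order = sorted(range(len(population)), key=lambda i: fitness[i], reverse=True)
    offspring = []
    for k in range(len(population) // 2):
        a = population[order[k]]           # k-th fittest
        b = population[order[-(k + 1)]]    # k-th least fit
        if random.random() < rate:
            cut = random.randrange(1, len(a))
            offspring += [a[:cut] + b[cut:], b[:cut] + a[cut:]]
        else:
            offspring += [a[:], b[:]]
    return offspring

# Toy population: excitation phases per radiating element, stand-in fitness.
pop = [[random.uniform(0.0, 6.283) for _ in range(8)] for _ in range(10)]
fit = [sum(ind) for ind in pop]
print(len(mmc_crossover(pop, fit)), "offspring produced")
```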
NASA Astrophysics Data System (ADS)
Niu, Chun-Yang; Qi, Hong; Huang, Xing; Ruan, Li-Ming; Wang, Wei; Tan, He-Ping
2015-11-01
A hybrid least-square QR decomposition (LSQR)-particle swarm optimization (LSQR-PSO) algorithm was developed to estimate the three-dimensional (3D) temperature distributions and absorption coefficients simultaneously. The outgoing radiative intensities at the boundary surface of the absorbing media were simulated by the line-of-sight (LOS) method, which served as the input for the inverse analysis. The retrieval results showed that the 3D temperature distributions of the participating media with known radiative properties could be retrieved accurately using the LSQR algorithm, even with noisy data. For the participating media with unknown radiative properties, the 3D temperature distributions and absorption coefficients could be retrieved accurately using the LSQR-PSO algorithm even with measurement errors. It was also found that the temperature field could be estimated more accurately than the absorption coefficients. In order to gain insight into the effects on the accuracy of temperature distribution reconstruction, the selection of the detection direction and the angle between two detection directions was also analyzed. Project supported by the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), the National Natural Science Foundation of China (Grant No. 51476043), and the Fund of Tianjin Key Laboratory of Civil Aircraft Airworthiness and Maintenance in Civil Aviation University of China.
NASA Astrophysics Data System (ADS)
Chen, Guangye; Chacón, Luis; CoCoMans Team
2014-10-01
For decades, the Vlasov-Darwin model has been recognized as attractive for PIC simulations in non-radiative electromagnetic regimes (it avoids radiative noise issues). However, the Darwin model results in elliptic field equations, which render explicit time integration unconditionally unstable. Improving on linearly implicit schemes, fully implicit PIC algorithms for both electrostatic and electromagnetic regimes, with exact discrete energy and charge conservation properties, have recently been developed in 1D. This study builds on these algorithms to develop an implicit, orbit-averaged, time-space-centered finite difference scheme for the particle-field equations in multiple dimensions. The algorithm conserves energy, charge, and canonical momentum exactly, even with grid packing. A simple fluid preconditioner allows efficient use of large timesteps, $O(\sqrt{m_i/m_e}\,c/v_{Te})$ larger than the explicit CFL limit. We demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 2D3V.
Studies of the net surface radiative flux from satellite radiances during FIFE
NASA Technical Reports Server (NTRS)
Frouin, Robert
1993-01-01
Studies of the net surface radiative flux from satellite radiances during First ISLSCP Field Experiment (FIFE) are presented. Topics covered include: radiative transfer model validation; calibration of VISSR and AVHRR solar channels; development and refinement of algorithms to estimate downward solar and terrestrial irradiances at the surface, including photosynthetically available radiation (PAR) and surface albedo; verification of these algorithms using in situ measurements; production of maps of shortwave irradiance, surface albedo, and related products; analysis of the temporal variability of shortwave irradiance over the FIFE site; development of a spectroscopy technique to estimate atmospheric total water vapor amount; and study of optimum linear combinations of visible and near-infrared reflectances for estimating the fraction of PAR absorbed by plants.
Facilitating Follow-up of LIGO-Virgo Events Using Rapid Sky Localization
NASA Astrophysics Data System (ADS)
Chen, Hsin-Yu; Holz, Daniel E.
2017-05-01
We discuss an algorithm for accurate and very low-latency (<1 s) localization of gravitational-wave (GW) sources using only the relative times of arrival, relative phases, and relative signal-to-noise ratios for pairs of detectors. The algorithm is independent of distances and masses to leading order, and can be generalized to all discrete (as opposed to stochastic and continuous) sources detected by ground-based detector networks. Our approach is similar to that of BAYESTAR with a few modifications, which result in increased computational efficiency. For the LIGO two-detector configuration (Hanford+Livingston) operating in O1 we find a median 50% (90%) localization of 143 deg² (558 deg²) for binary neutron stars. We use our algorithm to explore the improvement in localization resulting from loud events, finding that the loudest out of the first 4 (or 10) events reduces the median sky-localization area by a factor of 1.9 (3.0) for the case of two GW detectors, and 2.2 (4.0) for three detectors. We also consider the case of multi-messenger joint detections in both the gravitational and the electromagnetic radiation, and show that joint localization can offer significant improvements (e.g., in the case of LIGO and Fermi/GBM joint detections). We show that a prior on the binary inclination, potentially arising from GRB observations, has a negligible effect on GW localization. Our algorithm is simple, fast, and accurate, and may be of particular utility in the development of multi-messenger astronomy.
Far-field radiation patterns of aperture antennas by the Winograd Fourier transform algorithm
NASA Technical Reports Server (NTRS)
Heisler, R.
1978-01-01
A more time-efficient algorithm for computing the discrete Fourier transform, the Winograd Fourier transform (WFT), is described. The WFT algorithm is compared with other transform algorithms. Results indicate that the WFT algorithm is very well suited to antenna analysis. Significant savings in CPU time improve computer turnaround time and circumvent the need to resort to weekend runs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.
In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity to a fixed-source pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.
A preliminary study to metaheuristic approach in multilayer radiation shielding optimization
NASA Astrophysics Data System (ADS)
Arif Sazali, Muhammad; Rashid, Nahrul Khair Alang Md; Hamzah, Khaidzir
2018-01-01
Metaheuristics are high-level algorithmic concepts that can be used to develop heuristic optimization algorithms. One of their applications is to find optimal or near-optimal solutions to combinatorial optimization problems (COPs) such as scheduling, vehicle routing, and timetabling. Combinatorial optimization deals with finding optimal combinations or permutations in a given set of problem components when exhaustive search is not feasible. A radiation shield made of several layers of different materials can be regarded as a COP. The time taken to optimize the shield may be too high when several parameters are involved, such as the number of materials, the thickness of layers, and the arrangement of materials. Metaheuristics can be applied to reduce the optimization time, trading guaranteed optimal solutions for near-optimal solutions in a comparably short amount of time. The application of metaheuristics to radiation shield optimization is lacking. In this paper, we present a review of the suitability of metaheuristics for multilayer shielding design, specifically the genetic algorithm and the ant colony optimization (ACO) algorithm. We also propose an optimization model based on the ACO method.
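Since the paper proposes an ACO-based model without specifying it, the following generic ant-colony sketch shows how layer-by-layer material choices could be sampled from pheromone tables and reinforced; the material list, pheromone update rule, and attenuation score are invented stand-ins, not validated shielding physics:

```python
import math
import random

MATERIALS = ["lead", "poly", "steel", "boron"]   # hypothetical layer choices
N_LAYERS, N_ANTS, N_ITER, RHO = 4, 20, 50, 0.1   # RHO: evaporation rate

def attenuation(arrangement):
    """Stand-in objective: a pretend score for a layer arrangement that
    rewards heavy materials and alternating layers (illustrative only)."""
    weights = {"lead": 0.9, "poly": 0.5, "steel": 0.7, "boron": 0.6}
    score = sum(weights[m] for m in arrangement)
    score += sum(0.2 for a, b in zip(arrangement, arrangement[1:]) if a != b)
    return score

pher = [{m: 1.0 for m in MATERIALS} for _ in range(N_LAYERS)]
best, best_score = None, -math.inf
for _ in range(N_ITER):
    for _ in range(N_ANTS):
        arrangement = []
        for layer in range(N_LAYERS):
            # Roulette-wheel selection proportional to pheromone strength.
            total = sum(pher[layer].values())
            r, acc = random.uniform(0.0, total), 0.0
            for m in MATERIALS:
                acc += pher[layer][m]
                if acc >= r:
                    arrangement.append(m)
                    break
        s = attenuation(arrangement)
        if s > best_score:
            best, best_score = arrangement, s
    # Evaporate all trails, then reinforce the best-so-far arrangement.
    for layer in range(N_LAYERS):
        for m in MATERIALS:
            pher[layer][m] *= (1.0 - RHO)
        pher[layer][best[layer]] += RHO * best_score

print(best, round(best_score, 2))
```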
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicolae, A; Department of Physics, Ryerson University, Toronto, ON; Lu, L
Purpose: A novel, automated algorithm for permanent prostate brachytherapy (PPB) treatment planning has been developed. The novel approach uses machine-learning (ML), a form of artificial intelligence, to substantially decrease planning time while simultaneously retaining the clinical intuition of plans created by radiation oncologists. This study seeks to compare the ML algorithm against expert-planned PPB plans to evaluate the equivalency of dosimetric and clinical plan quality. Methods: Plan features were computed from historical high-quality PPB treatments (N = 100) and stored in a relational database (RDB). The ML algorithm matched new PPB features to a highly similar case in the RDB; this initial plan configuration was then further optimized using a stochastic search algorithm. PPB pre-plans (N = 30) generated using the ML algorithm were compared to plan variants created by an expert dosimetrist (RT) and a radiation oncologist (MD). Planning time and pre-plan dosimetry were evaluated using a one-way Student's t-test and ANOVA, respectively (significance level = 0.05). Clinical implant quality was evaluated by expert PPB radiation oncologists as part of a qualitative study. Results: Average planning time was 0.44 ± 0.42 min compared to 17.88 ± 8.76 min for the ML algorithm and RT, respectively, a significant advantage [t(9), p = 0.01]. A post-hoc ANOVA [F(2,87) = 6.59, p = 0.002] using Tukey-Kramer criteria showed a significantly lower mean prostate V150% for the ML plans (52.9%) compared to the RT (57.3%) and MD (56.2%) plans. Preliminary qualitative study results indicate comparable clinical implant quality between RT and ML plans, with a trend towards preference for ML plans. Conclusion: PPB pre-treatment plans highly comparable to those of an expert radiation oncologist can be created using a novel ML planning model. The use of an ML-based planning approach is expected to translate into improved PPB accessibility and plan uniformity.
2009-01-01
proton ... PARMA: PHITS-based Analytical Radiation Model in the Atmosphere; PCAIRE: Predictive Code for Aircrew Radiation Exposure; PHITS: Particle and... The radiation transport code utilized is called PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the... same dose equivalent coefficient regulations from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS (PHITS) and CARI-6...
Utilization of SRNL-developed radiation-resistant polymer in high radiation environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skibo, A.
The radiation-resistant polymer developed by the Savannah River National Laboratory is adaptable for multiple applications to enhance polymer endurance and effectiveness in radiation environments. SRNL offers to collaborate with TEPCO in the evaluation, testing, and utilization of SRNL's radiation-resistant polymer in the D&D of the Fukushima Daiichi NPS. Refinement of the scope and associated costs will be conducted in consultation with TEPCO.
Herman, Gabor T; Chen, Wei
2008-03-01
The goal of Intensity-Modulated Radiation Therapy (IMRT) is to deliver sufficient doses to tumors to kill them, but without causing irreparable damage to critical organs. This requirement can be formulated as a linear feasibility problem. The sequential (i.e., iteratively treating the constraints one after another in a cyclic fashion) algorithm ART3 is known to find a solution to such problems in a finite number of steps, provided that the feasible region is full dimensional. We present a faster algorithm called ART3+. The idea of ART3+ is to avoid unnecessary checks on constraints that are likely to be satisfied. The superior performance of the new algorithm is demonstrated by mathematical experiments inspired by the IMRT application.
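A simplified sketch of the bookkeeping that makes ART3+ faster: constraints found satisfied are dropped from the working cycle and only rechecked once no violations remain. Plain orthogonal projections stand in for ART3's reflection steps here, so this illustrates the skipping idea rather than the published algorithm:

```python
import numpy as np

def art3plus_sketch(A, lo, hi, x0, max_sweeps=200, tol=1e-10):
    """Cyclic feasibility sketch for lo <= Ax <= hi with ART3+-style skipping:
    satisfied constraints leave the working cycle and are not rechecked
    every sweep (projections used in place of ART3's reflections)."""
    x = x0.astype(float).copy()
    active = list(range(A.shape[0]))
    for _ in range(max_sweeps):
        still_violated = []
        for i in active:
            ai, v = A[i], A[i] @ x
            if v > hi[i] + tol:
                x -= (v - hi[i]) / (ai @ ai) * ai   # project onto upper bound
                still_violated.append(i)
            elif v < lo[i] - tol:
                x += (lo[i] - v) / (ai @ ai) * ai   # project onto lower bound
                still_violated.append(i)
        if still_violated:
            active = still_violated                 # skip satisfied constraints
        else:
            r = A @ x                               # verify the full system once
            if np.all(r <= hi + tol) and np.all(r >= lo - tol):
                return x
            active = list(range(A.shape[0]))
    return x

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
lo = np.array([0.0, 0.0, 1.5])
hi = np.array([1.0, 1.0, 2.0])
print(art3plus_sketch(A, lo, hi, np.zeros(2)))
```

In an IMRT-sized problem most dose constraints stay satisfied once met, so shrinking the working cycle is where the claimed speedup comes from.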
Doble, Brett; Lorgelly, Paula
2016-04-01
To determine the external validity of existing mapping algorithms for predicting EQ-5D-3L utility values from EORTC QLQ-C30 responses and to establish their generalizability in different types of cancer. A main analysis (pooled) sample of 3560 observations (1727 patients) and two disease severity patient samples (496 and 93 patients) with repeated observations over time from Cancer 2015 were used to validate the existing algorithms. Errors were calculated between observed and predicted EQ-5D-3L utility values using a single pooled sample and ten pooled tumour type-specific samples. Predictive accuracy was assessed using mean absolute error (MAE) and standardized root-mean-squared error (RMSE). The association between observed and predicted EQ-5D utility values and other covariates across the distribution was tested using quantile regression. Quality-adjusted life years (QALYs) were calculated using observed and predicted values to test responsiveness. Ten 'preferred' mapping algorithms were identified. Two algorithms estimated via response mapping and ordinary least-squares regression using dummy variables performed well on a number of validation criteria, including accurate prediction of the best and worst QLQ-C30 health states, predicted values within the EQ-5D tariff range, relatively small MAEs and RMSEs, and minimal differences between estimated QALYs. Comparison of predictive accuracy across ten tumour type-specific samples highlighted that algorithms are relatively insensitive to grouping by tumour type and affected more by differences in disease severity. Two of the 'preferred' mapping algorithms suggest more accurate predictions, but limitations exist. We recommend extensive scenario analyses if mapped utilities are used in cost-utility analyses.
Boyer, Nicole R S; Miller, Sarah; Connolly, Paul; McIntosh, Emma
2016-04-01
The Strengths and Difficulties Questionnaire (SDQ) is a behavioural screening tool for children. The SDQ is increasingly used as the primary outcome measure in population health interventions involving children, but it is not preference based; therefore, its role in allocative economic evaluation is limited. The Child Health Utility 9D (CHU9D) is a generic preference-based health-related quality-of-life measure. This study investigates the applicability of the SDQ outcome measure for use in economic evaluations and examines its relationship with the CHU9D by testing previously published mapping algorithms. The aim of the paper is to explore the feasibility of using the SDQ within economic evaluations of school-based population health interventions. Data were available from children participating in a cluster randomised controlled trial of the school-based Roots of Empathy programme in Northern Ireland. Utility was calculated using the original and alternative CHU9D tariffs along with two SDQ mapping algorithms. t tests were performed for pairwise differences in utility values from the preference-based tariffs and mapping algorithms. Mean (standard deviation) SDQ total difficulties and prosocial scores were 12 (3.2) and 8.3 (2.1). Utility values obtained from the original tariff, the alternative tariff, and the mapping algorithms using five and three SDQ subscales were 0.84 (0.11), 0.80 (0.13), 0.84 (0.05), and 0.83 (0.04), respectively. Each method for calculating utility produced statistically significantly different values, except the original tariff and the five-subscale algorithm. Initial evidence suggests the SDQ and CHU9D are related in some of their measurement properties. The mapping algorithm using five SDQ subscales was found to be optimal in predicting mean child health utility. Future research valuing changes in SDQ scores would contribute to this research.
A radiation and energy budget algorithm for forest canopies
NASA Astrophysics Data System (ADS)
Tunick, A.
2006-01-01
Previously, it was shown that a one-dimensional, physics-based (conservation-law) computer model can provide a useful mathematical representation of the wind flow, temperatures, and turbulence inside and above a uniform forest stand. A key element of this calculation was a radiation and energy budget algorithm (implemented to predict the heat source). However, to keep the earlier publication brief, a full description of the radiation and energy budget algorithm was not given. Hence, this paper presents our equation set for calculating the incoming total radiation at the canopy top as well as the transmission, reflection, absorption, and emission of the solar flux through a forest stand. In addition, example model output is presented from three interesting numerical experiments, which were conducted to simulate the canopy microclimate for a forest stand that borders the Blossom Point Field Test Facility (located near La Plata, Maryland along the Potomac River). It is anticipated that the current numerical study will be useful to researchers and experimental planners who will be collecting acoustic and meteorological data at the Blossom Point Facility in the near future.
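As one concrete example of the transmission part of such a budget, a layer-by-layer Beer-Lambert sketch is shown below; the extinction coefficient, albedo, and leaf area index (LAI) values are invented for illustration and are not taken from the paper:

```python
import math

def canopy_solar_profile(top_flux, lai_layers, k_ext=0.5, albedo=0.15):
    """Beer-Lambert sketch of solar flux through canopy layers: each layer
    intercepts part of the incoming beam according to its LAI, absorbing
    most of it and reflecting the rest (coefficients are illustrative)."""
    fluxes, absorbed = [top_flux], []
    f = top_flux
    for lai in lai_layers:
        transmitted = f * math.exp(-k_ext * lai)
        intercepted = f - transmitted
        absorbed.append(intercepted * (1.0 - albedo))  # remainder reflected
        f = transmitted
        fluxes.append(f)
    return fluxes, absorbed

# 800 W/m^2 at canopy top passing through four hypothetical layers.
fluxes, absorbed = canopy_solar_profile(800.0, [0.5, 1.0, 1.5, 1.0])
print("flux profile:", [round(x, 1) for x in fluxes])
print("absorbed per layer:", [round(a, 1) for a in absorbed])
```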
NASA Technical Reports Server (NTRS)
Suttles, John T.; Wielicki, Bruce A.; Vemury, Sastri
1992-01-01
The ERBE algorithm is applied to the Nimbus-7 earth radiation budget (ERB) scanner data for June 1979 to analyze the performance of an inversion method in deriving top-of-atmosphere albedos and longwave radiative fluxes. The performance is assessed by comparing ERBE algorithm results with appropriate results derived using the sorting-by-angular-bins (SAB) method, the ERB MATRIX algorithm, and the 'new-cloud ERB' (NCLE) algorithm. Comparisons are made for top-of-atmosphere albedos, longwave fluxes, viewing zenith-angle dependence of derived albedos and longwave fluxes, and cloud fractional coverage. Using the SAB method as a reference, the rms accuracy of monthly average ERBE-derived results are estimated to be 0.0165 (5.6 W/sq m) for albedos (shortwave fluxes) and 3.0 W/sq m for longwave fluxes. The ERBE-derived results were found to depend systematically on the viewing zenith angle, varying from near nadir to near the limb by about 10 percent for albedos and by 6-7 percent for longwave fluxes. Analyses indicated that the ERBE angular models are the most likely source of the systematic angular dependences. Comparison of the ERBE-derived cloud fractions, based on a maximum-likelihood estimation method, with results from the NCLE showed agreement within about 10 percent.
Ye, Huimin; Chen, Elizabeth S.
2011-01-01
In order to support the increasing need to share electronic health data for research purposes, various methods have been proposed for privacy preservation, including k-anonymity. Many k-anonymity models provide the same level of anonymization regardless of practical need, which may decrease the utility of the dataset for a particular research study. In this study, we explore extensions to the k-anonymity algorithm that aim to satisfy the heterogeneous needs of different researchers while preserving privacy as well as utility of the dataset. The proposed algorithm, Attribute Utility Motivated k-anonymization (AUM), involves analyzing the characteristics of attributes and utilizing them to minimize information loss during the anonymization process. Through comparison with two existing algorithms, Mondrian and Incognito, preliminary results indicate that AUM may preserve more information from original datasets, thus providing higher quality results with lower distortion. PMID:22195223
Operational Planning for Multiple Heterogeneous Unmanned Aerial Vehicles in Three Dimensions
2009-06-01
human input in the planning process. Two solution methods are presented: (1) a mixed-integer program, and (2) an algorithm that utilizes a metaheuristic to generate composite variables for a linear program, called the Composite Operations Planning... that represent a path and an associated type of UAV. The reformulation is incorporated into an algorithm that uses a metaheuristic to generate the...
Multisource Estimation of Long-term Global Terrestrial Surface Radiation
NASA Astrophysics Data System (ADS)
Peng, L.; Sheffield, J.
2017-12-01
Land surface net radiation is the essential energy source at the earth's surface. It determines the surface energy budget and its partitioning, drives the hydrological cycle by providing available energy, and offers heat, light, and energy for biological processes. Individual components in net radiation have changed historically due to natural and anthropogenic climate change and land use change. Decadal variations in radiation such as global dimming or brightening have important implications for hydrological and carbon cycles. In order to assess the trends and variability of net radiation and evapotranspiration, there is a need for accurate estimates of long-term terrestrial surface radiation. While large progress in measuring top of atmosphere energy budget has been made, huge discrepancies exist among ground observations, satellite retrievals, and reanalysis fields of surface radiation, due to the lack of observational networks, the difficulty in measuring from space, and the uncertainty in algorithm parameters. To overcome the weakness of single source datasets, we propose a multi-source merging approach to fully utilize and combine multiple datasets of radiation components separately, as they are complementary in space and time. First, we conduct diagnostic analysis of multiple satellite and reanalysis datasets based on in-situ measurements such as Global Energy Balance Archive (GEBA), existing validation studies, and other information such as network density and consistency with other meteorological variables. Then, we calculate the optimal weighted average of multiple datasets by minimizing the variance of error between in-situ measurements and other observations. Finally, we quantify the uncertainties in the estimates of surface net radiation and employ physical constraints based on the surface energy balance to reduce these uncertainties. The final dataset is evaluated in terms of the long-term variability and its attribution to changes in individual components. The goal of this study is to provide a merged observational benchmark for large-scale diagnostic analyses, remote sensing and land surface modeling.
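For the uncorrelated, unbiased case, minimizing the variance of the merged error reduces to inverse-variance weighting, which the following sketch illustrates with invented radiation values and error variances (the function is a stand-in, not the proposed merging framework):

```python
import numpy as np

def merge_inverse_variance(estimates, error_vars):
    """Optimal weighted average of independent, unbiased datasets: weights
    proportional to 1/variance minimize the variance of the merged error."""
    w = 1.0 / np.asarray(error_vars)
    w /= w.sum()
    merged = np.tensordot(w, np.asarray(estimates), axes=1)
    merged_var = 1.0 / (1.0 / np.asarray(error_vars)).sum()
    return merged, w, merged_var

# Three hypothetical net-radiation products (W m^-2) for the same grid cells,
# with error variances judged against in-situ networks such as GEBA.
sat_a = np.array([112.0, 98.0, 130.0])
sat_b = np.array([118.0, 95.0, 126.0])
rean  = np.array([105.0, 101.0, 135.0])
merged, w, var = merge_inverse_variance([sat_a, sat_b, rean], [25.0, 36.0, 64.0])
print(merged.round(1), w.round(2), round(var, 1))
```

Note that the merged variance is always smaller than the best individual variance, which is the formal motivation for combining complementary products rather than picking one.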
Mapping from disease-specific measures to health-state utility values in individuals with migraine.
Gillard, Patrick J; Devine, Beth; Varon, Sepideh F; Liu, Lei; Sullivan, Sean D
2012-05-01
The objective of this study was to develop empirical algorithms that estimate health-state utility values from disease-specific quality-of-life scores in individuals with migraine. Data from a cross-sectional, multicountry study were used. Individuals with episodic and chronic migraine were randomly assigned to training or validation samples. Spearman's correlation coefficients between paired EuroQol five-dimensional (EQ-5D) questionnaire utility values and both Headache Impact Test (HIT-6) scores and Migraine-Specific Quality-of-Life Questionnaire version 2.1 (MSQ) domain scores (role restrictive, role preventive, and emotional function) were examined. Regression models were constructed to estimate EQ-5D questionnaire utility values from the HIT-6 score or the MSQ domain scores. Preferred algorithms were confirmed in the validation samples. In episodic migraine, the preferred HIT-6 and MSQ algorithms explained 22% and 25% of the variance (R(2)) in the training samples, respectively, and had similar prediction errors (root mean square errors of 0.30). In chronic migraine, the preferred HIT-6 and MSQ algorithms explained 36% and 45% of the variance in the training samples, respectively, and had similar prediction errors (root mean square errors 0.31 and 0.29). In episodic and chronic migraine, no statistically significant differences were observed between the mean observed and the mean estimated EQ-5D questionnaire utility values for the preferred HIT-6 and MSQ algorithms in the validation samples. The relationship between the EQ-5D questionnaire and the HIT-6 or the MSQ is adequate to use regression equations to estimate EQ-5D questionnaire utility values. The preferred HIT-6 and MSQ algorithms will be useful in estimating health-state utilities in migraine trials in which no preference-based measure is present. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Algorithms for radiative transfer simulations for aerosol retrieval
NASA Astrophysics Data System (ADS)
Mukai, Sonoyo; Sano, Itaru; Nakata, Makiko
2012-11-01
Aerosol retrieval from satellite data, i.e., aerosol remote sensing, is divided into three parts: satellite data analysis, aerosol modeling, and the calculation of multiple light scattering in an atmosphere model, which is called radiative transfer simulation. The aerosol model is compiled from more than ten years of accumulated measurements provided by the worldwide aerosol monitoring network (AERONET). The radiative transfer simulations take into account Rayleigh scattering by molecules, Mie scattering by aerosols in the atmosphere, and reflection by the Earth's surface. The aerosol properties are thus estimated by comparing satellite measurements with the numerical values of radiation simulations in the Earth-atmosphere-surface model. A precise simulation of multiple light-scattering processes is necessary, and it requires a long computational time, especially in an optically thick atmosphere model. Therefore, efficient algorithms for radiative transfer problems are indispensable for retrieving aerosols from space.
NASA Astrophysics Data System (ADS)
Valdes, Gilmer; Solberg, Timothy D.; Heskel, Marina; Ungar, Lyle; Simone, Charles B., II
2016-08-01
To develop a patient-specific ‘big data’ clinical decision tool to predict pneumonitis in stage I non-small cell lung cancer (NSCLC) patients after stereotactic body radiation therapy (SBRT). 61 features were recorded for 201 consecutive patients with stage I NSCLC treated with SBRT, in whom 8 (4.0%) developed radiation pneumonitis. Pneumonitis thresholds were found for each feature individually using decision stumps. The performance of three different algorithms (Decision Trees, Random Forests, RUSBoost) was evaluated. Learning curves were developed and the training error analyzed and compared to the testing error in order to evaluate the factors needed to obtain a cross-validated error smaller than 0.1. These included the addition of new features, increasing the complexity of the algorithm and enlarging the sample size and number of events. In the univariate analysis, the most important feature selected was the diffusion capacity of the lung for carbon monoxide (DLCO adj%). On multivariate analysis, the three most important features selected were the dose to 15 cc of the heart, dose to 4 cc of the trachea or bronchus, and race. Higher accuracy could be achieved if the RUSBoost algorithm was used with regularization. To predict radiation pneumonitis within an error smaller than 10%, we estimate that a sample size of 800 patients is required. Clinically relevant thresholds that put patients at risk of developing radiation pneumonitis were determined in a cohort of 201 stage I NSCLC patients treated with SBRT. The consistency of these thresholds can provide radiation oncologists with an estimate of their reliability and may inform treatment planning and patient counseling. The accuracy of the classification is limited by the number of patients in the study and not by the features gathered or the complexity of the algorithm.
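A sketch of the univariate step described above, finding the single threshold on one feature (a decision stump) that best separates pneumonitis cases; the synthetic DLCO values and event rate below are invented, not the study cohort:

```python
import numpy as np

def best_stump(feature, label):
    """Find the threshold on one feature that minimizes misclassification
    of a binary outcome, trying both 'predict 1 above' and 'below' rules."""
    order = np.argsort(feature)
    x, y = feature[order], label[order]
    cuts = (x[:-1] + x[1:]) / 2.0
    best = (None, None, np.inf)
    for c in cuts:
        for sign in (1, -1):
            pred = (x > c).astype(int) if sign == 1 else (x <= c).astype(int)
            err = np.mean(pred != y)
            if err < best[2]:
                best = (c, sign, err)
    return best

rng = np.random.default_rng(1)
# Hypothetical DLCO adj% values: 95 event-free patients and 5 pneumonitis cases.
dlco = np.concatenate([rng.normal(80, 10, 95), rng.normal(55, 10, 5)])
pneumonitis = np.concatenate([np.zeros(95, int), np.ones(5, int)])
cut, sign, err = best_stump(dlco, pneumonitis)
side = "above" if sign == 1 else "below"
print(f"threshold {cut:.1f}, predict-positive {side}, error {err:.2%}")
```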
Planning and delivery of four-dimensional radiation therapy with multileaf collimators
NASA Astrophysics Data System (ADS)
McMahon, Ryan L.
This study is an investigation of the application of multileaf collimators (MLCs) to the treatment of moving anatomy with external beam radiation therapy. First, a method for delivering intensity modulated radiation therapy (IMRT) to moving tumors is presented. This method uses an MLC control algorithm that calculates appropriate MLC leaf speeds in response to feedback from real-time imaging. The algorithm does not require a priori knowledge of a tumor's motion, and is based on the concept of self-correcting DMLC leaf trajectories. This gives the algorithm the distinct advantage of allowing for correction of DMLC delivery errors without interrupting delivery. The algorithm is first tested for the case of one-dimensional (1D) rigid tumor motion in the beam's eye view (BEV). For this type of motion, it is shown that the real-time tracking algorithm results in more accurate deliveries, with respect to delivered intensity, than those which ignore motion altogether. This is followed by an appropriate extension of the algorithm to two-dimensional (2D) rigid motion in the BEV. For this type of motion, it is shown that the 2D real-time tracking algorithm results in improved accuracy (in the delivered intensity) in comparison to deliveries which ignore tumor motion or only account for tumor motion which is aligned with MLC leaf travel. Finally, a method is presented for designing DMLC leaf trajectories which deliver a specified intensity over a moving tumor without overexposing critical structures which exhibit motion patterns that differ from that of the tumor. In addition to avoiding overexposure of critical organs, the method can, in the case shown, produce deliveries that are superior to anything achievable using stationary anatomy. In this regard, the method represents a systematic way to include anatomical motion as a degree of freedom in the optimization of IMRT while producing treatment plans that are deliverable with currently available technology. These results, combined with those related to the real-time MLC tracking algorithm, show that an MLC is a promising tool to investigate for the delivery of four-dimensional radiation therapy.
Makeev, Andrey; Clajus, Martin; Snyder, Scott; Wang, Xiaolang; Glick, Stephen J.
2015-01-01
Semiconductor photon-counting detectors based on high atomic number, high density materials [cadmium zinc telluride (CZT)/cadmium telluride (CdTe)] for x-ray computed tomography (CT) provide advantages over conventional energy-integrating detectors, including reduced electronic and Swank noise, wider dynamic range, capability of spectral CT, and improved signal-to-noise ratio. Certain CT applications require high spatial resolution. In breast CT, for example, visualization of microcalcifications and assessment of tumor microvasculature after contrast enhancement require resolution on the order of 100 μm. A straightforward approach to increasing spatial resolution of pixellated CZT-based radiation detectors by merely decreasing the pixel size leads to two problems: (1) fabricating circuitry with small pixels becomes costly and (2) inter-pixel charge spreading can obviate any improvement in spatial resolution. We have used computer simulations to investigate position estimation algorithms that utilize charge sharing to achieve subpixel position resolution. To study these algorithms, we model a simple detector geometry with a 5×5 array of 200 μm pixels, and use a conditional probability function to model charge transport in CZT. We used COMSOL finite element method software to map the distribution of charge pulses and the Monte Carlo package PENELOPE for simulating fluorescent radiation. Performance of two x-ray interaction position estimation algorithms was evaluated: the method of maximum-likelihood estimation and a fast, practical algorithm that can be implemented in a readout application-specific integrated circuit and allows for identification of a quadrant of the pixel in which the interaction occurred. Both methods demonstrate good subpixel resolution; however, their actual efficiency is limited by the presence of fluorescent K-escape photons. Current experimental breast CT systems typically use detectors with a pixel size of 194 μm, with 2×2 binning during the acquisition giving an effective pixel size of 388 μm. Thus, it would be expected that the position estimate accuracy reported in this study would improve detection and visualization of microcalcifications as compared to that with conventional detectors. PMID:26158095
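A sketch of the fast quadrant-identification idea: compare the charge shared with the left/right and top/bottom neighbors of the maximum-signal pixel to place the interaction within one quadrant. The 3x3 neighborhood and decision rule are illustrative assumptions, not the ASIC algorithm evaluated in the paper:

```python
import numpy as np

def hit_quadrant(charges):
    """Given a 3x3 neighborhood of collected charge centered on the pixel
    with the maximum signal, assign the interaction to a quadrant by
    comparing charge shared with left/right and top/bottom neighbors."""
    c = np.asarray(charges, dtype=float)
    left, right = c[1, 0], c[1, 2]
    top, bottom = c[0, 1], c[2, 1]
    horiz = "E" if right > left else "W"    # more charge shared to the right
    vert = "N" if top > bottom else "S"     # more charge shared upward
    return vert + horiz

# Hypothetical charge fractions after an interaction near the top-right
# corner of the central pixel.
neighborhood = [[0.02, 0.10, 0.01],
                [0.01, 0.70, 0.12],
                [0.00, 0.03, 0.01]]
print(hit_quadrant(neighborhood))   # -> "NE"
```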
Airfoil optimization for unsteady flows with application to high-lift noise reduction
NASA Astrophysics Data System (ADS)
Rumpfkeil, Markus Peer
The use of steady-state aerodynamic optimization methods in the computational fluid dynamic (CFD) community is fairly well established. In particular, the use of adjoint methods has proven to be very beneficial because their cost is independent of the number of design variables. The application of numerical optimization to airframe-generated noise, however, has not received as much attention, but with the significant quieting of modern engines, airframe noise now competes with engine noise. Optimal control techniques for unsteady flows are needed in order to be able to reduce airframe-generated noise. In this thesis, a general framework is formulated to calculate the gradient of a cost function in a nonlinear unsteady flow environment via the discrete adjoint method. The unsteady optimization algorithm developed in this work utilizes a Newton-Krylov approach since the gradient-based optimizer uses the quasi-Newton method BFGS, Newton's method is applied to the nonlinear flow problem, GMRES is used to solve the resulting linear problem inexactly, and last but not least the linear adjoint problem is solved using Bi-CGSTAB. The flow is governed by the unsteady two-dimensional compressible Navier-Stokes equations in conjunction with a one-equation turbulence model, which are discretized using structured grids and a finite difference approach. The effectiveness of the unsteady optimization algorithm is demonstrated by applying it to several problems of interest including shocktubes, pulses in converging-diverging nozzles, rotating cylinders, transonic buffeting, and an unsteady trailing-edge flow. In order to address radiated far-field noise, an acoustic wave propagation program based on the Ffowcs Williams and Hawkings (FW-H) formulation is implemented and validated. The general framework is then used to derive the adjoint equations for a novel hybrid URANS/FW-H optimization algorithm in order to be able to optimize the shape of airfoils based on their calculated far-field pressure fluctuations. Validation and application results for this novel hybrid URANS/FW-H optimization algorithm show that it is possible to optimize the shape of an airfoil in an unsteady flow environment to minimize its radiated far-field noise while maintaining good aerodynamic performance.
Padgett, Kyle R; Stoyanova, Radka; Pirozzi, Sara; Johnson, Perry; Piper, Jon; Dogan, Nesrin; Pollack, Alan
2018-03-01
Validating deformable multimodality image registrations is challenging due to intrinsic differences in signal characteristics and their spatial intensity distributions. Evaluating multimodality registrations using these spatial intensity distributions is also complicated by the fact that these metrics are often employed in the registration optimization process. This work evaluates rigid and deformable image registrations of the prostate between diagnostic-MRI and radiation treatment planning-CT by utilizing a planning-MRI acquired after fiducial marker placement as a surrogate. The surrogate allows for the direct quantitative analysis that can be difficult in the multimodality domain. For thirteen prostate patients, T2 images were acquired at two different time points: the first several weeks prior to planning (diagnostic-MRI) and the second on the same day as the planning-CT (planning-MRI). The diagnostic-MRI was deformed to the planning-CT utilizing a commercially available algorithm which synthesizes a deformable image registration (DIR) algorithm from local rigid registrations. The planning-MRI provided an independent surrogate for the planning-CT for assessing registration accuracy using image similarity metrics, including Pearson correlation and normalized mutual information (NMI). A local analysis was performed by looking only within the prostate, proximal seminal vesicles, penile bulb, and combined areas. The planning-MRI provided an excellent surrogate for the planning-CT, with residual error in fiducial alignment between the two datasets being submillimeter, 0.78 mm. DIR was superior to the rigid registration in 11 of 13 cases, demonstrating a 27.37% improvement in NMI (P < 0.009) within a regional area surrounding the prostate and associated critical organs. Pearson correlations showed similar results, demonstrating a 13.02% improvement (P < 0.013). By utilizing the planning-MRI as a surrogate for the planning-CT, an independent evaluation of registration accuracy is possible. This population provides an ideal testing ground for MRI to CT DIR by obviating the need for multimodality comparisons, which are inherently more challenging. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
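For reference, the NMI metric used above can be computed from a joint intensity histogram as (H(A)+H(B))/H(A,B); the binning and synthetic images below are illustrative choices, and the commercial software's exact implementation may differ:

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=32):
    """NMI = (H(A) + H(B)) / H(A,B), estimated from a joint intensity
    histogram; values range from 1 (independent) to 2 (identical)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]                      # ignore empty histogram bins
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

rng = np.random.default_rng(0)
mri = rng.normal(100, 20, (64, 64))
ct = 0.5 * mri + rng.normal(0, 10, (64, 64))   # partially correlated surrogate
print(round(normalized_mutual_information(mri, ct), 3))
```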
Deriving Daily Time Series Evapotranspiration, Evaporation and Transpiration Maps With Landsat Data
NASA Astrophysics Data System (ADS)
Paul, G.; Gowda, P. H.; Marek, T.; Xiao, X.; Basara, J. B.
2014-12-01
Mapping high-resolution evapotranspiration (ET) over a large region at a daily time step is complex and computationally intensive. The uses of high-resolution daily ET maps are broad, ranging from crop water management to watershed management. The aim of this work is to generate daily time series (10 years) of ET and its components, vegetation transpiration (T) and soil water evaporation (E), using Landsat 5 satellite data for the Southern Great Plains forage-rangeland-winter wheat production system in Oklahoma (OK). The framework for generating these products included the two-source energy balance (TSEB) algorithm; other important features were: (a) an atmospheric correction algorithm; (b) spatially interpolated weather inputs; (c) functions for varying the Priestley-Taylor coefficient; and (d) an ET, E, and T extrapolating algorithm utilizing reference ET. An extensive network of 140 weather stations managed by the Oklahoma Mesonet was utilized to generate spatially interpolated inputs of air temperature, relative humidity, wind speed, solar radiation, pressure, and reference ET. Validation of the ET maps against eddy covariance data from two grassland sites at El Reno, OK suggested good performance (Table 1). Figure 1 illustrates a small subset of the 18 July 2006 daily ET map, where differences in ET among land uses such as irrigated cropland, vegetation along drainage, and grassland are very distinct. Results indicated that the proposed ET mapping framework is suitable for deriving high-resolution time series daily ET maps at regional scale from Landsat Thematic Mapper data.

Table 1: Daily actual ET performance statistics for two grassland locations at El Reno, OK for year 2005.

Management Type | Mean (obs) (mm d-1) | Mean (est) (mm d-1) | MBE (mm d-1) | %MBE (%) | RMSE (mm d-1) | RMSE (%) | MAE (mm d-1) | MAPD (%) | NSE | R2
Control | 2.2 | 1.8 | -0.43 | -19.4 | 0.87 | 38.9 | 0.65 | 29.5 | 0.71 | 0.79
Burnt | 2.0 | 1.8 | -0.15 | -7.7 | 0.80 | 39.8 | 0.62 | 30.7 | 0.73 | 0.77
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zawisza, I; Yan, H; Yin, F
Purpose: To assure that tumor motion is within the radiation field during high-dose and high-precision radiosurgery, real-time imaging and surrogate monitoring are employed. These methods are useful in providing real-time tumor/surrogate motion, but no future information is available. In order to anticipate future tumor/surrogate motion and track target location precisely, an algorithm is developed and investigated for estimating surrogate motion multiple steps ahead. Methods: The study utilized a one-dimensional surrogate motion signal divided into three components: (a) the training component, containing the primary data from the first frame to the beginning of the input subsequence; (b) the input subsequence component of the surrogate signal, used as input to the prediction algorithm; (c) the output subsequence component, the remaining signal used as the known output of the prediction algorithm for validation. The prediction algorithm consists of three major steps: (1) extracting subsequences from the training component which best match the input subsequence according to a given criterion; (2) calculating weighting factors from these best-matched subsequences; (3) collecting the proceeding parts of the subsequences and combining them with the assigned weighting factors to form the output. The prediction algorithm was examined for several patients, and its performance is assessed based on the correlation between prediction and known output. Results: Respiratory motion data were collected for 20 patients using the RPM system. The output subsequence is the last 50 samples (∼2 seconds) of a surrogate signal, and the input subsequence was the 100 frames (∼3 seconds) prior to the output subsequence. Based on the analysis of the correlation coefficient between predicted and known output subsequences, the average correlation is 0.9644±0.0394 and 0.9789±0.0239 for equal-weighting and relative-weighting strategies, respectively. Conclusion: Preliminary results indicate that the prediction algorithm is effective in estimating surrogate motion multiple steps in advance. The relative-weighting method shows better prediction accuracy than the equal-weighting method. More parameters of this algorithm are under investigation.
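A compact sketch of the three steps described above: best-matching training subsequences are found by Euclidean distance, weighted by match quality (the "relative weighting" variant), and their continuations averaged. Parameter values and the synthetic breathing trace are illustrative:

```python
import numpy as np

def predict_ahead(signal, input_len=100, horizon=50, k=5):
    """Multi-step-ahead sketch: match the most recent input_len samples
    against the earlier (training) part of the signal, then average the
    continuations of the k best matches, weighted by match quality."""
    train = signal[:-input_len]
    query = signal[-input_len:]
    dists = []
    for s in range(len(train) - input_len - horizon):
        seg = train[s:s + input_len]
        dists.append((np.linalg.norm(seg - query), s))
    dists.sort()
    best = dists[:k]
    weights = np.array([1.0 / (d + 1e-9) for d, _ in best])  # relative weighting
    weights /= weights.sum()
    outs = np.array([train[s + input_len:s + input_len + horizon]
                     for _, s in best])
    return weights @ outs

# Synthetic respiratory trace: ~4 s period at 30 Hz with mild noise.
t = np.arange(2000)
breathing = np.sin(2 * np.pi * t / 120) \
    + 0.05 * np.random.default_rng(2).normal(size=t.size)
print(predict_ahead(breathing)[:5].round(3))
```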
SU-C-207B-02: Maximal Noise Reduction Filter with Anatomical Structures Preservation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maitree, R; Guzman, G; Chundury, A
Purpose: All medical images contain noise, which can result in an undesirable appearance and reduce the visibility of anatomical details. A variety of techniques are utilized to reduce noise, such as increasing the image acquisition time and applying post-processing noise reduction algorithms. However, these techniques increase imaging time and cost, or reduce tissue contrast and effective spatial resolution, which carry useful diagnostic information. The three main goals of this study are: 1) to develop a novel approach that can adaptively and maximally reduce noise while preserving valuable details of anatomical structures, 2) to evaluate the effectiveness of available noise reduction algorithms in comparison to the proposed algorithm, and 3) to demonstrate that the proposed noise reduction approach can be used clinically. Methods: To achieve maximal noise reduction without destroying anatomical details, the proposed approach automatically estimates local image noise strength and detects anatomical structures, i.e., tissue boundaries. This information is used to adaptively adjust the strength of the noise reduction filter. The proposed algorithm was tested on 34 repeated swine head datasets and 54 patients' MRI and CT images. Performance was quantitatively evaluated using image quality metrics and manually validated for clinical use by two radiation oncologists and one radiologist. Results: Quantitative measurements on the repeated swine head images demonstrated that the proposed algorithm efficiently removed noise while preserving structures and tissue boundaries. In comparisons, the proposed algorithm obtained competitive noise reduction performance and outperformed other filters in preserving anatomical structures. Assessments from the manual validation indicate that the proposed noise reduction algorithm is adequate for some clinical uses. Conclusion: According to both the clinical evaluation (human expert ranking) and the quantitative assessment, the proposed approach has superior noise reduction and anatomical structure preservation capabilities compared to existing noise removal methods. Senior author Dr. Deshan Yang received research funding from ViewRay and Varian.
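A sketch of the adaptive principle described above: estimate the local noise strength, then smooth with a range kernel so intensity steps much larger than the noise (likely anatomical boundaries) are preserved while flat noisy regions are averaged. This bilateral-style filter is a stand-in, not the proposed algorithm:

```python
import numpy as np

def adaptive_denoise(img, window=1, sigma_range_scale=2.0):
    """Edge-preserving sketch: weight neighbors by intensity similarity,
    with the similarity scale tied to an estimate of the noise level."""
    out = np.empty_like(img, dtype=float)
    # Crude noise estimate from the median absolute vertical difference.
    noise = np.median(np.abs(img - np.roll(img, 1, axis=0))) + 1e-6
    sigma_r = sigma_range_scale * noise
    padded = np.pad(img.astype(float), window, mode="reflect")
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * window + 1, j:j + 2 * window + 1]
            diff = patch - img[i, j]
            wgt = np.exp(-(diff ** 2) / (2 * sigma_r ** 2))  # range kernel
            out[i, j] = np.sum(wgt * patch) / np.sum(wgt)
    return out

rng = np.random.default_rng(3)
phantom = np.zeros((32, 32)); phantom[:, 16:] = 100.0       # sharp boundary
noisy = phantom + rng.normal(0, 5, phantom.shape)
den = adaptive_denoise(noisy)
# Noise in the flat region drops while the 0->100 edge stays sharp.
print(round(float(den[:, :14].std()), 2), "vs", round(float(noisy[:, :14].std()), 2))
```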
NASA Astrophysics Data System (ADS)
Xu, Miaomiao; Bu, Xiongzhu; Yu, Jing; He, Zilu
2018-01-01
Based on studies of Earth infrared radiation and the further requirement of anti-cloud interference capability for a spinning projectile's infrared attitude measurement, a compensation method for cloud infrared radiation interference is proposed. First, a theoretical model of the infrared radiation interference is established by analyzing the generation mechanism and interference characteristics of cloud infrared radiation. Then, the influence of cloud infrared radiation on the attitude angle is calculated in two situations. In the first, the projectile is in cloud, and the maximum roll-angle error can reach ±20 deg. In the second, the projectile is outside of cloud, and the projectile's attitude angle cannot be measured at all. Finally, a multisensor weighted fusion algorithm based on the trust function method is proposed to reduce the influence of cloud infrared radiation. The results of semiphysical experiments show that the roll-angle error with the weighted fusion algorithm can be kept within ±0.5 deg in the presence of cloud infrared radiation interference. The proposed method improves the accuracy of the roll angle by nearly four times in attitude measurement and also addresses the low accuracy of infrared attitude measurement in the navigation and guidance field.
NASA Technical Reports Server (NTRS)
Kyle, H. Lee; Hucek, Richard R.; Groveman, Brian; Frey, Richard
1990-01-01
The archived Earth radiation budget (ERB) products produced from the Nimbus-7 ERB narrow field-of-view scanner are described. The principal products are broadband outgoing longwave radiation (4.5 to 50 microns), reflected solar radiation (0.2 to 4.8 microns), and the net radiation. Daily and monthly averages are presented on a fixed global equal-area (500 sq km) grid for the period May 1979 to May 1980. Two independent algorithms are used to estimate the outgoing fluxes from the observed radiances. The algorithms are described and the results compared. The products are divided into three subsets: the Scene Radiance Tapes (SRT) contain the calibrated radiances; the Sorting into Angular Bins (SAB) tape contains the SAB-produced shortwave, longwave, and net radiation products; and the Maximum Likelihood Cloud Estimation (MLCE) tapes contain the MLCE products. The tape formats are described in detail.
Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael
2009-01-01
Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
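The matrix-multiplication ABFT mentioned above is the classic checksum construction: append a checksum row and column before multiplying, then re-verify the sums to detect corruption. A sketch, with the tolerance and the simulated fault chosen for illustration:

```python
import numpy as np

def abft_matmul(A, B):
    """ABFT sketch for C = A @ B: append a checksum row to A and a checksum
    column to B; the (n+1)x(n+1) product then carries checksums of C that
    can be re-verified later to detect radiation-induced corruption."""
    Ac = np.vstack([A, A.sum(axis=0)])                  # checksum row
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # checksum column
    return Ac @ Br

def verify(Cf, tol=1e-8):
    """Check that the stored checksums still match the data block of C."""
    C = Cf[:-1, :-1]
    return (np.allclose(Cf[-1, :-1], C.sum(axis=0), atol=tol)
            and np.allclose(Cf[:-1, -1], C.sum(axis=1), atol=tol))

rng = np.random.default_rng(4)
Cf = abft_matmul(rng.normal(size=(4, 4)), rng.normal(size=(4, 4)))
print("clean result verified:", verify(Cf))
Cf[1, 2] += 0.5                    # simulated bit-flip in one element
print("corrupted result verified:", verify(Cf))
```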
Spinning projectile's attitude measurement with LW infrared radiation under sea-sky background
NASA Astrophysics Data System (ADS)
Xu, Miaomiao; Bu, Xiongzhu; Yu, Jing; He, Zilu
2018-05-01
With the development of infrared radiation research under sea-sky backgrounds and the requirements of spinning projectile attitude measurement, the sea-sky infrared radiation field is used to determine a spinning projectile's attitude angle in place of inertial sensors. First, the generation mechanism of sea-sky infrared radiation is analysed, and a mathematical model of sea-sky infrared radiation in the LW (long-wave) infrared 8-14 μm band is derived by calculating the sea-surface and sky infrared radiation. Second, based on the motion characteristics of a spinning projectile, an attitude measurement model for infrared sensors on the projectile's three axes is established, and the feasibility of the model is verified by simulation. Finally, an attitude calculation algorithm is designed to improve the attitude angle estimation accuracy. Semi-physical experiments show that the estimation error of the segmented interactive algorithm for pitch and roll angle is within ±1.5°. The attitude measurement method is effective and feasible, and provides an accurate measurement basis for the guidance of spinning projectiles.
Normalization Of Thermal-Radiation Form-Factor Matrix
NASA Technical Reports Server (NTRS)
Tsuyuki, Glenn T.
1994-01-01
Report describes algorithm that adjusts form-factor matrix in TRASYS computer program, which calculates intraspacecraft radiative interchange among various surfaces and environmental heat loading from sources such as sun.
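TRASYS's actual adjustment procedure is not given in this summary; purely as a rough illustration of the underlying constraint, here is a sketch that rescales each row of a ray-traced form-factor matrix so it sums to one (energy conservation in a closed enclosure).

```python
import numpy as np

def normalize_form_factors(F):
    # In a closed enclosure each row of the form-factor matrix must sum
    # to 1 (energy conservation); rescale rows to restore that property.
    F = np.asarray(F, dtype=float)
    return F / F.sum(axis=1, keepdims=True)

F = np.array([[0.00, 0.55, 0.43],   # rows slightly off from ray-trace noise
              [0.32, 0.00, 0.66],
              [0.25, 0.73, 0.00]])
print(normalize_form_factors(F).sum(axis=1))   # -> [1. 1. 1.]
```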
Ray Tracing Through Non-Imaging Concentrators
NASA Astrophysics Data System (ADS)
Greynolds, Alan W.
1984-01-01
A generalized algorithm for tracing rays through both imaging and non-imaging radiation collectors is presented. A computer program based on the algorithm is then applied to analyzing various two-stage Winston concentrators.
1993-01-01
The Building of the USAF Panasonic UD-809AS Algorithm. Katherine M. Arnold, Research Associate, Radiation Dosimetry Branch, Brooks Air Force Base; August 1993. [Only scanned-report fragments survive extraction; cited references include Panasonic Industrial Company (Secaucus, New Jersey) and Thurlow, Ronald M., "Neutron Dosimetry Using a Panasonic Thermoluminescent Dosimeter."]
NASA Astrophysics Data System (ADS)
Lord, Jesse W.; Boley, A. C.; Durisen, R. H.
2006-12-01
We present a comparison between two three-dimensional radiative hydrodynamics simulations of a gravitationally unstable 0.07 Msun protoplanetary disk around a 0.5 Msun star. The first simulation is the radiatively cooled disk described in Boley et al. (2006, ApJ, 651). This simulation employed an algorithm that uses 3D flux-limited diffusion wherever the vertical Rosseland optical depth is greater than 2/3, which defines the optically thick region. The optically thin atmosphere of the disk, which cools according to its emissivity, is coupled to the optically thick region through an Eddington-like boundary condition. The second simulation employed an algorithm that uses a combination of solving the radiative transfer equation along rays in the z direction and flux limited diffusion in the r and phi directions on a cylindrical grid. We compare the following characteristics of the disk simulations: the mass transport and torques induced by gravitational instabilities, the effective temperature profiles of the disks, the gravitational and Reynolds stresses measured in the disk and those expected in an alpha-disk, and the amplitudes of the Fourier modes. This work has been supported by the National Science Foundation through grant AST-0452975 (astronomy REU program to Indiana University).
Novel medical image enhancement algorithms
NASA Astrophysics Data System (ADS)
Agaian, Sos; McClendon, Stephen A.
2010-01-01
In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
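As an illustration of the first algorithm's backbone, here is a minimal alpha-trimmed mean filter; the window size and trim fraction are arbitrary choices for the sketch, not the paper's settings.

```python
import numpy as np

def alpha_trimmed_mean(img, size=3, alpha=0.2):
    # Within each size x size window, drop the alpha fraction of lowest
    # and highest pixels, then average what remains.
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    trim = int(alpha * size * size)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = np.sort(padded[i:i + size, j:j + size].ravel())
            out[i, j] = window[trim:window.size - trim].mean()
    return out

img = np.random.randint(0, 256, (64, 64)).astype(float)
smoothed = alpha_trimmed_mean(img)
```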
Evaluation of two Vaisala RS92 radiosonde solar radiative dry bias correction algorithms
Dzambo, Andrew M.; Turner, David D.; Mlawer, Eli J.
2016-04-12
Solar heating of the relative humidity (RH) probe on Vaisala RS92 radiosondes results in a large dry bias in the upper troposphere. Two different algorithms (Miloshevich et al., 2009, MILO hereafter; and Wang et al., 2013, WANG hereafter) have been designed to account for this solar radiative dry bias (SRDB). These corrections are markedly different, with MILO adding up to 40 % more moisture to the original radiosonde profile than WANG; however, the impact of the two algorithms varies with height. The accuracy of these two algorithms is evaluated using three different approaches: a comparison of precipitable water vapor (PWV), downwelling radiative closure with a surface-based microwave radiometer at a high-altitude site (5.3 km m.s.l.), and upwelling radiative closure with the space-based Atmospheric Infrared Sounder (AIRS). The PWV computed from the uncorrected and corrected RH data is compared against PWV retrieved from ground-based microwave radiometers at tropical, midlatitude, and arctic sites. Although MILO generally adds more moisture to the original radiosonde profile in the upper troposphere compared to WANG, both corrections yield similar changes to the PWV, and the corrected data agree well with the ground-based retrievals. The two closure activities, done for clear-sky scenes, use the radiative transfer models MonoRTM and LBLRTM to compute radiance from the radiosonde profiles to compare against spectral observations. Both WANG- and MILO-corrected RHs are statistically better than the original RH in all cases except for the driest 30 % of cases in the downwelling experiment, where both algorithms add too much water vapor to the original profile. In the upwelling experiment, the RH correction applied by the WANG vs. MILO algorithm is statistically different above 10 km for the driest 30 % of cases and above 8 km for the moistest 30 % of cases, suggesting that the MILO correction performs better than the WANG in clear-sky scenes. Lastly, the cause of this statistical significance is likely explained by the fact that the WANG correction also accounts for cloud cover, a condition not accounted for in the radiance closure experiments.
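For the PWV comparison step, a generic column integration of specific humidity over pressure (not the MILO/WANG correction itself) looks like the sketch below; the sounding values are hypothetical.

```python
import numpy as np

def precipitable_water_mm(pressure_hpa, specific_humidity):
    # PWV = (1 / (rho_w * g)) * integral of q dp  (p in Pa, q in kg/kg)
    p = np.asarray(pressure_hpa, float) * 100.0   # surface level first
    q = np.asarray(specific_humidity, float)
    g, rho_w = 9.81, 1000.0                       # m/s^2, kg/m^3
    integral = np.sum(0.5 * (q[:-1] + q[1:]) * (p[:-1] - p[1:]))  # trapezoid
    return integral / (g * rho_w) * 1000.0        # meters -> mm

p_hpa = [1000, 850, 700, 500, 300, 100]           # hypothetical levels
q = [0.012, 0.008, 0.004, 0.0015, 0.0003, 1e-5]   # hypothetical humidity
print(round(precipitable_water_mm(p_hpa, q), 1))  # column water (~32 mm)
```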
NASA Astrophysics Data System (ADS)
Zhong, Hualiang; Chetty, Indrin J.
2017-06-01
Tumor regression during the course of fractionated radiotherapy confounds the ability to accurately estimate the total dose delivered to tumor targets. Here we present a new criterion to improve the accuracy of image intensity-based dose mapping operations for adaptive radiotherapy for patients with non-small cell lung cancer (NSCLC). Six NSCLC patients were retrospectively investigated in this study. An image intensity-based B-spline registration algorithm was used for deformable image registration (DIR) of weekly CBCT images to a reference image. The resultant displacement vector fields were employed to map the doses calculated on weekly images to the reference image. The concept of energy conservation was introduced as a criterion to evaluate the accuracy of the dose mapping operations. A finite element method (FEM)-based mechanical model was implemented to improve the performance of the B-spline-based registration algorithm in regions involving tumor regression. For the six patients, deformed tumor volumes changed by 21.2 ± 15.0% and 4.1 ± 3.7% on average for the B-spline and the FEM-based registrations performed from fraction 1 to fraction 21, respectively. The energy deposited in the gross tumor volume (GTV) was 0.66 Joules (J) per fraction on average. The energy derived from the fractional dose reconstructed by the B-spline and FEM-based DIR algorithms in the deformed GTVs was 0.51 J and 0.64 J, respectively. Based on landmark comparisons for the six patients, the mean error for the FEM-based DIR algorithm was 2.5 ± 1.9 mm. The cross-correlation coefficient between the landmark-measured displacement error and the loss of radiation energy was -0.16 for the FEM-based algorithm. To avoid uncertainties in measuring distorted landmarks, the B-spline-based registrations were also compared directly to the FEM registrations; their displacement differences averaged 4.2 ± 4.7 mm and were correlated with the relative loss of radiation energy, with a cross-correlation coefficient of 0.68. Judged by the principle of energy conservation, the FEM-based mechanical model performs better than the B-spline-based DIR algorithm. It is recommended that the principle of energy conservation be incorporated into a comprehensive QA protocol for adaptive radiotherapy.
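The energy-conservation criterion reduces to comparing E = Σ dose × mass over the structure before and after mapping. A toy sketch, assuming a per-voxel density map is already available (how density is obtained is an assumption here):

```python
import numpy as np

def deposited_energy(dose_gy, density_g_cm3, voxel_cm3, mask):
    # E = sum(dose * mass); dose in Gy (J/kg), voxel mass in kg
    mass_kg = density_g_cm3 * voxel_cm3 * 1e-3   # g -> kg per voxel
    return float(np.sum(dose_gy[mask] * mass_kg[mask]))

dose = np.full((10, 10, 10), 2.0)                # toy uniform 2 Gy dose
rho = np.ones((10, 10, 10))                      # ~water density (g/cm^3)
gtv = np.zeros((10, 10, 10), dtype=bool)
gtv[2:8, 2:8, 2:8] = True                        # toy GTV mask
print(deposited_energy(dose, rho, 0.008, gtv))   # 2 mm voxels -> ~3.5e-3 J
```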
Akbar, Shahid; Hayat, Maqsood; Iqbal, Muhammad; Jan, Mian Ahmad
2017-06-01
Cancer is a fatal disease, responsible for one-quarter of all deaths in developed countries. Traditional anticancer therapies such as chemotherapy and radiation are highly expensive, susceptible to errors, and often ineffective, and these conventional techniques induce severe side-effects on human cells. Due to the perilous impact of cancer, the development of an accurate and highly efficient intelligent computational model is desirable for the identification of anticancer peptides. In this paper, an evolutionary intelligent genetic algorithm-based ensemble model, 'iACP-GAEnsC', is proposed for the identification of anticancer peptides. In this model, the protein sequences are formulated using three different discrete feature representation methods, i.e., amphiphilic pseudo amino acid composition, g-gap dipeptide composition, and reduced amino acid alphabet composition. The performance of the extracted feature spaces is investigated separately, and the spaces are then merged to exhibit the significance of hybridization. In addition, the predicted results of the individual classifiers are combined using an optimized genetic algorithm and a simple majority technique in order to enhance the true classification rate. It is observed that genetic algorithm-based ensemble classification outperforms both individual classifiers and simple majority-voting ensembles. Genetic algorithm-based ensemble classification performs best on the hybrid feature space, with an accuracy of 96.45%. In comparison to existing techniques, the 'iACP-GAEnsC' model achieves remarkable improvement in terms of various performance metrics. Based on the simulation results, the 'iACP-GAEnsC' model might become a leading tool in the field of drug design and proteomics for researchers. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Dubovik, O.; Litvinov, P.; Lapyonok, T.; Ducos, F.; Fuertes, D.; Huang, X.; Torres, B.; Aspetsberger, M.; Federspiel, C.
2014-12-01
The POLDER imager on board the PARASOL micro-satellite is the only satellite polarimeter to have provided an extensive (~9 year) record of detailed polarimetric observations of the Earth's atmosphere from space. POLDER/PARASOL registers spectral polarimetric characteristics of the reflected atmospheric radiation at up to 16 viewing directions over each observed pixel. Such observations have very high sensitivity to the variability of the properties of the atmosphere and underlying surface, and they cannot be adequately interpreted using the look-up-table retrieval algorithms developed for analyzing the mono-viewing, intensity-only observations traditionally used in atmospheric remote sensing. Therefore, a new enhanced retrieval algorithm, GRASP (Generalized Retrieval of Aerosol and Surface Properties), has been developed and applied to the processing of PARASOL data. GRASP relies on highly optimized statistical fitting of the observations and derives a large number of unknowns for each observed pixel. The algorithm uses an elaborate model of the atmosphere and fully accounts for all multiple interactions of scattered solar light with aerosol, gases and the underlying surface. All calculations are implemented during inversion and no look-up tables are used. The algorithm is very flexible in its utilization of various types of a priori constraints on the retrieved characteristics and in its parameterization of the surface-atmosphere system. It is also optimized for high-performance computing. The results of the PARASOL data processing will be presented, with emphasis on the transferability and adaptability of the developed retrieval concept for processing polarimetric observations of other planets. For example, flexibility and possible alternatives in modeling the properties of aerosol polydisperse mixtures, particle composition and shape, surface reflectance, etc. will be discussed.
The Continuous Intercomparison of Radiation Codes (CIRC): Phase I Cases
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros; Mlawer, Eli; Delamere, Jennifer; Shippert, Timothy; Turner, David D.; Miller, Mark A.; Minnis, Patrick; Clough, Shepard; Barker, Howard; Ellingson, Robert
2007-01-01
CIRC aspires to be the successor to ICRCCM (Intercomparison of Radiation Codes in Climate Models). It is envisioned as an evolving and regularly updated reference source for GCM-type radiative transfer (RT) code evaluation, with the principal goal of contributing to the improvement of RT parameterizations. CIRC is jointly endorsed by DOE's Atmospheric Radiation Measurement (ARM) program and the GEWEX Radiation Panel (GRP). CIRC's goal is to provide test cases for which GCM RT algorithms should be performing at their best, i.e., well-characterized clear-sky and homogeneous, overcast cloudy cases. What distinguishes CIRC from previous intercomparisons is that its pool of cases is based on observed datasets. The bulk of the atmospheric and surface input, as well as the radiative fluxes, comes from ARM observations as documented in the Broadband Heating Rate Profile (BBHRP) product. BBHRP also provides reference calculations from AER's RRTM RT algorithms that can be used to select the most suitable set of cases and to provide a first-order estimate of our ability to achieve radiative flux closure given the limitations in our knowledge of the atmospheric state.
Algorithm and program for information processing with the filin apparatus
NASA Technical Reports Server (NTRS)
Gurin, L. S.; Morkrov, V. S.; Moskalenko, Y. I.; Tsoy, K. A.
1979-01-01
The reduction of spectral radiation data from space sources is described. The algorithm and program for identifying segments of information obtained from the Filin telescope-spectrometer on Salyut-4 are presented. The information segments represent suspected X-ray sources. The proposed algorithm is a lowest-level algorithm: following evaluation, information free of uninformative segments is subject to further processing with algorithms of a higher level. The language used is FORTRAN 4.
A contourlet transform based algorithm for real-time video encoding
NASA Astrophysics Data System (ADS)
Katsigiannis, Stamos; Papaioannou, Georgios; Maroulis, Dimitris
2012-06-01
In recent years, real-time video communication over the internet has been widely utilized for applications like video conferencing. Streaming live video over heterogeneous IP networks, including wireless networks, requires video coding algorithms that can support various levels of quality in order to adapt to the network end-to-end bandwidth and transmitter/receiver resources. In this work, a scalable video coding and compression algorithm based on the Contourlet Transform is proposed. The algorithm allows for multiple levels of detail, without re-encoding the video frames, by just dropping the encoded information referring to higher resolution than needed. Compression is achieved by means of lossy and lossless methods, as well as variable bit rate encoding schemes. Furthermore, due to the transformation utilized, it does not suffer from blocking artifacts that occur with many widely adopted compression algorithms. Another highly advantageous characteristic of the algorithm is the suppression of noise induced by low-quality sensors usually encountered in web-cameras, due to the manipulation of the transform coefficients at the compression stage. The proposed algorithm is designed to introduce minimal coding delay, thus achieving real-time performance. Performance is enhanced by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make this method suitable for applications like video-conferencing that demand real-time performance, along with the highest visual quality possible for each user. Through the presented performance and quality evaluation of the algorithm, experimental results show that the proposed algorithm achieves better or comparable visual quality relative to other compression and encoding methods tested, while maintaining a satisfactory compression ratio. Especially at low bitrates, it provides more human-eye friendly images compared to algorithms utilizing block-based coding, like the MPEG family, as it introduces fuzziness and blurring instead of artificial block artifacts.
NASA Technical Reports Server (NTRS)
1979-01-01
Earth and solar radiation budget measurements were examined. Sensor calibration and measurement accuracy were emphasized. Past work on the earth's radiation field, which must be used in reducing observations of the radiation field, was reviewed. Using a finite difference radiative transfer algorithm, models of the angular and spectral dependence of the earth's radiation field were developed.
Mohamed, Abdallah S. R.; Ruangskul, Manee-Naad; Awan, Musaddiq J.; Baron, Charles A.; Kalpathy-Cramer, Jayashree; Castillo, Richard; Castillo, Edward; Guerrero, Thomas M.; Kocak-Uzel, Esengul; Yang, Jinzhong; Court, Laurence E.; Kantor, Michael E.; Gunn, G. Brandon; Colen, Rivka R.; Frank, Steven J.; Garden, Adam S.; Rosenthal, David I.
2015-01-01
Purpose To develop a quality assurance (QA) workflow by using a robust, curated, manually segmented anatomic region-of-interest (ROI) library as a benchmark for quantitative assessment of different image registration techniques used for head and neck radiation therapy–simulation computed tomography (CT) with diagnostic CT coregistration. Materials and Methods Radiation therapy–simulation CT images and diagnostic CT images in 20 patients with head and neck squamous cell carcinoma treated with curative-intent intensity-modulated radiation therapy between August 2011 and May 2012 were retrospectively retrieved with institutional review board approval. Sixty-eight reference anatomic ROIs with gross tumor and nodal targets were then manually contoured on images from each examination. Diagnostic CT images were registered with simulation CT images rigidly and by using four deformable image registration (DIR) algorithms: atlas based, B-spline, demons, and optical flow. The resultant deformed ROIs were compared with manually contoured reference ROIs by using similarity coefficient metrics (i.e., Dice similarity coefficient) and surface distance metrics (i.e., 95% maximum Hausdorff distance). The nonparametric Steel test with control was used to compare different DIR algorithms with rigid image registration (RIR) by using the post hoc Wilcoxon signed-rank test for stratified metric comparison. Results A total of 2720 anatomic and 50 tumor and nodal ROIs were delineated. All DIR algorithms showed improved performance over RIR for anatomic and target ROI conformance, as shown for most comparison metrics (Steel test, P < .008 after Bonferroni correction). The performance of different algorithms varied substantially with stratification by specific anatomic structures or category and simulation CT section thickness. Conclusion Development of a formal ROI-based QA workflow for registration assessment demonstrated improved performance with DIR techniques over RIR. After QA, DIR implementation should be the standard for head and neck diagnostic CT and simulation CT alignment, especially for target delineation. © RSNA, 2014 Online supplemental material is available for this article. PMID:25380454
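One of the conformance metrics, the Dice similarity coefficient, is simple to compute from binary masks; a minimal sketch with toy ROIs:

```python
import numpy as np

def dice(a, b):
    # DSC = 2 |A intersect B| / (|A| + |B|) for binary ROI masks
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

ref = np.zeros((50, 50), bool); ref[10:40, 10:40] = True    # reference ROI
test = np.zeros((50, 50), bool); test[12:42, 12:42] = True  # deformed ROI
print(round(dice(ref, test), 3))   # ~0.871 for this 2-pixel shift
```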
Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming
2016-10-17
Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct the optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as forward model. For a high computational efficiency, the gradient of objective function is calculated using an adjoint equation technique. SQP algorithm is employed to solve the inverse problem and the regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posed problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
NASA Technical Reports Server (NTRS)
Alfano, Robert R. (Inventor); Cai, Wei (Inventor)
2007-01-01
A reconstruction technique for reducing the computational burden in 3D image processing, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to provide high-speed inverse computations. The inverse algorithm uses a hybrid transform to provide fast Fourier inversion for data from multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.
NASA Astrophysics Data System (ADS)
Xiao, Ying; Michalski, Darek; Censor, Yair; Galvin, James M.
2004-07-01
The efficient delivery of intensity modulated radiation therapy (IMRT) depends on finding optimized beam intensity patterns that produce dose distributions which meet given constraints for the tumour as well as any critical organs to be spared. Many optimization algorithms that are used for beamlet-based inverse planning are susceptible to large variations of neighbouring intensities. Accurately delivering an intensity pattern with a large number of extrema can prove impossible given the mechanical limitations of standard multileaf collimator (MLC) delivery systems. In this study, we apply Cimmino's simultaneous projection algorithm to the beamlet-based inverse planning problem, modelled mathematically as a system of linear inequalities. We show that using this method allows us to arrive at a smoother intensity pattern. In our experimental observations, including nonlinear terms in the simultaneous projection algorithm to deal with dose-volume histogram (DVH) constraints does not compromise this property. The smoothness properties are compared with those from other optimization algorithms, including simulated annealing and the gradient descent method. The simultaneous property of these algorithms is ideally suited to parallel computing technologies.
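Cimmino's method averages, at each step, the projections of the current iterate onto every violated half-space of the system Ax ≤ b. A bare-bones sketch on a toy feasibility problem, without the paper's nonlinear DVH terms:

```python
import numpy as np

def cimmino(A, b, x0, n_iter=500, lam=1.0):
    # Simultaneous projection: average the projections of x onto each
    # violated half-space a_i . x <= b_i, weighted equally.
    x = x0.astype(float).copy()
    m = A.shape[0]
    w = np.full(m, 1.0 / m)
    row_norm2 = (A * A).sum(axis=1)
    for _ in range(n_iter):
        r = A @ x - b                        # positive entries -> violated
        step = np.where(r > 0, r / row_norm2, 0.0)
        x -= lam * (A.T @ (w * step))        # averaged projection update
    return x

A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([1.0, 1.0, -1.5])               # x <= 1, y <= 1, x + y >= 1.5
x = cimmino(A, b, np.zeros(2))
print(x, A @ x <= b + 1e-6)                   # a feasible, "smooth" point
```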
Sensitivity of NTCP parameter values against a change of dose calculation algorithm.
Brink, Carsten; Berg, Martin; Nielsen, Morten
2007-09-01
Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations, the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for the three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.
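The abstract does not name the three NTCP models, so purely as a stand-in illustration of how parameter values enter such a model, here is the widely used Lyman-Kutcher-Burman form; the parameter values below are invented for the example.

```python
from math import erf, sqrt

def ntcp_lkb(eud, td50, m):
    # NTCP = Phi((EUD - TD50) / (m * TD50)), Phi = standard normal CDF
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# The same lung EUD scored with two (invented) parameter sets, e.g. as
# they might be fitted against PB- vs CC-calculated dose distributions:
print(ntcp_lkb(18.0, td50=30.0, m=0.30))   # ~0.09
print(ntcp_lkb(18.0, td50=28.0, m=0.35))   # ~0.15
```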
NASA Astrophysics Data System (ADS)
Platnick, S.; Wind, G.; Zhang, Z.; Ackerman, S. A.; Maddux, B. C.
2012-12-01
The optical and microphysical structure of warm boundary layer marine clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS (Moderate Resolution Imaging Spectroradiometer) on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1km retrievals of cloud optical thickness and effective particle size are provided, as well as the derived water path. In addition, the cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate effective radii results using the 1.6, 2.1, and 3.7 μm spectral channels. Cloud retrieval statistics are highly sensitive to how a pixel identified as being "not-clear" by a cloud mask (e.g., the MOD35/MYD35 product) is determined to be useful for an optical retrieval based on a 1-D cloud model. The Collection 5 MODIS retrieval algorithm removed pixels associated with cloud edges (defined by immediate adjacency to "clear" MOD/MYD35 pixels) as well as ocean pixels with partly cloudy elements in the 250m MODIS cloud mask - part of the so-called Clear Sky Restoral (CSR) algorithm. Collection 6 attempts retrievals for those two pixel populations, but allows a user to isolate or filter out the populations via CSR pixel-level Quality Assessment (QA) assignments. In this paper, using the preliminary Collection 6 MOD06 product, we present global and regional statistical results of marine warm cloud retrieval sensitivities to the cloud edge and 250m partly cloudy pixel populations. As expected, retrievals for these pixels are generally consistent with a breakdown of the 1D cloud model. While optical thickness for these suspect pixel populations may have some utility for radiative studies, the retrievals should be used with extreme caution for process and microphysical studies.
GPU-accelerated Monte Carlo convolution/superposition implementation for dose calculation.
Zhou, Bo; Yu, Cedric X; Chen, Danny Z; Hu, X Sharon
2010-11-01
Dose calculation is a key component in radiation treatment planning systems. Its performance and accuracy are crucial to the quality of treatment plans as emerging advanced radiation therapy technologies are exerting ever tighter constraints on dose calculation. A common practice is to choose either a deterministic method such as the convolution/superposition (CS) method for speed or a Monte Carlo (MC) method for accuracy. The goal of this work is to boost the performance of a hybrid Monte Carlo convolution/superposition (MCCS) method by devising a graphics processing unit (GPU) implementation so as to make the method practical for day-to-day usage. Although the MCCS algorithm combines the merits of MC fluence generation and CS fluence transport, it is still not fast enough to be used as a day-to-day planning tool. To alleviate the speed issue of MC algorithms, the authors adopted MCCS as their target method and implemented a GPU-based version. In order to fully utilize the GPU computing power, the MCCS algorithm is modified to match the GPU hardware architecture. The performance of the authors' GPU-based implementation on an Nvidia GTX260 card is compared to a multithreaded software implementation on a quad-core system. A speedup in the range of 6.7-11.4x is observed for the clinical cases used. The less than 2% statistical fluctuation also indicates that the accuracy of the authors' GPU-based implementation is in good agreement with the results from the quad-core CPU implementation. This work shows that GPU is a feasible and cost-efficient solution compared to other alternatives such as using cluster machines or field-programmable gate arrays for satisfying the increasing demands on computation speed and accuracy of dose calculation. But there are also inherent limitations of using GPU for accelerating MC-type applications, which are also analyzed in detail in this article.
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Lindau, R.; Varnai, T.; Simmer, C.
2009-04-01
Two main groups of statistical methods used in the Earth sciences are geostatistics and stochastic modelling. Geostatistical methods, such as the various kriging algorithms, aim at estimating the mean value at every point as well as possible. In the case of sparse measurements, such fields have less variability at small scales and a narrower distribution than the true field. This can lead to biases if a nonlinear process is simulated on such a kriged field. Stochastic modelling aims at reproducing the structure of the data. One of the stochastic modelling methods, the so-called surrogate data approach, replicates the value distribution and power spectrum of a certain data set. However, while stochastic methods reproduce the statistical properties of the data, the location of the measurement is not considered. Because radiative transfer through clouds is a highly nonlinear process, it is essential to model the distribution (e.g. of optical depth, extinction, liquid water content or liquid water path) accurately, as well as the correlations in the cloud field, because of horizontal photon transport. This explains the success of surrogate cloud fields for use in 3D radiative transfer studies. However, up to now we could only achieve good results for the radiative properties averaged over the field, but not for a radiation measurement located at a certain position. Therefore we have developed a new algorithm that combines the accuracy of stochastic (surrogate) modelling with the positioning capabilities of kriging. In this way, we can automatically profit from the large geostatistical literature and software. The algorithm is tested on cloud fields from large eddy simulations (LES). On these clouds a measurement is simulated. From the pseudo-measurement we estimate the distribution and power spectrum. Furthermore, the pseudo-measurement is kriged to a field the size of the final surrogate cloud. The distribution, the spectrum and the kriged field are the inputs to the algorithm. This algorithm is similar to the standard iterative amplitude adjusted Fourier transform (IAAFT) algorithm, but has an additional iterative step in which the surrogate field is nudged towards the kriged field. The nudging strength is gradually reduced to zero. We work with four types of pseudo-measurements: one zenith-pointing measurement (which together with the wind produces a line measurement), five zenith-pointing measurements, and a slow and a fast azimuth scan (which together with the wind produce spirals). Because we work with LES clouds and the truth is known, we can validate the algorithm by performing 3D radiative transfer calculations on the original LES clouds and on the new surrogate clouds. For comparison, the radiative properties of the kriged fields and of standard surrogate fields are also computed. Preliminary results already show that these new surrogate clouds reproduce the structure of the original clouds very well, and that the minima and maxima are located where the pseudo-measurements see them. The main limitation seems to be the amount of data, which is especially limited in the case of just one zenith-pointing measurement.
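The core IAAFT loop alternates a spectrum-imposing step and a rank-remapping step; the nudging step described above can be slotted between them. A 1-D sketch in which the nudging schedule and the kriged-field proxy are assumptions, not the authors' choices:

```python
import numpy as np

def iaaft_nudged(measured, kriged, n_iter=200):
    rng = np.random.default_rng(0)
    sorted_vals = np.sort(measured)              # target value distribution
    amp = np.abs(np.fft.rfft(measured))          # target power spectrum
    s = rng.permutation(measured).astype(float)  # random starting field
    for k in range(n_iter):
        # 1) impose the target amplitude spectrum, keeping current phases
        spec = np.fft.rfft(s)
        s = np.fft.irfft(amp * np.exp(1j * np.angle(spec)), n=len(measured))
        # 2) nudge toward the kriged field; strength decays to zero
        alpha = 0.5 * (1.0 - k / n_iter)         # assumed schedule
        s = (1.0 - alpha) * s + alpha * kriged
        # 3) impose the target value distribution by rank remapping
        s[np.argsort(s)] = sorted_vals
    return s

t = np.linspace(0.0, 1.0, 256, endpoint=False)
measured = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.default_rng(1).standard_normal(256)
kriged = np.convolve(measured, np.ones(9) / 9, mode="same")  # crude proxy
surrogate = iaaft_nudged(measured, kriged)
```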
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cunliffe, Alexandra R.; Armato, Samuel G.; White, Bradley
2015-01-15
Purpose: To characterize the effects of deformable image registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60 Gy, 2 Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pretherapy (4–75 days) CT scan and a treatment planning scan with an associated dose map were collected. To establish correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pretherapy scans then were coregistered with planning scans (and associated dose maps) using the demons deformable registration algorithm and two variants of the Fraunhofer MEVIS algorithm ("Fast" and "EMPIRE10"). Landmark points in each pretherapy scan were automatically mapped to the planning scan using the displacement vector field output from each of the three algorithms. The Euclidean distance between manually and automatically mapped landmark points (d_E) and the absolute difference in planned dose (|ΔD|) were calculated. Using regression modeling, |ΔD| was modeled as a function of d_E, dose (D), dose standard deviation (SD_dose) in an eight-pixel neighborhood, and the registration algorithm used. Results: Over 1400 landmark point pairs were identified, with 58–93 (median: 84) points identified per patient. Average |ΔD| across patients was 3.5 Gy (range: 0.9–10.6 Gy). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, with an average d_E across patients of 5.2 mm (compared with >7 mm for the other two algorithms). Consequently, average |ΔD| was also lowest using the Fraunhofer MEVIS EMPIRE10 algorithm. |ΔD| increased significantly as a function of d_E (0.42 Gy/mm), D (0.05 Gy/Gy), SD_dose (1.4 Gy/Gy), and the algorithm used (≤1 Gy). Conclusions: An average error of <4 Gy in radiation dose was introduced when points were mapped between CT scan pairs using deformable registration, with the majority of points yielding dose-mapping error <2 Gy (approximately 3% of the total prescribed dose). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, resulting in the smallest errors in mapped dose. Dose differences following registration increased significantly with increasing spatial registration errors, dose, and dose gradient (i.e., SD_dose). This model provides a measurement of the uncertainty in the radiation dose when points are mapped between serial CT scans through deformable registration.
Note: thermal imaging enhancement algorithm for gas turbine aerothermal characterization.
Beer, S K; Lawson, S A
2013-08-01
An algorithm was developed to convert radiation intensity images acquired using a black and white CCD camera to thermal images without requiring knowledge of incident background radiation. This unique infrared (IR) thermography method was developed to determine aerothermal characteristics of advanced cooling concepts for gas turbine cooling application. Compared to IR imaging systems traditionally used for gas turbine temperature monitoring, the system developed for the current study is relatively inexpensive and does not require calibration with surface mounted thermocouples.
Computerized tomography platform using beta rays
NASA Astrophysics Data System (ADS)
Paetkau, Owen; Parsons, Zachary; Paetkau, Mark
2017-12-01
A computerized tomography (CT) system using a 0.1 μCi Sr-90 beta source, Geiger counter, and low density foam samples was developed. A simple algorithm was used to construct images from the data collected with the beta CT scanner. The beta CT system is analogous to X-ray CT as both types of radiation are sensitive to density variations. This system offers a platform for learning opportunities in an undergraduate laboratory, covering topics such as image reconstruction algorithms, radiation exposure, and the energy dependence of absorption.
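The paper does not specify its "simple algorithm"; one plausible candidate at this educational level is unfiltered backprojection, sketched below on a synthetic sinogram of a centered disk.

```python
import numpy as np

def backproject(sinogram, angles_deg, size):
    # Unfiltered backprojection: smear each attenuation profile back
    # across the image along its acquisition angle, then average.
    img = np.zeros((size, size))
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size] - c
    for profile, ang in zip(sinogram, np.deg2rad(angles_deg)):
        s = x * np.cos(ang) + y * np.sin(ang) + c   # detector coordinate
        idx = np.clip(np.round(s).astype(int), 0, len(profile) - 1)
        img += profile[idx]
    return img / len(angles_deg)

# Toy sinogram of a centered disk: every view sees the same chord profile.
size, angles = 64, np.arange(0, 180, 10)
xs = np.arange(size) - (size - 1) / 2.0
disk_chord = 2 * np.sqrt(np.clip(20.0**2 - xs**2, 0, None))
sino = np.tile(disk_chord, (len(angles), 1))
recon = backproject(sino, angles, size)
```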
Dudik, Joshua M; Kurosu, Atsuko; Coyle, James L; Sejdić, Ervin
2015-04-01
Cervical auscultation with high resolution sensors is currently under consideration as a method of automatically screening for specific swallowing abnormalities. To be clinically useful without human involvement, any devices based on cervical auscultation should be able to detect specified swallowing events in an automatic manner. In this paper, we comparatively analyze the density-based spatial clustering of applications with noise algorithm (DBSCAN), a k-means based algorithm, and an algorithm based on quadratic variation as methods of differentiating periods of swallowing activity from periods of time without swallows. These algorithms utilized swallowing vibration data exclusively and compared the results to a gold standard measure of swallowing duration. Data were collected from 23 subjects who were actively suffering from swallowing difficulties. Comparing the performance of the DBSCAN algorithm with a proven segmentation algorithm that utilizes k-means clustering demonstrated that the DBSCAN algorithm had a higher sensitivity and correctly segmented more swallows. Comparing its performance with a threshold-based algorithm that utilized the quadratic variation of the signal showed that the DBSCAN algorithm offered no direct increase in performance. However, it offered several other benefits, including a faster run time and more consistent performance between patients. All algorithms showed noticeable differentiation from the endpoints provided by a videofluoroscopy examination as well as reduced sensitivity. In summary, we showed that the DBSCAN algorithm is a viable method for detecting the occurrence of a swallowing event using cervical auscultation signals, but significant work must be done to improve its performance before it can be implemented in an unsupervised manner. Copyright © 2015 Elsevier Ltd. All rights reserved.
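A minimal sketch of the DBSCAN idea on synthetic vibration data (the paper's actual features and parameters are not given here): active samples are found by an energy threshold and clustered along the time axis, so dense runs become candidate swallows and stragglers are labeled noise.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy stand-in for swallowing vibration data: quiet baseline with two
# higher-energy bursts playing the role of swallows.
rng = np.random.default_rng(0)
sig = rng.normal(0.0, 0.05, 5000)
sig[1200:1500] += rng.normal(0.0, 0.8, 300)
sig[3300:3700] += rng.normal(0.0, 0.8, 400)

# Short-window RMS energy; samples above a threshold are "active".
win = 50
energy = np.sqrt(np.convolve(sig**2, np.ones(win) / win, mode="same"))
active = np.flatnonzero(energy > 3 * np.median(energy))

# Cluster active samples along the time axis: dense runs become
# candidate swallows, isolated samples are labeled noise (-1).
labels = DBSCAN(eps=25, min_samples=20).fit_predict(active.reshape(-1, 1))
for lab in sorted(set(labels) - {-1}):
    seg = active[labels == lab]
    print(f"candidate swallow: samples {seg.min()}-{seg.max()}")
```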
Sparse Matrix Motivated Reconstruction of Far-Field Radiation Patterns
2015-03-01
[Only extraction fragments survive. Recoverable content: an algorithm based on sparse representations of radiation patterns using the inverse Discrete Fourier Transform (DFT) and the inverse Discrete Cosine Transform; a Model-Based Parameter Estimation (MBPE) technique that reduces the computational time required to model radiation patterns; a cited method for base-station antenna radiation patterns (IEEE Antennas and Propagation Magazine, 2001;43(2):132).]
A brachytherapy photon radiation quality index Q(BT) for probe-type dosimetry.
Quast, Ulrich; Kaulich, Theodor W; Álvarez-Romero, José T; Carlsson Tedgren, Åsa; Enger, Shirin A; Medich, David C; Mourtada, Firas; Perez-Calatayud, Jose; Rivard, Mark J; Zakaria, G Abu
2016-06-01
In photon brachytherapy (BT), experimental dosimetry is needed to verify treatment plans if planning algorithms neglect varying attenuation, absorption or scattering conditions. The detector's response is energy dependent, including the detector-material-to-water dose ratio and the intrinsic mechanisms. The local mean photon energy E¯(r) must be known or another equivalent energy quality parameter used. We propose the brachytherapy photon radiation quality index Q(BT)(E¯) to characterize the photon radiation quality in view of measurements of distributions of the absorbed dose to water, Dw, around BT sources. While the external photon beam radiotherapy (EBRT) radiation quality index Q(EBRT)(E¯) = TPR20,10(E¯) is not applicable to BT, the authors have applied a novel energy-dependent parameter, called the brachytherapy photon radiation quality index, defined as Q(BT)(E¯) = Dprim(r = 2 cm, θ0 = 90°) / Dprim(r0 = 1 cm, θ0 = 90°), utilizing precise primary absorbed dose data, Dprim, from source reference databases, without additional MC calculations. For BT photon sources used clinically, Q(BT)(E¯) makes it possible to determine the effective mean linear attenuation coefficient μ¯(E) and thus the effective energy of the primary photons Eprim(eff)(r0, θ0) at the TG-43 reference position Pref(r0 = 1 cm, θ0 = 90°), which is close to the mean total photon energy E¯tot(r0, θ0). Given calibrated detectors, published E¯tot(r) and the BT radiation quality correction factor for different BT radiation qualities Q and Q0, the detector's response can be determined and Dw(r, θ) measured in the vicinity of BT photon sources. This novel brachytherapy photon radiation quality index Q(BT) characterizes, sufficiently accurately and precisely, the primary photons' penetration probability and scattering potential. Copyright © 2016. Published by Elsevier Ltd.
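The index itself is just a ratio of primary doses at two TG-43 geometry points; a trivial sketch with hypothetical database values:

```python
def q_bt(d_prim_2cm, d_prim_1cm):
    # Q(BT) = Dprim(r = 2 cm, theta0 = 90 deg) / Dprim(r0 = 1 cm, theta0 = 90 deg)
    return d_prim_2cm / d_prim_1cm

# Hypothetical primary-dose values (normalized to the reference point);
# softer spectra attenuate faster between 1 and 2 cm, lowering Q(BT).
print(q_bt(0.211, 1.000))   # e.g. a lower-energy source
print(q_bt(0.240, 1.000))   # e.g. a higher-energy source
```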
Dose algorithm for EXTRAD 4100S extremity dosimeter for use at Sandia National Laboratories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potter, Charles Augustus
An updated algorithm for the EXTRAD 4100S extremity dosimeter has been derived. This algorithm optimizes the binning of dosimeter element ratios and uses a quadratic function to determine the response factors for low response ratios. This results in lower systematic bias across all test categories and eliminates the need for the 'red strap' algorithm that was used for high-energy beta/gamma-emitting radionuclides. The Radiation Protection Dosimetry Program (RPDP) at Sandia National Laboratories uses the Thermo Fisher EXTRAD 4100S extremity dosimeter, shown in Fig. 1.1, to determine shallow dose to the extremities of potentially exposed individuals. This dosimeter consists of two LiF TLD elements or 'chipstrates', one of TLD-700 (⁷Li) and one of TLD-100 (natural Li), separated by a tin filter. Following readout and background subtraction, the ratio of the responses of the two elements is determined, defining the penetrability of the incident radiation. While this penetrability approximates the incident energy of the radiation, X-rays and beta particles occur in energy distributions that make the determination of dose conversion factors less straightforward.
NASA Astrophysics Data System (ADS)
Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar
2018-07-01
In high count rate radiation spectroscopy and imaging, detector output pulses tend to pile up due to the high interaction rate of the particles with the detector. Pile-up effects can lead to a severe distortion of the energy and timing information. Pile-up events are conventionally prevented or rejected by both analog and digital electronics. However, for decreasing the exposure times in medical imaging applications, it is important to retain the pulses and extract their true information by pile-up correction methods. The single-event reconstruction method is a relatively new model-based approach for recovering the pulses one by one using a fitting procedure, for which a fast fitting algorithm is a prerequisite. This article proposes a fast non-iterative algorithm based on successive integration which fits the bi-exponential model to experimental data. After optimizing the method, the energy spectra, energy resolution and peak-to-peak count ratios are calculated for different counting rates using the proposed algorithm as well as the rejection method for comparison. The obtained results prove the effectiveness of the proposed method as a pile-up processing scheme designed for spectroscopic and medical radiation detection applications.
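The paper's non-iterative successive-integration fit is not reproduced here; as a generic baseline, the bi-exponential pulse model can be fitted by ordinary nonlinear least squares (scipy's curve_fit), which at least illustrates the model the fast algorithm targets:

```python
import numpy as np
from scipy.optimize import curve_fit

def pulse(t, A, t0, tau_r, tau_d):
    # Bi-exponential pulse: zero before t0, then a difference of a
    # slow decay (tau_d) and a fast rise (tau_r) exponential.
    s = np.clip(t - t0, 0.0, None)
    return A * (np.exp(-s / tau_d) - np.exp(-s / tau_r))

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 500)
truth = (1.0, 2.0, 0.2, 1.5)                       # A, t0, tau_r, tau_d
noisy = pulse(t, *truth) + rng.normal(0.0, 0.01, t.size)

popt, _ = curve_fit(pulse, t, noisy, p0=(0.8, 1.8, 0.3, 1.2))
print(np.round(popt, 3))                           # near (1.0, 2.0, 0.2, 1.5)
```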
NASA Astrophysics Data System (ADS)
Pan, X.; Yang, Y.; Liu, Y.; Fan, X.; Shan, L.; Zhang, X.
2018-04-01
Error source analyses are critical for satellite-retrieved surface net radiation (Rn) products. In this study, we evaluate the Rn error sources in the Clouds and the Earth's Radiant Energy System (CERES) project at 43 sites in China from July to December 2007. The results show that cloud fraction (CF), land surface temperature (LST), atmospheric temperature (AT) and algorithm error dominate the Rn error, with error contributions of -20, 15, 10 and 10 W/m2 (net shortwave (NSW)/longwave (NLW) radiation), respectively. For NSW, the dominant error source is algorithm error (more than 10 W/m2), particularly in spring and summer, when cloud is abundant. For NLW, owing to the high algorithm sensitivity and the large LST/CF errors, LST and CF are the largest error sources, especially in northern China. The AT strongly influences the NLW error in southern China because of the large AT error there. Total precipitable water has only a weak influence on the Rn error, even though the algorithm is highly sensitive to it. To improve Rn quality, the CF and LST (AT) errors in northern (southern) China should be reduced.
An intermetallic forming steel under radiation for nuclear applications
NASA Astrophysics Data System (ADS)
Hofer, C.; Stergar, E.; Maloy, S. A.; Wang, Y. Q.; Hosemann, P.
2015-03-01
In this work we investigated the formation and stability of intermetallics formed in a maraging steel PH 13-8 Mo under proton radiation up to 2 dpa utilizing nanoindentation, microcompression testing and atom probe tomography. A comprehensive discussion analyzing the findings utilizing rate theory is introduced, comparing the aging process to radiation induced diffusion. New findings of radiation induced segregation of undersize solute atoms (Si) towards the precipitates are considered.
2009-07-05
[Extraction fragments of an acronym list and report text. Acronyms: PARMA, PHITS-based Analytical Radiation Model in the Atmosphere; PCAIRE, Predictive Code for Aircrew Radiation Exposure; PHITS, Particle and Heavy Ion Transport code System. Surviving body text notes that the transport code utilized is PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36], that particle fluxes are calculated from the input, that dose equivalent coefficients follow the ICRP-60 regulations, and that the transport codes are utilized by EXPACS (PHITS) and CARI-6 (PARMA).]
Local reconstruction in computed tomography of diffraction enhanced imaging
NASA Astrophysics Data System (ADS)
Huang, Zhi-Feng; Zhang, Li; Kang, Ke-Jun; Chen, Zhi-Qiang; Zhu, Pei-Ping; Yuan, Qing-Xi; Huang, Wan-Xia
2007-07-01
Computed tomography of diffraction enhanced imaging (DEI-CT) based on a synchrotron radiation source has extremely high sensitivity to weakly absorbing low-Z samples in medical and biological fields. The authors propose a modified backprojection filtration (BPF)-type algorithm based on PI-line segments to reconstruct a region of interest from truncated refraction-angle projection data in DEI-CT. The distribution of the refractive index decrement in the sample can be directly estimated from its reconstruction images, which has been proved by experiments at the Beijing Synchrotron Radiation Facility. The algorithm paves the way for local reconstruction of large-size samples using DEI-CT with a small field of view based on a synchrotron radiation source.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaber, Allan Benton; Park, HyeongKae; Lowrie, Robert Byron
Moment-based acceleration via the development of "high-order, low-order" (HO-LO) algorithms has provided substantial accuracy and efficiency enhancements for solutions of the nonlinear, thermal radiative transfer equations by CCS-2 and T-3 staff members. Accuracy enhancements over traditional, linearized methods are obtained by solving a nonlinear, time-implicit HO-LO system via a Jacobian-free Newton Krylov procedure. This also prevents the appearance of non-physical maximum principle violations ("temperature spikes") associated with linearization. Efficiency enhancements are obtained in part by removing "effective scattering" from the linearized system. In this highlight, we summarize recent work in which we formally extended the HO-LO radiation algorithm to include operator-split radiation-hydrodynamics.
Global Carbon Cycle Modeling in GISS ModelE2 GCM
NASA Astrophysics Data System (ADS)
Aleinov, I. D.; Kiang, N. Y.; Romanou, A.; Romanski, J.
2014-12-01
Consistent and accurate modeling of the global carbon cycle remains one of the main challenges for Earth system models. The NASA Goddard Institute for Space Studies (GISS) ModelE2 General Circulation Model (GCM) was recently equipped with a complete global carbon cycle algorithm, consisting of three integrated components: the Ent Terrestrial Biosphere Model (Ent TBM), an ocean biogeochemistry module, and an atmospheric CO2 tracer. Ent TBM provides CO2 fluxes from the land surface to the atmosphere. Its biophysics utilizes the well-known photosynthesis functions of Farquhar, von Caemmerer, and Berry and of Farquhar and von Caemmerer, and the stomatal conductance of Ball and Berry. Its phenology is based on temperature, drought, and radiation fluxes, and growth is controlled via allocation of carbon from labile carbohydrate reserve storage to different plant components. Soil biogeochemistry is based on the Carnegie-Ames-Stanford (CASA) model of Potter et al. The ocean biogeochemistry module (the NASA Ocean Biogeochemistry Model, NOBM) computes prognostic distributions for biotic and abiotic fields that influence the air-sea flux of CO2 and the deep ocean carbon transport and storage. Atmospheric CO2 is advected with a quadratic upstream algorithm implemented in the atmospheric part of ModelE2. Here we present results for pre-industrial equilibrium and modern transient simulations and provide comparison to available observations. We also discuss the process of validation and tuning of particular algorithms used in the model.
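The Ball-Berry stomatal conductance used by Ent TBM has a compact closed form; a sketch with typical C3 slope and intercept values, which are assumptions here rather than ModelE2's tuned parameters:

```python
def ball_berry(a_net, hs, cs, m=9.0, b=0.01):
    # gs = m * A_net * hs / cs + b
    # a_net: net assimilation (umol m-2 s-1), hs: leaf-surface relative
    # humidity (0-1), cs: leaf-surface CO2 (umol/mol); gs in mol m-2 s-1
    return m * a_net * hs / cs + b

print(ball_berry(a_net=12.0, hs=0.7, cs=380.0))   # ~0.21 mol m-2 s-1
```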
NASA Astrophysics Data System (ADS)
Morgan, Ashraf
The need for an accurate and reliable way of measuring patient dose in multi-row detector computed tomography (MDCT) has increased significantly. This research focused on the possibility of measuring CT dose in air to estimate the Computed Tomography Dose Index (CTDI) for routine quality control purposes. A new elliptic CTDI phantom that better represents human geometry was manufactured to investigate the effect of subject shape on the measured CTDI. Monte Carlo simulation was utilized to determine the dose distribution in comparison to the traditional cylindrical CTDI phantom. This research also investigated the effect of Siemens Healthcare's newly developed iMAR (iterative metal artifact reduction) algorithm; an arthroplasty phantom was designed and manufactured for that purpose. The design of new phantoms was part of the research, as they mimic human geometry more closely than the existing CTDI phantom. The standard CTDI phantom is a right cylinder that does not adequately represent the geometry of the majority of the patient population. Any dose reduction algorithm that is used during a patient scan will not be utilized when scanning the CTDI phantom, so a better-designed phantom will allow the use of dose reduction algorithms when measuring dose, which leads to better dose estimation and/or a better understanding of dose delivery. Doses from a standard CTDI phantom and the newly designed phantoms were compared to doses measured in air. Iterative reconstruction is a promising technique for MDCT dose reduction and artifact correction. Iterative reconstruction algorithms have been developed to address specific imaging tasks, as is the case with iterative metal artifact reduction, or iMAR, which was developed by Siemens and is to be used with the company's future computed tomography platform. The goal of iMAR is to reduce metal artifacts when imaging patients with metal implants and to recover the CT numbers of tissues adjacent to the implant. This research evaluated iMAR's capability of recovering CT numbers and reducing noise. Also, the use of iMAR should allow a lower tube voltage to be used instead of the 140 kVp that is frequently used to image patients with shoulder implants. The evaluations of image quality and dose reduction were carried out using an arthroplasty phantom.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enghauser, Michael
2016-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
Redman, Joseph S; Natarajan, Yamini; Hou, Jason K; Wang, Jingqi; Hanif, Muzammil; Feng, Hua; Kramer, Jennifer R; Desiderio, Roxanne; Xu, Hua; El-Serag, Hashem B; Kanwal, Fasiha
2017-10-01
Natural language processing is a powerful machine learning technique capable of maximizing data extraction from complex electronic medical records. We utilized this technique to develop algorithms capable of "reading" full-text radiology reports to accurately identify the presence of fatty liver disease. Abdominal ultrasound, computed tomography, and magnetic resonance imaging reports were retrieved from the Veterans Affairs Corporate Data Warehouse from a random national sample of 652 patients. Radiographic fatty liver disease was determined by manual review by two physicians and verified with an expert radiologist. A split validation method was utilized for algorithm development. For all three imaging modalities, the algorithms could identify fatty liver disease with >90% recall and precision, with F-measures >90%. These algorithms could be used to rapidly screen patient records to establish a large cohort to facilitate epidemiological and clinical studies and examine the clinical course and outcomes of patients with radiographic hepatic steatosis.
Reinforcement Learning in Distributed Domains: Beyond Team Games
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Sill, Joseph; Turner, Kagan
2000-01-01
Distributed search algorithms are crucial in dealing with large optimization problems, particularly when a centralized approach is not only impractical but infeasible. Many machine learning concepts have been applied to search algorithms in order to improve their effectiveness. In this article we present an algorithm that blends Reinforcement Learning (RL) and hill climbing directly, by using the RL signal to guide the exploration step of a hill climbing algorithm. We apply this algorithm to the domain of a constellation of communication satellites, where the goal is to minimize the loss of importance-weighted data. We introduce the concept of 'ghost' traffic, where correctly setting this traffic induces the satellites to act to optimize the world utility. Our results indicate that the bi-utility search introduced in this paper outperforms both traditional hill climbing algorithms and distributed RL approaches such as team games.
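As a rough illustration of this blend (not the paper's actual bi-utility formulation), the Python sketch below maintains a running value estimate per move type as the RL signal and samples exploratory hill-climbing moves in proportion to it. The `neighbors` and `utility` callables and all parameter values are illustrative assumptions.

```python
import random

def rl_guided_hill_climb(init_state, neighbors, utility, steps=1000, lr=0.1, eps=0.2):
    """Hill climbing whose exploration step is biased by a simple RL signal:
    a running estimate of the utility gain each move type tends to produce.
    neighbors(state) yields (move_id, candidate_state); utility scores a state."""
    state, best = init_state, utility(init_state)
    value = {}  # RL signal: estimated utility gain per move type
    for _ in range(steps):
        moves = list(neighbors(state))
        if random.random() < eps:   # explore, biased by the learned signal
            weights = [max(value.get(m, 0.0), 1e-3) for m, _ in moves]
            move, cand = random.choices(moves, weights=weights)[0]
        else:                       # exploit: ordinary greedy hill climbing
            move, cand = max(moves, key=lambda mc: utility(mc[1]))
        gain = utility(cand) - best
        value[move] = value.get(move, 0.0) + lr * (gain - value.get(move, 0.0))
        if gain > 0:                # standard hill-climbing acceptance rule
            state, best = cand, best + gain
    return state, best
```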
A new hybrid meta-heuristic algorithm for optimal design of large-scale dome structures
NASA Astrophysics Data System (ADS)
Kaveh, A.; Ilchi Ghazaan, M.
2018-02-01
In this article a hybrid algorithm based on a vibrating particles system (VPS) algorithm, multi-design variable configuration (Multi-DVC) cascade optimization, and an upper bound strategy (UBS) is presented for global optimization of large-scale dome truss structures. The new algorithm is called MDVC-UVPS, in which the VPS algorithm acts as the main engine. The VPS algorithm is one of the most recent multi-agent meta-heuristic algorithms, mimicking the mechanisms of damped free vibration of single-degree-of-freedom systems. In order to handle a large number of variables, cascade sizing optimization utilizing a series of DVCs is used. Moreover, the UBS is utilized to reduce the computational time. Various dome truss examples are studied to demonstrate the effectiveness and robustness of the proposed method, as compared to some existing structural optimization techniques. The results indicate that the MDVC-UVPS technique is a powerful search and optimization method for structural engineering problems.
NASA Astrophysics Data System (ADS)
Plante, Ianik; Devroye, Luc
2015-09-01
Several computer codes simulating chemical reactions in particle systems are based on the Green's functions of the diffusion equation (GFDE). Indeed, many types of chemical systems have been simulated using the exact GFDE, which has also become the gold standard for validating other theoretical models. In this work, a simulation algorithm is presented to sample the interparticle distance for a partially diffusion-controlled reversible ABCD reaction. This algorithm is considered exact for two-particle systems, is faster than conventional look-up tables and uses only a few kilobytes of memory. The simulation results obtained with this method are compared with those obtained with the independent reaction times (IRT) method. This work is part of our effort in developing models to understand the role of chemical reactions in the radiation effects on cells and tissues, and may eventually be included in event-based models of space radiation risks. Moreover, since many reactions in biological systems are of this type, the algorithm might play a pivotal role in future simulation programs, not only in radiation chemistry but also in the simulation of biochemical networks in time and space.
Noël, Peter B; Engels, Stephan; Köhler, Thomas; Muenzel, Daniela; Franz, Daniela; Rasper, Michael; Rummeny, Ernst J; Dobritz, Martin; Fingerle, Alexander A
2018-01-01
Background The explosive growth of computed tomography (CT) has led to a growing public health concern about patient and population radiation dose. A recently introduced technique for dose reduction, which can be combined with tube-current modulation, over-beam reduction, and organ-specific dose reduction, is iterative reconstruction (IR). Purpose To evaluate the quality, at different radiation dose levels, of three reconstruction algorithms for diagnostics of patients with proven liver metastases under tumor follow-up. Material and Methods A total of 40 thorax-abdomen-pelvis CT examinations acquired from 20 patients in a tumor follow-up were included. All patients were imaged using the standard-dose and a specific low-dose CT protocol. Reconstructed slices were generated by using three different reconstruction algorithms: a classical filtered back projection (FBP); a first-generation iterative noise-reduction algorithm (iDose4); and a next-generation model-based IR algorithm (IMR). Results The overall detection of liver lesions tended to be higher with the IMR algorithm than with FBP or iDose4. The IMR dataset at standard dose yielded the highest overall detectability, while the low-dose FBP dataset showed the lowest detectability. For the low-dose protocol, IMR provided significantly improved detectability of liver lesions compared to FBP or iDose4 (P = 0.01). The radiation dose decreased by an approximate factor of 5 between the standard-dose and the low-dose protocol. Conclusion The latest generation of IR algorithms significantly improved the diagnostic image quality and provided virtually noise-free images for ultra-low-dose CT imaging.
Predicting impact of multi-paths on phase change in map-based vehicular ad hoc networks
NASA Astrophysics Data System (ADS)
Rahmes, Mark; Lemieux, George; Sonnenberg, Jerome; Chester, David B.
2014-05-01
Dynamic Spectrum Access, through its ability to adapt the operating frequency of a radio, is widely believed to be a solution to the limited-spectrum problem. Mobile Ad Hoc Networks (MANETs) can extend high-capacity mobile communications over large areas where fixed and tethered-mobile systems are not available. In one use case with high potential impact, cognitive radio employs spectrum sensing to facilitate the identification of allocated frequencies not currently accessed by their primary users. Primary users own the rights to radiate at a specific frequency and geographic location, while secondary users opportunistically attempt to radiate at a specific frequency when the primary user is not using it. We quantify optimal signal detection in map-based cognitive radio networks with multiple rapidly varying phase changes and multiple orthogonal signals. Doppler shift occurs due to reflection, scattering, and rapid vehicle movement. Path propagation as well as vehicle movement produces either constructive or destructive interference with the incident wave. Our signal detection algorithms can assist the Doppler spread compensation algorithm by deciding how many phase changes are present in a selected band of interest. Additionally, we can populate a spatial radio environment map (REM) database with known information that can be leveraged in an ad hoc network to facilitate Dynamic Spectrum Access. We show how topography can help predict the impact of multi-paths on phase change, as well as predictions for dense traffic areas. Utilization of high-resolution geospatial data layers in RF propagation analysis is directly applicable.
Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang
2012-02-01
A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR), from computed tomography volume data and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 × 512 pixels from a CT dataset of 53 MB at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR image by means of specialized raycasting techniques. Furthermore, general-purpose graphics processing unit (GPGPU) programming paradigms were consistently utilized. Rendering quality and performance, as well as the influence on the quality and performance of the overall registration process, were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. Copyright © 2011. Published by Elsevier GmbH.
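For intuition about what a DRR computation involves, here is a deliberately simplified sketch: line integrals of attenuation through a CT volume under parallel-beam geometry. The paper's methods (perspective raycasting with sub-sampling and wobbled splatting on the GPU) are far more elaborate; the HU-to-attenuation conversion and `mu_water` value below are illustrative assumptions.

```python
import numpy as np

def parallel_beam_drr(ct_volume_hu, axis=0, mu_water=0.02):
    """Render a DRR by summing linear attenuation along one volume axis.
    ct_volume_hu: 3D array of CT numbers in Hounsfield units.
    mu_water: assumed attenuation coefficient of water (1/mm)."""
    mu = mu_water * (1.0 + ct_volume_hu / 1000.0)  # HU -> linear attenuation
    mu = np.clip(mu, 0.0, None)
    path = mu.sum(axis=axis)                       # line integral per ray
    return np.exp(-path)                           # transmitted intensity

# Example: render a 64x64 DRR from a synthetic 64^3 water-equivalent volume
drr = parallel_beam_drr(np.zeros((64, 64, 64)))
```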
Satellite change detection of forest damage near the Chernobyl accident
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClellan, G.E.; Anno, G.H.
1992-01-01
A substantial amount of forest within a few kilometers of the Chernobyl nuclear reactor station was badly contaminated with radionuclides by the April 26, 1986, explosion and ensuing fire at reactor No. 4. Radiation doses to conifers in some areas were sufficient to cause discoloration of needles within a few weeks. Other areas, receiving smaller doses, showed foliage changes beginning 6 months to a year later. Multispectral imagery available from Landsat sensors is especially suited for monitoring such changes in vegetation. A series of Landsat Thematic Mapper images was developed that spans the 2 yr following the accident. Quantitative dose estimation for the exposed conifers requires an objective change detection algorithm and knowledge of the dose-time response of conifers to ionizing radiation. Pacific-Sierra Research Corporation's Hyperscout™ algorithm is based on an advanced, sensitive technique for change detection particularly suited for multispectral images. The Hyperscout algorithm has been used to assess radiation damage to the forested areas around the Chernobyl nuclear power plant.
“Slimplectic” Integrators: Variational Integrators for General Nonconservative Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsang, David; Turner, Alec; Galley, Chad R.
2015-08-10
Symplectic integrators are widely used for long-term integration of conservative astrophysical problems due to their ability to preserve the constants of motion; however, they cannot in general be applied in the presence of nonconservative interactions. In this Letter, we develop the “slimplectic” integrator, a new type of numerical integrator that shares many of the benefits of traditional symplectic integrators yet is applicable to general nonconservative systems. We utilize a fixed-time-step variational integrator formalism applied to the principle of stationary nonconservative action developed in Galley et al. As a result, the generalized momenta and energy (Noether current) evolutions are well-tracked. We discuss several example systems, including damped harmonic oscillators, Poynting–Robertson drag, and gravitational radiation reaction, by utilizing our new publicly available code to demonstrate the slimplectic integrator algorithm. Slimplectic integrators are well-suited for integrations of systems where nonconservative effects play an important role in the long-term dynamical evolution. As such they are particularly appropriate for cosmological or celestial N-body dynamics problems where nonconservative interactions, e.g., gas interactions or dissipative tides, can play an important role.
Hybrid Active-Passive Systems for Control of Aircraft Interior Noise
NASA Technical Reports Server (NTRS)
Fuller, Chris R.
1999-01-01
Previous work has demonstrated the large potential for hybrid active-passive systems for attenuating interior noise in aircraft fuselages. The main advantage of an active-passive system is that, by utilizing the natural dynamics of the actuator system, the control actuator power and weight are markedly reduced and stability/robustness is enhanced. Three different active-passive approaches were studied in the past year. The first technique utilizes multiple adaptive tunable vibration absorbers (ATVA) for reducing narrowband sound radiated from panels and transmitted through fuselage structures. The focus is on reducing interior noise due to propeller or turbofan harmonic excitation. Two types of tunable vibration absorbers were investigated: a solid-state system based upon a piezoelectric mechanical exciter, and an electromechanical system based upon a Motran shaker. Both of these systems utilize a mass-spring dynamic effect to maximize the output force near resonance of the shaker system and so can also be used as vibration absorbers. The dynamic properties of the absorbers (i.e., resonance frequency) were modified using a feedback signal from an accelerometer mounted on the active mass, passed through a compensator and fed into the drive component of the shaker system (piezoelectric element or voice coil, respectively). The feedback loop consisted of a two-coefficient FIR filter, implemented on a DSP, where the input is the acceleration of the ATVA mass and the output is a force acting in parallel with the stiffness of the absorber. By separating the feedback signal into real and imaginary components, the effective natural frequency and damping of the ATVA can be altered independently. This approach gave control of the resonance frequencies while also allowing the simultaneous removal of damping from the ATVA, thus increasing the ease of controllability and effectiveness. In order to obtain a "tuned" vibration absorber, the chosen resonant frequency was set to the excitation frequency. In order to minimize sound radiation, a gradient descent algorithm was developed which globally adapted the resonance frequencies of multiple ATVAs while minimizing a cost based upon the radiated sound power or sound energy obtained from an array of microphones.
Digitally Controlled Slot Coupled Patch Array
NASA Technical Reports Server (NTRS)
D'Arista, Thomas; Pauly, Jerry
2010-01-01
A four-element array conformed to a singly curved conducting surface has been demonstrated to provide a 2 dB axial ratio over a 14 percent bandwidth, while maintaining a VSWR (voltage standing wave ratio) of 2:1 and a gain of 13 dBiC. The array is digitally controlled and can be scanned with the LMS Adaptive Algorithm using the power spectrum as the objective, as well as the Direction of Arrival (DoA) of the beam to set the amplitude of the power spectrum. The total height of the array above the conducting surface is 1.5 inches (3.8 cm). A uniquely configured microstrip-coupled aperture over a conducting surface produced supergain characteristics, achieving 12.5 dBiC across the 2-to-2.13-GHz and 2.2-to-2.3-GHz frequency bands. This design is optimized to retain VSWR and axial ratio across the band as well. The four elements are uniquely configured with respect to one another for performance enhancement, and the appropriate phase excitation of each element for scan can be found either by analytical beam synthesis using the genetic algorithm with the measured or simulated far-field radiation pattern, or by an adaptive algorithm implemented with the digitized signal. The commercially available tuners and field-programmable gate array (FPGA) boards utilized required precise phase-coherent configuration control and, with custom code developed by Nokomis, Inc., were shown to be fully functional in a two-channel configuration controlled by FPGA boards. A four-channel tuner configuration and an oscilloscope configuration were also demonstrated, although algorithm post-processing was required.
Facilitating Follow-up of LIGO–Virgo Events Using Rapid Sky Localization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hsin-Yu; Holz, Daniel E.
We discuss an algorithm for accurate and very low-latency (<1 s) localization of gravitational-wave (GW) sources using only the relative times of arrival, relative phases, and relative signal-to-noise ratios for pairs of detectors. The algorithm is independent of distances and masses to leading order, and can be generalized to all discrete (as opposed to stochastic and continuous) sources detected by ground-based detector networks. Our approach is similar to that of BAYESTAR with a few modifications, which result in increased computational efficiency. For the LIGO two-detector configuration (Hanford+Livingston) operating in O1 we find a median 50% (90%) localization of 143 deg² (558 deg²) for binary neutron stars. We use our algorithm to explore the improvement in localization resulting from loud events, finding that the loudest out of the first 4 (or 10) events reduces the median sky-localization area by a factor of 1.9 (3.0) for the case of two GW detectors, and 2.2 (4.0) for three detectors. We also consider the case of multi-messenger joint detections in both gravitational and electromagnetic radiation, and show that joint localization can offer significant improvements (e.g., in the case of LIGO and Fermi/GBM joint detections). We show that a prior on the binary inclination, potentially arising from GRB observations, has a negligible effect on GW localization. Our algorithm is simple, fast, and accurate, and may be of particular utility in the development of multi-messenger astronomy.
NASA Astrophysics Data System (ADS)
Chen, Xueli; Liang, Jimin; Hu, Hao; Qu, Xiaochao; Yang, Defu; Chen, Duofang; Zhu, Shouping; Tian, Jie
2012-03-01
Gastric cancer is the second leading cause of cancer-related death in the world, and it remains difficult to cure because it is usually at a late stage when found. Early detection is therefore an effective approach to decrease gastric cancer mortality. Bioluminescence tomography (BLT) has been applied to detect early liver cancer and prostate cancer metastasis. However, gastric cancer commonly originates from the gastric mucosa and grows outwards, so the bioluminescent light passes through a non-scattering region formed by the gastric pouch as it propagates through tissue. Thus, current BLT reconstruction algorithms based on approximations to the radiative transfer equation are not optimal for this problem. To address this gastric-cancer-specific problem, this paper presents a novel reconstruction algorithm that uses a hybrid light transport model to describe bioluminescent light propagation in tissue. Radiosity theory is integrated with the diffusion equation to form the hybrid light transport model, describing light propagation in the non-scattering region. After finite element discretization, the hybrid light transport model is converted into a minimization problem that incorporates an l1-norm regularization term to reveal the sparsity of the bioluminescent source distribution. The performance of the reconstruction algorithm is first demonstrated with a digital-mouse-based simulation, with a reconstruction error of less than 1 mm. An experiment on a nude mouse bearing an in situ gastric cancer is then conducted. The preliminary result demonstrates the ability of the novel BLT reconstruction algorithm in early gastric cancer detection.
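Once the hybrid light-transport model is discretized, the final step has the generic form of l1-regularized least squares, min_x 0.5*||Ax - b||^2 + lam*||x||_1. The sketch below solves this problem class with iterative soft thresholding (ISTA), a standard method; the paper's actual solver and system-matrix assembly are not specified here, so everything below is an illustrative stand-in.

```python
import numpy as np

def ista(A, b, lam=0.05, iters=300):
    """Iterative soft-thresholding for min_x 0.5*||Ax-b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - b))        # gradient step on smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
    return x

# Toy usage: recover a sparse source vector from an underdetermined system
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 200))
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
x_hat = ista(A, A @ x_true)
```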
3D Cloud Field Prediction using A-Train Data and Machine Learning Techniques
NASA Astrophysics Data System (ADS)
Johnson, C. L.
2017-12-01
Validation of cloud process parameterizations used in global climate models (GCMs) would greatly benefit from observed 3D cloud fields at the size comparable to that of a GCM grid cell. For the highest resolution simulations, surface grid cells are on the order of 100 km by 100 km. CloudSat/CALIPSO data provides 1 km width of detailed vertical cloud fraction profile (CFP) and liquid and ice water content (LWC/IWC). This work utilizes four machine learning algorithms to create nonlinear regressions of CFP, LWC, and IWC data using radiances, surface type and location of measurement as predictors and applies the regression equations to off-track locations generating 3D cloud fields for 100 km by 100 km domains. The CERES-CloudSat-CALIPSO-MODIS (C3M) merged data set for February 2007 is used. Support Vector Machines, Artificial Neural Networks, Gaussian Processes and Decision Trees are trained on 1000 km of continuous C3M data. Accuracy is computed using existing vertical profiles that are excluded from the training data and occur within 100 km of the training data. Accuracy of the four algorithms is compared. Average accuracy for one day of predicted data is 86% for the most successful algorithm. The methodology for training the algorithms, determining valid prediction regions and applying the equations off-track is discussed. Predicted 3D cloud fields are provided as inputs to the Ed4 NASA LaRC Fu-Liou radiative transfer code and resulting TOA radiances compared to observed CERES/MODIS radiances. Differences in computed radiances using predicted profiles and observed radiances are compared.
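A minimal sketch of the train-on-track, predict-off-track workflow follows, using a scikit-learn decision tree as one of the four regressor families named above. The array shapes, feature layout, and random data are placeholders, not the C3M variables.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Predictors along the CloudSat/CALIPSO track: radiances, surface type, location
X_track = rng.random((1000, 8))
# Targets: a 40-level cloud fraction profile per training footprint (assumed)
y_track = rng.random((1000, 40))

model = DecisionTreeRegressor(max_depth=12).fit(X_track, y_track)

# Apply the regression at off-track pixels of the 100 km x 100 km domain
X_offtrack = rng.random((500, 8))
cfp_offtrack = model.predict(X_offtrack)   # predicted profiles -> 3D cloud field
```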
Bio-Optical Measurement and Modeling of the California Current and Polar Oceans. Chapter 13
NASA Technical Reports Server (NTRS)
Mitchell, B. Greg
2001-01-01
This Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) project contract supports in situ ocean optical observations in the California Current, Southern Ocean, and Indian Ocean, as well as the merger of other in situ data sets we have collected on various global cruises supported by separate grants or contracts. The principal goals of our research are to validate standard or experimental products through detailed bio-optical and biogeochemical measurements, and to combine ocean optical observations with advanced radiative transfer modeling to contribute to satellite vicarious radiometric calibration and advanced algorithm development. In collaboration with major oceanographic ship-based observation programs funded by various agencies (CalCOFI, US JGOFS, NOAA AMLR, INDOEX and Japan/East Sea), our SIMBIOS effort has resulted in data from diverse bio-optical provinces. For these global deployments we generate a high-quality, methodologically consistent data set encompassing a wide range of oceanic conditions. Global data collected in recent years have been integrated with our ongoing CalCOFI database and have been used to evaluate Sea-viewing Wide Field-of-view Sensor (SeaWiFS) algorithms and to carry out validation studies. The combined database we have assembled now comprises more than 700 stations and includes observations for the clearest oligotrophic waters, highly eutrophic blooms, red tides, and coastal case 2 conditions. The data have been used to validate water-leaving radiance estimated with SeaWiFS as well as bio-optical algorithms for chlorophyll pigments. The comprehensive data set is utilized for the development of experimental algorithms (e.g., high-low latitude pigment transition, phytoplankton absorption, and CDOM).
Dynamic phasing of multichannel cw laser radiation by means of a stochastic gradient algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkov, V A; Volkov, M V; Garanin, S G
2013-09-30
The phasing of a multichannel laser beam by means of an iterative stochastic parallel gradient (SPG) algorithm has been numerically and experimentally investigated. The operation of the SPG algorithm is simulated, the acceptable range of amplitudes of probe phase shifts is found, and the algorithm parameters at which the desired Strehl number can be obtained with a minimum number of iterations are determined. An experimental bench with phase modulators based on lithium niobate, which are controlled by a multichannel electronic unit with a real-time microcontroller, has been designed. Phasing of 16 cw laser beams at a system response bandwidth of 3.7 kHz and phase thermal distortions in a frequency band of about 10 Hz is experimentally demonstrated. The experimental data are in complete agreement with the calculation results.
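The SPG iteration itself is compact. The sketch below shows its textbook form: all channels receive simultaneous random probe phase shifts, and every phase is updated in parallel from the measured change in a quality metric. The metric function, gain, and probe amplitude are illustrative assumptions, not the bench parameters.

```python
import numpy as np

def spgd_phase_lock(measure_metric, n_channels, delta=0.1, gain=0.5, iters=500):
    """Stochastic parallel gradient descent: probe all phases at once with
    random +/-delta shifts and step along the estimated gradient of the metric."""
    phases = np.zeros(n_channels)
    for _ in range(iters):
        probe = delta * np.random.choice([-1.0, 1.0], n_channels)
        dJ = measure_metric(phases + probe) - measure_metric(phases - probe)
        phases += gain * dJ * probe        # parallel gradient estimate
    return phases

# Toy metric: on-axis intensity of 16 coherently combined unit-amplitude beams
target = np.random.uniform(-np.pi, np.pi, 16)
metric = lambda p: abs(np.exp(1j * (p - target)).sum()) ** 2
locked_phases = spgd_phase_lock(metric, 16)
```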
Using a derivative-free optimization method for multiple solutions of inverse transport problems
Armstrong, Jerawan C.; Favorite, Jeffrey A.
2016-01-14
Identifying unknown components of an object that emits radiation is an important problem for national and global security. Radiation signatures measured from an object of interest can be used to infer object parameter values that are not known. This problem is called an inverse transport problem. An inverse transport problem may have multiple solutions, and the most widely used approach for its solution is an iterative optimization method. This paper proposes a stochastic derivative-free global optimization algorithm to find multiple solutions of inverse transport problems. The algorithm is an extension of a multilevel single linkage (MLSL) method in which a mesh adaptive direct search (MADS) algorithm is incorporated into the local phase. Furthermore, numerical test cases using uncollided fluxes of discrete gamma-ray lines are presented to show the performance of this new algorithm.
A technique for global monitoring of net solar irradiance at the ocean surface. I - Model
NASA Technical Reports Server (NTRS)
Frouin, Robert; Chertock, Beth
1992-01-01
An accurate long-term (84-month) climatology of net surface solar irradiance over the global oceans from Nimbus-7 earth radiation budget (ERB) wide-field-of-view planetary-albedo data is generated via an algorithm based on radiative transfer theory. Net surface solar irradiance is computed as the difference between the top-of-atmosphere incident solar irradiance (known) and the sum of the solar irradiance reflected back to space by the earth-atmosphere system (observed) and the solar irradiance absorbed by atmospheric constituents (modeled). It is shown that the effects of clouds and clear-atmosphere constituents can be decoupled on a monthly time scale, which makes it possible to directly apply the algorithm with monthly averages of ERB planetary-albedo data. Compared theoretically with the algorithm of Gautier et al. (1980), the present algorithm yields higher solar irradiance values in clear and thin cloud conditions and lower values in thick cloud conditions.
Radiation-MHD Simulations of Pillars and Globules in HII Regions
NASA Astrophysics Data System (ADS)
Mackey, J.
2012-07-01
Implicit and explicit raytracing-photoionisation algorithms have been implemented in the author's radiation-magnetohydrodynamics code. The algorithms are described briefly and their efficiency and parallel scaling are investigated. The implicit algorithm is more efficient for calculations where ionisation fronts have very supersonic velocities, and the explicit algorithm is favoured in the opposite limit because of its better parallel scaling. The implicit method is used to investigate the effects of initially uniform magnetic fields on the formation and evolution of dense pillars and cometary globules at the boundaries of HII regions. It is shown that for weak and medium field strengths an initially perpendicular field is swept into alignment with the pillar during its dynamical evolution, matching magnetic field observations of the ‘Pillars of Creation’ in M16. A strong perpendicular magnetic field remains in its initial configuration and also confines the photoevaporation flow into a bar-shaped, dense, ionised ribbon which partially shields the ionisation front.
NASA Astrophysics Data System (ADS)
Medgyesi-Mitschang, L. N.; Putnam, J. M.
1980-04-01
A hierarchy of computer programs implementing the method of moments for bodies of translation (MM/BOT) is described. The algorithm treats the far-field radiation and scattering from finite-length open cylinders of arbitrary cross section, as well as the near fields and aperture-coupled fields for rectangular apertures on such bodies. The theoretical development underlying the algorithm is described in Volume 1. The structure of the computer algorithm is such that no a priori knowledge of the method of moments technique or detailed FORTRAN experience is presupposed for the user. A set of carefully drawn example problems illustrates all the options of the algorithm. For a more detailed understanding of the workings of the codes, special cross-referencing to the equations in Volume 1 is provided. For additional clarity, comment statements are liberally interspersed in the code listings and summarized in the present volume.
A point kernel algorithm for microbeam radiation therapy
NASA Astrophysics Data System (ADS)
Debus, Charlotte; Oelfke, Uwe; Bartzsch, Stefan
2017-11-01
Microbeam radiation therapy (MRT) is a treatment approach in radiation therapy in which the treatment field is spatially fractionated into arrays of planar beams a few tens of micrometres wide with unusually high peak doses, separated by low-dose regions several hundred micrometres wide. In preclinical studies, this treatment approach has proven to spare normal tissue more effectively than conventional radiation therapy, while being equally efficient in tumour control. So far, dose calculations in MRT, a prerequisite for future clinical applications, are based on Monte Carlo simulations. However, they are computationally expensive, since scoring volumes have to be small. In this article a kernel-based dose calculation algorithm is presented that splits the calculation into photon- and electron-mediated energy transport and performs the calculation of peak and valley doses in typical MRT treatment fields within a few minutes. Kernels are calculated analytically depending on the energy spectrum and material composition. In various homogeneous materials, peak doses, valley doses and microbeam profiles are calculated and compared to Monte Carlo simulations. For a microbeam exposure of an anthropomorphic head phantom, calculated dose values are compared to measurements and Monte Carlo calculations. Except for regions close to material interfaces, calculated peak dose values match Monte Carlo results within 4% and valley dose values within 8% deviation. No significant differences are observed between profiles calculated by the kernel algorithm and Monte Carlo simulations. Measurements in the head phantom agree within 4% in the peak and within 10% in the valley region. The presented algorithm is attached to the treatment planning platform VIRTUOS. It has been, and continues to be, used for dose calculations in preclinical and pet-clinical trials at the biomedical beamline ID17 of the European Synchrotron Radiation Facility in Grenoble, France.
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.;
2006-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm/day) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm/day, with proportionate reductions in latent heating sampling errors.
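The compositing step can be sketched compactly: weight every database profile by its radiative consistency with the observed brightness temperatures, then average the weighted properties. The channel count, assumed radiance error, and toy database below are placeholders for the cloud-radiative model database described above.

```python
import numpy as np

def bayesian_composite(db_radiances, db_rainrates, obs, sigma=2.0):
    """Weight database entries by exp(-0.5*chi^2) agreement with the observed
    radiances (sigma: assumed radiance error in K) and composite rain rates."""
    chi2 = ((db_radiances - obs) ** 2).sum(axis=1) / sigma ** 2
    w = np.exp(-0.5 * chi2)
    return (w * db_rainrates).sum() / w.sum()   # expected surface rain rate

# Toy example: 5000 simulated profiles with 9 radiometer channels each
rng = np.random.default_rng(1)
tb_db = rng.normal(250.0, 20.0, (5000, 9))   # brightness temperatures (K)
rr_db = rng.gamma(2.0, 2.0, 5000)            # associated rain rates (mm/h)
rain_estimate = bayesian_composite(tb_db, rr_db, obs=tb_db[42])
```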
Two new algorithms to combine kriging with stochastic modelling
NASA Astrophysics Data System (ADS)
Venema, Victor; Lindau, Ralf; Varnai, Tamas; Simmer, Clemens
2010-05-01
Two main groups of statistical methods used in the Earth sciences are geostatistics and stochastic modelling. Geostatistical methods, such as various kriging algorithms, aim at estimating the mean value for every point as well as possible. In the case of sparse measurements, such fields have less variability at small scales and a narrower distribution than the true field. This can lead to biases if a nonlinear process is simulated driven by such a kriged field. Stochastic modelling aims at reproducing the statistical structure of the data in space and time. One of the stochastic modelling methods, the so-called surrogate data approach, replicates the value distribution and power spectrum of a certain data set. While stochastic methods reproduce the statistical properties of the data, the location of the measurement is not considered; this requires the use of so-called constrained stochastic models. Because radiative transfer through clouds is a highly nonlinear process, it is essential to model the distribution (e.g. of optical depth, extinction, liquid water content or liquid water path) accurately. In addition, the correlations within the cloud field are important, especially because of horizontal photon transport. This explains the success of surrogate cloud fields for use in 3D radiative transfer studies. Up to now, however, we could only achieve good results for the radiative properties averaged over the field, but not for a radiation measurement located at a certain position. Therefore we have developed a new algorithm that combines the accuracy of stochastic (surrogate) modelling with the positioning capabilities of kriging. In this way, we can automatically profit from the large geostatistical literature and software. This algorithm is similar to the standard iterative amplitude adjusted Fourier transform (IAAFT) algorithm, but has an additional iterative step in which the surrogate field is nudged towards the kriged field. The nudging strength is gradually reduced to zero during successive iterations. A second algorithm, which we call step-wise kriging, pursues the same aim. Each time the kriging algorithm estimates a value, noise is added to it, after which this new point is accounted for in the estimation of all the later points. In this way, the autocorrelation of the step-kriged field is close to that found in the pseudo-measurements. The amount of noise is determined by the kriging uncertainty. The algorithms are tested on cloud fields from large eddy simulations (LES), on which a measurement is simulated. From these pseudo-measurements, we estimate the power spectrum for the surrogates, the semi-variogram for the (step-wise) kriging, and the distribution. Furthermore, the pseudo-measurement is kriged. Because we work with LES clouds and the truth is known, we can validate the algorithm by performing 3D radiative transfer calculations on the original LES clouds and on the two new types of stochastic clouds. For comparison, the radiative properties of the kriged fields and standard surrogate fields are also computed. Preliminary results show that both algorithms reproduce the structure of the original clouds well, and the minima and maxima are located where the pseudo-measurements see them. The main problem for the quality of the structure and the root mean square error is the amount of data, which is especially limited in the case of just a single zenith-pointing measurement.
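A 1-D sketch of the first algorithm is given below: the standard IAAFT alternation between imposing the target power spectrum and the target value distribution, extended with a nudge towards the kriged field whose strength decays to zero over the iterations. The linear decay schedule and 1-D setting are illustrative simplifications of the method described above.

```python
import numpy as np

def iaaft_nudged(target, kriged, iters=100):
    """Nudged IAAFT surrogate (1-D). target: series supplying the spectrum and
    value distribution; kriged: same-length field fixing the locations."""
    amp = np.abs(np.fft.rfft(target))        # target Fourier amplitudes
    sorted_vals = np.sort(target)            # target value distribution
    x = np.random.permutation(target)        # random initial surrogate
    for k in range(iters):
        spec = np.fft.rfft(x)
        spec = amp * np.exp(1j * np.angle(spec))   # impose power spectrum
        x = np.fft.irfft(spec, n=target.size)
        ranks = np.argsort(np.argsort(x))
        x = sorted_vals[ranks]                     # impose value distribution
        nudge = 1.0 - k / (iters - 1)              # decays from 1 to 0
        x = (1.0 - nudge) * x + nudge * kriged     # pull towards the kriged field
    return x
```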
Hotplate precipitation gauge calibrations and field measurements
NASA Astrophysics Data System (ADS)
Zelasko, Nicholas; Wettlaufer, Adam; Borkhuu, Bujidmaa; Burkhart, Matthew; Campbell, Leah S.; Steenburgh, W. James; Snider, Jefferson R.
2018-01-01
Since their introduction in 2003, approximately 70 Yankee Environmental Systems (YES) hotplate precipitation gauges have been purchased by researchers and operational meteorologists. A version of the YES hotplate is described in Rasmussen et al. (2011; R11). Presented here is testing of a newer version of the hotplate; this device is equipped with longwave and shortwave radiation sensors. Hotplate surface temperature, coefficients describing natural and forced convective sensible energy transfer, and radiative properties (longwave emissivity and shortwave reflectance) are reported for two of the new-version YES hotplates. These parameters are applied in a new algorithm and are used to derive liquid-equivalent accumulations (snowfall and rainfall), and these accumulations are compared to values derived by the internal algorithm used in the YES hotplates (hotplate-derived accumulations). In contrast with R11, the new algorithm accounts for radiative terms in a hotplate's energy budget, applies an energy conversion factor which does not differ from a theoretical energy conversion factor, and applies a surface area that is correct for the YES hotplate. Radiative effects are shown to be relatively unimportant for the precipitation events analyzed. In addition, this work documents a 10% difference between the hotplate-derived and new-algorithm-derived accumulations. This difference seems consistent with R11's application of a hotplate surface area that deviates from the actual surface area of the YES hotplate and with R11's recommendation for an energy conversion factor that differs from that calculated using thermodynamic theory.
Improving Cancer Detection and Dose Efficiency in Dedicated Breast Cancer CT
2010-02-01
The scan configuration involves a nonstandard source trajectory and data truncation, which can, however, be handled with the back-projection filtration (BPF) algorithm [6,7]. The BPF algorithm was used to reconstruct images over a range of high to low radiation dose levels, and the noise properties of images reconstructed with the FDK and BPF algorithms were investigated at the different dose levels. When analytic algorithms such as the FDK and BPF algorithms are applied to sparse-view data, the reconstructed images contain artifacts such as streaks.
Lloyd, Andrew; Kerr, Cicely; Breheny, Katie; Brazier, John; Ortiz, Aurora; Borg, Emma
2014-03-01
Condition-specific preference-based measures can offer utility data where they would not otherwise be available or where generic measures may lack sensitivity, although they lack comparability across conditions. This study aimed to develop an algorithm for estimating utilities from the short bowel syndrome health-related quality of life scale (SBS-QoL™). SBS-QoL™ items were selected based on factor and item performance analysis of a European SBS-QoL™ dataset and consultation with three SBS clinical experts. Six-dimension health states were developed using eight SBS-QoL™ items (two of the dimensions each combined two SBS-QoL™ items). SBS health states were valued by a UK general population sample (N = 250) using the lead-time time trade-off method. Preference weights, or 'utility decrements', for each severity level of each dimension were estimated by regression models and used to develop the scoring algorithm. Mean utilities for the SBS health states ranged from -0.46 (worst health state, very much affected on all dimensions) to 0.92 (best health state, not at all affected on all dimensions). The random effects model with maximum likelihood estimation had the best predictive ability and the lowest root mean squared error and mean absolute error, and was used to develop the scoring algorithm. The preference-weighted scoring algorithm developed for the SBS-QoL™ is able to estimate a wide range of utility values from patient-level SBS-QoL™ data. This allows estimation of the health-related quality of life impact of SBS for the purpose of economic evaluation of SBS treatment benefits.
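Structurally, such a scoring algorithm is an additive table lookup: a best-state utility minus one decrement per dimension and severity level. The sketch below shows this form with invented placeholder decrements; it is not the published SBS-QoL™ tariff.

```python
# Hypothetical decrements per dimension (0..5) and severity level (0..3);
# the published SBS-QoL(TM) weights are NOT reproduced here.
DECREMENTS = {dim: {0: 0.00, 1: 0.05, 2: 0.12, 3: 0.20} for dim in range(6)}
BEST_STATE_UTILITY = 0.92   # anchor for the best health state (assumed)

def sbs_utility(levels):
    """levels: one severity level (0-3) for each of the six dimensions."""
    return BEST_STATE_UTILITY - sum(DECREMENTS[d][lv] for d, lv in enumerate(levels))

print(sbs_utility([0] * 6))   # best state -> 0.92
print(sbs_utility([3] * 6))   # worst state -> negative (worse than dead)
```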
Accelerated gradient-based free form deformable registration for online adaptive radiotherapy
NASA Astrophysics Data System (ADS)
Yu, Gang; Liang, Yueqiang; Yang, Guanyu; Shu, Huazhong; Li, Baosheng; Yin, Yong; Li, Dengwang
2015-04-01
The registration of planning fan-beam computed tomography (FBCT) and daily cone-beam CT (CBCT) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as Demons, may fail when used to register FBCT and CBCT because the CT numbers in CBCT do not exactly correspond to the electron densities. In this paper, we investigated the effects of CBCT intensity inaccuracy on registration accuracy and developed an accurate gradient-based free form deformation algorithm (GFFD). GFFD distinguishes itself from other free form deformable registration algorithms by (a) measuring similarity using the 3D gradient vector fields to avoid the effect of inconsistent intensities between the two modalities; (b) accommodating image sampling anisotropy using the local polynomial approximation-intersection of confidence intervals (LPA-ICI) algorithm to ensure a smooth and continuous displacement field; and (c) introducing a ‘bi-directional’ force along with an adaptive force strength adjustment to accelerate the convergence process. It is expected that such a strategy can decrease the effect of the inconsistent intensities between the two modalities, thus improving registration accuracy and robustness. Moreover, for clinical application, the algorithm was implemented on graphics processing units (GPUs) through the OpenCL framework. The registration time of the GFFD algorithm for each set of CT data ranges from 8 to 13 s. Applications to online adaptive image-guided radiation therapy, including auto-propagation of contours, aperture optimization, and dose volume histogram (DVH) analysis in the course of radiation therapy, were also studied using in-house-developed software.
A BPF-FBP tandem algorithm for image reconstruction in reverse helical cone-beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Seungryong; Xia, Dan; Pellizzari, Charles A.
2010-01-15
Purpose: Reverse helical cone-beam computed tomography (CBCT) is a scanning configuration for potential applications in image-guided radiation therapy in which an accurate anatomic image of the patient is needed for image-guidance procedures. The authors previously developed an algorithm for image reconstruction from nontruncated data of an object that is completely within the reverse helix. The purpose of this work is to develop an image reconstruction approach for reverse helical CBCT of a long object that extends out of the reverse helix and therefore constitutes data truncation. Methods: The proposed approach comprises two reconstruction steps. In the first step, a chord-based backprojection-filtration (BPF) algorithm reconstructs a volumetric image of an object from the original cone-beam data. Because there exists a chordless region in the middle of the reverse helix, the image obtained in the first step contains an unreconstructed central-gap region. In the second step, the gap region is reconstructed by use of a Pack-Noo-formula-based filtered backprojection (FBP) algorithm from the modified cone-beam data obtained by subtracting from the original cone-beam data the reprojection of the image reconstructed in the first step. Results: The authors have performed numerical studies to validate the proposed approach in image reconstruction from reverse helical cone-beam data. The results confirm that the proposed approach can reconstruct accurate images of a long object without suffering from data-truncation artifacts or cone-angle artifacts. Conclusions: They developed and validated a BPF-FBP tandem algorithm to reconstruct images of a long object from reverse helical cone-beam data. The chord-based BPF algorithm was utilized for converting the long-object problem into a short-object problem. The proposed approach is applicable to other scanning configurations such as reduced circular sinusoidal trajectories.
Verification of IEEE Compliant Subtractive Division Algorithms
NASA Technical Reports Server (NTRS)
Miner, Paul S.; Leathrum, James F., Jr.
1996-01-01
A parameterized definition of subtractive floating point division algorithms is presented and verified using PVS. The general algorithm is proven to satisfy a formal definition of an IEEE standard for floating point arithmetic. The utility of the general specification is illustrated using a number of different instances of the general algorithm.
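For readers unfamiliar with the algorithm family, the sketch below shows a plain restoring (subtractive) integer division: one quotient bit per iteration by trial subtraction of the divisor. IEEE-compliant floating-point variants, like those verified in the paper, wrap this recurrence with normalization, rounding, and exception handling.

```python
def restoring_divide(dividend, divisor, bits=16):
    """Bit-serial restoring division for non-negative integers:
    form each quotient bit by a trial subtraction, restoring on underflow."""
    assert divisor > 0
    quotient, remainder = 0, 0
    for i in range(bits - 1, -1, -1):
        remainder = (remainder << 1) | ((dividend >> i) & 1)
        remainder -= divisor            # trial subtraction
        if remainder < 0:
            remainder += divisor        # restore: this bit of the quotient is 0
        else:
            quotient |= 1 << i          # accept: this bit of the quotient is 1
    return quotient, remainder

assert restoring_divide(1000, 7) == (142, 6)
```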
Using Physical Models to Explain a Division Algorithm.
ERIC Educational Resources Information Center
Vest, Floyd
1985-01-01
Develops a division algorithm in terms of familiar manipulations of concrete objects and presents it with a series of questions for diagnosis of students' understanding of the algorithm in terms of the concrete model utilized. Also offers general guidelines for using concrete illustrations to explain algorithms and other mathematical principles.…
Image Information Mining Utilizing Hierarchical Segmentation
NASA Technical Reports Server (NTRS)
Tilton, James C.; Marchisio, Giovanni; Koperski, Krzysztof; Datcu, Mihai
2002-01-01
The Hierarchical Segmentation (HSEG) algorithm is an approach for producing high quality, hierarchically related image segmentations. The VisiMine image information mining system utilizes clustering and segmentation algorithms for reducing visual information in multispectral images to a manageable size. The project discussed herein seeks to enhance the VisiMine system through incorporating hierarchical segmentations from HSEG into the VisiMine system.
F-Ray: A new algorithm for efficient transport of ionizing radiation
NASA Astrophysics Data System (ADS)
Mao, Yi; Zhang, J.; Wandelt, B. D.; Shapiro, P. R.; Iliev, I. T.
2014-04-01
We present a new algorithm for the 3D transport of ionizing radiation, called F-Ray.
NASA Astrophysics Data System (ADS)
Moreno, H. A.; Ogden, F. L.; Alvarez, L. V.
2016-12-01
This work presents a methodology for estimating terrain slope, aspect (slope orientation), and total incoming solar radiation from Triangular Irregular Network (TIN) terrain models. The algorithm accounts for self-shading and cast shadows, sky view fractions for diffuse radiation, remote albedo, and atmospheric backscattering by using a vectorial approach within a topocentric coordinate system and establishing geometric relations between groups of TIN elements and the sun position. A normal vector to the surface of each TIN element describes slope and aspect, while spherical trigonometry allows computing a unit vector defining the position of the sun at each hour and day of the year. Thus, a dot product determines the radiation flux at each TIN element. Cast shadows are computed by scanning the projection of groups of TIN elements in the direction of the closest perpendicular plane to the sun vector, only within the visible horizon range. Sky view fractions, used to determine diffuse radiation, are computed by a simplified scanning algorithm from the highest to the lowest triangles along prescribed directions and visible distances. Finally, remote albedo is computed from the sky view fraction complementary functions for prescribed albedo values of the surrounding terrain, only for significant angles above the horizon. The sensitivity of the different radiative components to seasonal changes in weather and surrounding albedo (snow) is tested in a mountainous watershed in Wyoming. This methodology improves on current algorithms for computing terrain and radiation values on triangular-based models in an accurate and efficient manner. All terrain-related features (e.g. slope, aspect, sky view fraction) can be pre-computed and stored for easy access in a subsequent, progressive-in-time numerical simulation.
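The central dot-product step is compact: build the sun's unit vector in a local east-north-up frame from elevation and azimuth, then project the beam irradiance onto the facet normal. The irradiance value and frame conventions below are assumptions; cast shadows, sky view fractions, and remote albedo require the scanning steps described above.

```python
import numpy as np

def direct_flux_on_facet(normal, sun_elev_deg, sun_az_deg, s0=1000.0):
    """Direct solar flux (W/m^2) on one TIN facet. normal: facet normal in a
    local (east, north, up) frame; s0: assumed direct-beam irradiance.
    Self-shading appears as the dot product clamped to zero."""
    elev, az = np.radians(sun_elev_deg), np.radians(sun_az_deg)
    sun = np.array([np.cos(elev) * np.sin(az),    # east component
                    np.cos(elev) * np.cos(az),    # north component
                    np.sin(elev)])                # up component (unit vector)
    return s0 * max(np.dot(normal / np.linalg.norm(normal), sun), 0.0)

# Facet tilted 30 degrees towards the south, sun at 45 degrees elevation due south
n = np.array([0.0, -np.sin(np.radians(30)), np.cos(np.radians(30))])
print(direct_flux_on_facet(n, 45.0, 180.0))   # ~ s0 * sin(75 deg)
```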
Modeling Urban Scenarios & Experiments: Fort Indiantown Gap Data Collections Summary and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archer, Daniel E.; Bandstra, Mark S.; Davidson, Gregory G.
This report summarizes experimental radiation detector, contextual sensor, weather, and global positioning system (GPS) data collected to inform and validate a comprehensive, operational radiation transport modeling framework to evaluate radiation detector system and algorithm performance. This framework will be used to study the influence of systematic effects (such as geometry, background activity, background variability, environmental shielding, etc.) on detector responses and algorithm performance using synthetic time series data. This work consists of performing data collection campaigns at a canonical, controlled environment for complete radiological characterization to help construct and benchmark a high-fidelity model with quantified system geometries, detector response functions, and source terms for background and threat objects. This data also provides an archival, benchmark dataset that can be used by the radiation detection community. The data reported here spans four data collection campaigns conducted between May 2015 and September 2016.
Introduction of Parallel GPGPU Acceleration Algorithms for the Solution of Radiative Transfer
NASA Technical Reports Server (NTRS)
Godoy, William F.; Liu, Xu
2011-01-01
General-purpose computing on graphics processing units (GPGPU) is a recent technique that allows the parallel graphics processing unit (GPU) to accelerate calculations performed sequentially by the central processing unit (CPU). To introduce GPGPU to radiative transfer, the Gauss-Seidel solution of the well-known expressions for 1-D and 3-D homogeneous, isotropic media is selected as a test case. Different algorithms are introduced to balance memory and GPU-CPU communication, critical aspects of GPGPU. Results show that speed-ups of one to two orders of magnitude are obtained when compared to sequential solutions. The underlying value of GPGPU is its potential extension in radiative solvers (e.g., Monte Carlo, discrete ordinates) at a minimal learning curve.
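For reference, a sketch of the sequential Gauss-Seidel iteration being accelerated is shown below; its in-place use of freshly updated unknowns is what makes a naive GPU port non-trivial and motivates the memory- and communication-balancing variants discussed in the paper. The toy system is illustrative.

```python
import numpy as np

def gauss_seidel(A, b, iters=100):
    """Sequential Gauss-Seidel sweep: each unknown is updated using the most
    recently computed values of the others (the serial dependence)."""
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]     # sum over j != i of A[i,j]*x[j]
            x[i] = (b[i] - s) / A[i, i]
    return x

# Diagonally dominant toy system (convergence guaranteed)
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(gauss_seidel(A, b), np.linalg.solve(A, b))   # the two should agree
```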
NASA Astrophysics Data System (ADS)
Zhevnerchuk, D. V.; Surkova, A. S.; Lomakina, L. S.; Golubev, A. S.
2018-05-01
The article describes the component representation approach and semantic models for protecting on-board electronics from ionizing radiation of various kinds. Semantic models are constructed whose distinguishing feature is the representation of electronic elements, protection modules, and sources of impact as blocks with interfaces. Rules of logical inference and algorithms are developed for synthesizing the object properties of the semantic network, imitating the interfaces between the components of the protection system and the radiation sources. The results of the algorithm are illustrated using the radiation-resistant microcircuits 1645RU5U and 1645RT2U and a combined computational-experimental method for estimating the durability of on-board electronics.
Three-dimensional monochromatic x-ray computed tomography using synchrotron radiation
NASA Astrophysics Data System (ADS)
Saito, Tsuneo; Kudo, Hiroyuki; Takeda, Tohoru; Itai, Yuji; Tokumori, Kenji; Toyofuku, Fukai; Hyodo, Kazuyuki; Ando, Masami; Nishimura, Katsuyuki; Uyama, Chikao
1998-08-01
We describe a technique of 3D computed tomography (3D CT) using monochromatic x rays generated by synchrotron radiation, which performs a direct reconstruction of a 3D volume image of an object from its cone-beam projections. For the development, we propose a practical scanning orbit of the x-ray source to obtain complete 3D information on an object, and its corresponding 3D image reconstruction algorithm. The validity and usefulness of the proposed scanning orbit and reconstruction algorithm were confirmed by computer simulation studies. Based on these investigations, we have developed a prototype 3D monochromatic x-ray CT using synchrotron radiation, which provides exact 3D reconstruction and material-selective imaging by using the K-edge energy subtraction technique.
Sze, Sing-Hoi; Parrott, Jonathan J; Tarone, Aaron M
2017-12-06
While the continued development of high-throughput sequencing has facilitated studies of entire transcriptomes in non-model organisms, the incorporation of an increasing number of RNA-Seq libraries has made de novo transcriptome assembly difficult. Although algorithms that can assemble a large amount of RNA-Seq data are available, they are generally very memory-intensive and can only be used to construct small assemblies. We develop a divide-and-conquer strategy that allows these algorithms to be utilized, by subdividing a large RNA-Seq data set into small libraries. Each individual library is assembled independently by an existing algorithm, and a merging algorithm is developed to combine these assemblies by picking a subset of high-quality transcripts to form a large transcriptome. When compared to existing algorithms that return a single assembly directly, this strategy achieves comparable or better accuracy than memory-efficient algorithms that can process large amounts of RNA-Seq data, and comparable or slightly lower accuracy than memory-intensive algorithms that can only be used to construct small assemblies. Our divide-and-conquer strategy allows memory-intensive de novo transcriptome assembly algorithms to be utilized to construct large assemblies.
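A toy sketch of the merging idea, under the assumption that transcripts sharing a seed k-mer are redundant and only the longest is kept. Real merging algorithms compare full k-mer sets and quality scores, so this is illustrative only; the function name and cluster key are hypothetical.

```python
def merge_assemblies(assemblies, k=25):
    """Greedy merge: across sub-assemblies, keep the longest transcript among
    those sharing a seed k-mer (a crude stand-in for redundancy clustering)."""
    best = {}  # seed k-mer -> best transcript seen so far
    for transcripts in assemblies:
        for t in transcripts:
            if len(t) < k:
                continue
            seed = t[:k]  # real tools compare full k-mer content, not a prefix
            if seed not in best or len(t) > len(best[seed]):
                best[seed] = t
    return list(best.values())

subs = [["ATG" * 30, "GATTACA" * 10], ["ATG" * 40]]
print([len(t) for t in merge_assemblies(subs)])  # [120, 70]: longer ATG repeat wins
```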
Utility-preserving anonymization for health data publishing.
Lee, Hyukki; Kim, Soohyung; Kim, Jong Wook; Chung, Yon Dohn
2017-07-11
Publishing raw electronic health records (EHRs) may be considered a breach of individual privacy because they usually contain sensitive information. A common practice for privacy-preserving data publishing is to anonymize the data before publishing so that they satisfy privacy models such as k-anonymity. Among various anonymization techniques, generalization is the most commonly used in medical/health data processing. Generalization inevitably causes information loss, and various methods have been proposed to reduce it. However, existing generalization-based anonymization methods cannot avoid excessive information loss and thus fail to preserve data utility. We propose a utility-preserving anonymization method for privacy-preserving data publishing (PPDP). To preserve data utility, the proposed method comprises three parts: (1) a utility-preserving model, (2) counterfeit record insertion, and (3) a catalog of the counterfeit records. We also propose an anonymization algorithm using the proposed method, which applies a full-domain generalization algorithm. We evaluate our method against an existing method on two aspects: information loss measured through various quality metrics, and the error rate of analysis results. For all quality metrics considered, our proposed method shows lower information loss than the existing method. In real-world EHR analysis, the results show only a small error between the data anonymized by the proposed method and the original data. Through experiments on various datasets, we show that the utility of EHRs anonymized by the proposed method is significantly better than that of records anonymized by previous approaches.
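A minimal sketch of the full-domain generalization step the method builds on: every record's quasi-identifier is coarsened by the same level until each group reaches size k. The attributes, hierarchy widths, and function names here are hypothetical.

```python
from collections import Counter

def generalize_age(age, level):
    """Full-domain generalization: map age to ever-coarser intervals."""
    width = [1, 5, 10, 25][level]  # hypothetical generalization hierarchy
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}" if width > 1 else str(age)

def anonymize(records, k):
    """Raise the generalization level for the whole domain until every
    quasi-identifier group has at least k records (full-domain scheme)."""
    for level in range(4):
        groups = Counter((generalize_age(r["age"], level), r["zip"][:3])
                         for r in records)
        if min(groups.values()) >= k:
            return level
    return None

data = [{"age": a, "zip": "13053"} for a in (23, 24, 27, 41, 44, 46)]
print(anonymize(data, k=3))  # level 2: decade-wide age bins satisfy 3-anonymity
```

The paper's contribution layers counterfeit records and a catalog on top of this baseline, precisely to avoid over-generalizing in cases like the one above.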
Atmospheric radiance interpolation for the modeling of hyperspectral data
NASA Astrophysics Data System (ADS)
Fuehrer, Perry; Healey, Glenn; Rauch, Brian; Slater, David; Ratkowski, Anthony
2008-04-01
The calibration of data from hyperspectral sensors to spectral radiance enables the use of physical models to predict measured spectra. Since environmental conditions are often unknown, material detection algorithms have emerged that utilize predicted spectra over ranges of environmental conditions. The predicted spectra are typically generated by a radiative transfer (RT) code such as MODTRAN. Such techniques require the specification of a set of environmental conditions. This is particularly challenging in the LWIR, for which temperature and atmospheric constituent profiles are required as inputs to the RT codes. We have developed an automated method for generating environmental conditions to obtain a desired sampling of spectra in the sensor radiance domain. Because sensor radiance spectra depend nonlinearly on the environmental parameters, specifying model conditions by a uniform sampling of those parameters causes well-known problems; our method eliminates them. It uses an initial set of radiance vectors concatenated over a set of conditions to define the mapping from environmental conditions to sensor spectral radiance. This approach enables a given number of model conditions to span the space of desired radiance spectra and improves both the accuracy and efficiency of detection algorithms that rely upon predicted spectra.
Ground-Based Correction of Remote-Sensing Spectral Imagery
NASA Technical Reports Server (NTRS)
Alder-Golden, Steven M.; Rochford, Peter; Matthew, Michael; Berk, Alexander
2007-01-01
Software has been developed for an improved method of correcting for atmospheric optical effects (primarily, effects of aerosols and water vapor) in spectral images of the surface of the Earth acquired by airborne and spaceborne remote-sensing instruments. In this method, the variables needed for the corrections are extracted from the readings of a radiometer located on the ground in the vicinity of the scene of interest. The software includes algorithms that analyze measurement data acquired from a shadow-band radiometer. These algorithms are based on a prior radiation transport software model, called MODTRAN, that has been developed through several versions up to what are now known as MODTRAN4 and MODTRAN5. These components have been integrated with a user-friendly Interactive Data Language (IDL) front end and an advanced version of MODTRAN4. Software tools for handling general data formats, performing a Langley-type calibration, and generating an output file of retrieved atmospheric parameters for use in another atmospheric-correction computer program known as FLAASH have also been incorporated into the present software. Concomitantly with the software described thus far, there has been developed a version of FLAASH that utilizes the retrieved atmospheric parameters to process spectral image data.
Utilizing Machine Learning for Analysis of Tiara for Texas
NASA Astrophysics Data System (ADS)
van Slycke, Jacqueline; Christian, Greg, Dr.
2017-09-01
The Tiara for Texas detector at Texas A&M University consists of a target chamber housing an array of silicon detectors, surrounded by four high-purity germanium clovers that generate voltage pulses proportional to detected gamma-ray energies. While some radiation is fully absorbed in a single photopeak event, other gamma rays undergo Compton scattering between detectors. This process is thoroughly simulated in GEANT4. Machine learning with scikit-learn allows scattered photons to be reconstructed to the original energy of the incident gamma ray. In a given simulation, a defined number of rays are emitted from the source; each ray is marked as an event and its path is tracked. Scikit-learn uses a subset of the events' paths to train an algorithm that recognizes which events should be summed to reconstruct the full gamma-ray energy, and the remaining events to test the algorithm. These predictions are not exact, but were analyzed to understand the discrepancies and increase the effectiveness of the simulation. The results from this research project compare various machine learning techniques to determine which methods should be expanded on in the future. National Science Foundation Grant PHY-1659847 and United States Department of Energy Grant DE-FG02-93ER40773.
The role of advanced reconstruction algorithms in cardiac CT
Halliburton, Sandra S.; Tanabe, Yuki; Partovi, Sasan
2017-01-01
Non-linear iterative reconstruction (IR) algorithms have been increasingly incorporated into clinical cardiac CT protocols at institutions around the world. Multiple IR algorithms are available commercially from various vendors. IR algorithms decrease image noise and are primarily used to enable lower radiation dose protocols. IR can also be used to improve image quality for imaging of obese patients, coronary atherosclerotic plaques, coronary stents, and myocardial perfusion. In this article, we will review the various applications of IR algorithms in cardiac imaging and evaluate how they have changed practice. PMID:29255694
A hybrid SVM-FFA method for prediction of monthly mean global solar radiation
NASA Astrophysics Data System (ADS)
Shamshirband, Shahaboddin; Mohammadi, Kasra; Tong, Chong Wen; Zamani, Mazdak; Motamedi, Shervin; Ch, Sudheer
2016-07-01
In this study, a hybrid support vector machine-firefly optimization algorithm (SVM-FFA) model is proposed to estimate monthly mean horizontal global solar radiation (HGSR). The merit of SVM-FFA is assessed statistically by comparing its performance with three previously used approaches. Using each approach and long-term measured HGSR, three models are calibrated by considering different sets of meteorological parameters measured for Bandar Abbass, situated in Iran. It is found that model (3), utilizing the combination of relative sunshine duration, difference between maximum and minimum temperatures, relative humidity, water vapor pressure, average temperature, and extraterrestrial solar radiation, shows superior performance with all approaches. Moreover, extraterrestrial radiation is identified as a significant parameter for accurate estimation of global solar radiation. The survey results reveal that the developed SVM-FFA approach is highly capable of providing favorable predictions with significantly higher precision than the other examined techniques. For SVM-FFA (3), the statistical indicators of mean absolute percentage error (MAPE), root mean square error (RMSE), relative root mean square error (RRMSE), and coefficient of determination (R²) are 3.3252%, 0.1859 kWh/m², 3.7350%, and 0.9737, respectively; according to the RRMSE criterion, this constitutes excellent performance. As a further evaluation of SVM-FFA (3), the ratio of estimated to measured values is computed; for 47 out of 48 months of testing data, it falls between 0.90 and 1.10. A further verification also concludes that SVM-FFA (3) offers absolute superiority over empirical models using relatively similar input parameters. In summary, the hybrid SVM-FFA approach is highly efficient for estimating HGSR.
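A minimal firefly-algorithm sketch in the spirit of the FFA component: dimmer fireflies move toward brighter ones with distance-attenuated steps plus a small random walk. The quadratic objective is a hypothetical stand-in for a cross-validated SVM error surface over hyperparameters such as (log C, log gamma); this is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def firefly_minimize(f, bounds, n=20, iters=100, beta0=1.0, gamma=1.0, alpha=0.2):
    """Minimal firefly algorithm: each fly moves toward every brighter
    (lower-cost) fly, with attraction decaying in squared distance."""
    lo, hi = np.array(bounds).T
    X = rng.uniform(lo, hi, size=(n, len(lo)))
    cost = np.array([f(x) for x in X])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:                 # j is "brighter"
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=len(lo))
                    X[i] = np.clip(X[i], lo, hi)
                    cost[i] = f(X[i])
    best = np.argmin(cost)
    return X[best], cost[best]

# Toy stand-in for a validation-error surface over (log C, log gamma).
objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
print(firefly_minimize(objective, bounds=[(-5, 5), (-5, 5)]))
```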
Report on the PWR-radiation protection/ALARA Committee
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malone, D.J.
1995-03-01
In 1992, representatives from several utilities with operational pressurized water reactors (PWRs) formed the PWR Radiation Protection/ALARA Committee. The mission of the Committee is to facilitate open communications between member utilities on radiation protection and ALARA issues so that cost-effective dose reduction and radiation protection measures may be instituted. While industry deregulation appears inevitable and inter-utility competition is on the rise, Committee members are fully committed to sharing both positive and negative experiences for the benefit of the health and safety of the radiation worker. Committee meetings provide current operational experience through members' plant status reports, and information on programmatic improvements through member presentations and topic-specific workshops. The most recent Committee workshop provided members with documented experiences supporting cost-effective ALARA performance.
Data Fusion for a Vision-Radiological System for Source Tracking and Discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enqvist, Andreas; Koppal, Sanjeev
2015-07-01
A multidisciplinary approach to tracking the movement of radioactive sources by fusing data from multiple radiological and visual sensors is under development. The goal is to improve the ability to detect, locate, track and identify nuclear/radiological threats. The key concept is that widely available visual and depth sensors can improve radiological detection, since the intensity fall-off in the count rate can be correlated to movement in three dimensions. To enable this, we pose an important question: what is the right combination of sensing modalities and vision algorithms that can best complement a radiological sensor for the purpose of detection and tracking of radioactive material? Similarly, what are the best radiation detection methods and unfolding algorithms suited for data fusion with tracking data? Data fusion of multi-sensor data for radiation detection has seen some interesting developments lately. Significant examples include intelligent radiation sensor systems (IRSS), which are based on large numbers of distributed, similar or identical radiation sensors coupled with position data, forming networks capable of detecting and locating radiation sources. Other developments are gamma-ray imaging systems based on Compton scatter in segmented detector arrays. Similar developments using coded apertures or scatter cameras for neutrons have recently occurred. The main limitation of such systems is not so much their capability but rather their complexity and cost, which are prohibitive for large-scale deployment. Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of two separate calibration algorithms for characterizing the fused sensor system. The deviation from a simple inverse-square fall-off of radiation intensity is explored and accounted for. In particular, the computer vision system enables a map of the distance dependence of the sources being tracked. Infrared, laser or stereoscopic vision sensors are all options for the computer-vision implementation, depending on interior vs. exterior deployment, desired resolution and other factors. Similarly, the radiation sensors focus on gamma-ray or neutron detection due to their long travel lengths and ability to penetrate even moderate shielding. There is a significant difference between vision sensors and radiation sensors in the way the 'source' or signal is generated. A vision sensor needs an external light source to illuminate the object and then detects the re-emitted illumination (or lack thereof). For a radiation detector, however, the radioactive material is the source itself. The only exception is the field of active interrogation, where radiation is beamed into a material to induce radiation emission beyond what the material would emit spontaneously. Because the nuclear material is the source itself, all other objects in the environment are 'illuminated' or irradiated by it. Most radiation will readily penetrate regular material, scatter in new directions or be absorbed. Thus, if a radiation source is located near a larger object, that object will in turn scatter some radiation initially emitted in directions other than that of the radiation detector, which can add to the observed count rate.
The effect of this scatter is a deviation from the traditional distance dependence of the radiation signal, and it is a key challenge that requires a combined system-calibration solution and algorithms. Thus, both an algebraic approach and a statistical approach have been developed and independently evaluated to investigate the sensitivity to this deviation from the simplified radiation fall-off as a function of distance. The resulting calibrated system algorithms are demonstrated in various laboratory scenarios and, later, in realistic tracking scenarios. The selection and testing of radiological and computer-vision sensors for additional specific scenarios will be the subject of ongoing and future work. (authors)
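A sketch of the algebraic calibration idea under a simple assumed model: the observed count rate is an inverse-square source term plus a scatter pedestal, and the two coefficients are recovered by linear least squares. All numbers are synthetic.

```python
import numpy as np

# Synthetic count rates: inverse-square source term plus a scatter pedestal
# (the deviation from pure 1/d^2 that the calibration must absorb).
rng = np.random.default_rng(1)
d = np.linspace(1.0, 8.0, 30)                  # source-detector distance, m
true_A, true_B = 500.0, 12.0
counts = true_A / d**2 + true_B + rng.normal(0, 2.0, d.size)

# Algebraic approach: linear least squares in the basis [1/d^2, 1].
G = np.column_stack([1.0 / d**2, np.ones_like(d)])
A_fit, B_fit = np.linalg.lstsq(G, counts, rcond=None)[0]
print(f"A = {A_fit:.1f} (true 500), scatter pedestal B = {B_fit:.1f} (true 12)")
```

A statistical alternative would place a likelihood on the counts (e.g., Poisson) and infer the same two parameters; the comparison of the two routes mirrors the paper's dual evaluation.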
Effect of Thin Cirrus Clouds on Dust Optical Depth Retrievals From MODIS Observations
NASA Technical Reports Server (NTRS)
Feng, Qian; Hsu, N. Christina; Yang, Ping; Tsay, Si-Chee
2011-01-01
The effect of thin cirrus clouds on retrieving the dust optical depth from MODIS observations is investigated by using a simplified aerosol retrieval algorithm based on the principles of the Deep Blue aerosol property retrieval method. Specifically, the errors in the retrieved dust optical depth due to thin cirrus contamination are quantified through the comparison of two retrievals, one assuming dust-only atmospheres and the counterpart with overlapping mineral dust and thin cirrus clouds. To account for the effect of the polarization state of the radiation field on the simulated radiance, a vector radiative transfer model is used to generate the lookup tables. In the forward radiative transfer simulations involved in generating the lookup tables, Rayleigh scattering by atmospheric gaseous molecules and the reflection of the surface, assumed to be Lambertian, are fully taken into account. Additionally, a spheroid model is utilized to account for the nonsphericity of dust particles in computing their optical properties. For simplicity, the single-scattering albedo, scattering phase matrix, and optical depth are specified a priori for thin cirrus clouds, assumed to consist of droxtal ice crystals. The present results indicate that the errors in the retrieved dust optical depths due to contamination by thin cirrus clouds depend on the scattering angle, underlying surface reflectance, and dust optical depth. Under heavy dust conditions, the absolute errors are comparable to the prescribed optical depths of the thin cirrus clouds.
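The retrieval step itself can be sketched as lookup-table inversion: assuming top-of-atmosphere reflectance increases monotonically with dust optical depth at fixed geometry, a 1-D interpolation suffices. The table values below are placeholders, not Deep Blue outputs.

```python
import numpy as np

# Hypothetical lookup table: TOA reflectance vs. dust optical depth at one
# fixed viewing geometry (values are illustrative stand-ins for RT output).
tau_grid = np.array([0.0, 0.2, 0.5, 1.0, 2.0, 4.0])
refl_grid = np.array([0.05, 0.08, 0.12, 0.18, 0.26, 0.33])

def retrieve_tau(measured_refl):
    """Invert the LUT by 1-D interpolation (reflectance monotone in tau)."""
    return np.interp(measured_refl, refl_grid, tau_grid)

print(retrieve_tau(0.15))  # 0.75: between the tau = 0.5 and tau = 1.0 nodes
```

Cirrus contamination in this picture simply biases the measured reflectance before inversion, which is why the induced optical-depth error tracks the cirrus optical depth.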
A novel calibration method of focused light field camera for 3-D reconstruction of flame temperature
NASA Astrophysics Data System (ADS)
Sun, Jun; Hossain, Md. Moinul; Xu, Chuan-Long; Zhang, Biao; Wang, Shi-Min
2017-05-01
This paper presents a novel geometric calibration method for a focused light field camera to trace the rays of flame radiance and to reconstruct the three-dimensional (3-D) temperature distribution of a flame. A calibration model is developed to calculate the corner points and their projections for the focused light field camera. The matching of main lens and microlens f-numbers is used as an additional constraint in the calibration. Geometric parameters of the focused light field camera are then obtained using the Levenberg-Marquardt algorithm. Totally focused images, in which all points are in focus, are utilized to validate the proposed calibration method. Calibration results are presented and discussed in detail. The maximum mean relative error of the calibration is found to be less than 0.13%, indicating that the proposed method is capable of calibrating the focused light field camera successfully. The parameters obtained by the calibration are then utilized to trace the rays of flame radiance. A least-squares QR-factorization algorithm with Planck's radiation law is used to reconstruct the 3-D temperature distribution of a flame. Experiments were carried out on an ethylene-air fired combustion test rig to reconstruct the temperature distribution of flames. The flame temperature obtained by the proposed method is then compared with that obtained using a high-precision thermocouple. The difference between the two measurements was found to be no greater than 6.7%. Experimental results demonstrate that the proposed calibration method and the applied measurement technique perform well in the reconstruction of the flame temperature.
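A hedged sketch of the Levenberg-Marquardt fitting stage, using SciPy's least_squares on a plain pinhole reprojection problem. The focused light field camera model with microlens f-number constraints is considerably richer; the parameters (f, cx, cy) and point layout here are illustrative only.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy calibration: recover focal length f and principal point (cx, cy) from
# observed image points of known 3-D corner positions (pinhole stand-in).
pts3d = np.array([[x, y, 5.0] for x in (-1, 0, 1) for y in (-1, 0, 1)], float)
true_p = np.array([800.0, 320.0, 240.0])  # f, cx, cy (hypothetical)

def project(p, X):
    f, cx, cy = p
    return np.column_stack([f * X[:, 0] / X[:, 2] + cx,
                            f * X[:, 1] / X[:, 2] + cy])

obs = project(true_p, pts3d) + np.random.default_rng(2).normal(0, 0.2, (9, 2))

def residuals(p):
    return (project(p, pts3d) - obs).ravel()

fit = least_squares(residuals, x0=[500.0, 300.0, 200.0], method="lm")
print(fit.x)  # should land close to (800, 320, 240)
```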
An Improved Elastic and Nonelastic Neutron Transport Algorithm for Space Radiation
NASA Technical Reports Server (NTRS)
Clowdsley, Martha S.; Wilson, John W.; Heinbockel, John H.; Tripathi, R. K.; Singleterry, Robert C., Jr.; Shinn, Judy L.
2000-01-01
A neutron transport algorithm including both elastic and nonelastic particle interaction processes for use in space radiation protection for arbitrary shield material is developed. The algorithm is based upon a multiple energy grouping and analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. The algorithm is then coupled to the Langley HZETRN code through a bidirectional neutron evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for an aluminum water shield-target configuration is then compared with MCNPX and LAHET Monte Carlo calculations for the same shield-target configuration. With the Monte Carlo calculation as a benchmark, the algorithm developed in this paper showed a great improvement in results over the unmodified HZETRN solution. In addition, a high-energy bidirectional neutron source based on a formula by Ranft showed even further improvement of the fluence results over previous results near the front of the water target where diffusion out the front surface is important. Effects of improved interaction cross sections are modest compared with the addition of the high-energy bidirectional source terms.
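A toy multigroup, straight-ahead marching sketch under assumed cross sections: fluence in each energy group is attenuated with depth while a downscatter matrix feeds lower-energy groups. This illustrates the multigroup bookkeeping only, not HZETRN's mean-value-theorem treatment or the bidirectional evaporation source.

```python
import numpy as np

# Illustrative numbers only: 4 groups marched through shield depth.
n_groups, n_steps, dx = 4, 200, 0.5            # groups, steps, depth per step
sigma_t = np.array([0.08, 0.06, 0.05, 0.04])   # total removal per unit depth
downscatter = np.zeros((n_groups, n_groups))
for g in range(n_groups - 1):
    # Assume half of each removal reappears one group lower in energy.
    downscatter[g + 1, g] = 0.5 * sigma_t[g]

phi = np.zeros(n_groups)
phi[0] = 1.0                                   # incident beam in the top group
for _ in range(n_steps):
    scatter_in = downscatter @ phi
    phi = phi + dx * (-sigma_t * phi + scatter_in)  # explicit depth march
print(phi)  # top group attenuated, lower groups built up by downscatter
```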
Fast and stable algorithms for computing the principal square root of a complex matrix
NASA Technical Reports Server (NTRS)
Shieh, Leang S.; Lian, Sui R.; Mcinnis, Bayliss C.
1987-01-01
This note presents recursive algorithms that are rapidly convergent and more stable for finding the principal square root of a complex matrix. Also, the developed algorithms are utilized to derive the fast and stable matrix sign algorithms which are useful in developing applications to control system problems.
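The note's own recursions are not reproduced here, but the flavor can be shown with the classic Denman-Beavers coupled iteration for the principal matrix square root, a standard stable alternative to naive Newton iteration; the matrix-sign function mentioned above is obtainable from closely related recursions.

```python
import numpy as np

def sqrtm_denman_beavers(A, iters=50, tol=1e-12):
    """Denman-Beavers iteration: Y <- (Y + Z^-1)/2, Z <- (Z + Y^-1)/2,
    with Y -> A^(1/2) and Z -> A^(-1/2). Both updates use the old iterates."""
    Y = A.astype(complex)
    Z = np.eye(A.shape[0], dtype=complex)
    for _ in range(iters):
        Y_next = 0.5 * (Y + np.linalg.inv(Z))
        Z = 0.5 * (Z + np.linalg.inv(Y))       # uses the old Y, as required
        if np.linalg.norm(Y_next - Y) < tol * np.linalg.norm(Y):
            Y = Y_next
            break
        Y = Y_next
    return Y

A = np.array([[4.0, 1.0], [0.0, 9.0]])
S = sqrtm_denman_beavers(A)
print(np.allclose(S @ S, A))  # True: S is the principal square root
```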
Operation of a voltage source converter at increased utility voltage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaura, V.; Blasko, V.
1997-01-01
The operation of a voltage source converter (VSC) with regeneration capability, controllable power factor, and low distortion of utility currents is analyzed at increased utility voltage. Increase in the utility voltage causes a VSC to saturate and enter a nonlinear mode of operation. To operate under elevated utility, two steps are taken: (1) a pulse width modulation (PWM) algorithm is implemented which extends the linear region of operation by 15% and (2) a PWM saturation regulator is used to control the reactive current at higher utility voltages. The PWM algorithm reduces the switching losses by at least 33% and themore » effect of blanking time by one-third. All analytical results are experimentally verified on a 100 kW three-phase VSC.« less
NASA Astrophysics Data System (ADS)
Agjee, Na'eem Hoosen; Ismail, Riyad; Mutanga, Onisimo
2016-10-01
Water hyacinth plants (Eichhornia crassipes) are threatening freshwater ecosystems throughout Africa. The Neochetina spp. weevils are seen as an effective solution that can combat the proliferation of the invasive alien plant. We aimed to determine if multitemporal hyperspectral data could be utilized to detect the efficacy of the biocontrol agent. The random forest (RF) algorithm was used to classify variable infestation levels for 6 weeks using: (1) all the hyperspectral bands, (2) bands selected by the recursive feature elimination (RFE) algorithm, and (3) bands selected by the Boruta algorithm. Results showed that the RF model using all the bands successfully produced low-classification errors (12.50% to 32.29%) for all 6 weeks. However, the RF model using Boruta selected bands produced lower classification errors (8.33% to 15.62%) than the RF model using all the bands or bands selected by the RFE algorithm (11.25% to 21.25%) for all 6 weeks, highlighting the utility of Boruta as an all relevant band selection algorithm. All relevant bands selected by Boruta included: 352, 754, 770, 771, 775, 781, 782, 783, 786, and 789 nm. It was concluded that RF coupled with Boruta band-selection algorithm can be utilized to undertake multitemporal monitoring of variable infestation levels on water hyacinth plants.
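A sketch of the band-selection-then-classify workflow. True Boruta compares feature importances against shuffled "shadow" bands; here plain random forest importance ranking stands in, on synthetic spectra where only two bands carry class signal.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
# Synthetic "spectra": 100 bands, only bands 10 and 40 carry class signal.
X = rng.normal(size=(240, 100))
y = (X[:, 10] + X[:, 40] > 0).astype(int)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
ranked = np.argsort(rf.feature_importances_)[::-1]
print("top bands:", ranked[:5])  # bands 10 and 40 should dominate

# Refit on the selected bands only, as the Boruta-style workflow would.
rf_sel = RandomForestClassifier(n_estimators=300, random_state=0)
rf_sel.fit(X[:, ranked[:5]], y)
print("training accuracy on selected bands:", rf_sel.score(X[:, ranked[:5]], y))
```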
Brunner, Stephen; Nett, Brian E; Tolakanahalli, Ranjini; Chen, Guang-Hong
2011-02-21
X-ray scatter is a significant problem in cone-beam computed tomography when thicker objects and larger cone angles are used, as scattered radiation can lead to reduced contrast and CT number inaccuracy. Advances have been made in x-ray computed tomography (CT) by incorporating a high quality prior image into the image reconstruction process. In this paper, we extend this idea to correct scatter-induced shading artifacts in cone-beam CT image-guided radiation therapy. Specifically, this paper presents a new scatter correction algorithm which uses a prior image with low scatter artifacts to reduce shading artifacts in cone-beam CT images acquired under conditions of high scatter. The proposed correction algorithm begins with an empirical hypothesis that the target image can be written as a weighted summation of a series of basis images that are generated by raising the raw cone-beam projection data to different powers and then reconstructing them using the standard filtered backprojection algorithm. The weight for each basis image is calculated by minimizing the difference between the target image and the prior image. The performance of the scatter correction algorithm is qualitatively and quantitatively evaluated through phantom studies using a Varian 2100 EX System with an on-board imager. Results show that the proposed scatter correction algorithm using a prior image with low scatter artifacts can substantially mitigate scatter-induced shading artifacts in both full-fan and half-fan modes.
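The paper's hypothesis can be sketched directly: build basis images by reconstructing the projection data raised to several powers, then solve for the weights that best match the low-scatter prior. The fbp() below is a linear placeholder, not a real filtered backprojection, and the power set is illustrative.

```python
import numpy as np

def fbp(projections):
    """Stand-in for filtered backprojection; here just a linear placeholder."""
    return projections.mean(axis=0)

# Hypothetical raw projection data (n_views x n_pixels) and low-scatter prior.
rng = np.random.default_rng(4)
proj = rng.uniform(0.5, 1.5, size=(60, 128))
prior = fbp(proj) ** 1.2            # pretend prior image with low scatter

# Basis images: reconstructions of the projection data raised to powers k.
powers = [0.5, 1.0, 1.5, 2.0]
basis = np.stack([fbp(proj ** k) for k in powers], axis=-1)  # (pixels, n_basis)

# Weights minimize || basis @ w - prior ||_2, per the paper's hypothesis.
w, *_ = np.linalg.lstsq(basis, prior, rcond=None)
corrected = basis @ w
print("residual norm:", np.linalg.norm(corrected - prior))
```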
A method for optimizing the cosine response of solar UV diffusers
NASA Astrophysics Data System (ADS)
Pulli, Tomi; Kärhä, Petri; Ikonen, Erkki
2013-07-01
Instruments measuring global solar ultraviolet (UV) irradiance at the surface of the Earth need to collect radiation from the entire hemisphere. Entrance optics with angular response as close as possible to the ideal cosine response are necessary to perform these measurements accurately. Typically, the cosine response is obtained using a transmitting diffuser. We have developed an efficient method based on a Monte Carlo algorithm to simulate radiation transport in the solar UV diffuser assembly. The algorithm takes into account propagation, absorption, and scattering of the radiation inside the diffuser material. The effects of the inner sidewalls of the diffuser housing, the shadow ring, and the protective weather dome are also accounted for. The software implementation of the algorithm is highly optimized: a simulation of 10^9 photons takes approximately 10 to 15 min to complete on a typical high-end PC. The results of the simulations agree well with the measured angular responses, indicating that the algorithm can be used to guide the diffuser design process. Cost savings can be obtained when simulations are carried out before diffuser fabrication as compared to a purely trial-and-error-based diffuser optimization. The algorithm was used to optimize two types of detectors, one with a planar diffuser and the other with a spherically shaped diffuser. The integrated cosine errors—which indicate the relative measurement error caused by the nonideal angular response under isotropic sky radiance—of these two detectors were calculated to be f2 = 1.4% and 0.66%, respectively.
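A minimal 1-D Monte Carlo sketch of the photon-transport idea: exponential free paths, a per-collision choice between absorption and isotropic rescattering, and a transmitted-fraction tally. Real diffuser simulations add 3-D geometry, sidewalls, the shadow ring, the dome, and anisotropic phase functions; all coefficients below are assumed.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_photons(n, mu_a=0.5, mu_s=5.0, thickness=0.1):
    """1-D slab Monte Carlo: sample exponential free paths, decide absorption
    vs. isotropic scattering at each collision, tally transmission."""
    mu_t = mu_a + mu_s
    transmitted = 0
    for _ in range(n):
        z, cos_t = 0.0, 1.0                     # depth and direction cosine
        while True:
            z += cos_t * rng.exponential(1.0 / mu_t)
            if z >= thickness:
                transmitted += 1                # exits the back face
                break
            if z < 0.0:
                break                           # escapes back out the front
            if rng.random() < mu_a / mu_t:
                break                           # absorbed at this collision
            cos_t = rng.uniform(-1.0, 1.0)      # isotropic rescatter
    return transmitted / n

print(simulate_photons(20_000))
```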
A nonvoxel-based dose convolution/superposition algorithm optimized for scalable GPU architectures.
Neylon, J; Sheng, K; Yu, V; Chen, Q; Low, D A; Kupelian, P; Santhanam, A
2014-10-01
Real-time adaptive planning and treatment has been infeasible due in part to its high computational complexity. There have been many recent efforts to utilize graphics processing units (GPUs) to accelerate the computational performance and dose accuracy in radiation therapy. Data structure and memory access patterns are the key GPU factors that determine the computational performance and accuracy. In this paper, the authors present a nonvoxel-based (NVB) approach to maximize computational and memory access efficiency and throughput on the GPU. The proposed algorithm employs a ray-tracing mechanism to restructure the 3D data sets computed from the CT anatomy into a nonvoxel-based framework. In a process that takes only a few milliseconds of computing time, the algorithm restructured the data sets by ray-tracing through precalculated CT volumes to realign the coordinate system along the convolution direction, as defined by zenithal and azimuthal angles. During the ray-tracing step, the data were resampled according to radial sampling and parallel ray-spacing parameters making the algorithm independent of the original CT resolution. The nonvoxel-based algorithm presented in this paper also demonstrated a trade-off in computational performance and dose accuracy for different coordinate system configurations. In order to find the best balance between the computed speedup and the accuracy, the authors employed an exhaustive parameter search on all sampling parameters that defined the coordinate system configuration: zenithal, azimuthal, and radial sampling of the convolution algorithm, as well as the parallel ray spacing during ray tracing. The angular sampling parameters were varied between 4 and 48 discrete angles, while both radial sampling and parallel ray spacing were varied from 0.5 to 10 mm. The gamma distribution analysis method (γ) was used to compare the dose distributions using 2% and 2 mm dose difference and distance-to-agreement criteria, respectively. Accuracy was investigated using three distinct phantoms with varied geometries and heterogeneities and on a series of 14 segmented lung CT data sets. Performance gains were calculated using three 256 mm cube homogenous water phantoms, with isotropic voxel dimensions of 1, 2, and 4 mm. The nonvoxel-based GPU algorithm was independent of the data size and provided significant computational gains over the CPU algorithm for large CT data sizes. The parameter search analysis also showed that the ray combination of 8 zenithal and 8 azimuthal angles along with 1 mm radial sampling and 2 mm parallel ray spacing maintained dose accuracy with greater than 99% of voxels passing the γ test. Combining the acceleration obtained from GPU parallelization with the sampling optimization, the authors achieved a total performance improvement factor of >175 000 when compared to our voxel-based ground truth CPU benchmark and a factor of 20 compared with a voxel-based GPU dose convolution method. The nonvoxel-based convolution method yielded substantial performance improvements over a generic GPU implementation, while maintaining accuracy as compared to a CPU computed ground truth dose distribution. Such an algorithm can be a key contribution toward developing tools for adaptive radiation therapy systems.
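The 2%/2 mm gamma test used above for validation can be sketched in one dimension: for each reference point, minimize the combined dose-difference/distance-to-agreement metric over the test profile. Global normalization to the reference maximum is assumed; the profiles are synthetic.

```python
import numpy as np

def gamma_index(x, ref, test, dd=0.02, dta=2.0):
    """1-D gamma analysis: for each reference point, minimize the combined
    dose-difference / distance-to-agreement metric over all test points."""
    gammas = np.empty(ref.size)
    for i, (xi, di) in enumerate(zip(x, ref)):
        dose_term = (test - di) / (dd * ref.max())  # global 2% normalization
        dist_term = (x - xi) / dta                  # 2 mm DTA criterion
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gammas

x = np.linspace(0, 100, 201)                # position, mm
ref = np.exp(-((x - 50) / 15) ** 2)         # reference dose profile
test = np.exp(-((x - 50.5) / 15) ** 2)      # test profile, shifted 0.5 mm
g = gamma_index(x, ref, test)
print(f"pass rate: {100 * np.mean(g <= 1.0):.1f}%")  # near 100% for this shift
```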
Veselov, E I
2011-01-01
The article specifies a systemic approach to the ecological safety of radiation-hazardous facilities. The authors present the stages of work and an algorithm for deciding how to preserve the reliability of storage for radiation-hazardous waste. The findings are that ecological safety can be provided by three approaches: complete removal of the radiation-hazardous waste, removal of the more dangerous waste from the existing buildings, or increasing the reliability of long-term localization of the radiation-hazardous waste in its original place. The systemic approach presented could be implemented at various radiation-hazardous facilities.
Alamaniotis, Miltiadis; Tsoukalas, Lefteri H.
2018-01-01
The analysis of measured data plays a significant role in enhancing nuclear nonproliferation, mainly by inferring the presence of patterns associated with special nuclear materials. Among various types of measurements, gamma-ray spectra are the most widely utilized type of data in nonproliferation applications. In this paper, a method that employs the fireworks algorithm (FWA) for analyzing gamma-ray spectra, aimed at detecting gamma signatures, is presented. In particular, FWA is utilized to fit a set of known signatures to a measured spectrum by optimizing an objective function, where non-zero coefficients indicate the detected signatures. FWA is tested on a set of experimentally obtained measurements optimizing various objective functions—MSE, RMSE, Theil-2, MAE, MAPE, MAP—with results exhibiting its potential to provide highly accurate and precise signature detection. Furthermore, FWA is benchmarked against genetic algorithms and multiple linear regression, showing its superiority over those algorithms regarding precision with respect to the MAE, MAPE, and MAP measures.
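The underlying fitting problem can be sketched without FWA: with non-negativity as the natural constraint on mixing coefficients, non-negative least squares recovers which library signatures are present. The three Gaussian "signatures" below are hypothetical stand-ins, and NNLS is shown as a baseline for the same objective, not as the paper's optimizer.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(10)
channels = np.arange(64)

def peak(center, width):
    """Hypothetical Gaussian photopeak over 64 energy channels."""
    return np.exp(-0.5 * ((channels - center) / width) ** 2)

# Library of three known signatures (columns of the design matrix).
library = np.column_stack([peak(12, 2), peak(30, 3), peak(50, 2)])

# Measured spectrum: a mix of signatures 0 and 2 plus counting noise.
spectrum = 5.0 * library[:, 0] + 2.0 * library[:, 2] + rng.normal(0, 0.05, 64)

coeffs, _ = nnls(library, spectrum)  # non-zero coefficients = detections
print(np.round(coeffs, 2))           # approximately [5.0, 0.0, 2.0]
```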
LLNL Mercury Project Trinity Open Science Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brantley, Patrick; Dawson, Shawn; McKinley, Scott
2016-04-20
The Mercury Monte Carlo particle transport code developed at Lawrence Livermore National Laboratory (LLNL) is used to simulate the transport of radiation through urban environments. These challenging calculations include complicated geometries and require significant computational resources to complete. As a result, a question arises as to the level of convergence of the calculations with Monte Carlo simulation particle count. In the Trinity Open Science calculations, one main focus was to investigate convergence of the relevant simulation quantities with Monte Carlo particle count to assess the current simulation methodology. Of relevance both to this application space and more generally, we also investigated the impact of code algorithms on parallel scaling on the Trinity machine, as well as the utilization of the Trinity DataWarp burst buffer technology in Mercury via the LLNL Scalable Checkpoint/Restart (SCR) library.
Wang, Shang-Jui; Hathout, Lara; Malhotra, Usha; Maloney-Patel, Nell; Kilic, Sarah; Poplin, Elizabeth; Jabbour, Salma K
2018-03-15
Rectal cancer predominantly affects patients older than 70 years, with peak incidence at age 80 to 85 years. However, the standard treatment paradigm for rectal cancer often cannot feasibly be applied to these patients owing to frailty or comorbid conditions. There is currently little information, and there are no treatment guidelines, to help direct therapy for patients who are elderly and/or have significant comorbidities, because most are not included or specifically studied in clinical trials. More recently, various alternative treatment options have come to light that may potentially be utilized in this group of patients. This critical review examines the available literature on alternative therapies for rectal cancer and proposes a treatment algorithm to help guide clinicians in treatment decision making for elderly and comorbid patients. Copyright © 2017 Elsevier Inc. All rights reserved.
Common arc method for diffraction pattern orientation.
Bortel, Gábor; Tegze, Miklós
2011-11-01
Very short pulses of X-ray free-electron lasers opened the way to obtaining diffraction signal from single particles beyond the radiation dose limit. For three-dimensional structure reconstruction many patterns are recorded in the object's unknown orientation. A method is described for the orientation of continuous diffraction patterns of non-periodic objects, utilizing intensity correlations in the curved intersections of the corresponding Ewald spheres, and hence named the common arc orientation method. The present implementation of the algorithm optionally takes into account Friedel's law, handles missing data and is capable of determining the point group of symmetric objects. Its performance is demonstrated on simulated diffraction data sets and verification of the results indicates a high orientation accuracy even at low signal levels. The common arc method fills a gap in the wide palette of orientation methods. © 2011 International Union of Crystallography
Multiagency Urban Search Experiment Detector and Algorithm Test Bed
NASA Astrophysics Data System (ADS)
Nicholson, Andrew D.; Garishvili, Irakli; Peplow, Douglas E.; Archer, Daniel E.; Ray, William R.; Swinney, Mathew W.; Willis, Michael J.; Davidson, Gregory G.; Cleveland, Steven L.; Patton, Bruce W.; Hornback, Donald E.; Peltz, James J.; McLean, M. S. Lance; Plionis, Alexander A.; Quiter, Brian J.; Bandstra, Mark S.
2017-07-01
In order to provide benchmark data sets for radiation detector and algorithm development, a particle transport test bed has been created using experimental data as model input and validation. A detailed radiation measurement campaign at the Combined Arms Collective Training Facility in Fort Indiantown Gap, PA (FTIG), USA, provides sample background radiation levels for a variety of materials present at the site (including cinder block, gravel, asphalt, and soil) using long dwell high-purity germanium (HPGe) measurements. In addition, detailed light detection and ranging data and ground-truth measurements inform model geometry. This paper describes the collected data and the application of these data to create background and injected source synthetic data for an arbitrary gamma-ray detection system using particle transport model detector response calculations and statistical sampling. In the methodology presented here, HPGe measurements inform model source terms while detector response calculations are validated via long dwell measurements using 2"×4"×16" NaI(Tl) detectors at a variety of measurement points. A collection of responses, along with sampling methods and interpolation, can be used to create data sets to gauge radiation detector and algorithm (including detection, identification, and localization) performance under a variety of scenarios. Data collected at the FTIG site are available for query, filtering, visualization, and download at muse.lbl.gov.
Novel approach to engineer strains for simultaneous sugar utilization.
Gawand, Pratish; Hyland, Patrick; Ekins, Andrew; Martin, Vincent J J; Mahadevan, Radhakrishnan
2013-11-01
Use of lignocellulosic biomass as a second-generation feedstock in the biofuels industry is a pressing challenge. Among other difficulties in using lignocellulosic biomass, one major challenge is the optimal utilization of both 6-carbon (glucose) and 5-carbon (xylose) sugars by industrial microorganisms. Most industrial microorganisms preferentially utilize glucose over xylose owing to the regulatory phenomenon of carbon catabolite repression (CCR). Microorganisms that can co-utilize glucose and xylose are of considerable interest to the biofuels industry due to their ability to simplify the fermentation processes. However, elimination of CCR in microorganisms is challenging due to the multiple coordinating mechanisms involved. We report a novel algorithm, SIMUP, which finds metabolic engineering strategies to force co-utilization of two sugars without targeting the regulatory pathways of CCR. Mutants of Escherichia coli designed with the SIMUP algorithm showed the predicted growth phenotypes and co-utilized glucose and xylose; however, they consumed the sugars more slowly than the wild type. Some solutions identified by the algorithm were based on stoichiometric imbalance and were not obvious from the metabolic network topology. Furthermore, sequencing studies of the genes involved in CCR showed that the mechanism for co-utilization of the sugars may differ from previously known mechanisms. Copyright © 2013 Elsevier Inc. All rights reserved.
Machine learning based cloud mask algorithm driven by radiative transfer modeling
NASA Astrophysics Data System (ADS)
Chen, N.; Li, W.; Tanikawa, T.; Hori, M.; Shimada, R.; Stamnes, K. H.
2017-12-01
Cloud detection is a critically important first step required to derive many satellite data products. Traditional threshold-based cloud mask algorithms require a complicated design process and fine tuning for each sensor, and have difficulty over snow/ice-covered areas. With the advance of computational power and machine learning techniques, we have developed a new algorithm based on a neural network classifier driven by extensive radiative transfer modeling. Statistical validation results obtained by using collocated CALIOP and MODIS data show that its performance is consistent over different ecosystems and significantly better than the MODIS Cloud Mask (MOD35 C6) during the winter seasons over mid-latitude snow-covered areas. Simulations using a reduced number of satellite channels also show satisfactory results, indicating its flexibility to be configured for different sensors.
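A sketch of the classifier idea, assuming a synthetic training set standing in for radiative transfer simulations: cloudy pixels are bright and spectrally flat, while snow-covered clear pixels darken sharply in a SWIR-like channel. Channel values and network size are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(6)
# Synthetic "RT-simulated" reflectances in three channels (VIS, NIR, SWIR-like).
n = 2000
cloudy = rng.normal([0.7, 0.68, 0.65], 0.05, (n, 3))   # bright, spectrally flat
clear = rng.normal([0.6, 0.35, 0.10], 0.05, (n, 3))    # snow: dark in SWIR
X = np.vstack([cloudy, clear])
y = np.r_[np.ones(n), np.zeros(n)]

clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=500, random_state=0)
clf.fit(X, y)
print("cloud probability for a bright, flat pixel:",
      clf.predict_proba([[0.70, 0.66, 0.60]])[0, 1])
```

Driving the training set from a radiative transfer model rather than labeled imagery is what makes the approach portable across sensors, as the abstract notes.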
Generalized field-splitting algorithms for optimal IMRT delivery efficiency.
Kamath, Srijit; Sahni, Sartaj; Li, Jonathan; Ranka, Sanjay; Palta, Jatinder
2007-09-21
Intensity-modulated radiation therapy (IMRT) uses radiation beams of varying intensities to deliver varying doses of radiation to different areas of the tissue. The use of IMRT has allowed the delivery of higher doses of radiation to the tumor and lower doses to the surrounding healthy tissue. It is not uncommon for head and neck tumors, for example, to have large treatment widths that are not deliverable using a single field. In such cases, the intensity matrix generated by the optimizer needs to be split into two or three matrices, each of which may be delivered using a single field. Existing field-splitting algorithms use a pre-specified arbitrary split line or region, where the intensity matrix is split along a column, i.e., all rows of the matrix are split along the same column (with or without overlapping of the split fields, i.e., feathering). If three fields result, then the two splits are along the same two columns for all rows. In this paper we study the problem of splitting a large field into two or three subfields with the field width as the only constraint, allowing for an arbitrary overlap of the split fields, so that the total MU efficiency of delivering the split fields is maximized. Proof of optimality is provided for the proposed algorithm. An average decrease of 18.8% in total MUs is found when compared to the split generated by a commercial treatment planning system, and of 10% when compared to the split generated by our previously published algorithm.
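A single-row sketch of the optimization: the MU cost of one row under sliding-window delivery is its first value plus the sum of positive increments (a standard result), and a split column is chosen to minimize the summed cost subject to the width limit. Feathering/overlap, which the paper's algorithm allows, is omitted here.

```python
import numpy as np

def row_mu(row):
    """MU needed to deliver one intensity row with a sliding window:
    first value plus the sum of positive increments."""
    row = np.asarray(row)
    return row[0] + np.maximum(np.diff(row), 0).sum() if row.size else 0

def best_split(row, max_width):
    """Try every feasible split column and keep the one with lowest total MU."""
    n = len(row)
    best = (None, np.inf)
    # Both subfields must respect the width limit.
    for c in range(max(1, n - max_width), min(n, max_width) + 1):
        mu = row_mu(row[:c]) + row_mu(row[c:])
        if mu < best[1]:
            best = (c, mu)
    return best

row = [2, 5, 3, 7, 1, 4, 6, 2]
print(best_split(row, max_width=5))  # split column and total MU, e.g. (4, 15)
```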
Ardley, Nicholas D; Lau, Ken K; Buchan, Kevin
2013-12-01
Cervical spine injuries occur in 4-8 % of adults with head trauma. A dual-acquisition technique has traditionally been used for CT scanning of the brain and cervical spine. The purpose of this study was to determine the efficacy of radiation dose reduction by using a single-acquisition technique that incorporated both anatomical regions with a dedicated neck detection algorithm. Thirty trauma patients referred for brain and cervical spine CT were included and were scanned with the single-acquisition technique. The radiation doses from the single CT acquisition with the neck detection algorithm, which allowed appropriate independent dose administration for the brain and cervical spine regions, were recorded. Comparison was made both to doses calculated from a simulation of the traditional dual acquisition with matching parameters, and to doses of the retrospective dual-acquisition legacy technique with the same sample size. The mean simulated dose for the traditional dual-acquisition technique was 3.99 mSv, comparable to the average dose of 4.2 mSv from 30 previous patients who had CT of the brain and cervical spine as dual acquisitions. The mean dose from the single-acquisition technique was 3.35 mSv, a 16 % overall dose reduction. The images from the single-acquisition technique were of excellent diagnostic quality. The new single-acquisition CT technique incorporating the neck detection algorithm for the brain and cervical spine significantly reduces the overall radiation dose by eliminating the unavoidable overlapping range between the two anatomical regions that occurs with the traditional dual-acquisition technique.
NASA Technical Reports Server (NTRS)
Smith, Eric A.; Fiorino, Steven
2002-01-01
Coordinated ground, aircraft, and satellite observations are analyzed from the 1999 TRMM Kwajalein Atoll field experiment (KWAJEX) to better understand the relationships between cloud microphysical processes and microwave radiation intensities in the context of physical evaluation of the Level 2 TRMM radiometer rain profile algorithm and uncertainties with its assumed microphysics-radiation relationships. This talk focuses on the results of a multi-dataset analysis based on measurements from KWAJEX surface, air, and satellite platforms to test the hypothesis that uncertainties in the passive microwave radiometer algorithm (TMI 2a12 in the nomenclature of TRMM) are systematically coupled and correlated with the magnitudes of deviation of the assumed 3-dimensional microphysical properties from observed microphysical properties. Restated, this study focuses on identifying the weaknesses of the operational TRMM 2a12 radiometer algorithm, based on observed microphysics and radiation data, in terms of over-simplifications in its theoretical microphysical underpinnings. The analysis makes use of a common transform coordinate system derived from the measuring capabilities of the aircraft radiometer used to survey the experimental study area, i.e., the 4-channel AMPR radiometer flown on the NASA DC-8 aircraft. Normalized emission and scattering indices derived from radiometer brightness temperatures at the four measuring frequencies enable a 2-dimensional coordinate system that facilitates compositing of Kwajalein S-band ground radar reflectivities, ARMAR Ku-band aircraft radar reflectivities, TMI spacecraft radiometer brightness temperatures, PR Ku-band spacecraft radar reflectivities, bulk microphysical parameters derived from the aircraft-mounted cloud microphysics laser probes (including liquid/ice water contents, effective liquid/ice hydrometeor radii, and effective liquid/ice hydrometeor variances), and rainrates derived from any of the individual ground, aircraft, or satellite algorithms applied to the radar or radiometer measurements, or their combination. The results support the study's underlying hypothesis, particularly in the context of ice-phase processes, in that the cloud regions where the 2a12 algorithm's microphysical database most misrepresents the microphysical conditions as determined by the laser probes are where retrieved surface rainrates are most erroneous relative to reference rainrates determined by ground and aircraft radar. In reaching these conclusions, TMI and PR brightness temperatures and reflectivities have been synthesized from the aircraft AMPR and ARMAR measurements, with the analysis conducted in a composite framework to eliminate measurement noise associated with the case study approach and single-element volumes obfuscated by heterogeneous beam-filling effects. In diagnosing the performance of the 2a12 algorithm, weaknesses have been found in the cloud-radiation database used to provide microphysical guidance to the algorithm for upper-cloud ice microphysics. It is also necessary to adjust a fractional convective rainfall factor within the algorithm somewhat arbitrarily to achieve satisfactory algorithm accuracy.
NASA Astrophysics Data System (ADS)
Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue
2018-04-01
Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or poor levels of accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique that produces an excellent balance between cost, matching accuracy and real-time performance, for power line inspection using UAVs. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have a lower level of resource usage and also a higher level of matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on our improved algorithms was implemented using the Spartan 6 FPGA. In comparative experiments, it was shown that the system using the improved algorithms outperformed the system based on the unimproved algorithms, in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.
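A minimal local stereo matching sketch: a per-pixel sum-of-absolute-differences cost over candidate disparities, with no window aggregation or per-pixel weights (the paper's weighted matcher and FPGA pipeline add both). The test pair is synthetic.

```python
import numpy as np

def disparity_sad(left, right, max_disp=8):
    """Pixelwise SAD matching: for each left pixel at column x, pick the
    disparity d whose right-image sample at x - d matches best."""
    h, w = left.shape
    costs = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        costs[d, :, d:] = np.abs(left[:, d:] - right[:, :w - d])
    return costs.argmin(axis=0)

rng = np.random.default_rng(9)
left = rng.random((8, 32))
right = np.roll(left, -3, axis=1)           # textured scene shifted by 3 px
print(disparity_sad(left, right)[0, 8:16])  # expect disparities of 3
```

Range then follows from disparity via baseline and focal length; the hardware win in the paper comes from how the cost volume above is streamed rather than stored.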
NASA Technical Reports Server (NTRS)
Petty, Grant W.; Stettner, David R.
1994-01-01
This paper discusses certain aspects of a new inversion-based algorithm for the retrieval of rain rate over the open ocean from Special Sensor Microwave/Imager (SSM/I) multichannel imagery. This algorithm takes a more detailed physical approach to the retrieval problem than previously discussed algorithms: it performs explicit forward radiative transfer calculations based on detailed model hydrometeor profiles and attempts to match the observations to the predicted brightness temperatures.
Aquarius Salinity Retrieval Algorithm: Final Pre-Launch Version
NASA Technical Reports Server (NTRS)
Wentz, Frank J.; Le Vine, David M.
2011-01-01
This document provides the theoretical basis for the Aquarius salinity retrieval algorithm. The inputs to the algorithm are the Aquarius antenna temperature (T(sub A)) measurements along with a number of NCEP operational products and pre-computed tables of space radiation coming from the galaxy and sun. The output is sea-surface salinity and many intermediate variables required for the salinity calculation. This revision of the Algorithm Theoretical Basis Document (ATBD) is intended to be the final pre-launch version.
AI-BL1.0: a program for automatic on-line beamline optimization using the evolutionary algorithm.
Xi, Shibo; Borgna, Lucas Santiago; Zheng, Lirong; Du, Yonghua; Hu, Tiandou
2017-01-01
In this report, AI-BL1.0, an open-source LabVIEW-based program for automatic on-line beamline optimization, is presented. The optimization algorithms used in the program are the Genetic Algorithm and Differential Evolution. Efficiency was improved by use of a strategy known as Observer Mode for the evolutionary algorithm. The program was constructed and validated at the XAFCA beamline of the Singapore Synchrotron Light Source and the 1W1B beamline of the Beijing Synchrotron Radiation Facility.
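A tiny real-coded genetic algorithm sketch of the on-line loop. The flux() objective is a hypothetical stand-in: the real program would move beamline motors and read the figure of merit back from a detector, and AI-BL1.0's actual operators and Observer Mode strategy are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

def flux(params):
    """Hypothetical figure of merit; in practice, read back from the beamline
    after moving the motors to `params`."""
    return -np.sum((params - np.array([0.3, -0.7, 1.2])) ** 2)

def genetic_optimize(n_params=3, pop=30, gens=80, sigma=0.4, elite=5):
    """Tiny real-coded GA: sort by fitness, keep the elite, refill the
    population with mutated uniform crossover of elite parents."""
    P = rng.uniform(-2.0, 2.0, (pop, n_params))
    for _ in range(gens):
        P = P[np.argsort([-flux(p) for p in P])]       # best first
        children = []
        for _ in range(pop - elite):
            a, b = P[rng.integers(elite)], P[rng.integers(elite)]
            mask = rng.random(n_params) < 0.5          # uniform crossover
            children.append(np.where(mask, a, b) + rng.normal(0, sigma, n_params))
        P = np.vstack([P[:elite], children])
        sigma *= 0.95                                  # anneal mutation size
    return P[0]

print(genetic_optimize())  # should approach (0.3, -0.7, 1.2)
```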
Cone-Beam Computed Tomography for Image-Guided Radiation Therapy of Prostate Cancer
2008-01-01
for exact volumetric image reconstruction. As a consequence, images reconstructed by approximate algorithms, mostly based on the Feldkamp algorithm... patient dose from CBCT. Reverse helical CBCT has been developed for exact reconstruction of volumetric images, region-of-interest (ROI) reconstruction... algorithm with a priori information in few-view CBCT for IGRT. We expect the proposed algorithm can reduce the number of projections needed for volumetric
NASA Astrophysics Data System (ADS)
De Geyter, G.; Baes, M.; Fritz, J.; Camps, P.
2013-02-01
We present FitSKIRT, a method to efficiently fit radiative transfer models to UV/optical images of dusty galaxies. These images have the advantage that they have better spatial resolution compared to FIR/submm data. FitSKIRT uses the GAlib genetic algorithm library to optimize the output of the SKIRT Monte Carlo radiative transfer code. Genetic algorithms prove to be a valuable tool in handling the multi-dimensional search space as well as the noise induced by the random nature of the Monte Carlo radiative transfer code. FitSKIRT is tested on artificial images of a simulated edge-on spiral galaxy, where we gradually increase the number of fitted parameters. We find that we can recover all model parameters, even if all 11 model parameters are left unconstrained. Finally, we apply the FitSKIRT code to a V-band image of the edge-on spiral galaxy NGC 4013. This galaxy has been modeled previously by other authors using different combinations of radiative transfer codes and optimization methods. Given the different models and techniques and the complexity and degeneracies in the parameter space, we find reasonable agreement between the different models. We conclude that the FitSKIRT method allows comparison between different models and geometries in a quantitative manner and minimizes the need of human intervention and biasing. The high level of automation makes it an ideal tool to use on larger sets of observed data.
NASA Astrophysics Data System (ADS)
Kim, Myeong Seong; Choi, Jiwon; Kim, Sun Young; Kweon, Dae Cheol
2014-03-01
There is concern regarding the adverse effects of increasing radiation doses due to repeated computed tomography (CT) scans, especially in radiosensitive organs and portions thereof, such as the lenses of the eyes. Bismuth shielding combined with an adaptive statistical iterative reconstruction (ASIR) algorithm was recently introduced in our clinic as a method to reduce the absorbed radiation dose, and this technique was applied to the lens of the eye during CT scans. The purpose of this study was to evaluate the reduction in absorbed radiation dose and to determine the noise level when using bismuth shielding and the ASIR algorithm with the GE DC 750 HD 64-channel CT scanner for head CT of a humanoid phantom. With bismuth shielding, the noise level was higher in the beam-hardening artifact areas than elsewhere. However, with the use of ASIR the noise level was lower than with bismuth alone, including in the artifact areas. The reduction in radiation dose with the use of bismuth was greatest at the surface of the phantom and extended to a limited depth. In conclusion, it is possible to reduce the radiation level, and slightly decrease the bismuth-induced noise level, by combining ASIR as a software process with bismuth as an in-plane hardware shielding method.
Using Nadir and Directional Emissivity as a Probe of Particle Microphysical Properties
NASA Astrophysics Data System (ADS)
Pitman, Karly M.; Wolff, Michael J.; Bandfield, Joshua L.; Clayton, Geoffrey C.
Real surfaces are not expected to be diffuse emitters; thus, observed emissivity values of surface dust deposits are a function of viewing geometry. Attempts to model infrared emission spectral profiles of surface dust deposits at nadir have not yet matured to match the sophistication of astrophysical dust radiative transfer codes. In the absence of strong thermal gradients, directional emissivity may be obtained theoretically via a combination of reciprocity and Kirchhoff's Law. Owing to a lack of laboratory data on directional emissivity for comparison, theorists have not explored the potential utility of directional emissivity as a direct probe of surface dust microphysical properties. Motivated by future analyses of MGS/TES emission phase function (EPF) sequences and the upcoming Mars Exploration Rover mini-TES dataset, we explore the effects of dust particle size and composition on observed radiances at nadir and off-nadir geometries in the TES spectral regime using a combination of multiple scattering radiative transfer and Mie scattering algorithms. Comparisons of these simulated spectra to laboratory spectra of standard mineral assemblages will also be made. This work is supported through NASA grant NAG5-9820 (MJW) and the LSU Board of Regents (KMP).
Volume-of-interest reconstruction from severely truncated data in dental cone-beam CT
NASA Astrophysics Data System (ADS)
Zhang, Zheng; Kusnoto, Budi; Han, Xiao; Sidky, E. Y.; Pan, Xiaochuan
2015-03-01
As cone-beam computed tomography (CBCT) has gained popularity rapidly in dental imaging applications in the past two decades, radiation dose in CBCT imaging remains a potential health concern to patients. It is common practice in dental CBCT imaging that only a small volume of interest (VOI) containing the teeth of interest is illuminated, substantially lowering imaging radiation dose. However, this yields data with severe truncations along both transverse and longitudinal directions. Although images within the VOI reconstructed from truncated data can be of some practical utility, they often are compromised significantly by truncation artifacts. In this work, we investigate optimization-based reconstruction algorithms for VOI image reconstruction from CBCT data of dental patients containing severe truncations. In an attempt to further reduce imaging dose, we also investigate optimization-based image reconstruction from severely truncated data collected at substantially fewer projection views than those used in clinical dental applications. Results of our study show that appropriately designed optimization-based reconstruction can yield VOI images with reduced truncation artifacts, and that, when reconstructing from only one half, or even one quarter, of the clinical data, it can also produce VOI images comparable to those obtained from the full clinical data.
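For a flavor of optimization-based reconstruction in the few-view/truncated setting, here is a minimal sketch that minimizes a data-fidelity term plus a quadratic roughness penalty by projected gradient descent; the random toy system matrix and the quadratic penalty stand in for the real cone-beam projector and the TV-type constraints such algorithms typically use:

```python
# Minimal sketch of optimization-based reconstruction from truncated data:
# minimize ||A x - b||^2 + beta * ||D x||^2 by projected gradient descent.
# A is a random toy system matrix and D a 1-D finite-difference operator.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_rays = 64, 40                # fewer rays than pixels: truncated data
A = rng.random((n_rays, n_pix))       # toy projection matrix
x_true = np.zeros(n_pix); x_true[20:40] = 1.0
b = A @ x_true                        # simulated measurements

D = np.eye(n_pix, k=1)[:-1] - np.eye(n_pix)[:-1]   # finite differences
beta, step = 0.5, 1e-4
x = np.zeros(n_pix)
for _ in range(5000):
    grad = A.T @ (A @ x - b) + beta * (D.T @ (D @ x))
    x = np.clip(x - step * grad, 0, None)          # non-negativity projection
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```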
Recent Developments in the VISRAD 3-D Target Design and Radiation Simulation Code
NASA Astrophysics Data System (ADS)
Macfarlane, Joseph; Golovkin, Igor; Sebald, James
2017-10-01
The 3-D view factor code VISRAD is widely used in designing HEDP experiments at major laser and pulsed-power facilities, including NIF, OMEGA, OMEGA-EP, ORION, Z, and LMJ. It simulates target designs by generating a 3-D grid of surface elements, utilizing a variety of 3-D primitives and surface-removal algorithms, and can compute the radiation flux throughout the surface-element grid by computing element-to-element view factors and solving power balance equations. Target set-up and beam pointing are facilitated by allowing users to specify positions and angular orientations in a variety of coordinate systems (e.g., that of any laser beam, target component, or diagnostic port). Analytic modeling of laser beam spatial profiles for OMEGA DPPs and NIF CPPs is used to compute laser intensity profiles throughout the grid of surface elements. VISRAD includes a variety of user-friendly graphics for setting up targets and displaying results, can readily display views from any point in space, and can be used to generate image sequences for animations. We will discuss recent improvements for conveniently assessing beam capture on target and beam clearance of diagnostic components, as well as plans for future developments.
Objective Assessment of Image Quality VI: Imaging in Radiation Therapy
Barrett, Harrison H.; Kupinski, Matthew A.; Müeller, Stefan; Halpern, Howard J.; Morris, John C.; Dwyer, Roisin
2015-01-01
Earlier work on Objective Assessment of Image Quality (OAIQ) focused largely on estimation or classification tasks in which the desired outcome of imaging is accurate diagnosis. This paper develops a general framework for assessing imaging quality on the basis of therapeutic outcomes rather than diagnostic performance. By analogy to Receiver Operating Characteristic (ROC) curves and their variants as used in diagnostic OAIQ, the method proposed here utilizes the Therapy Operating Characteristic or TOC curves, which are plots of the probability of tumor control vs. the probability of normal-tissue complications as the overall dose level of a radiotherapy treatment is varied. The proposed figure of merit is the area under the TOC curve, denoted AUTOC. This paper reviews an earlier exposition of the theory of TOC and AUTOC, which was specific to the assessment of image-segmentation algorithms, and extends it to other applications of imaging in external-beam radiation treatment as well as in treatment with internal radioactive sources. For each application, a methodology for computing the TOC is presented. A key difference between ROC and TOC is that the latter can be defined for a single patient rather than a population of patients. PMID:24200954
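As a toy numeric illustration of the AUTOC figure of merit (the two logistic dose-response curves below are invented for illustration, not taken from the paper):

```python
# Toy computation of AUTOC: sweep overall dose, plot P(tumor control)
# against P(normal-tissue complication), integrate the area under the curve.
# Both dose-response curves are invented logistic models.
import numpy as np

dose = np.linspace(0, 100, 501)                    # relative dose levels
p_tc  = 1 / (1 + np.exp(-(dose - 55) / 5))         # tumor control probability
p_ntc = 1 / (1 + np.exp(-(dose - 70) / 6))         # complication probability

# TOC curve: p_tc as a function of p_ntc; AUTOC by trapezoidal integration.
autoc = np.trapz(p_tc, p_ntc)
print(f"AUTOC = {autoc:.3f}")   # 1.0 would be ideal: control without complication
```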
The Capabilities and Applications of FY-3A/B SEM on Monitoring Space Weather Events
NASA Astrophysics Data System (ADS)
Huang, C.; Li, J.; Yu, T.; Xue, B.; Wang, C.; Zhang, X.; Cao, G.; Liu, D.; Tang, W.
2012-12-01
The Space Environment Monitor (SEM) on board the Chinese meteorological satellites FengYun-3A/B measures proton flux in the 3-300 MeV energy range and electron flux in the 0.15-5.7 MeV energy range. SEM can also detect heavy-ion composition, satellite surface potential, the radiation dose in sensors, and single events. The space environment information derived from SEM can be utilized for satellite security design, scientific studies, development of radiation belt models, and space weather monitoring and disaster warning. In this study, the SEM's instrument characteristics are introduced and the post-launch calibration algorithm is presented. The applications in monitoring space weather events and the service for manned spaceflights are also demonstrated. Protons with energies over 10 MeV are called "killer particles"; these particles may damage a satellite and cause disruption of the satellite's systems. The proton flux in the 10-26 MeV energy band reached 5000 during the solar proton event (SPE) caused by a solar flare with a CME during the period 2012.01.23 to 2012.01.27. (Figure: comparisons of heavy ions, 2010.11.11-2010.12.15.)
Zhang, Hua; Huang, Jing; Ma, Jianhua; Bian, Zhaoying; Feng, Qianjin; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan
2014-09-01
Repeated X-ray computed tomography (CT) scans are often required in several specific applications, such as perfusion imaging, image-guided needle biopsy, image-guided intervention, and radiotherapy, with noticeable benefits. However, the associated cumulative radiation dose increases significantly compared with that of a conventional CT scan, which has raised major concern for patients. In this study, to realize radiation dose reduction by reducing the X-ray tube current and exposure time (mAs) in repeated CT scans, we propose a prior-image induced nonlocal (PINL) regularization for statistical iterative reconstruction via the penalized weighted least-squares (PWLS) criterion, which we refer to as "PWLS-PINL". Specifically, the PINL regularization utilizes the redundant information in the prior image, while the weighted least-squares term incorporates a data-dependent variance estimate, aiming to improve current low-dose image quality. A modified iterative successive over-relaxation algorithm is then adopted to optimize the associated objective function. Experimental results on both phantom and patient data show that the present PWLS-PINL method achieves promising gains over other existing methods in terms of noise reduction, low-contrast object detection, and edge detail preservation.
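A schematic of a PWLS iteration with a prior-image penalty follows; the pixel-wise quadratic pull toward the prior and the plain gradient step are simplified stand-ins for the paper's nonlocal patch weights and successive over-relaxation solver:

```python
# Schematic PWLS update with a prior-image penalty. The true PWLS-PINL method
# uses nonlocal patch weights and successive over-relaxation; this toy version
# uses a pixel-wise quadratic penalty toward the prior image instead.
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((80, 64))            # toy system matrix
x_prior = np.linspace(0, 1, 64)     # previously acquired high-quality image
x_true = x_prior + 0.05 * rng.standard_normal(64)
y = A @ x_true + 0.01 * rng.standard_normal(80)
w = np.full(80, 1.0)                # data-dependent weights (uniform here)

beta, step = 1.0, 5e-5
x = x_prior.copy()
for _ in range(2000):
    grad = A.T @ (w * (A @ x - y)) + beta * (x - x_prior)
    x -= step * grad
print("rmse:", np.sqrt(np.mean((x - x_true) ** 2)))
```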
Multigroup Radiation-Hydrodynamics with a High-Order, Low-Order Method
Wollaber, Allan Benton; Park, HyeongKae; Lowrie, Robert Byron; ...
2016-12-09
Recent efforts at Los Alamos National Laboratory to develop a moment-based, scale-bridging [or high-order (HO)-low-order (LO)] algorithm for solving a large variety of transport (kinetic) systems have shown promising results. Part of our ongoing effort is incorporating this methodology into the framework of the Eulerian Applications Project to achieve algorithmic acceleration of radiation-hydrodynamics simulations in production software. Starting from the thermal radiative transfer equations with a simple material-motion correction, we derive a discretely consistent energy balance equation (LO equation). We demonstrate that the corresponding LO system for the Monte Carlo HO solver is closely related to the original LO system without material-motion corrections. We test the implementation on a radiative shock problem and show consistency between the energy densities and temperatures in the HO and LO solutions as well as agreement with the semianalytic solution. We also test the approach on a more challenging two-dimensional problem and demonstrate accuracy enhancements and algorithmic speedups. This paper extends a recent conference paper by including multigroup effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Le Hardy, D.; Favennec, Y., E-mail: yann.favennec@univ-nantes.fr; Rousseau, B.
The contribution of this paper lies in the development of numerical algorithms for the mathematical treatment of specular reflection on borders when dealing with the numerical solution of radiative transfer problems. The radiative transfer equation being integro-differential, the discrete ordinates method allows one to write down a set of semi-discrete equations in which weights are to be calculated. The calculation of these weights is well known to be based on either a quadrature or an angular discretization, making the use of such a method straightforward for the state equation. Also, the diffuse contribution of reflection on borders is usually well taken into account. However, the calculation of accurate partition ratio coefficients is much trickier for the specular condition applied on arbitrary geometrical borders. This paper presents algorithms that analytically calculate the partition ratio coefficients needed in numerical treatments. The developed algorithms, combined with a decentered finite element scheme, are validated by comparisons with analytical solutions before being applied to complex geometries.
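The reflection geometry itself is elementary; the paper's contribution is the analytic partition-ratio calculation that redistributes the reflected direction among the fixed ordinate set, which the sketch below does not attempt. Just the basic specular step:

```python
# Specular reflection of a discrete ordinate at a boundary with outward
# normal n: s' = s - 2 (s . n) n. Computing the partition ratios that
# redistribute s' among the fixed ordinate set is the paper's subject and
# is not reproduced here.
import numpy as np

def specular_direction(s, n):
    """Reflect unit direction s about a surface with unit outward normal n."""
    s = np.asarray(s, dtype=float)
    n = np.asarray(n, dtype=float)
    return s - 2.0 * np.dot(s, n) * n

s_in = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)   # incoming ordinate
normal = np.array([0.0, 1.0, 0.0])                 # wall normal
print(specular_direction(s_in, normal))            # -> [0.707..., 0.707..., 0.]
```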
Mapping Global Ocean Surface Albedo from Satellite Observations: Models, Algorithms, and Datasets
NASA Astrophysics Data System (ADS)
Li, X.; Fan, X.; Yan, H.; Li, A.; Wang, M.; Qu, Y.
2018-04-01
Ocean surface albedo (OSA) is one of the important parameters in the surface radiation budget (SRB). It is usually considered a controlling factor of the heat exchange between the atmosphere and the ocean. The temporal and spatial dynamics of OSA determine the energy absorption of upper-level ocean water and influence oceanic currents, atmospheric circulation, and the transport of material and energy in the hydrosphere. Therefore, various parameterizations and models have been developed for describing the dynamics of OSA. However, it has been demonstrated that the currently available OSA datasets cannot fulfill the requirements of global climate change studies. In this study, we present a literature review on mapping global OSA from satellite observations. The models (parameterizations, the coupled ocean-atmosphere radiative transfer (COART) model, and the three-component ocean water albedo (TCOWA) model), algorithms (the estimation method based on reanalysis data, and the direct-estimation algorithm), and datasets (the cloud, albedo and radiation (CLARA) surface albedo product, the dataset derived with the TCOWA model, and the global land surface satellite (GLASS) phase-2 surface broadband albedo product) are discussed separately.
Fractal Complexity-Based Feature Extraction Algorithm of Communication Signals
NASA Astrophysics Data System (ADS)
Wang, Hui; Li, Jingchao; Guo, Lili; Dou, Zheng; Lin, Yun; Zhou, Ruolin
How to analyze and identify the characteristics of radiation sources and estimate the threat level by means of detection, interception, and location has been a central issue of electronic support in electronic warfare, and communication signal recognition is one of the key points in solving it. Aiming at accurately extracting the individual characteristics of radiation sources in an increasingly complex communication electromagnetic environment, a novel feature extraction algorithm based on the fractal complexity of the signal is proposed. According to the complexity of the received signal and the environmental noise, fractal dimension features of different complexity are used to describe the subtle characteristics of the signal and to establish a feature database; different broadcasting stations are then identified using grey relational analysis. The simulation results demonstrate that the algorithm can achieve a recognition rate of 94% even at an SNR of -10 dB, which provides an important theoretical basis for accurate identification of subtle signal features at low SNR in the field of information confrontation.
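The abstract does not spell out the specific fractal complexity measures used; a generic box-counting dimension estimate of a sampled waveform, as one plausible ingredient of such a feature extractor, might look like:

```python
# Box-counting estimate of the fractal dimension of a signal's graph:
# cover the normalized (time, amplitude) plot with k-by-k grids and fit
# the slope of log(occupied cells) versus log(k). Illustrative only.
import numpy as np

def box_counting_dimension(signal, scales=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal (box-counting) dimension of a signal's graph."""
    t = np.linspace(0, 1, len(signal))
    span = signal.max() - signal.min()
    x = (signal - signal.min()) / (span + 1e-12)
    counts = []
    for k in scales:
        # Count occupied cells of a k-by-k grid covering the (t, x) samples.
        cells = set(zip((t * (k - 1)).astype(int), (x * (k - 1)).astype(int)))
        counts.append(len(cells))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
noisy = np.sin(np.linspace(0, 20, 2000)) + 0.5 * rng.standard_normal(2000)
print("estimated dimension:", box_counting_dimension(noisy))
```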
Sliding mode fault tolerant control dealing with modeling uncertainties and actuator faults.
Wang, Tao; Xie, Wenfang; Zhang, Youmin
2012-05-01
In this paper, two sliding mode control algorithms are developed for nonlinear systems with both modeling uncertainties and actuator faults. The first algorithm is developed under the assumption that the uncertainty bounds are known; different design parameters are utilized to deal with modeling uncertainties and actuator faults, respectively. The second algorithm is an adaptive version of the first, developed to accommodate uncertainties and faults without exact bounds information. The stability of the overall control systems is proved using a Lyapunov function. The effectiveness of the developed algorithms has been verified on a nonlinear longitudinal model of the Boeing 747-100/200. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Evaluation of Long-term Aerosol Data Records from SeaWiFS over Land and Ocean
NASA Astrophysics Data System (ADS)
Bettenhausen, C.; Hsu, C.; Jeong, M.; Huang, J.
2010-12-01
Deserts around the globe produce mineral dust aerosols that may then be transported over cities, across continents, or even oceans. These aerosols affect the Earth’s energy balance through direct and indirect interactions with incoming solar radiation. They also have a biogeochemical effect as they deliver scarce nutrients to remote ecosystems. Large dust storms regularly disrupt air traffic and are a general nuisance to those living in transport regions. In the past, measurement of dust aerosols has been incomplete at best: satellite retrieval algorithms were limited to oceans or vegetated surfaces and typically neglected desert regions because of their high surface reflectivity at the mid-visible and near-infrared wavelengths typically used for aerosol retrievals. The Deep Blue aerosol retrieval algorithm was developed to resolve these shortcomings by utilizing the blue channels of instruments such as the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) and the Moderate Resolution Imaging Spectroradiometer (MODIS) to infer aerosol properties over these highly reflective surfaces. The surface reflectivity of desert regions is much lower in the blue channels, so it is easier to separate the aerosol and surface signals than at the longer wavelengths used in other algorithms. More recently, the Deep Blue algorithm has been expanded to retrieve over vegetated surfaces and oceans as well; a single algorithm can now follow dust from source to sink. In this work, we introduce the SeaWiFS instrument and the Deep Blue aerosol retrieval algorithm. We have produced global aerosol data records over land and ocean from 1997 through 2009 using the Deep Blue algorithm and SeaWiFS data. We describe these data records and validate them with data from the Aerosol Robotic Network (AERONET). We also show the relative performance compared to the current MODIS Deep Blue operational aerosol data in desert regions. The current results are encouraging, and this dataset will be useful for future studies of the effects of dust aerosols on global processes, long-term aerosol trends, dust emissions, transport, and inter-annual variability.
Radiation effects in reconfigurable FPGAs
NASA Astrophysics Data System (ADS)
Quinn, Heather
2017-04-01
Field-programmable gate arrays (FPGAs) are co-processing hardware used in image and signal processing. FPGAs are programmed with custom implementations of an algorithm; these highly parallel hardware designs are faster than software implementations. This flexibility and speed have made FPGAs attractive for many space programs that need in situ, high-speed signal processing for data categorization and data compression. Most commercial FPGAs are affected by the space radiation environment, though. Problems with total ionizing dose (TID) have restricted the use of flash-based FPGAs, and static random access memory (SRAM)-based FPGAs must be mitigated to suppress errors from single-event upsets. This paper provides a review of radiation effects issues in reconfigurable FPGAs and discusses methods for mitigating these problems. With careful design it is possible to use these components effectively and resiliently.
Solar Modulation of Inner Trapped Belt Radiation Flux as a Function of Atmospheric Density
NASA Technical Reports Server (NTRS)
Lodhi, M. A. K.
2005-01-01
No simple algorithm seems to exist for calculating proton fluxes and lifetimes in the Earth's inner trapped radiation belt throughout the solar cycle. Most models of the inner trapped belt in use depend upon AP8, which only describes the radiation environment at the solar maximum and solar minimum of Cycle 20. One exception is NOAAPRO, which incorporates flight data from the TIROS/NOAA polar orbiting spacecraft. The present study discloses yet another simple formulation for approximating proton fluxes at any time in a given solar cycle, in particular between solar maximum and solar minimum. It is derived from AP8 using a regression algorithm technique from nuclear physics. From flux and its time integral, fluence, one can then approximate dose rate and its time integral, dose.
Confidence limits for Neyman type A-distributed events.
Morand, Josselin; Deperas-Standylo, Joanna; Urbanik, Witold; Moss, Raymond; Hachem, Sabet; Sauerwein, Wolfgang; Wojcik, Andrzej
2008-01-01
The Neyman type A distribution, a generalised 'contagious' Poisson distribution, finds application in a number of disciplines such as biology, physics, and economics. In radiation biology, it best describes the distribution of chromosomal aberrations in cells exposed to neutrons, alpha radiation, or heavy ions. Intriguingly, no method has been developed for the calculation of confidence limits (CLs) of Neyman type A-distributed events. Here, an algorithm to calculate the 95% CL of Neyman type A-distributed events is presented. Although it was developed in response to the requirements of radiation biology, it can find application in other fields of research. The algorithm has been implemented in a PC-based computer program that can be downloaded, free of charge, from www.pu.kielce.pl/ibiol/neta.
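A sketch of the Neyman type A pmf in its standard series form, together with a central 95% probability interval; this interval is a building block for such CL calculations rather than the published algorithm itself, and the parameters are illustrative:

```python
# Neyman type A pmf for cluster rate lam and per-cluster mean mu, using the
# standard series form, plus a central 95% probability interval on the count.
import numpy as np
from math import exp, factorial

def neyman_a_pmf(k, lam, mu, j_max=200):
    """P(X = k) = e^{-lam} mu^k / k! * sum_j (lam e^{-mu})^j j^k / j!."""
    s = sum((lam * exp(-mu)) ** j * j ** k / factorial(j) for j in range(j_max))
    return exp(-lam) * mu ** k / factorial(k) * s

lam, mu = 3.0, 2.0                     # illustrative parameters
pmf = np.array([neyman_a_pmf(k, lam, mu) for k in range(100)])
cdf = np.cumsum(pmf)
lower = int(np.searchsorted(cdf, 0.025))
upper = int(np.searchsorted(cdf, 0.975))
print(f"95% of the probability mass lies in counts [{lower}, {upper}]")
```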
Validation of classification algorithms for childhood diabetes identified from administrative data.
Vanderloo, Saskia E; Johnson, Jeffrey A; Reimer, Kim; McCrea, Patrick; Nuernberger, Kimberly; Krueger, Hans; Aydede, Sema K; Collet, Jean-Paul; Amed, Shazhan
2012-05-01
Type 1 diabetes is the most common form of diabetes among children; however, the proportion of cases of childhood type 2 diabetes is increasing. In Canada, the National Diabetes Surveillance System (NDSS) uses administrative health data to describe trends in the epidemiology of diabetes but does not specify diabetes type. The objective of this study was to validate algorithms to classify diabetes type in children <20 yr identified using the NDSS methodology. We applied the NDSS case definition to children living in British Columbia between 1 April 1996 and 31 March 2007. Through an iterative process, four potential classification algorithms were developed based on demographic characteristics and drug-utilization patterns. Each algorithm was then validated against a gold-standard clinical database. Algorithms based primarily on an age rule (i.e., age <10 yr at diagnosis categorized as type 1 diabetes) were most sensitive for the identification of type 1 diabetes; algorithms with restrictions on drug utilization (i.e., no prescriptions for insulin ± glucose monitoring strips categorized as type 2 diabetes) were most sensitive for identifying type 2 diabetes. One algorithm was identified as having the optimal balance of sensitivity (Sn) and specificity (Sp) for the identification of both type 1 (Sn: 98.6%; Sp: 78.2%; positive predictive value, PPV: 97.8%) and type 2 diabetes (Sn: 83.2%; Sp: 97.5%; PPV: 73.7%). Demographic characteristics in combination with drug-utilization patterns can be used to differentiate diabetes type among cases of pediatric diabetes identified within administrative health databases. Validation of similar algorithms in other regions is warranted. © 2011 John Wiley & Sons A/S.
Optimization of self-interstitial clusters in 3C-SiC with genetic algorithm
NASA Astrophysics Data System (ADS)
Ko, Hyunseok; Kaczmarowski, Amy; Szlufarska, Izabela; Morgan, Dane
2017-08-01
Under irradiation, SiC develops damage commonly referred to as black spot defects, which are speculated to be self-interstitial atom clusters. To understand the evolution of these defect clusters and their impacts (e.g., through radiation-induced swelling) on the performance of SiC in nuclear applications, it is important to identify the cluster composition, structure, and shape. In this work the genetic algorithm code StructOpt was utilized to identify ground-state cluster structures in 3C-SiC. The genetic algorithm was used to explore clusters of up to ∼30 interstitials of C-only, Si-only, and mixed Si-C compositions embedded in the SiC lattice. We performed the structure search using Hamiltonians from both density functional theory and empirical potentials. The thermodynamic stability of the clusters was investigated in terms of their composition (with a focus on Si-only, C-only, and stoichiometric) and shape (spherical vs. planar), as a function of cluster size (n). Our results suggest that large Si-only clusters are likely unstable, and that clusters are predominantly C-only for n ≤ 10 and stoichiometric for n > 10. The results imply an evolution in the shape of the most stable clusters, where small clusters are stable in more spherical geometries while larger clusters are stable in more planar configurations. We also provide an estimated energy vs. size relationship, E(n), for use in future analysis.
Validation of Local-Cloud Model Outputs With the GOES Satellite Imagery
NASA Astrophysics Data System (ADS)
Malek, E.
2005-05-01
Clouds (visible aggregations of minute droplets of water or tiny crystals of ice suspended in the air) affect the radiation budget of our planet by reflecting, absorbing, and scattering solar radiation and by re-emitting terrestrial radiation. They affect weather and climate through positive or negative feedbacks. Many researchers have worked on the parameterization of clouds and their effects on the radiation budget, but there is little information about ground-based approaches for continuous evaluation of cloud properties, such as cloud base height, cloud base temperature, and cloud coverage, at local and regional scales. This article deals with the development of an algorithm for continuous (day and night) evaluation of cloud base temperature, cloud base height, and the percentage of sky covered by cloud at the local scale throughout the year. The Vaisala model CT-12K laser-beam ceilometer is used at Automated Surface Observing Systems (ASOS) stations to measure cloud base height and report sky conditions on an hourly basis or at shorter intervals. This ceilometer is of a fixed type whose transmitter and receiver point straight up at the cloud base (if any); it is unable to measure clouds that are not above the sensor. To report cloudiness at the local scale, many ceilometers of this type would be needed, and this is not a perfect method of cloud measurement: a single cloud hanging over the sensor will cause overcast readings, whereas a hole in the clouds can cause a clear reading to be reported. To overcome this problem, we have operated a ventilated radiation station at Logan-Cache airport, Utah, U.S.A., since 1995, equipped with one of the above-mentioned ceilometers. This radiation station (composed of pyranometers, pyrgeometers, and a net radiometer) provides continuous measurements of incoming and outgoing shortwave and longwave radiation and net radiation throughout the year. We also measure surface temperature and pressure, 2-m air temperature and humidity, precipitation, and 3-m wind speed and direction at this station. Using the air temperature, moisture, and measured cloudless incoming longwave (atmospheric) radiation during 1999 through 2004, based upon the ASOS and algorithm data, we found the most appropriate formula (among four reported approaches) for computing the cloudless-sky atmospheric emissivity. Considering the additional longwave radiation captured by the facing-up pyrgeometer under cloudy skies, coming from clouds in the 8-13 µm band where gaseous emission is weak, we developed an algorithm that provides continuous 20-min cloud information (cloud base height, cloud base temperature, and percentage of sky covered by cloud) over the Cache Valley, day and night, throughout the year. Comparisons between the ASOS and algorithm data during the period 8-12 June 2004 are reported in this article. The proposed algorithm is a promising approach for evaluating cloud base temperature, cloud base height, and the percentage of sky covered by cloud at the local scale throughout the year. The article also reports a comparison between model outputs and GOES 10 satellite images.
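For illustration, one widely used clear-sky emissivity parameterization of the kind the abstract alludes to is Brutsaert's (1975) formula; whether it is the formula the station adopted is not stated, so the sketch below is only indicative of the cloud-detection step:

```python
# Illustrative clear-sky atmospheric emissivity and cloud check from
# pyrgeometer data. Brutsaert's (1975) formula is used here only as an
# example of the kind of emissivity parameterization the abstract mentions.
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def clear_sky_longwave(t_air_k, vapor_pressure_hpa):
    """Expected cloudless downwelling longwave per Brutsaert (1975)."""
    emissivity = 1.24 * (vapor_pressure_hpa / t_air_k) ** (1.0 / 7.0)
    return emissivity * SIGMA * t_air_k ** 4

t_air = 288.0          # 2-m air temperature, K
e_a = 12.0             # near-surface vapor pressure, hPa
lw_measured = 340.0    # facing-up pyrgeometer reading, W m^-2

lw_clear = clear_sky_longwave(t_air, e_a)
excess = lw_measured - lw_clear        # positive excess suggests cloud
print(f"clear-sky LW: {lw_clear:.1f} W/m2, cloud excess: {excess:.1f} W/m2")
```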
Generalized source Finite Volume Method for radiative transfer equation in participating media
NASA Astrophysics Data System (ADS)
Zhang, Biao; Xu, Chuan-Long; Wang, Shi-Min
2017-03-01
Temperature monitoring is very important in combustion systems. In recent years, non-intrusive temperature reconstruction has been explored intensively on the basis of calculating arbitrary directional radiative intensities. In this paper, a new method named the Generalized Source Finite Volume Method (GSFVM) is proposed, based on the radiative transfer equation and the Finite Volume Method (FVM). This method can be used to calculate arbitrary directional radiative intensities and is shown to be accurate and efficient. To verify its performance, six test cases of 1D, 2D, and 3D radiative transfer problems were investigated. The numerical results show that the efficiency of this method is close to that of the radial basis function interpolation method, but its accuracy and stability are higher. The accuracy of the GSFVM is similar to that of the Backward Monte Carlo (BMC) algorithm, while the time required by the GSFVM is much shorter. Therefore, the GSFVM can be used in temperature reconstruction and in improving the accuracy of the FVM.
Plural-wavelength flame detector that discriminates between direct and reflected radiation
NASA Technical Reports Server (NTRS)
Hall, Gregory H. (Inventor); Barnes, Heidi L. (Inventor); Medelius, Pedro J. (Inventor); Simpson, Howard J. (Inventor); Smith, Harvey S. (Inventor)
1997-01-01
A flame detector employs a plurality of wavelength-selective radiation detectors and a digital signal processor programmed to analyze each of the detector signals and determine whether radiation is received directly from a small flame source that warrants generation of an alarm. The processor's algorithm employs a normalized cross-correlation analysis of the detector signals to discriminate between radiation received directly from a flame and radiation received from a reflection of a flame, to ensure that reflections will not trigger an alarm. In addition, the algorithm employs a Fast Fourier Transform (FFT) frequency spectrum analysis of one of the detector signals to discriminate between flames of different sizes. In a specific application, the detector incorporates two infrared (IR) detectors and one ultraviolet (UV) detector for discriminating between a directly sensed small hydrogen flame and reflections from a large hydrogen flame. The signals generated by each of the detectors are sampled and digitized for analysis by the digital signal processor, preferably 250 times a second. A sliding time window of approximately 30 seconds of detector data is created using FIFO memories.
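The normalized cross-correlation at the heart of the discrimination step is compact; a hedged sketch over two digitized detector channels (the synthetic signals below stand in for the patent's sampling pipeline):

```python
# Zero-lag normalized cross-correlation of two detector channels: values
# near 1 indicate the channels see the same (direct) source, lower values
# suggest reflections or uncorrelated sources. Signals are synthetic.
import numpy as np

def ncc(a, b):
    """Zero-lag normalized cross-correlation of two equal-length signals."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
t = np.linspace(0, 30, 7500)                  # ~30 s window at 250 samples/s
flicker = np.abs(np.sin(2 * np.pi * 2 * t))   # common flame flicker term
ir1 = flicker + 0.1 * rng.standard_normal(t.size)
ir2 = 0.8 * flicker + 0.1 * rng.standard_normal(t.size)
print(f"NCC = {ncc(ir1, ir2):.3f}")           # close to 1 for a direct flame
```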
Sort-Mid tasks scheduling algorithm in grid computing.
Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M
2015-11-01
Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop variant scheduling algorithms for achieving optimality, and these have shown good performance regarding resource selection. However, using the full power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The new strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to obtain, for each task, the average value of its sorted list of completion times; the maximum average is then found, and the task with the maximum average is allocated to the machine with the minimum completion time. The allocated task is removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms almost all other algorithms in terms of resource utilization and makespan.
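A literal-reading sketch of the steps described above (taking the median of each task's sorted completion times as the "mid" value is one plausible interpretation of the name; the ready-time bookkeeping follows common practice and may differ from the paper):

```python
# Sketch of the Sort-Mid heuristic as read from the abstract: pick the
# pending task whose sorted completion-time list has the largest mid value
# and assign it to the machine that finishes it earliest.
import numpy as np

def sort_mid(exec_times):
    """exec_times[i, j] = execution time of task i on machine j."""
    n_tasks, n_machines = exec_times.shape
    ready = np.zeros(n_machines)            # machine availability times
    pending = list(range(n_tasks))
    schedule = {}
    while pending:
        completion = exec_times[pending] + ready               # completion matrix
        mids = np.median(np.sort(completion, axis=1), axis=1)  # 'mid' per task
        pick = pending[int(np.argmax(mids))]                   # max-mid task
        machine = int(np.argmin(exec_times[pick] + ready))
        ready[machine] += exec_times[pick, machine]
        schedule[pick] = machine
        pending.remove(pick)
    return schedule, ready.max()            # task->machine plan and makespan

rng = np.random.default_rng(0)
times = rng.uniform(1, 10, size=(8, 3))     # 8 tasks, 3 machines
plan, makespan = sort_mid(times)
print(plan, f"makespan = {makespan:.2f}")
```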
Tang, Wenming; Liu, Guixiong; Li, Yuzhong; Tan, Daji
2017-01-01
High data transmission efficiency is a key requirement for an ultrasonic phased array with multiple groups of ultrasonic sensors. Here, a novel FIFO scheduling algorithm is proposed that improves data transmission efficiency in hardware. In this algorithm, FIFOs cache the ultrasonic scanning data obtained from the sensors and output it in a bandwidth-sharing way; on this basis, an optimal length ratio of all the FIFOs is derived, allowing read operations to be switched among the FIFOs without waiting for time slots. The algorithm thus enhances the utilization of the read bandwidth resources and achieves higher efficiency than traditional scheduling algorithms. The reliability and validity of the algorithm were substantiated by an implementation in field programmable gate array (FPGA) technology, and the bandwidth utilization and real-time performance of the ultrasonic phased array were enhanced. PMID:29035345
Dynamic fair node spectrum allocation for ad hoc networks using random matrices
NASA Astrophysics Data System (ADS)
Rahmes, Mark; Lemieux, George; Chester, Dave; Sonnenberg, Jerry
2015-05-01
Dynamic Spectrum Access (DSA) is widely seen as a solution to the problem of limited spectrum because of its ability to adapt the operating frequency of a radio. Mobile Ad Hoc Networks (MANETs) can extend high-capacity mobile communications over large areas where fixed and tethered-mobile systems are not available. In one use case with high potential impact, cognitive radio employs spectrum sensing to facilitate the identification of allocated frequencies not currently accessed by their primary users. Primary users own the rights to radiate at a specific frequency and geographic location, while secondary users opportunistically attempt to radiate at a specific frequency when the primary user is not using it. We populate a spatial radio environment map (REM) database with known information that can be leveraged in an ad hoc network to facilitate fair path use of the DSA-discovered links. Utilization of high-resolution geospatial data layers in RF propagation analysis is directly applicable. Random matrix theory (RMT) is useful in simulating network-layer usage of nodes via a Wishart adjacency matrix. We use the Dijkstra algorithm for discovering ad hoc network node connection patterns, as sketched below. We present a method for analysts to dynamically allocate node-node path and link resources using fair division. User allocation of limited resources as a function of time must be dynamic and based on system fairness policies. Here, fair means that the first available request for an asset is not envied as long as the asset is not yet allocated or tasked, which prevents cycling of the system. This solution may also save money by offering a Pareto-efficient repeatable process. We use a water-fill queue algorithm that includes Shapley value marginal contributions for allocation.
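A minimal version of the Dijkstra step used for discovering node connection patterns (the toy adjacency list below is arbitrary, not a Wishart-sampled network):

```python
# Minimal Dijkstra shortest-path over an ad hoc topology. Edge weights
# could represent link costs derived from the REM database; here they are
# an arbitrary toy example.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, weight), ...]} -> dict of shortest distances."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

net = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)], "C": [("D", 1)]}
print(dijkstra(net, "A"))   # {'A': 0.0, 'B': 1.0, 'C': 3.0, 'D': 4.0}
```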
Comparison of different methods to retrieve optical-equivalent snow grain size in central Antarctica
NASA Astrophysics Data System (ADS)
Carlsen, Tim; Birnbaum, Gerit; Ehrlich, André; Freitag, Johannes; Heygster, Georg; Istomina, Larysa; Kipfstuhl, Sepp; Orsi, Anaïs; Schäfer, Michael; Wendisch, Manfred
2017-11-01
The optical-equivalent snow grain size affects the reflectivity of snow surfaces and, thus, the local surface energy budget in particular in polar regions. Therefore, the specific surface area (SSA), from which the optical snow grain size is derived, was observed for a 2-month period in central Antarctica (Kohnen research station) during austral summer 2013/14. The data were retrieved on the basis of ground-based spectral surface albedo measurements collected by the COmpact RAdiation measurement System (CORAS) and airborne observations with the Spectral Modular Airborne Radiation measurement sysTem (SMART). The snow grain size and pollution amount (SGSP) algorithm, originally developed to analyze spaceborne reflectance measurements by the MODerate Resolution Imaging Spectroradiometer (MODIS), was modified in order to reduce the impact of the solar zenith angle on the retrieval results and to cover measurements in overcast conditions. Spectral ratios of surface albedo at 1280 and 1100 nm wavelength were used to reduce the retrieval uncertainty. The retrieval was applied to the ground-based and airborne observations and validated against optical in situ observations of SSA utilizing an IceCube device. The SSA retrieved from CORAS observations varied between 27 and 89 m2 kg-1. Snowfall events caused distinct relative maxima of the SSA which were followed by a gradual decrease in SSA due to snow metamorphism and wind-induced transport of freshly fallen ice crystals. The ability of the modified algorithm to include measurements in overcast conditions improved the data coverage, in particular at times when precipitation events occurred and the SSA changed quickly. SSA retrieved from measurements with CORAS and MODIS agree with the in situ observations within the ranges given by the measurement uncertainties. However, SSA retrieved from the airborne SMART data slightly underestimated the ground-based results.
NASA Astrophysics Data System (ADS)
Shamkhali Chenar, S.; Deng, Z.
2017-12-01
Pathogenic viruses pose a significant public health threat and cause economic losses to the shellfish industry in the coastal environment. Norovirus is a contagious virus and the leading cause of epidemic gastroenteritis following consumption of oysters harvested from sewage-contaminated waters. While it is challenging to detect noroviruses in coastal waters due to the lack of sensitive and routine diagnostic methods, machine learning techniques allow us to prevent or at least reduce the risks by developing effective predictive models. This study develops an algorithm linking historical norovirus outbreak reports to environmental parameters, including water temperature, solar radiation, water level, salinity, precipitation, and wind. For this purpose, the Random Forests statistical technique was utilized to select the relevant environmental parameters, and their various combinations with different time lags, controlling the virus distribution in oyster harvesting areas along the Louisiana Coast. An Artificial Neural Network (ANN) approach was then used to predict the outbreaks from the final set of input variables. Finally, a sensitivity analysis was conducted to evaluate the relative importance and contribution of the input variables to the model output. Findings demonstrated that the developed model was capable of reproducing historical oyster norovirus outbreaks along the Louisiana Coast with an overall accuracy of 99.83%, demonstrating the efficacy of the model. Moreover, according to the sensitivity analysis results, increases in water temperature, solar radiation, water level, and salinity, and decreases in wind and rainfall, are associated with a reduction in the model-predicted risk of a norovirus outbreak. In conclusion, the presented machine learning approach provides reliable tools for predicting potential norovirus outbreaks, which could be used for early detection of possible outbreaks and to reduce the risk of norovirus to public health and the seafood industry.
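A hedged sketch of the described two-stage pipeline, Random Forest feature ranking followed by an ANN classifier, using synthetic placeholder data rather than the actual lagged environmental records:

```python
# Two-stage pipeline sketch: rank environmental predictors with a Random
# Forest, then train a neural network on the top-ranked features. The
# synthetic data and feature list are placeholders for the real records.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 6))   # temp, radiation, level, salinity, rain, wind
y = (X[:, 0] - X[:, 3] + 0.5 * rng.standard_normal(500) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
keep = np.argsort(rf.feature_importances_)[-4:]        # top-4 predictors
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X[:, keep], y)
print("selected features:", keep, "train accuracy:", ann.score(X[:, keep], y))
```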
An Overview of the Naval Research Laboratory Ocean Surface Flux (NFLUX) System
NASA Astrophysics Data System (ADS)
May, J. C.; Rowley, C. D.; Barron, C. N.
2016-02-01
The Naval Research Laboratory (NRL) ocean surface flux (NFLUX) system is an end-to-end data processing and assimilation system used to provide near-real-time satellite-based surface heat flux fields over the global ocean. Swath-level air temperature (TA), specific humidity (QA), and wind speed (WS) estimates are produced using multiple polynomial regression algorithms with inputs from satellite sensor data records from the Special Sensor Microwave Imager/Sounder, the Advanced Microwave Sounding Unit-A, the Advanced Technology Microwave Sounder, and the Advanced Microwave Scanning Radiometer-2 sensors. Swath-level WS estimates are also retrieved from satellite environmental data records from WindSat, the MetOp scatterometers, and the Oceansat scatterometer. Swath-level solar and longwave radiative flux estimates are produced utilizing the Rapid Radiative Transfer Model for Global Circulation Models (RRTMG). Primary inputs to the RRTMG include temperature and moisture profiles and cloud liquid and ice water paths from the Microwave Integrated Retrieval System. All swath-level satellite estimates undergo an automated quality control process and are then assimilated with atmospheric model forecasts to produce 3-hourly gridded analysis fields. The turbulent heat flux fields, latent and sensible heat flux, are determined from the Coupled Ocean-Atmosphere Response Experiment (COARE) 3.0 bulk algorithms using inputs of TA, QA, WS, and a sea surface temperature model field. Quality-controlled in situ observations over a one-year period from May 2013 through April 2014 form the reference for validating the ocean surface state parameter and heat flux fields. The NFLUX fields are evaluated alongside the Navy's operational global atmospheric model, the Navy Global Environmental Model (NAVGEM). NFLUX is shown to have smaller biases and lower or similar root mean square errors compared to NAVGEM.
NASA Technical Reports Server (NTRS)
Toon, Owen B.; Mckay, C. P.; Ackerman, T. P.; Santhanam, K.
1989-01-01
The solution of the generalized two-stream approximation for radiative transfer in homogeneous multiple scattering atmospheres is extended to vertically inhomogeneous atmospheres in a manner which is numerically stable and computationally efficient. It is shown that solar energy deposition rates, photolysis rates, and infrared cooling rates may all be calculated with simple modifications of a single algorithm. The accuracy of the algorithm is generally better than 10 percent, so that other uncertainties, such as in absorption coefficients, may often dominate the error in calculation of the quantities of interest to atmospheric studies.
NASA Technical Reports Server (NTRS)
Kyle, H. L.; House, F. B.; Ardanuy, P. E.; Jacobowitz, H.; Maschhoff, R. H.; Hickey, J. R.
1984-01-01
In-flight calibration adjustments are developed to process data obtained from the wide-field-of-view channels of Nimbus-6 and Nimbus-7 after the failure of the Nimbus-7 longwave scanner on June 22, 1980. The sensor characteristics are investigated; the satellite environment is examined in detail; and algorithms are constructed to correct for long-term sensor-response changes, on/off-cycle thermal transients, and filter-dome absorption of longwave radiation. Data and results are presented in graphs and tables, including comparisons of the old and new algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Hsin-Chen; Tan, Jun; Dolly, Steven
2015-02-15
Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets were separately employed to test the effectiveness of the proposed contouring error detection strategy. Results: An evaluation tool was implemented to illustrate how the proposed strategy automatically detects the radiation therapy contouring errors for a given patient and provides 3D graphical visualization of the error detection results as well. The contouring error detection results were achieved with an average sensitivity of 0.954/0.906 and an average specificity of 0.901/0.909 on the centroid/volume related contouring errors of all the tested samples. As for the detection results on structural shape related contouring errors, an average sensitivity of 0.816 and an average specificity of 0.94 on all the tested samples were obtained. The promising results indicated the feasibility of the proposed strategy for the detection of contouring errors with a low false detection rate. Conclusions: The proposed strategy can reliably identify contouring errors based upon inter- and intrastructural constraints derived from clinically approved contours. It holds great potential for improving the radiation therapy workflow. ROC and box plot analyses allow for analytical tuning of the system parameters to satisfy clinical requirements. Future work will focus on improving strategy reliability by utilizing more training sets and additional geometric attribute constraints.
A joint tracking method for NSCC based on WLS algorithm
NASA Astrophysics Data System (ADS)
Luo, Ruidan; Xu, Ying; Yuan, Hong
2017-12-01
Navigation signals based on compound carrier (NSCC) have a flexible multi-carrier scheme and various configurable scheme parameters, which give them significant navigation augmentation efficiency in terms of spectral efficiency, tracking accuracy, multipath mitigation capability, and anti-jamming performance compared with legacy navigation signals. Meanwhile, the typical scheme characteristics can provide auxiliary information for signal synchronization algorithm design. Based on the characteristics of NSCC, this paper proposes a joint tracking method utilizing the Weighted Least Squares (WLS) algorithm. In this method, the WLS algorithm jointly estimates each sub-carrier frequency shift through the linear frequency-Doppler relationship, utilizing the known sub-carrier frequencies. Besides, the weighting matrix is set adaptively according to sub-carrier power to ensure estimation accuracy. Both theoretical analysis and simulation results illustrate that the tracking accuracy and sensitivity of this method outperform those of the single-carrier algorithm at lower SNR.
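A minimal numeric sketch of the weighted least-squares step, estimating a common Doppler scale factor across sub-carriers with power-dependent weights (all numbers are illustrative):

```python
# Weighted least-squares estimate of a common Doppler-induced scale factor:
# sub-carrier frequency shifts are modeled as f_shift_k = a * f_sub_k, and
# the weights follow sub-carrier power (higher power -> higher SNR).
import numpy as np

f_sub = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # known sub-carrier freqs
a_true = 1e-6                                      # true Doppler scale factor
rng = np.random.default_rng(0)
power = np.array([4.0, 3.0, 2.0, 2.0, 1.0])        # sub-carrier powers
shifts = a_true * f_sub + 1e-8 * rng.standard_normal(5) / np.sqrt(power)

W = np.diag(power)                                 # adaptive weighting matrix
X = f_sub[:, None]                                 # linear model matrix
a_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ shifts)
print("estimated scale factor:", a_hat[0])
```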
Remote Sensing of Cloud Properties using Ground-based Measurements of Zenith Radiance
NASA Technical Reports Server (NTRS)
Chiu, J. Christine; Marshak, Alexander; Knyazikhin, Yuri; Wiscombe, Warren J.; Barker, Howard W.; Barnard, James C.; Luo, Yi
2006-01-01
An extensive verification of cloud property retrievals has been conducted for two algorithms using zenith radiances measured by the Atmospheric Radiation Measurement (ARM) Program ground-based passive two-channel (673 and 870 nm) Narrow Field-Of-View Radiometer. The underlying principle of these algorithms is that clouds have nearly identical optical properties at these wavelengths, but the corresponding spectral surface reflectances (for vegetated surfaces) differ significantly. The first algorithm, RED vs. NIR, works for a fully three-dimensional cloud situation. It retrieves not only cloud optical depth but also an effective radiative cloud fraction. Importantly, due to the one-second time resolution of the radiance measurements, we are able, for the first time, to capture detailed changes in cloud structure at the natural time scale of cloud evolution. The cloud optical depths tau retrieved by this algorithm are comparable to those inferred from both downward fluxes in overcast situations and microwave brightness temperatures for broken clouds. Moreover, it can retrieve tau for thin patchy clouds, where flux and microwave observations fail to detect them. The second algorithm, referred to as COUPLED, couples zenith radiances with simultaneous fluxes to infer tau. In general, the COUPLED and RED vs. NIR algorithms retrieve consistent values of tau. However, the COUPLED algorithm is more sensitive to the accuracies of the measured radiance, flux, and surface reflectance than the RED vs. NIR algorithm. This is especially true for thick overcast clouds, where it may substantially overestimate tau.
Thoracoabdominal Computed Tomography in Trauma Patients: A Cost-Consequences Analysis
van Vugt, Raoul; Kool, Digna R.; Brink, Monique; Dekker, Helena M.; Deunk, Jaap; Edwards, Michael J.
2014-01-01
Background: CT is increasingly used during the initial evaluation of blunt trauma patients. In this era of increasing cost-awareness, the pros and cons of CT have to be assessed. Objectives: This study was performed to evaluate the cost-consequences of different diagnostic algorithms that use thoracoabdominal CT in the primary evaluation of adult patients with high-energy blunt trauma. Materials and Methods: We compared three different algorithms in which CT was applied as an immediate diagnostic tool (rush CT), a diagnostic tool after limited conventional work-up (routine CT), and a selective tool (selective CT). Probabilities of detecting and missing clinically relevant injuries were retrospectively derived. We collected data on radiation exposure and performed a micro-cost analysis on a reference case-based approach. Results: Both rush and routine CT detected all thoracoabdominal injuries in 99.1% of the patients during primary evaluation (n = 1040). Selective CT missed one or more diagnoses in 11% of the patients, in which a change of treatment was necessary in 4.8%. The rush CT algorithm cost €2676 (US$3660) per patient with a mean radiation dose of 26.40 mSv per patient. Routine CT cost €2815 (US$3850) and resulted in the same radiation exposure. Selective CT resulted in a lower radiation dose (23.23 mSv) and cost €2771 (US$3790). Conclusions: Rush CT seems to result in the lowest costs and is comparable with routine CT after a limited conventional work-up in terms of radiation dose exposure and diagnostic certainty. However, selective CT results in less radiation dose exposure but a slightly higher cost and less certainty. PMID:25337521
FPGA-based GEM detector signal acquisition for SXR spectroscopy system
NASA Astrophysics Data System (ADS)
Wojenski, A.; Pozniak, K. T.; Kasprowicz, G.; Kolasinski, P.; Krawczyk, R.; Zabolotny, W.; Chernyshova, M.; Czarski, T.; Malinowski, K.
2016-11-01
The presented work relates to a Gas Electron Multiplier (GEM) detector soft X-ray spectroscopy system for tokamak applications. The GEM detector used has a one-dimensional, 128-channel readout structure. The channels are connected to radiation-hard electronics with a configurable analog stage and fast ADCs supporting speeds of 125 MSPS for each channel. The digitized data are sent directly to the FPGAs using fast serial links. The preprocessing algorithms are implemented in the FPGAs, with data buffering in the on-board 2 Gb DDR3 memory chips. After the algorithmic stage, the data are sent to an Intel Xeon-based PC for further postprocessing over a PCI-Express Gen 2 link. For the connection of multiple FPGAs, an 8-to-1 PCI-Express switch was designed. The whole system can support up to 2048 analog channels. The scope of this work is an FPGA-based implementation of a recorder of the raw signal from the GEM detector. Since the system will work in a very challenging environment (neutron radiation, intense electromagnetic fields), the registered signals from the GEM detector can be corrupted, and in the case of very intense hot plasma radiation (e.g., laser-generated plasma) the registered signals can overlap. It is therefore valuable to register the raw signals from the GEM detector with a high number of events during soft X-ray radiation. The signal analysis will have a direct impact on the implementation of the photon energy computation algorithms. As a result, the system will produce energy spectra and the topological distribution of soft X-ray radiation. Advanced software was developed to perform complex system startup and monitoring of the hardware units. Using an array of two one-dimensional GEM detectors, it will be possible to perform tomographic reconstruction of plasma impurity radiation in the SXR region.
The beam stop array method to measure object scatter in digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lee, Haeng-hwa; Kim, Ye-seul; Park, Hye-Suk; Kim, Hee-Joung; Choi, Jae-Gu; Choi, Young-Wook
2014-03-01
Scattered radiation is inevitably generated in the object. The distribution of the scattered radiation is influenced by object thickness, field size, object-to-detector distance, and primary energy. One way to measure scatter intensities is to measure the signal detected under the shadow of the lead discs of a beam-stop array (BSA). The scatter measured by a BSA includes not only the radiation scattered within the object (object scatter) but also scatter from external sources, which include the X-ray tube, detector, collimator, x-ray filter, and the BSA itself. Because the background scatter is excluded, the approach can be transferred to different scanner geometries by simple parameter adjustments, without prior knowledge of the scanned object. In this study, a BSA-based method was used to differentiate scatter generated in the phantom (object scatter) from the external background, and this method was incorporated into the BSA algorithm to correct the object scatter. To confirm the background scattered radiation, we obtained scatter profiles and scatter fraction (SF) profiles in the directions perpendicular to the chest wall edge (CWE) with and without scattering material. The scatter profiles with and without the scattering material were similar in the region between 127 mm and 228 mm from the chest wall, indicating that the scatter measured by the BSA included background scatter. Moreover, the BSA algorithm combined with the proposed method could correct the object scatter, because the total radiation profiles after object scatter correction corresponded to the original image in the region between 127 mm and 228 mm from the chest wall. As a result, the BSA method for measuring object scatter can be used to remove background scatter, and it can be applied to different scanner geometries after background scatter correction. In conclusion, the BSA algorithm with the proposed method is effective in correcting object scatter.
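As a rough illustration of the beam-stop-array bookkeeping described above (a minimal sketch with hypothetical variable names, not the authors' code), the scatter fraction under a disc and the background subtraction could look like:

```python
import numpy as np

def scatter_fraction(signal_under_disc, signal_open):
    """SF = S / (S + P): the reading in the shadow of a lead disc is
    scatter only (S); the open-field reading at the same location is
    primary plus scatter (S + P)."""
    return signal_under_disc / signal_open

def object_scatter(scatter_with_object, scatter_without_object):
    # External (tube, collimator, filter, detector, BSA) scatter is
    # estimated from an acquisition without the phantom and subtracted.
    return np.asarray(scatter_with_object) - np.asarray(scatter_without_object)
```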
NASA Astrophysics Data System (ADS)
Zhou, Pu; Wang, Xiaolin; Li, Xiao; Chen, Zilum; Xu, Xiaojun; Liu, Zejin
2009-10-01
Coherent summation of fibre laser beams, which can be scaled to a relatively large number of elements, is simulated by using the stochastic parallel gradient descent (SPGD) algorithm. The applicability of this algorithm for coherent summation is analysed and its optimisation parameters and bandwidth limitations are studied.
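For readers unfamiliar with SPGD, the sketch below shows the bare iteration on a toy metric (on-axis intensity of N coherently summed unit-amplitude beams); the gain, dither amplitude, and metric are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def combined_power(phases):
    # Metric J: on-axis intensity of coherently summed unit-amplitude beams.
    return np.abs(np.exp(1j * phases).sum())**2

def spgd(n_beams=16, gain=0.3, delta=0.1, n_iter=2000):
    phases = rng.uniform(0, 2 * np.pi, n_beams)            # unknown piston errors
    for _ in range(n_iter):
        dither = delta * rng.choice([-1.0, 1.0], n_beams)  # random +/- perturbation
        dJ = combined_power(phases + dither) - combined_power(phases - dither)
        phases += gain * dJ * dither                       # stochastic gradient ascent
    return combined_power(phases) / n_beams**2             # 1.0 means perfect phasing
```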
NASA Astrophysics Data System (ADS)
Lonski, P.; Taylor, M. L.; Hackworth, W.; Phipps, A.; Franich, R. D.; Kron, T.
2014-03-01
Different treatment planning system (TPS) algorithms calculate radiation dose in different ways. This work compares in vivo measurements to the dose calculated at out-of-field locations by three commercially available algorithms in the Eclipse treatment planning system. LiF:Mg,Cu,P thermoluminescent dosimeter (TLD) chips were placed with 1 cm build-up at six locations on the contralateral side of 5 patients undergoing radiotherapy for breast cancer. TLD readings were compared to calculations from the Pencil Beam Convolution (PBC), Anisotropic Analytical Algorithm (AAA), and Acuros XB (XB) algorithms. AAA predicted zero dose at points beyond 16 cm from the field edge. In the same region PBC returned an unrealistically constant result independent of distance, whereas XB showed good agreement with measured data, although it consistently underestimated dose by ~0.1% of the prescription dose. At points closer to the field edge XB was the superior algorithm, agreeing with TLD results to within 15% of measured dose. Both AAA and PBC showed mixed agreement, with overall discrepancies considerably greater than XB. While XB is certainly the preferable algorithm, it should be noted that TPS algorithms in general are not designed to calculate dose at peripheral locations, and calculation results in such regions should be treated with caution.
NASA Technical Reports Server (NTRS)
Yang, Wenze; Huang, Dong; Tan, Bin; Stroeve, Julienne C.; Shabanov, Nikolay V.; Knyazikhin, Yuri; Nemani, Ramakrishna R.; Myneni, Ranga B.
2006-01-01
The analysis of two years of Collection 3 and five years of Collection 4 Terra Moderate Resolution Imaging Spectroradiometer (MODIS) Leaf Area Index (LAI) and Fraction of Photosynthetically Active Radiation (FPAR) data sets is presented in this article, with the goal of understanding product quality with respect to version (Collection 3 versus 4), algorithm (main versus backup), snow (snow-free versus snow on the ground), and cloud (cloud-free versus cloudy) conditions. Retrievals from the main radiative transfer algorithm increased from 55% in Collection 3 to 67% in Collection 4 due to algorithm refinements and improved inputs. Anomalously high LAI/FPAR values observed in the Collection 3 product for some vegetation types were corrected in Collection 4. The problem of reflectance saturation and too few main-algorithm retrievals in broadleaf forests persisted in Collection 4. The spurious seasonality in needleleaf LAI/FPAR fields was traced to fewer reliable input data and retrievals during the boreal winter period. About 97% of the snow-covered pixels were processed by the backup Normalized Difference Vegetation Index-based algorithm. Similarly, a majority of retrievals under cloudy conditions were obtained from the backup algorithm. For these reasons, users are advised to consult the quality flags accompanying the LAI and FPAR product.
Evaluating cloud retrieval algorithms with the ARM BBHRP framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mlawer, E.; Dunn, M.
2008-03-10
Climate and weather prediction models require accurate calculations of vertical profiles of radiative heating. Although heating rate calculations cannot be directly validated due to the lack of corresponding observations, surface and top-of-atmosphere measurements can indirectly establish the quality of computed heating rates through validation of the calculated irradiances at the atmospheric boundaries. The ARM Broadband Heating Rate Profile (BBHRP) project, a collaboration of all the working groups in the program, was designed with these heating rate validations as a key objective. Given the large dependence of radiative heating rates on cloud properties, a critical component of BBHRP radiative closure analyses has been the evaluation of cloud microphysical retrieval algorithms. This evaluation is an important step in establishing the necessary confidence in the continuous profiles of computed radiative heating rates produced by BBHRP at the ARM Climate Research Facility (ACRF) sites that are needed for modeling studies. This poster details the continued effort to evaluate cloud property retrieval algorithms within the BBHRP framework, a key focus of the project this year. A requirement for the computation of accurate heating rate profiles is a robust cloud microphysical product that captures the occurrence, height, and phase of clouds above each ACRF site. Various approaches to retrieve the microphysical properties of liquid, ice, and mixed-phase clouds have been processed in BBHRP for the ACRF Southern Great Plains (SGP) and North Slope of Alaska (NSA) sites. These retrieval methods span a range of assumptions concerning the parameterization of cloud location, particle density, size, and shape, and involve different measurement sources. We present the radiative closure results from several different retrieval approaches for the SGP site, including those from Microbase, the current 'reference' retrieval approach in BBHRP. At the NSA, mixed-phase clouds and clouds with low optical depth are prevalent, and the radiative closure studies using Microbase demonstrated significant residuals. As an alternative to Microbase at the NSA, the Shupe-Turner cloud property retrieval algorithm, aimed at improving the partitioning of cloud phase and incorporating more constrained, conditional microphysics retrievals, has also been evaluated using the BBHRP data set.
Performance-Based Seismic Design of Steel Frames Utilizing Colliding Bodies Algorithm
Veladi, H.
2014-01-01
A pushover analysis method based on the semirigid connection concept is developed, and the colliding bodies optimization algorithm is employed to find the optimum seismic design of frame structures. Two numerical examples from the literature are studied. The results of the new algorithm are compared to conventional design methods to show the power or weakness of the algorithm. PMID:25202717
Achievement of radiative feedback control for long-pulse operation on EAST
NASA Astrophysics Data System (ADS)
Wu, K.; Yuan, Q. P.; Xiao, B. J.; Wang, L.; Duan, Y. M.; Chen, J. B.; Zheng, X. W.; Liu, X. J.; Zhang, B.; Xu, J. C.; Luo, Z. P.; Zang, Q.; Li, Y. Y.; Feng, W.; Wu, J. H.; Yang, Z. S.; Zhang, L.; Luo, G.-N.; Gong, X. Z.; Hu, L. Q.; Hu, J. S.; Li, J.
2018-05-01
The active feedback control of radiated power to prevent overheating of the divertor target plates during long-pulse operation has been developed and implemented on EAST. The radiation control algorithm, with impurity seeding via a supersonic molecular beam injection (SMBI) system, has shown great success in both reliability and stability. By seeding a sequence of short neon (Ne) impurity pulses with the SMBI from the outer mid-plane, the radiated power of the bulk plasma can be well controlled, and the duration of radiative control (feedforward and feedback) is 4.5 s during a discharge of 10 s. Reliable control of the total radiated power of the bulk plasma has been successfully achieved in long-pulse upper single null (USN) discharges with a tungsten divertor. The achieved control range of the radiated power fraction f_rad is 20%–30% in L-mode regimes and 18%–36% in H-mode regimes. The temperature of the divertor target plates was maintained at a low level during the radiative control phase. The peak particle flux on the divertor target was decreased by feedforward Ne injection in the L-mode discharges, while the Ne pulses from the SMBI had no influence on the peak particle flux because of the very small injected volume. It is shown that although the radiated power increased, no serious reduction of plasma-stored energy or confinement was observed during the control phase. The success of the radiation control algorithm and the current experiments in radiated power control represent a significant advance for steady-state divertor radiation and heat flux control on EAST for near-future long-pulse operation.
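At its simplest, pulsed impurity seeding under feedback reduces to deciding each control cycle whether to fire another short SMBI pulse; the sketch below shows such an on/off rule with a deadband. This is an illustrative reconstruction under stated assumptions, not the actual EAST controller.

```python
def fire_smbi_pulse(f_rad_measured, f_rad_target, deadband=0.02):
    """Return True if a short Ne SMBI pulse should be fired this cycle.

    A minimal bang-bang scheme: inject while the measured radiated-power
    fraction sits below the setpoint minus a deadband; hold off once it
    is within or above the band (hypothetical parameter values)."""
    return f_rad_measured < (f_rad_target - deadband)
```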
NASA Technical Reports Server (NTRS)
Smith, Eric A.; Mugnai, Alberto; Cooper, Harry J.; Tripoli, Gregory J.; Xiang, Xuwu
1992-01-01
The relationship between emerging microwave brightness temperatures (T(B)s) and vertically distributed mixtures of liquid and frozen hydrometeors was investigated, using a cloud-radiation model, in order to establish the framework for a hybrid statistical-physical rainfall retrieval algorithm. Although strong relationships were found between the T(B) values and various rain parameters, these correlations are misleading in that the T(B)s are largely controlled by fluctuations in the ice-particle mixing ratios, which in turn are highly correlated to fluctuations in liquid-particle mixing ratios. However, the empirically based T(B)-rain-rate (T(B)-RR) algorithms can still be used as tools for estimating precipitation if the hydrometeor profiles used for T(B)-RR algorithms are not specified in an ad hoc fashion.
A radiation scalar for numerical relativity.
Beetle, Christopher; Burko, Lior M
2002-12-30
This Letter describes a scalar curvature invariant for general relativity with a certain, distinctive feature. While many such invariants exist, this one vanishes in regions of space-time which can be said unambiguously to contain no gravitational radiation. In more general regions which incontrovertibly support nontrivial radiation fields, it can be used to extract local, coordinate-independent information partially characterizing that radiation. While a clear, physical interpretation is possible only in such radiation zones, a simple algorithm can be given to extend the definition smoothly to generic regions of space-time.
Jafarzadeh, Naser; Mani-Varnosfaderani, Ahmad; Gilany, Kambiz; Eynali, Samira; Ghaznavi, Habib; Shakeri-Zadeh, Ali
2018-03-01
Radiotherapy is one of the main modalities of cancer treatment. The utility of Raman spectroscopy (RS) for detecting distinct radiobiological responses in human cancer cells is currently under investigation. RS holds great promise for personalizing radiotherapy treatments. Here, we report the effects of radiation dose and post-irradiation time on the molecular changes in human breast cancer SKBR3 cells, using RS. The SKBR3 cells were irradiated with gamma radiation at doses of 0, 1, 2, 4, and 6 Gy. The Raman signals were acquired 24 and 48 h after the gamma irradiation. The collected Raman spectra were analyzed by different statistical methods such as principal component analysis, linear discriminant analysis, and a genetic algorithm. A thorough analysis of the obtained Raman signals revealed that 2 Gy of gamma radiation induces remarkable molecular and structural changes in the SKBR3 cells. We found that the wavenumbers in the range of 1000-1400 cm⁻¹ in the Raman spectra are selective for discriminating between the effects of the different irradiation doses. The results also revealed that a longer post-irradiation time leads to relaxation of the cells toward their initial state. The molecular changes that occurred in the 2 Gy samples were mostly reversible. On the other hand, exposure to doses higher than 4 Gy induced serious irreversible changes, seen mainly in the 2700-2800 cm⁻¹ region of the Raman spectra. The classification models developed in this study would help to predict the radiation-induced molecular changes in cancer cells using RS alone. This designed framework may also facilitate the process of biodosimetry. Copyright © 2018 Elsevier B.V. All rights reserved.
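A minimal version of the chemometric pipeline named above (PCA for dimensionality reduction feeding a linear discriminant classifier) might look as follows; the array names, component count, and cross-validation folds are assumptions for illustration, not the authors' settings.

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def dose_classifier(spectra, dose_labels):
    """spectra: (n_cells, n_wavenumbers) Raman intensities;
    dose_labels: irradiation dose class per spectrum (e.g. 0/2/4/6 Gy)."""
    model = make_pipeline(PCA(n_components=10),
                          LinearDiscriminantAnalysis())
    accuracy = cross_val_score(model, spectra, dose_labels, cv=5).mean()
    return model.fit(spectra, dose_labels), accuracy
```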
Lyapustin, Alexei
2002-09-20
Results of an extensive validation study of the new radiative transfer code SHARM-3D are described. The code is designed for modeling of unpolarized monochromatic radiative transfer in the visible and near-IR spectra in the laterally uniform atmosphere over an arbitrarily inhomogeneous anisotropic surface. The surface boundary condition is periodic. The algorithm is based on an exact solution derived with the Green's function method. Several parameterizations were introduced into the algorithm to achieve superior performance. As a result, SHARM-3D is 2-3 orders of magnitude faster than the rigorous code SHDOM. It can model radiances over large surface scenes for a number of incidence-view geometries simultaneously. Extensive comparisons against SHDOM indicate that SHARM-3D has an average accuracy of better than 1%, which along with the high speed of calculations makes it a unique tool for remote-sensing applications in land surface and related atmospheric radiation studies.
Computation of diffuse sky irradiance from multidirectional radiance measurements
NASA Technical Reports Server (NTRS)
Ahmad, Suraiya P.; Middleton, Elizabeth M.; Deering, Donald W.
1987-01-01
Accurate determination of the diffuse solar spectral irradiance directly above the land surface is important in characterizing the reflectance properties of these surfaces, especially vegetation canopies. This determination is also needed to infer the net radiation budget of the earth-atmosphere system above these surfaces. An algorithm is developed here for the computation of hemispheric diffuse irradiance using the measurements from an instrument called PARABOLA, which rapidly measures upwelling and downwelling radiances in three selected wavelength bands. The validity of the algorithm is established from simulations. The standard reference data set of diffuse radiances of Dave (1978), obtained by solving the radiative transfer equation numerically for realistic atmospheric models, is used to simulate PARABOLA radiances. Hemispheric diffuse irradiance is estimated from a subset of simulated radiances by using the algorithm described. The algorithm is validated by comparing the estimated diffuse irradiance with the true diffuse irradiance of the standard data set. The validations include sensitivity studies for two wavelength bands (visible, 0.65-0.67 micron; near infrared, 0.81-0.84 micron), different atmospheric conditions, solar elevations, and surface reflectances. In most cases the hemispheric diffuse irradiance computed from simulated PARABOLA radiances and the true irradiance obtained from radiative transfer calculations agree within 1-2 percent. This technique can be applied to other sampling instruments designed to estimate hemispheric diffuse sky irradiance.
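The hemispheric integration at the heart of such an algorithm is the cosine-weighted integral of sky radiance, E = ∫∫ L(θ, φ) cos θ sin θ dθ dφ; a minimal numerical version over a regular angular grid (illustrative only, not the PARABOLA processing code) is:

```python
import numpy as np

def diffuse_irradiance(radiance, theta, phi):
    """radiance: (n_theta, n_phi) sky radiances [W m-2 sr-1] sampled at
    zenith angles theta and azimuths phi (radians); returns the diffuse
    irradiance on a horizontal surface."""
    weight = (np.cos(theta) * np.sin(theta))[:, None]
    inner = np.trapz(radiance * weight, theta, axis=0)  # integrate over zenith
    return np.trapz(inner, phi)                         # integrate over azimuth
```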
Regularized minimum I-divergence methods for the inverse blackbody radiation problem
NASA Astrophysics Data System (ADS)
Choi, Kerkil; Lanterman, Aaron D.; Shin, Jaemin
2006-08-01
This paper proposes iterative methods for estimating the area temperature distribution of a blackbody from its total radiated power spectrum measurements. This is called the inverse blackbody radiation problem. This problem is inherently ill-posed due to the characteristics of the kernel in the underlying integral equation given by Planck's law. The functions involved in the problem are all non-negative. Csiszár's I-divergence is an information-theoretic discrepancy measure between two non-negative functions. We derive iterative methods for minimizing Csiszár's I-divergence between the measured power spectrum and the power spectrum arising from the estimate according to the integral equation. Due to the ill-posedness of the problem, unconstrained algorithms often produce poor estimates, especially when the measurements are corrupted by noise. To alleviate this difficulty, we apply regularization methods to our algorithms. Penalties based on Shannon's entropy, the L1-norm and Good's roughness are chosen to suppress the undesirable artefacts. When a penalty is applied, the pertinent optimization that needs to be performed at each iteration is no longer trivial. In particular, Good's roughness causes couplings between estimate components. To handle this issue, we adapt Green's one-step-late method. This choice is based on the important fact that our minimum I-divergence algorithms can be interpreted as asymptotic forms of certain expectation-maximization algorithms. The effectiveness of our methods is illustrated via various numerical experiments.
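For a discretized kernel, the unregularized minimum I-divergence iteration takes the familiar multiplicative (EM-like) form shown below; the grids, constants, and iteration count are illustrative assumptions, and the penalized variants discussed above would modify the update via Green's one-step-late correction.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck_kernel(nu, T):
    # K[i, j] = Planck spectral radiance at frequency nu[i], temperature T[j]
    x = H * nu[:, None] / (KB * T[None, :])
    return 2 * H * nu[:, None]**3 / C**2 / np.expm1(x)

def min_idivergence(power_spectrum, nu, T, n_iter=500):
    """Multiplicative fixed-point update minimizing Csiszar's I-divergence
    between the measured spectrum and K @ a (all quantities non-negative)."""
    K = planck_kernel(nu, T)
    a = np.ones(T.size)              # area-temperature distribution estimate
    col_sum = K.sum(axis=0)
    for _ in range(n_iter):
        ratio = power_spectrum / (K @ a + 1e-300)
        a *= (K.T @ ratio) / col_sum
    return a
```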
Point coordinates extraction from localized hyperbolic reflections in GPR data
NASA Astrophysics Data System (ADS)
Ristić, Aleksandar; Bugarinović, Željko; Vrtunski, Milan; Govedarica, Miro
2017-09-01
In this paper, we propose an automated detection algorithm for localizing the apexes and the points on the prongs of the hyperbolic reflections that arise in GPR scanning. The objects of interest are cylindrical underground utilities, which produce a distinctive hyperbolic reflection in the radargram. The algorithm applies a trained neural network to the radargram as a raster image, extracting the segments of interest that contain hyperbolic reflections; this significantly reduces the amount of data for further analysis. The extracted segments delimit the zones for apex localization. This is followed by the extraction of points on the prongs of the hyperbolic reflections, carried out until a stopping criterion is satisfied, regardless of the borders of the segment of interest. In the final step, false hyperbolic reflections caused by constructive interference are classified and removed. The algorithm is implemented in the MATLAB environment. The proposed algorithm has several advantages. It can successfully recognize true hyperbolic reflections in radargram images and extract their coordinates, with a very low rate of false detections and without prior knowledge of the number of hyperbolic reflections or buried utilities. It can be applied to radargrams containing single and multiple hyperbolic reflections, intersected, distorted, as well as incomplete (asymmetric) hyperbolic reflections, all in the presence of higher noise levels. A special feature of the algorithm is a dedicated procedure for analyzing and removing false hyperbolic reflections generated by constructive interference from reflectors associated with the utilities. The algorithm was tested on a number of synthetic radargrams and radargrams acquired in field surveys. To illustrate its performance, we present the characteristics of the algorithm on five representative radargrams obtained under real conditions. These examples cover different acquisition scenarios, varying the number of buried objects, their disposition, size, and the noise level. The most complex example was also tested as a synthetic radargram generated by gprMax. The processing time for examples with one or two hyperbolic reflections is 0.1 to 0.3 s, while for the most complex examples it is 2.2 to 4.9 s. Overall, the experimental results show that the proposed algorithm exhibits promising performance both in utility detection and in processing speed.
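Once apex and prong points have been extracted, each reflection can be summarized by fitting the standard point-reflector traveltime hyperbola t(x) = (2/v)·sqrt(d² + (x − x₀)²); the least-squares sketch below (hypothetical picks and starting values, not the authors' MATLAB code) recovers the apex position x₀, depth d, and propagation velocity v.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_hyperbola(x, t):
    """x: antenna positions [m]; t: picked two-way traveltimes [ns].
    Returns (x0, d, v) for t(x) = (2 / v) * sqrt(d**2 + (x - x0)**2)."""
    def residuals(p):
        x0, d, v = p
        return t - (2.0 / v) * np.sqrt(d**2 + (x - x0)**2)
    p0 = (x[np.argmin(t)], 0.5, 0.1)  # apex under earliest arrival; d in m, v in m/ns
    return least_squares(residuals, p0).x
```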
Implementation of a partitioned algorithm for simulation of large CSI problems
NASA Technical Reports Server (NTRS)
Alvin, Kenneth F.; Park, K. C.
1991-01-01
The implementation of a partitioned numerical algorithm for determining the dynamic response of coupled structure/controller/estimator finite-dimensional systems is reviewed. The partitioned approach leads to a set of coupled first and second-order linear differential equations which are numerically integrated with extrapolation and implicit step methods. The present software implementation, ACSIS, utilizes parallel processing techniques at various levels to optimize performance on a shared-memory concurrent/vector processing system. A general procedure for the design of controller and filter gains is also implemented, which utilizes the vibration characteristics of the structure to be solved. Also presented are: example problems; a user's guide to the software; the procedures and algorithm scripts; a stability analysis for the algorithm; and the source code for the parallel implementation.
NASA Astrophysics Data System (ADS)
Lim, H.; Choi, M.; Kim, J.; Go, S.; Chan, P.; Kasai, Y.
2017-12-01
This study attempts to retrieve aerosol optical properties (AOPs) with a spectral matching method using three visible and one near-infrared channel (470, 510, 640, and 860 nm). The method relies on a look-up table (LUT) constructed through radiative transfer modeling. Cloud detection is one of the most important processes for guaranteeing the quality of the AOPs. Since the AHI has several infrared channels, which are very advantageous for cloud detection, clouds can be removed using brightness temperature difference (BTD) and spatial variability tests. The Yonsei Aerosol Retrieval (YAER) algorithm operates over dark surfaces, so bright surfaces (e.g., desert, snow) are masked first. We then consider the reflectance characteristics of land and ocean surfaces using the three visible channels. The known surface-reflectivity problem at high latitudes is addressed in this algorithm by selecting appropriate channels through improved tests. Over land, we retrieve the AOPs by obtaining the visible surface reflectance from the NIR channel via its relationship with the shortwave-infrared normalized difference vegetation index (NDVIswir). Because the estimated surface reflectance tends to be too low over urban and cropland areas, we improved the visible surface reflectance by accounting for the urban effect. In this version, the ocean surface reflectance uses the new Cox-Munk method, which accounts for the ocean bidirectional reflectance distribution function (BRDF); its inputs include wind speed, chlorophyll, and salinity. Based on validation against sun-photometer measurements from the AErosol RObotic NETwork (AERONET), we confirm that the quality of the aerosol optical depth (AOD) from the YAER algorithm is comparable to the product from the Japan Aerospace Exploration Agency (JAXA) retrieval algorithm. Future updates will consider improving the land surface reflectance with a hybrid approach and adding non-spherical aerosols, which should further improve the YAER products, particularly dust retrievals over bright surfaces in East Asia.
An Improved Neutron Transport Algorithm for Space Radiation
NASA Technical Reports Server (NTRS)
Heinbockel, John H.; Clowdsley, Martha S.; Wilson, John W.
2000-01-01
A low-energy neutron transport algorithm for use in space radiation protection is developed. The algorithm is based upon a multigroup analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. This analysis is accomplished by solving a realistic but simplified neutron transport test problem. The test problem is analyzed by using numerical and analytical procedures to obtain an accurate solution within specified error bounds. Results from the test problem are then used for determining mean values associated with rescattering terms that are associated with a multigroup solution of the straight-ahead Boltzmann equation. The algorithm is then coupled to the Langley HZETRN code through the evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for a water and an aluminum-water shield-target configuration is then compared with LAHET and MCNPX Monte Carlo code calculations for the same shield-target configuration. The algorithm developed showed a great improvement in results over the unmodified HZETRN solution. In addition, a two-directional solution of the evaporation source showed even further improvement of the fluence near the front of the water target where diffusion from the front surface is important.
Chen, G.; Chacón, L.
2015-08-11
For decades, the Vlasov–Darwin model has been recognized to be attractive for particle-in-cell (PIC) kinetic plasma simulations in non-radiative electromagnetic regimes, to avoid radiative noise issues and gain computational efficiency. However, the Darwin model results in an elliptic set of field equations that renders conventional explicit time integration unconditionally unstable. We explore a fully implicit PIC algorithm for the Vlasov–Darwin model in multiple dimensions, which overcomes many difficulties of traditional semi-implicit Darwin PIC algorithms. The finite-difference scheme for Darwin field equations and particle equations of motion is space–time-centered, employing particle sub-cycling and orbit-averaging. This algorithm conserves total energy, local charge, canonical momentum in the ignorable direction, and preserves the Coulomb gauge exactly. An asymptotically well-posed fluid preconditioner allows efficient use of large cell sizes, which are determined by accuracy considerations, not stability, and can be orders of magnitude larger than required in a standard explicit electromagnetic PIC simulation. Finally, we demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 2D–3V.
A real-time photo-realistic rendering algorithm of ocean color based on bio-optical model
NASA Astrophysics Data System (ADS)
Ma, Chunyong; Xu, Shu; Wang, Hongsong; Tian, Fenglin; Chen, Ge
2016-12-01
A real-time photo-realistic rendering algorithm for ocean color is introduced in this paper, which accounts for an ocean bio-optical model. The bio-optical model mainly involves phytoplankton, colored dissolved organic material (CDOM), inorganic suspended particles, etc., which contribute differently to the absorption and scattering of light. We decompose the emergent light of the ocean surface into light reflected from the sun and the sky and subsurface scattered light. We establish an ocean surface transmission model based on the ocean bidirectional reflectance distribution function (BRDF) and the Fresnel law; the outputs of this model serve as the incident-light parameters for subsurface scattering. Using an ocean subsurface scattering algorithm combined with the bio-optical model, we compute the emergent scattered radiance in different directions. Then, we blend the reflections of sunlight and sky light to implement real-time ocean color rendering on the graphics processing unit (GPU). Finally, we compare the radiance reflectance calculated by the Hydrolight radiative transfer model with that produced by our algorithm to validate the physical realism of our method; the results show that our algorithm can achieve real-time, highly realistic ocean color scenes.
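The Fresnel term used in such a surface transmission model is standard; a compact unpolarized version for the air-sea interface (assuming a seawater refractive index of about 1.34) is sketched below.

```python
import numpy as np

def fresnel_reflectance(theta_i, n1=1.0, n2=1.34):
    """Unpolarized Fresnel reflectance for incidence angle theta_i (radians)
    at an interface from index n1 (air) to n2 (seawater)."""
    theta_t = np.arcsin(np.clip(n1 / n2 * np.sin(theta_i), -1.0, 1.0))
    rs = (n1 * np.cos(theta_i) - n2 * np.cos(theta_t)) / \
         (n1 * np.cos(theta_i) + n2 * np.cos(theta_t))
    rp = (n1 * np.cos(theta_t) - n2 * np.cos(theta_i)) / \
         (n1 * np.cos(theta_t) + n2 * np.cos(theta_i))
    return 0.5 * (rs**2 + rp**2)
```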
Iterative Track Fitting Using Cluster Classification in Multi Wire Proportional Chamber
NASA Astrophysics Data System (ADS)
Primor, David; Mikenberg, Giora; Etzion, Erez; Messer, Hagit
2007-10-01
This paper addresses the problem of track fitting of a charged particle in a multi wire proportional chamber (MWPC) using cathode readout strips. When a charged particle crosses a MWPC, a positive charge is induced on a cluster of adjacent strips. In the presence of high radiation background, the cluster charge measurements may be contaminated due to background particles, leading to less accurate hit position estimation. The least squares method for track fitting assumes the same position error distribution for all hits and thus loses its optimal properties on contaminated data. For this reason, a new robust algorithm is proposed. The algorithm first uses the known spatial charge distribution caused by a single charged particle over the strips, and classifies the clusters into "clean" and "dirty" clusters. Then, using the classification results, it performs an iterative weighted least squares fitting procedure, updating its optimal weights each iteration. The performance of the suggested algorithm is compared to other track fitting techniques using a simulation of tracks with radiation background. It is shown that the algorithm improves the track fitting performance significantly. A practical implementation of the algorithm is presented for muon track fitting in the cathode strip chamber (CSC) of the ATLAS experiment.
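The iterative reweighting idea can be sketched in a few lines: start from weights informed by the clean/dirty classification, fit a straight track, then refit with weights shrunk on large residuals. Everything below (the line model, the reweighting function, the iteration count) is an illustrative assumption rather than the paper's exact scheme.

```python
import numpy as np

def irls_track_fit(z, y, prior_weights, n_iter=5):
    """Weighted straight-line fit y = a + b*z with iterative reweighting.
    prior_weights: ~1 for 'clean' clusters, small for 'dirty' ones."""
    w = prior_weights.copy()
    A = np.column_stack([np.ones_like(z), z])
    for _ in range(n_iter):
        W = np.diag(w)
        coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)    # normal equations
        r = y - A @ coef
        sigma = np.sqrt(np.average(r**2, weights=w)) + 1e-12
        w = prior_weights / (1.0 + (r / (2.0 * sigma))**2)  # damp outliers
    return coef  # (intercept a, slope b)
```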
Gross domestic product estimation based on electricity utilization by artificial neural network
NASA Astrophysics Data System (ADS)
Stevanović, Mirjana; Vujičić, Slađana; Gajić, Aleksandar M.
2018-01-01
The main goal of the paper was to estimate gross domestic product (GDP) from electricity utilization using an artificial neural network (ANN). Electricity utilization was analyzed by source, including renewable, coal, and nuclear sources. The ANN was trained with two training algorithms, namely the extreme learning method and the back-propagation algorithm, in order to produce the best prediction of GDP. According to the results, it can be concluded that the ANN model trained with the extreme learning method can produce an acceptable prediction of GDP based on electricity utilization.
Induction as Knowledge Integration
NASA Technical Reports Server (NTRS)
Smith, Benjamin D.; Rosenbloom, Paul S.
1996-01-01
Two key issues for induction algorithms are the accuracy of the learned hypothesis and the computational resources consumed in inducing that hypothesis. One of the most promising ways to improve performance along both dimensions is to make use of additional knowledge. Multi-strategy learning algorithms tackle this problem by employing several strategies for handling different kinds of knowledge in different ways. However, integrating knowledge into an induction algorithm can be difficult when the new knowledge differs significantly from the knowledge the algorithm already uses; in many cases the algorithm must be rewritten. This paper presents the Knowledge Integration framework for Induction (KII), which provides a uniform mechanism for integrating knowledge into induction. In theory, arbitrary knowledge can be integrated with this mechanism, but in practice the knowledge representation language determines both the knowledge that can be integrated and the costs of integration and induction. By instantiating KII with various set representations, algorithms can be generated at different trade-off points along these dimensions. One instantiation of KII, called RS-KII, is presented that can implement hybrid induction algorithms, depending on which knowledge it utilizes. RS-KII is demonstrated to implement AQ-11, as well as a hybrid algorithm that utilizes a domain theory and noisy examples. Other algorithms are also possible.
NASA Astrophysics Data System (ADS)
Wei, Qingyang; Ma, Tianyu; Xu, Tianpeng; Zeng, Ming; Gu, Yu; Dai, Tiantian; Liu, Yaqiang
2018-01-01
Modern positron emission tomography (PET) detectors are made from pixelated scintillation crystal arrays and read out by Anger logic. The interaction position of the gamma-ray must be assigned to a crystal using a crystal position map or look-up table, making crystal identification a critical procedure for pixelated PET systems. In this paper, we propose a novel crystal identification method for a dual-layer-offset LYSO-based animal PET system via Lu-176 background radiation and the mean shift algorithm. Single photon event data from the Lu-176 background radiation are acquired in list mode for 3 h to generate a single photon flood map (SPFM). Coincidence events are obtained from the same data using timing information to generate a coincidence flood map (CFM). The CFM is used to identify the peaks of the inner layer using the mean shift algorithm. The response of the inner layer is then deducted from the SPFM by subtracting the CFM, and the peaks of the outer layer are likewise identified using the mean shift algorithm. The automatically identified peaks are manually inspected with a graphical user interface program. Finally, a crystal position map is generated from these peaks using a distance criterion. The proposed method was verified on the animal PET system with 48 detector blocks on a laptop with an Intel i7-5500U processor. The total runtime for whole-system peak identification is 67.9 s. Results show that the automatic crystal identification has 99.98% and 99.09% accuracy for the peaks of the inner and outer layers of the whole system, respectively. In conclusion, the proposed method is suitable for performing crystal identification on dual-layer-offset lutetium-based PET systems without external radiation sources.
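Peak finding on a flood map with mean shift is directly available in common libraries; a minimal sketch (the bandwidth, given here in flood-map units roughly matching the crystal pitch, is an assumed parameter) is:

```python
from sklearn.cluster import MeanShift

def find_crystal_peaks(event_xy, bandwidth=2.0):
    """event_xy: (n_events, 2) Anger-logic positions from the single-photon
    or coincidence flood map; returns estimated crystal peak coordinates."""
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
    ms.fit(event_xy)
    return ms.cluster_centers_
```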
Multispectral fluorescence image algorithms for detection of frass on mature tomatoes
USDA-ARS?s Scientific Manuscript database
A multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet LED excitation was developed for the detection of frass contamination on mature tomatoes. The algorithm utilized the fluorescence intensities at five wavebands, 515 nm, 640 nm, 664 nm, 690 nm, and 724 nm...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-24
..., as Modified by Amendment No. 1 Thereto, Related to the Hybrid Matching Algorithms June 17, 2010. On... Hybrid System. Each rule currently provides allocation algorithms the Exchange can utilize when executing incoming electronic orders, including the Ultimate Matching Algorithm (``UMA''), and price-time and pro...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alamaniotis, Miltiadis; Tsoukalas, Lefteri H.
2018-01-01
The analysis of acquired data, and the inference from it of whether special nuclear materials are present, plays a significant role in enhancing nuclear nonproliferation. Among the various types of measurements, gamma-ray spectra are the most widely used data for nonproliferation analysis. In this chapter, a method that employs the fireworks algorithm (FWA) for analyzing gamma-ray spectra to detect gamma signatures is presented. In particular, FWA is utilized to fit a set of known signatures to a measured spectrum by optimizing an objective function, with non-zero coefficients indicating the detected signatures. FWA is tested on a set of experimentally obtained measurements and various objective functions (MSE, RMSE, Theil-2, MAE, MAPE, MAP), with results exhibiting its potential to provide high accuracy and precision for the detected signatures. Furthermore, FWA is benchmarked against genetic algorithms and multiple linear regression, outperforming the other tested algorithms in precision for the MAE, MAPE, and MAP measures.
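The underlying fitting problem (express a measured spectrum as a non-negative combination of known signatures and read detections off the non-zero coefficients) can be prototyped with ordinary non-negative least squares standing in for the FWA search; the sketch below uses that substitute, not the chapter's method.

```python
import numpy as np
from scipy.optimize import nnls

def detect_signatures(measured, signatures, threshold=1e-3):
    """measured: (n_channels,) gamma-ray spectrum;
    signatures: (n_channels, n_isotopes) template spectra.
    Returns the mixing coefficients and indices of detected signatures."""
    coeffs, _residual = nnls(signatures, measured)
    return coeffs, np.flatnonzero(coeffs > threshold)
```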
Pirbhulal, Sandeep; Zhang, Heye; Mukhopadhyay, Subhas Chandra; Li, Chunyue; Wang, Yumei; Li, Guanglin; Wu, Wanqing; Zhang, Yuan-Ting
2015-01-01
Body Sensor Network (BSN) is a network of several associated sensor nodes on, inside, or around the human body to monitor vital signals, such as electroencephalogram (EEG), photoplethysmography (PPG), and electrocardiogram (ECG). Each sensor node in a BSN delivers major information; therefore, it is very important to provide data confidentiality and security. All existing approaches to securing BSNs are based on complex cryptographic key generation procedures, which not only demand high resource utilization and computation time but also consume large amounts of energy, power, and memory during data transmission. It is therefore indispensable to put forward an energy-efficient and computationally less complex authentication technique for BSNs. In this paper, a novel biometric-based algorithm is proposed, which utilizes heart rate variability (HRV) in a simple key generation process to secure BSNs. Our proposed algorithm is compared with three data authentication techniques, namely Physiological Signal based Key Agreement (PSKA), Data Encryption Standard (DES), and Rivest-Shamir-Adleman (RSA). Simulation is performed in Matlab, and the results suggest that the proposed algorithm is quite efficient in terms of transmission time, average remaining energy, and total power consumption. PMID:26131666
Calculation of the angular radiance distribution for a coupled atmosphere and canopy
NASA Technical Reports Server (NTRS)
Liang, Shunlin; Strahler, Alan H.
1993-01-01
The radiative transfer equations for a coupled atmosphere and canopy are solved numerically by an improved Gauss-Seidel iteration algorithm. The radiation field is decomposed into three components: unscattered sunlight, single scattering, and multiple scattering radiance for which the corresponding equations and boundary conditions are set up and their analytical or iterational solutions are explicitly derived. The classic Gauss-Seidel algorithm has been widely applied in atmospheric research. This is its first application for calculating the multiple scattering radiance of a coupled atmosphere and canopy. This algorithm enables us to obtain the internal radiation field as well as radiances at boundaries. Any form of bidirectional reflectance distribution function (BRDF) as a boundary condition can be easily incorporated into the iteration procedure. The hotspot effect of the canopy is accommodated by means of the modification of the extinction coefficients of upward single scattering radiation and unscattered sunlight using the formulation of Nilson and Kuusk. To reduce the computation for the case of large optical thickness, an improved iteration formula is derived to speed convergence. The upwelling radiances have been evaluated for different atmospheric conditions, leaf area index (LAI), leaf angle distribution (LAD), leaf size and so on. The formulation presented in this paper is also well suited to analyze the relative magnitude of multiple scattering radiance and single scattering radiance in both the visible and near infrared regions.
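The core update is the classic Gauss-Seidel sweep, which reuses freshly updated unknowns within the same iteration; in the radiative transfer setting each unknown is the multiply scattered radiance in one layer/angle bin. A generic sketch for a linear system A x = b follows (illustrative, not the paper's coupled atmosphere-canopy code).

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-8, max_iter=500):
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_prev = x.copy()
        for i in range(n):
            # use already-updated x[:i] and previous-sweep x[i+1:]
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_prev[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_prev) < tol * (np.linalg.norm(x) + 1e-30):
            return x
    return x
```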
NASA Technical Reports Server (NTRS)
Eyre, Francis B. (Inventor); Fink, Wolfgang (Inventor)
2011-01-01
Disclosed herein is a method of making a three dimensional mold comprising the steps of providing a mold substrate; exposing the substrate with an electromagnetic radiation source for a period of time sufficient to render the portion of the mold substrate susceptible to a developer to produce a modified mold substrate; and developing the modified mold with one or more developing reagents to remove the portion of the mold substrate rendered susceptible to the developer from the mold substrate, to produce the mold having a desired mold shape, wherein the electromagnetic radiation source has a fixed position, and wherein during the exposing step, the mold substrate is manipulated according to a manipulation algorithm in one or more dimensions relative to the electromagnetic radiation source; and wherein the manipulation algorithm is determined using stochastic optimization computations.
Hyer, D; Mart, C
2012-06-01
The aim of this study was to develop a phantom and analysis software that can quickly and accurately determine the location of the radiation isocenter using the electronic portal imaging device (EPID). The phantom can then be used as a static reference point for performing other tests, including radiation versus light field coincidence, MLC and jaw strip tests, and Varian Optical Guidance Platform (OGP) calibration. The proposed solution uses a collimator setting of 10×10 cm to acquire EPID images of the new phantom, constructed from LEGO® blocks. Images from a number of gantry and collimator angles are analyzed by the software to determine the position of the jaws and the center of the phantom in each image. The distance between a chosen jaw and the phantom center is then compared to the same distance measured after a 180 degree collimator rotation to determine whether the phantom is centered in the dimension being investigated. The accuracy of the algorithm's measurements was verified by independent measurement to be approximately equal to the detector's pitch. Light versus radiation field tests, as well as MLC and jaw strip tests, are performed using measurements referenced to the phantom center once it is located at the radiation isocenter. Reproducibility tests showed that the algorithm's results are objectively repeatable. Additionally, the phantom and software are completely independent of the linac vendor, and this study presents results from two major linac manufacturers. An OGP calibration array was also integrated into the phantom to allow calibration of the OGP while the phantom is positioned at the radiation isocenter, reducing the setup uncertainty contained in the calibration. This solution offers a quick, objective method to perform isocenter localization as well as laser alignment, OGP calibration, and other tests on a monthly basis. © 2012 American Association of Physicists in Medicine.
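The 180-degree collimator comparison implies a simple relation: if the jaw-to-center distance reads d₀ at collimator 0° and d₁₈₀ after rotation, the phantom center is offset from the collimator rotation axis by half the difference. A one-line sketch of that bookkeeping (hypothetical function name):

```python
def phantom_offset(d_col0, d_col180):
    """Offset of the phantom center from the collimator rotation axis along
    one dimension, from jaw-to-center distances measured at collimator
    0 and 180 degrees; zero when the phantom is centered."""
    return 0.5 * (d_col0 - d_col180)
```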
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Harrington, Joseph; Blecic, Jasmina; Stemm, Madison M.; Lust, Nate B.; Foster, Andrew S.; Rojo, Patricio M.; Loredo, Thomas J.
2014-11-01
Multi-wavelength secondary-eclipse and transit depths probe the thermo-chemical properties of exoplanets. In recent years, several research groups have developed retrieval codes to analyze the existing data and study the prospects of future facilities. However, the scientific community has limited access to these packages. Here we premiere the open-source Bayesian Atmospheric Radiative Transfer (BART) code. We discuss the key aspects of the radiative-transfer algorithm and the statistical package. The radiation code includes line databases for all HITRAN molecules, high-temperature H2O, TiO, and VO, and includes a preprocessor for adding additional line databases without recompiling the radiation code. Collision-induced absorption lines are available for H2-H2 and H2-He. The parameterized thermal and molecular abundance profiles can be modified arbitrarily without recompilation. The generated spectra are integrated over arbitrary bandpasses for comparison to data. BART's statistical package, Multi-core Markov-chain Monte Carlo (MC3), is a general-purpose MCMC module. MC3 implements the Differential-evolution Markov-chain Monte Carlo algorithm (ter Braak 2006, 2009). MC3 converges 20-400 times faster than the usual Metropolis-Hastings MCMC algorithm, and in addition uses the Message Passing Interface (MPI) to parallelize the MCMC chains. We apply the BART retrieval code to the HD 209458b data set to estimate the planet's temperature profile and molecular abundances. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
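The differential-evolution proposal that gives DE-MC its speed is compact: each chain jumps along the difference of two other randomly chosen chains, scaled by gamma = 2.38/sqrt(2d), plus a small jitter. The sketch below shows only that proposal step (assumed array layout; not the MC3 source).

```python
import numpy as np

rng = np.random.default_rng(1)

def demc_proposal(chains, i, eps=1e-4):
    """chains: (n_chains, n_params) current states; propose for chain i."""
    n, d = chains.shape
    r1, r2 = rng.choice([k for k in range(n) if k != i], size=2, replace=False)
    gamma = 2.38 / np.sqrt(2 * d)        # ter Braak's recommended scale
    return chains[i] + gamma * (chains[r1] - chains[r2]) \
                     + eps * rng.standard_normal(d)
```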
Öğretici, Akın; Çakır, Aydın; Akbaş, Uğur; Köksal, Canan; Kalafat, Ümmühan; Tambaş, Makbule; Bilge, Hatice
2017-01-01
Purpose: This study aims to investigate the factors that reduce the fetal dose in pregnant patients with breast cancer throughout their radiation treatment. Two main factors available in a standard radiation oncology center are considered: the treatment planning system (TPS) and simple shielding, for the intensity-modulated radiation therapy technique. Materials and Methods: The TPS factor was evaluated with two different planning algorithms: the anisotropic analytical algorithm and Acuros XB (external beam). To evaluate the shielding factor, a standard radiological lead apron was chosen. For both studies, thermoluminescence dosimeters were used to measure point doses, and an Alderson RANDO phantom was used to simulate a pregnant female patient. Thirteen measurement points were chosen in the 32nd slice of the phantom to cover all possible locations of a fetus up to the 8th week of gestation. Results: The results show that both TPS algorithms are incapable of calculating fetal doses and are therefore unable to reduce them at the planning stage. Shielding with a standard lead apron, however, provided slight radiation protection (about 4.7%) to the fetus, decreasing the mean fetal dose from 84.8 mGy to 80.8 mGy, which cannot be disregarded in the case of fetal irradiation. Conclusions: Using a lead apron to shield the abdominal region of a pregnant patient during breast irradiation showed a minor advantage; however, its possible side effects (i.e., increased scattered radiation and skin dose) should be investigated further to solidify its benefits. PMID:28974857
Wu, Wei; Fang, Qiang
2011-01-01
A printed spiral coil (PSC) is a coil antenna for near-field wireless power transmission to next-generation implantable medical devices. A PSC for an implantable medical device should be power efficient and emit little electromagnetic radiation into human tissues. We utilized a physical model of the printed spiral coil and applied our algorithm to design a PSC operating at 13.56 MHz. The numerical and electromagnetic simulations of the power transfer efficiency of the PSC in an air medium give 77.5% and 71.1%, respectively. The simulation results show that a printed spiral coil optimized for air retains 15.2% power transfer efficiency in human subcutaneous tissue. In addition, the specific absorption rate (SAR) for this coil antenna in subcutaneous tissue at 13.56 MHz is below 1.6 W/kg, which suggests the coil is safe for implantation according to the IEEE C95.1 safety guideline.
Zhou, Yongxin; Bai, Jing
2007-01-01
A framework that combines atlas registration, fuzzy connectedness (FC) segmentation, and parametric bias field correction (PABIC) is proposed for the automatic segmentation of brain magnetic resonance imaging (MRI). First, the atlas is registered onto the MRI to initialize the subsequent FC segmentation, and original techniques are proposed to estimate the necessary initial parameters of the FC segmentation. The result of the FC segmentation is then used to initialize the following PABIC algorithm. Finally, we re-apply the FC technique to the PABIC-corrected MRI to obtain the final segmentation. We thus avoid expert human intervention and provide a fully automatic method for brain MRI segmentation. Experiments on both simulated and real MRI images demonstrate the validity of the method, as well as its limitations. Being fully automatic, the method is expected to find wide application in areas such as three-dimensional visualization, radiation therapy planning, and medical database construction.
Thermal Model of a Current-Carrying Wire in a Vacuum
NASA Technical Reports Server (NTRS)
Border, James
2006-01-01
A computer program implements a thermal model of an insulated wire carrying electric current and surrounded by a vacuum. The model includes the effects of Joule heating, conduction of heat along the wire, and radiation of heat from the outer surface of the insulation on the wire. The model takes account of the temperature dependences of the thermal and electrical properties of the wire, the emissivity of the insulation, and the possibility that not only can temperature vary along the wire but, in addition, the ends of the wire can be thermally grounded at different temperatures. The resulting second-order differential equation for the steady-state temperature as a function of position along the wire is highly nonlinear. The wire is discretized along its length, and the equation is solved numerically by use of an iterative algorithm that utilizes a multidimensional version of the Newton-Raphson method.
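The steady-state balance described above, conduction along the wire plus Joule heating minus surface radiation, k·A·T'' + I²·ρ(T)/A − ε·σ·(πD)·(T⁴ − T_env⁴) = 0, can be discretized and handed to a nonlinear solver. The sketch below uses constant, copper-like property values as stated assumptions; the NASA program's tabulated temperature-dependent properties and iteration scheme are not reproduced here.

```python
import numpy as np
from scipy.optimize import fsolve

SIGMA = 5.67e-8  # Stefan-Boltzmann constant [W m-2 K-4]

def wire_temperature(n=51, L=1.0, I=2.0, D=5e-4, T0=300.0, T1=350.0,
                     k=400.0, rho0=1.7e-8, alpha=3.9e-3, eps=0.8, Tenv=300.0):
    """Steady-state temperature profile of a current-carrying wire in vacuum
    with fixed end temperatures T0 and T1 (assumed copper-like constants)."""
    A, P = np.pi * D**2 / 4, np.pi * D       # cross-section area, perimeter
    dx = L / (n - 1)

    def residual(T_interior):
        T = np.concatenate(([T0], T_interior, [T1]))
        conduction = k * A * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
        joule = I**2 * rho0 * (1 + alpha * (T[1:-1] - 293.0)) / A
        radiation = eps * SIGMA * P * (T[1:-1]**4 - Tenv**4)
        return conduction + joule - radiation  # per unit length; zero at steady state

    T_int = fsolve(residual, np.full(n - 2, 0.5 * (T0 + T1)))
    return np.concatenate(([T0], T_int, [T1]))
```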
Atmospheric correction for hyperspectral ocean color sensors
NASA Astrophysics Data System (ADS)
Ibrahim, A.; Ahmad, Z.; Franz, B. A.; Knobelspiesse, K. D.
2017-12-01
NASA's heritage Atmospheric Correction (AC) algorithm for multi-spectral ocean color sensors is inadequate for the new generation of spaceborne hyperspectral sensors, such as NASA's first hyperspectral Ocean Color Instrument (OCI) onboard the anticipated Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) satellite mission. The AC process must estimate and remove the atmospheric path radiance contribution due to the Rayleigh scattering by air molecules and by aerosols from the measured top-of-atmosphere (TOA) radiance. Further, it must also compensate for the absorption by atmospheric gases and correct for reflection and refraction of the air-sea interface. We present and evaluate an improved AC for hyperspectral sensors beyond the heritage approach by utilizing the additional spectral information of the hyperspectral sensor. The study encompasses a theoretical radiative transfer sensitivity analysis as well as a practical application of the Hyperspectral Imager for the Coastal Ocean (HICO) and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensors.
SPIN: An Inversion Code for the Photospheric Spectral Line
NASA Astrophysics Data System (ADS)
Yadav, Rahul; Mathew, Shibu K.; Tiwary, Alok Ranjan
2017-08-01
Inversion codes are the most useful tools for inferring the physical properties of the solar atmosphere from the interpretation of Stokes profiles. In this paper, we present the details of a new Stokes Profile INversion code (SPIN) developed specifically to invert the spectro-polarimetric data of the Multi-Application Solar Telescope (MAST) at the Udaipur Solar Observatory. The SPIN code adopts Milne-Eddington approximations to solve the polarized radiative transfer equation (RTE), and a modified Levenberg-Marquardt algorithm has been employed for the fitting. We describe the details and use of the SPIN code for inverting spectro-polarimetric data. We also present the tests performed to validate the inversion code by comparing its results with those of other widely used inversion codes (VFISV and SIR). The inverted results of the SPIN code after its application to Hinode/SP data have been compared with the inverted results from the other inversion codes.
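A Milne-Eddington inversion of this kind is, at its core, a Levenberg-Marquardt least-squares fit of synthetic to observed Stokes profiles. The outline below assumes a user-supplied synthesis routine me_model (a hypothetical callable, since SPIN's internals are not shown here).

```python
import numpy as np
from scipy.optimize import least_squares

def invert_stokes(wavelength, stokes_obs, me_model, p0):
    """wavelength: (n_lambda,); stokes_obs: (4, n_lambda) observed I,Q,U,V;
    me_model(wavelength, params) -> synthetic (4, n_lambda) profiles;
    p0: initial Milne-Eddington parameters (B, inclination, azimuth, ...)."""
    def residuals(params):
        return (me_model(wavelength, params) - stokes_obs).ravel()
    fit = least_squares(residuals, p0, method="lm")  # Levenberg-Marquardt
    return fit.x, fit.cost
```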
Undulator performance on PEP storage ring with different optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shenoy, G.K.; Viccaro, P.J.; Alp, E.E.
1987-12-01
Our ability to profitably utilize the radiation from undulators proposed for PEP is determined by their performance in the different operating modes and by whether the design tolerances required for acceptable operation of the device can be met with available technology. The purpose of this paper is to provide spectral characteristics for some typical devices, calculated using a Monte-Carlo algorithm in which the Lienard-Wiechert potential is integrated over the trajectory of the charged particle along the undulator length. The actual emittance of the particle beam for the various PEP operating modes is included explicitly in the simulations. In addition, we have carried out a partial analysis of the effects of undulator magnetic field errors on the spectral properties in order to estimate the design tolerance requirements for devices that have been proposed for PEP. 6 figs.
NASA Astrophysics Data System (ADS)
Sharifi, Hoda; Zhang, Hong; Bagher-Ebadian, Hassan; Lu, Wei; Ajlouni, Munther I.; Jin, Jian-Yue; Kong, Feng-Ming (Spring); Chetty, Indrin J.; Zhong, Hualiang
2018-03-01
Tumor response to radiation treatment (RT) can be evaluated from changes in metabolic activity between two positron emission tomography (PET) images. Activity changes at individual voxels in pre-treatment PET images (PET1), however, cannot be derived until their associated PET-CT (CT1) images are appropriately registered to during-treatment PET-CT (CT2) images. This study aimed to investigate the feasibility of using deformable image registration (DIR) techniques to quantify radiation-induced metabolic changes on PET images. Five patients with non-small-cell lung cancer (NSCLC) treated with adaptive radiotherapy were considered. PET-CTs were acquired two weeks before RT and 18 fractions after the start of RT. DIR was performed from CT1 to CT2 using B-Spline and diffeomorphic Demons algorithms. The resultant displacements in the tumor region were then corrected using a hybrid finite element method (FEM). Bitmap masks generated from gross tumor volumes (GTVs) in PET1 were deformed using the four different displacement vector fields (DVFs). The conservation of total lesion glycolysis (TLG) in GTVs was used as a criterion to evaluate the quality of these registrations. The deformed masks were united to form a large mask which was then partitioned into multiple layers from center to border. The averages of SUV changes over all the layers were 1.0 ± 1.3, 1.0 ± 1.2, 0.8 ± 1.3, 1.1 ± 1.5 for the B-Spline, B-Spline + FEM, Demons and Demons + FEM algorithms, respectively. TLG changes before and after mapping using B-Spline, Demons, hybrid-B-Spline, and hybrid-Demons registrations were 20.2%, 28.3%, 8.7%, and 2.2% on average, respectively. Compared to image intensity-based DIR algorithms, the hybrid FEM modeling technique is better in preserving TLG and could be useful for evaluation of tumor response for patients with regressing tumors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maier, Andreas; Wigstroem, Lars; Hofmann, Hannes G.
2011-11-15
Purpose: The combination of a quickly rotating C-arm gantry with a digital flat-panel detector has enabled the acquisition of three-dimensional (3D) data in the interventional suite. However, image quality is still somewhat limited since the hardware has not been optimized for CT imaging. Adaptive anisotropic filtering has the ability to improve image quality by reducing the noise level, and thereby the radiation dose, without introducing noticeable blurring. By applying the filtering prior to 3D reconstruction, noise-induced streak artifacts are reduced as compared to processing in the image domain. Methods: 3D anisotropic adaptive filtering was used to process an ensemble of 2D x-ray views acquired along a circular trajectory around an object. After arranging the input data into a 3D space (2D projections + angle), the orientation of structures was estimated using a set of differently oriented filters. The resulting tensor representation of local orientation was utilized to control the anisotropic filtering. Low-pass filtering is applied only along structures to maintain high spatial frequency components perpendicular to these. The evaluation of the proposed algorithm includes numerical simulations, phantom experiments, and in-vivo data which were acquired using an AXIOM Artis dTA C-arm system (Siemens AG, Healthcare Sector, Forchheim, Germany). Spatial resolution and noise levels were compared with and without adaptive filtering. A human observer study was carried out to evaluate low-contrast detectability. Results: The adaptive anisotropic filtering algorithm was found to significantly improve low-contrast detectability by reducing the noise level by half (reduction of the standard deviation in certain areas from 74 to 30 HU). Virtually no degradation of high-contrast spatial resolution was observed in the modulation transfer function (MTF) analysis. Although the algorithm is computationally intensive, hardware acceleration using Nvidia's CUDA interface provided an 8.9-fold speed-up of the processing (from 1336 to 150 s). Conclusions: Adaptive anisotropic filtering has the potential to substantially improve image quality and/or reduce the radiation dose required for obtaining 3D image data using cone beam CT.
Approximate Dynamic Programming Algorithms for United States Air Force Officer Sustainment
2015-03-26
level of correction needed. While paying bonuses has an easily calculable cost, RIFs have more subtle costs. Mone (1994) discovered that in a steady...a regression is performed utilizing instrumental variables to minimize Bellman error. This algorithm uses a set of basis functions to approximate the...transitioned to an all-volunteer force. Charnes et al. (1972) utilize a goal programming model for General Schedule civilian manpower management in the
Rodriguez, Robert M; Hendey, Gregory W; Mower, William R
2017-01-01
Chest imaging plays a prominent role in blunt trauma patient evaluation, but indiscriminate imaging is expensive, may delay care, and unnecessarily exposes patients to potentially harmful ionizing radiation. To improve diagnostic chest imaging utilization, we conducted 3 prospective multicenter studies over 12 years to derive and validate decision instruments (DIs) to guide the use of chest x-ray (CXR) and chest computed tomography (CT). The first DI, NEXUS Chest x-ray, consists of seven criteria (Age >60 years; rapid deceleration mechanism; chest pain; intoxication; altered mental status; distracting painful injury; and chest wall tenderness) and exhibits a sensitivity of 99.0% (95% confidence interval [CI] 98.2-99.4%) and a specificity of 13.3% (95% CI, 12.6%-14.0%) for detecting clinically significant injuries. We developed two NEXUS Chest CT DIs, which are both highly reliable in detecting clinically major injuries (sensitivity of 99.2%; 95% CI 95.4-100%). Designed primarily to focus on detecting major injuries, the NEXUS Chest CT-Major DI consists of six criteria (abnormal CXR; distracting injury; chest wall tenderness; sternal tenderness; thoracic spine tenderness; and scapular tenderness) and exhibits higher specificity (37.9%; 95% CI 35.8-40.1%). Designed to reliably detect both major and minor injuries (sensitivity 95.4%; 95% CI 93.6-96.9%) with resulting lower specificity (25.5%; 95% CI 23.5-27.5%), the NEXUS CT-All rule consists of seven elements (the six NEXUS CT-Major criteria plus rapid deceleration mechanism). The purpose of this review is to synthesize the three DIs into a novel, cohesive summary algorithm with practical implementation recommendations to guide selective chest imaging in adult blunt trauma patients.
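Decision instruments of this kind are simple any-of rules, so they translate directly into code. A sketch of the NEXUS Chest CXR rule as summarized above; the dictionary keys are hypothetical field names, not part of the published instrument:

```python
NEXUS_CXR_CRITERIA = (
    "age_over_60", "rapid_deceleration", "chest_pain", "intoxication",
    "altered_mental_status", "distracting_painful_injury",
    "chest_wall_tenderness",
)

def cxr_indicated(findings: dict) -> bool:
    """Imaging is indicated if any NEXUS Chest criterion is present;
    a patient negative on all seven criteria can forgo CXR."""
    return any(findings.get(c, False) for c in NEXUS_CXR_CRITERIA)

# Example: an intoxicated patient with no other findings triggers imaging
print(cxr_indicated({"intoxication": True}))  # True
```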
Validity of Five Satellite-Based Latent Heat Flux Algorithms for Semi-arid Ecosystems
Feng, Fei; Chen, Jiquan; Li, Xianglan; ...
2015-12-09
Accurate estimation of latent heat flux (LE) is critical in characterizing semiarid ecosystems. Many LE algorithms have been developed during the past few decades. However, the algorithms have not been directly compared, particularly over global semiarid ecosystems. In this paper, we evaluated the performance of five LE models over semiarid ecosystems such as grassland, shrub, and savanna using the Fluxnet dataset of 68 eddy covariance (EC) sites during the period 2000–2009. We also used a modern-era retrospective analysis for research and applications (MERRA) dataset, the Normalized Difference Vegetation Index (NDVI) and Fractional Photosynthetically Active Radiation (FPAR) from the moderate resolution imaging spectroradiometer (MODIS) products; the leaf area index (LAI) from the global land surface satellite (GLASS) products; and the digital elevation model (DEM) from the shuttle radar topography mission (SRTM30) dataset to generate LE at regional scale during the period 2003–2006. The models were the moderate resolution imaging spectroradiometer LE (MOD16) algorithm, the revised remote sensing based Penman–Monteith LE algorithm (RRS), the Priestley–Taylor LE algorithm of the Jet Propulsion Laboratory (PT-JPL), the modified satellite-based Priestley–Taylor LE algorithm (MS-PT), and the semi-empirical Penman LE algorithm (UMD). Direct comparison with ground measured LE showed the PT-JPL and MS-PT algorithms had relatively high performance over semiarid ecosystems, with the coefficient of determination (R2) ranging from 0.6 to 0.8 and root mean squared error (RMSE) of approximately 20 W/m2. Empirical parameters in the structures of the MOD16 and RRS algorithms, and the calibrated coefficients of the UMD algorithm, may be the cause of the reduced performance of these LE algorithms, with R2 ranging from 0.5 to 0.7 and RMSE ranging from 20 to 35 W/m2 for MOD16, RRS and UMD. Sensitivity analysis showed that radiation and vegetation terms were the dominating variables affecting LE fluxes in global semiarid ecosystems.
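The two validation statistics quoted above are standard; a minimal sketch of their computation from paired observed and predicted flux arrays (array names are illustrative):

```python
import numpy as np

def r2_rmse(obs, pred):
    """Coefficient of determination and root mean squared error
    for paired observed/predicted values."""
    resid = obs - pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, np.sqrt(np.mean(resid ** 2))
```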
Localization of non-stationary sources of electromagnetic radiation with the aid of phasometry
NASA Technical Reports Server (NTRS)
Mersov, G. A.
1978-01-01
The possibility of localizing sources of electromagnetic radiation by measuring the time of passage of the radiation, or by measuring its phase, at various points in cosmic space where satellite observatories are located is examined. Algorithms are proposed for localization using two, three, and four astronomical observatories. The precision of the localization and several partial results of practical significance are deduced.
Ning, Peigang; Zhu, Shaocheng; Shi, Dapeng; Guo, Ying; Sun, Minghua
2014-01-01
This work aims to explore the effects of adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) algorithms in reducing computed tomography (CT) radiation dosages in abdominal imaging. CT scans on a standard male phantom were performed at different tube currents. Images at the different tube currents were reconstructed with the filtered back-projection (FBP), 50% ASiR and MBIR algorithms and compared. The CT value, image noise and contrast-to-noise ratios (CNRs) of the reconstructed abdominal images were measured. Volumetric CT dose indexes (CTDIvol) were recorded. At different tube currents, 50% ASiR and MBIR significantly reduced image noise and increased the CNR when compared with FBP. The minimal tube current values required by FBP, 50% ASiR, and MBIR to achieve acceptable image quality using this phantom were 200, 140, and 80 mA, respectively. At the identical image quality, 50% ASiR and MBIR reduced the radiation dose by 35.9% and 59.9% respectively when compared with FBP. Advanced iterative reconstruction techniques are able to reduce image noise and increase image CNRs. Compared with FBP, 50% ASiR and MBIR reduced radiation doses by 35.9% and 59.9%, respectively.
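The contrast-to-noise ratio used in comparisons like the one above is a simple region-of-interest statistic; a sketch assuming pixel arrays extracted from a signal ROI and a uniform background ROI:

```python
import numpy as np

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio: ROI mean difference over background noise."""
    return abs(roi_signal.mean() - roi_background.mean()) / roi_background.std()
```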
Dose specification for radiation therapy: dose to water or dose to medium?
NASA Astrophysics Data System (ADS)
Ma, C.-M.; Li, Jinsheng
2011-05-01
The Monte Carlo method enables accurate dose calculation for radiation therapy treatment planning and has been implemented in some commercial treatment planning systems. Unlike conventional dose calculation algorithms that provide patient dose information in terms of dose to water with variable electron density, the Monte Carlo method calculates the energy deposition in different media and expresses dose to a medium. This paper discusses the differences between dose calculated using water with different electron densities and that calculated for different biological media, and the clinical issues in dose specification, including dose prescription and plan evaluation, using dose to water and dose to medium. We will demonstrate that conventional photon dose calculation algorithms compute doses similar to those simulated by Monte Carlo using water with different electron densities, which are close (<4% differences) to doses to media but significantly different (up to 11%) from doses to water converted from doses to media following American Association of Physicists in Medicine (AAPM) Task Group 105 recommendations. Our results suggest that, for consistency with previous radiation therapy experience, Monte Carlo photon algorithms report dose to medium for radiotherapy dose prescription, treatment plan evaluation and treatment outcome analysis.
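The TG-105 conversion discussed above multiplies dose to medium by a spectrum-averaged water-to-medium stopping-power ratio. A hedged sketch; the ratio values below are rough placeholders for illustration, not tabulated data:

```python
# Dose-to-water from dose-to-medium per the TG-105-style relation
# D_w = D_m * (S/rho)_{m->w}. Values here are illustrative placeholders only.
STOPPING_POWER_RATIO_W_M = {"soft_tissue": 1.01, "cortical_bone": 1.11}

def dose_to_water(dose_to_medium, medium):
    """Convert a Monte Carlo dose-to-medium value to dose-to-water."""
    return dose_to_medium * STOPPING_POWER_RATIO_W_M[medium]

print(dose_to_water(2.0, "cortical_bone"))  # ~11% higher than dose to medium
```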
NASA Astrophysics Data System (ADS)
García Muñoz, A.; Mills, F. P.
2015-01-01
Context. The interpretation of polarised radiation emerging from a planetary atmosphere must rely on solutions to the vector radiative transport equation (VRTE). Monte Carlo integration of the VRTE is a valuable approach for its flexible treatment of complex viewing and/or illumination geometries, and it can intuitively incorporate elaborate physics. Aims: We present a novel pre-conditioned backward Monte Carlo (PBMC) algorithm for solving the VRTE and apply it to planetary atmospheres irradiated from above. As classical BMC methods, our PBMC algorithm builds the solution by simulating the photon trajectories from the detector towards the radiation source, i.e. in the reverse order of the actual photon displacements. Methods: We show that the neglect of polarisation in the sampling of photon propagation directions in classical BMC algorithms leads to unstable and biased solutions for conservative, optically-thick, strongly polarising media such as Rayleigh atmospheres. The numerical difficulty is avoided by pre-conditioning the scattering matrix with information from the scattering matrices of prior (in the BMC integration order) photon collisions. Pre-conditioning introduces a sense of history in the photon polarisation states through the simulated trajectories. Results: The PBMC algorithm is robust, and its accuracy is extensively demonstrated via comparisons with examples drawn from the literature for scattering in diverse media. Since the convergence rate for MC integration is independent of the integral's dimension, the scheme is a valuable option for estimating the disk-integrated signal of stellar radiation reflected from planets. Such a tool is relevant in the prospective investigation of exoplanetary phase curves. We lay out two frameworks for disk integration and, as an application, explore the impact of atmospheric stratification on planetary phase curves for large star-planet-observer phase angles. By construction, backward integration provides a better control than forward integration over the planet region contributing to the solution, and this presents a clear advantage when estimating the disk-integrated signal at moderate and large phase angles. A one-slab, plane-parallel version of the PBMC algorithm is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/573/A72
"Radio-oncomics" : The potential of radiomics in radiation oncology.
Peeken, Jan Caspar; Nüsslin, Fridtjof; Combs, Stephanie E
2017-10-01
Radiomics, a recently introduced concept, describes quantitative computerized algorithm-based feature extraction from imaging data including computed tomography (CT), magnetic resonance imaging (MRI), or positron-emission tomography (PET) images. For radiation oncology it offers the potential to significantly influence clinical decision-making and thus therapy planning and follow-up workflow. After image acquisition, image preprocessing, and defining regions of interest by structure segmentation, algorithms are applied to calculate shape, intensity, texture, and multiscale filter features. By combining multiple features and correlating them with clinical outcome, prognostic models can be created. Retrospective studies have proposed radiomics classifiers predicting, e.g., overall survival, radiation treatment response, distant metastases, or radiation-related toxicity. In addition, radiomics features can be correlated with genomic information ("radiogenomics") and could be used for tumor characterization. Distinct patterns based on data-based as well as genomics-based features will influence radiation oncology in the future. Individualized treatments in terms of dose level adaption and target volume definition, as well as other outcome-related parameters will depend on radiomics and radiogenomics. By integration of various datasets, the prognostic power can be increased, making radiomics a valuable part of future precision medicine approaches. This perspective demonstrates the evidence for the radiomics concept in radiation oncology. The necessity of further studies to integrate radiomics classifiers into clinical decision-making and the radiation therapy workflow is emphasized.
NASA Astrophysics Data System (ADS)
Heidari, A. A.; Moayedi, A.; Abbaspour, R. Ali
2017-09-01
Automated fare collection (AFC) systems are regarded as valuable resources for public transport planners. In this paper, the AFC data are utilized to analyze and extract mobility patterns in a public transportation system. For this purpose, the smart card data are inserted into a proposed metaheuristic-based aggregation model and then converted to an O-D matrix between stops, since the size of O-D matrices makes it difficult to reproduce the measured passenger flows precisely. The proposed strategy is applied to a case study from Haaglanden, Netherlands. In this research, the moth-flame optimizer (MFO) is utilized and evaluated for the first time as a new metaheuristic algorithm (MA) in estimating transit origin-destination matrices. The MFO is a novel, efficient swarm-based MA inspired by the celestial navigation of moth insects in nature. To investigate the capabilities of the proposed MFO-based approach, it is compared to methods that utilize the K-means algorithm, the gray wolf optimization algorithm (GWO) and the genetic algorithm (GA). The sum of the intra-cluster distances and the computational time of operations are considered as the evaluation criteria to assess the efficacy of the optimizers. The optimality of solutions of different algorithms is measured in detail. The traveler's behavior is analyzed to achieve a smooth and optimized transport system. The results reveal that the proposed MFO-based aggregation strategy can outperform other evaluated approaches in terms of convergence tendency and optimality of the results. The results show that it can be utilized as an efficient approach to estimating transit O-D matrices.
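For readers unfamiliar with MFO, a minimal sketch of its core spiral update is shown below, assuming `flames` holds the best positions found so far sorted by fitness; the surrounding fitness evaluation, flame bookkeeping, and clustering objective are omitted:

```python
import numpy as np

def mfo_step(moths, flames, it, max_it, b=1.0):
    """One moth-flame iteration: each moth spirals around its paired flame."""
    n, dim = moths.shape
    n_flames = max(1, round(n - it * (n - 1) / max_it))  # flames shrink over time
    new = np.empty_like(moths)
    for i in range(n):
        f = flames[min(i, n_flames - 1)]   # surplus moths share the last flame
        d = np.abs(f - moths[i])           # moth-to-flame distance
        t = np.random.uniform(-1.0, 1.0, dim)
        new[i] = d * np.exp(b * t) * np.cos(2.0 * np.pi * t) + f  # log spiral
    return new
```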
Lin, Zhenyi; Li, Wei; Gatebe, Charles; Poudyal, Rajesh; Stamnes, Knut
2016-02-20
An optimized discrete-ordinate radiative transfer model (DISORT3) with a pseudo-two-dimensional bidirectional reflectance distribution function (BRDF) is used to simulate and validate ocean glint reflectances at an infrared wavelength (1036 nm) by matching model results with a complete set of BRDF measurements obtained from the NASA cloud absorption radiometer (CAR) deployed on an aircraft. The surface roughness is then obtained through a retrieval algorithm and is used to extend the simulation into the visible spectral range where diffuse reflectance becomes important. In general, the simulated reflectances and surface roughness information are in good agreement with the measurements, and the diffuse reflectance in the visible, ignored in current glint algorithms, is shown to be important. The successful implementation of this new treatment of ocean glint reflectance and surface roughness in DISORT3 will help improve glint correction algorithms in current and future ocean color remote sensing applications.
NASA Astrophysics Data System (ADS)
Niu, Yingli; Li, Wenqiang; Peng, Qian; Geng, Hua; Yi, Yuanping; Wang, Linjun; Nan, Guangjun; Wang, Dong; Shuai, Zhigang
2018-04-01
MOlecular MAterials Property Prediction Package (MOMAP) is a software toolkit for molecular materials property prediction. It focuses on luminescent properties and charge mobility properties. This article contains a brief descriptive introduction of the key features, theoretical models and algorithms of the software, together with examples that illustrate its performance. First, we present the theoretical models and algorithms for molecular luminescent property calculations, which include the excited-state radiative/non-radiative decay rate constants and the optical spectra. Then, a multi-scale simulation approach and its algorithm for molecular charge mobility are described. This approach is based on a hopping model combined with kinetic Monte Carlo and molecular dynamics simulations, and it is especially applicable to a large category of organic semiconductors whose inter-molecular electronic coupling is much smaller than the intra-molecular charge reorganisation energy.
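Hopping-model mobility simulations of the kind described above typically build on Marcus-theory hopping rates; a hedged sketch of that elementary rate (this is the standard Marcus expression, not necessarily MOMAP's exact implementation, and the example inputs are invented):

```python
import numpy as np

HBAR = 6.582119569e-16   # reduced Planck constant, eV*s
KB = 8.617333262e-5      # Boltzmann constant, eV/K

def marcus_rate(coupling_eV, lam_eV, dG_eV=0.0, T=300.0):
    """Marcus charge-hopping rate (1/s) from electronic coupling,
    reorganisation energy, driving force, and temperature."""
    kt = KB * T
    pref = (2.0 * np.pi / HBAR) * coupling_eV ** 2 / np.sqrt(4.0 * np.pi * lam_eV * kt)
    return pref * np.exp(-(dG_eV + lam_eV) ** 2 / (4.0 * lam_eV * kt))

# e.g. 10 meV coupling, 200 meV reorganisation energy
print(f"{marcus_rate(0.01, 0.2):.3e}")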
CyberArc: a non-coplanar-arc optimization algorithm for CyberKnife
NASA Astrophysics Data System (ADS)
Kearney, Vasant; Cheung, Joey P.; McGuinness, Christopher; Solberg, Timothy D.
2017-07-01
The goal of this study is to demonstrate the feasibility of a novel non-coplanar-arc optimization algorithm (CyberArc). This method aims to reduce the delivery time of conventional CyberKnife treatments by allowing for continuous beam delivery. CyberArc uses a 4 step optimization strategy, in which nodes, beams, and collimator sizes are determined, source trajectories are calculated, intermediate radiation models are generated, and final monitor units are calculated, for the continuous radiation source model. The dosimetric results as well as the time reduction factors for CyberArc are presented for 7 prostate and 2 brain cases. The dosimetric quality of the CyberArc plans are evaluated using conformity index, heterogeneity index, local confined normalized-mutual-information, and various clinically relevant dosimetric parameters. The results indicate that the CyberArc algorithm dramatically reduces the treatment time of CyberKnife plans while simultaneously preserving the dosimetric quality of the original plans.
NASA Astrophysics Data System (ADS)
Wichert, Viktoria; Arkenberg, Mario; Hauschildt, Peter H.
2016-10-01
Highly resolved state-of-the-art 3D atmosphere simulations will remain computationally extremely expensive for years to come. In addition to the need for more computing power, rethinking coding practices is necessary. We take a dual approach by introducing especially adapted, parallel numerical methods and correspondingly parallelizing critical code passages. In the following, we present our respective work on PHOENIX/3D. With new parallel numerical algorithms, there is a big opportunity for improvement when iteratively solving the system of equations emerging from the operator splitting of the radiative transfer equation J = ΛS. The narrow-banded approximate Λ-operator Λ* , which is used in PHOENIX/3D, occurs in each iteration step. By implementing a numerical algorithm which takes advantage of its characteristic traits, the parallel code's efficiency is further increased and a speed-up in computational time can be achieved.
NASA Astrophysics Data System (ADS)
Chen, Jun; Zhang, Xiangguang; Xing, Xiaogang; Ishizaka, Joji; Yu, Zhifeng
2017-12-01
Quantifying the diffuse attenuation coefficient of photosynthetically available radiation (Kpar) can improve our knowledge of euphotic depth (Zeu) and biomass heating effects in the upper layers of oceans. An algorithm to semianalytically derive Kpar from remote sensing reflectance (Rrs) is developed for the global open oceans. This algorithm includes the following two portions: (1) a neural network model for deriving the diffuse attenuation coefficients (Kd) that considers the residual error in satellite Rrs, and (2) a three-band depth-dependent Kpar algorithm (TDKA) for describing the spectrally selective attenuation mechanism of underwater solar radiation in the open oceans. This algorithm is evaluated with both in situ PAR profile data and satellite images, and the results show that it can produce acceptable PAR profile estimations while clearly removing the impacts of satellite residual errors on Kpar estimations. Furthermore, the performance of the TDKA algorithm is evaluated by its applicability in Zeu derivation and in simulating the mean temperature within a mixed layer depth (TML), and the results show that it can significantly decrease the uncertainty in both compared with the classical chlorophyll-a concentration-based Kpar algorithm. Finally, the TDKA algorithm is applied in simulating biomass heating effects in the Sargasso Sea near Bermuda; with the new Kpar data, it is found that the biomass heating effects can lead to a 3.4°C maximum positive temperature difference in the upper layers but a 0.67°C maximum negative temperature difference in the deep layers.
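The link between a depth-dependent Kpar and the euphotic depth is a simple Beer-Lambert integral; a sketch with an invented Kpar(z) profile, where Zeu is taken as the 1% light level:

```python
import numpy as np

z = np.linspace(0.0, 150.0, 301)            # depth grid, m
kpar = 0.04 + 0.02 * np.exp(-z / 30.0)      # hypothetical depth-dependent Kpar, 1/m
par0 = 1500.0                               # surface PAR (arbitrary units)

# PAR(z) = PAR(0) * exp(-integral_0^z Kpar dz'), trapezoidal integration
tau = np.concatenate(([0.0], np.cumsum(0.5 * (kpar[1:] + kpar[:-1]) * np.diff(z))))
par = par0 * np.exp(-tau)

z_eu = z[np.argmax(par <= 0.01 * par0)]     # euphotic depth: 1% light level
print(f"Zeu = {z_eu:.1f} m")
```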
Solving radiative transfer with line overlaps using Gauss-Seidel algorithms
NASA Astrophysics Data System (ADS)
Daniel, F.; Cernicharo, J.
2008-09-01
Context: The improvement in observational facilities requires refining the modelling of the geometrical structures of astrophysical objects. Nevertheless, for complex problems such as line overlap in molecules showing hyperfine structure, a detailed analysis still requires a large amount of computing time and thus, misinterpretation cannot be dismissed due to an undersampling of the whole space of parameters. Aims: We extend the discussion of the implementation of the Gauss-Seidel algorithm in spherical geometry and include the case of hyperfine line overlap. Methods: We first review the basics of the short characteristics method that is used to solve the radiative transfer equations. Details are given on the determination of the Lambda operator in spherical geometry. The Gauss-Seidel algorithm is then described and, by analogy to the plane-parallel case, we see how to introduce it in spherical geometry. Doing so requires some approximations in order to keep the algorithm competitive. Finally, line overlap effects are included. Results: The convergence speed of the algorithm is compared to the usual Jacobi iterative schemes. The gain in the number of iterations is typically a factor of 2 and 4 for the two implementations of the Gauss-Seidel algorithm. This is obtained despite the introduction of approximations in the algorithm. A comparison of results obtained with and without line overlaps for N2H^+, HCN, and HNC shows that the J=3-2 line intensities are significantly underestimated in models where line overlap is neglected.
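The Jacobi/Gauss-Seidel distinction above is the classical one: Gauss-Seidel reuses components updated within the current sweep, which typically roughly halves the iteration count. A toy linear-system sketch of the two schemes (an analog of the Lambda iteration, not the short-characteristics solver itself):

```python
import numpy as np

def jacobi(A, b, x0, iters):
    """Jacobi sweep: all updates use values from the previous iteration."""
    x = np.array(x0, dtype=float)
    D = np.diag(A)
    R = A - np.diagflat(D)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

def gauss_seidel(A, b, x0, iters):
    """Gauss-Seidel sweep: updates use already-updated components."""
    x = np.array(x0, dtype=float)
    n = len(b)
    for _ in range(iters):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Diagonally dominant test system: Gauss-Seidel converges in fewer sweeps
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(jacobi(A, b, np.zeros(3), 20), gauss_seidel(A, b, np.zeros(3), 20))
```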
Algorithmic network monitoring for a modern water utility: a case study in Jerusalem.
Armon, A; Gutner, S; Rosenberg, A; Scolnicov, H
2011-01-01
We report on the design, deployment, and use of TaKaDu, a real-time algorithmic Water Infrastructure Monitoring solution, with a strong focus on water loss reduction and control. TaKaDu is provided as a commercial service to several customers worldwide. It has been in use at HaGihon, the Jerusalem utility, since mid-2009. Water utilities collect considerable real-time data from their networks, e.g. by means of a SCADA system and sensors measuring flow, pressure, and other data. We discuss how an algorithmic statistical solution analyses this wealth of raw data, flexibly using many types of input and picking out and reporting significant events and failures in the network. Of particular interest to most water utilities is the early detection capability for invisible leaks, also a means for preventing large visible bursts. The system also detects sensor and SCADA failures, various water quality issues, DMA boundary breaches, unrecorded or unintended network changes (like a valve or pump state change), and other events, including types unforeseen during system design. We discuss results from use at HaGihon, showing clear operational value.
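The abstract does not disclose TaKaDu's statistical machinery; as a generic illustration of the idea, a trailing-baseline anomaly flagger over a flow time series (all names and thresholds invented, not the commercial algorithm):

```python
import numpy as np

def flag_anomalies(flow, window=96, z_thresh=4.0):
    """Flag samples that deviate strongly from a trailing moving baseline,
    e.g. a persistent extra draw suggestive of an invisible leak."""
    flags = np.zeros(flow.shape, dtype=bool)
    for i in range(window, len(flow)):
        hist = flow[i - window:i]
        mu, sd = hist.mean(), hist.std() + 1e-9
        flags[i] = abs(flow[i] - mu) > z_thresh * sd
    return flags
```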
NASA Technical Reports Server (NTRS)
Chance, K. V.
2001-01-01
This report summarizes research done under NASA Grant NAG5-3461 from November 1, 1996 through December 31, 2000. The research performed during this reporting period includes development and maintenance of scientific software for the GOME retrieval algorithms, consultation on operational software development for GOME, sensitivity and instrument studies to help finalize the definition of the SCIAMACHY instrument, leading the development of the SCIAMACHY Scientific Requirements Document for Data and Algorithm Development, consultation and development for SCIAMACHY near-real-time (NRT) and off-line (OL) data products, radiative transfer model development for utilization in GOME, SCIAMACHY and other programs, development of infrared line-by-line atmospheric modeling and retrieval capability for SCIAMACHY, and participation in GOME and SCIAMACHY validation studies. The Global Ozone Monitoring Experiment was successfully launched on the ERS-2 satellite on April 20, 1995, and remains working in normal fashion. SCIAMACHY is currently planned for launch in late 2001 on the ESA Envisat satellite. Three GOME-2 instruments are now scheduled to fly on the Metop series of operational meteorological satellites (Eumetsat). K. Chance is a member of the reconstituted GOME Scientific Advisory Group, which will guide the GOME-2 program as well as the continuing ERS-2 GOME program.
Sub-aperture stitching test of a cylindrical mirror with large aperture
NASA Astrophysics Data System (ADS)
Xue, Shuai; Chen, Shanyong; Shi, Feng; Lu, Jinfeng
2016-09-01
Cylindrical mirrors are key optics in high-end national defense and scientific research equipment such as high energy laser weapons, synchrotron radiation systems, etc. However, surface error testing technology for such mirrors has developed slowly. As a result, their optical fabrication quality cannot meet requirements, and the development of the associated equipment is hindered. A Computer Generated Hologram (CGH) is commonly utilized as a null for testing cylindrical optics. However, since the fabrication process for large-aperture CGHs is not yet mature, the null test of large-aperture cylindrical optics is limited by the aperture of the CGH. Hence a CGH null test combined with a sub-aperture stitching method is proposed to break the aperture limit of the CGH for testing cylindrical optics, and the design of CGHs for testing cylindrical surfaces is analyzed. Besides, the misalignment aberrations of cylindrical surfaces differ from those of rotationally symmetric surfaces owing to their special shape, and existing stitching algorithms for rotationally symmetric surfaces cannot meet the requirements of stitching cylindrical surfaces. We therefore analyze the misalignment aberrations of cylindrical surfaces and study a stitching algorithm for measuring large-aperture cylindrical optics. Finally, we test a large-aperture cylindrical mirror to verify the validity of the proposed method.
Carvajal, Roberto C; Arias, Luis E; Garces, Hugo O; Sbarbaro, Daniel G
2016-04-01
This work presents a non-parametric method based on principal component analysis (PCA) and a parametric one based on artificial neural networks (ANN) to remove continuous baseline features from spectra. The non-parametric method estimates the baseline based on a set of sampled basis vectors obtained from PCA applied over a previously composed continuous spectra learning matrix. The parametric method, however, uses an ANN to filter out the baseline. Previous studies have demonstrated that this method is one of the most effective for baseline removal. The evaluation of both methods was carried out by using a synthetic database designed for benchmarking baseline removal algorithms, containing 100 synthetic composed spectra at different signal-to-baseline ratio (SBR), signal-to-noise ratio (SNR), and baseline slopes. In addition, to demonstrate the utility of the proposed methods and to compare them in a real application, a spectral data set measured from a flame radiation process was used. Several performance metrics such as correlation coefficient, chi-square value, and goodness-of-fit coefficient were calculated to quantify and compare both algorithms. Results demonstrate that the PCA-based method outperforms the one based on ANN both in terms of performance and simplicity.
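A minimal sketch of the PCA-based baseline estimate described above, assuming a training matrix of baseline-only spectra (rows) and a measured spectrum on the same wavelength grid; the projection detail is one plausible reading of the method, not the authors' exact code:

```python
import numpy as np

def pca_baseline(spectrum, baseline_library, n_comp=5):
    """Estimate a smooth baseline as the projection of the measured
    spectrum onto the leading PCA basis of a baseline training matrix."""
    mu = baseline_library.mean(axis=0)
    _, _, vt = np.linalg.svd(baseline_library - mu, full_matrices=False)
    basis = vt[:n_comp]                    # sampled basis vectors
    coeffs = basis @ (spectrum - mu)
    return mu + coeffs @ basis             # baseline to subtract from spectrum
```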
USDA-ARS?s Scientific Manuscript database
In this research, a multispectral algorithm derived from hyperspectral line-scan fluorescence imaging under violet LED excitation was developed for the detection of frass contamination on mature tomatoes. The algorithm utilized the fluorescence intensities at two wavebands, 664 nm and 690 nm, for co...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-27
... provides a ``menu'' of matching algorithms to choose from when executing incoming electronic orders. The menu format allows the Exchange to utilize different matching algorithms on a class-by-class basis. The menu includes, among other choices, the ultimate matching algorithm (``UMA''), as well as price-time...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-18
... Change, as Modified by Amendment No. 1 Thereto, Related to the Hybrid Matching Algorithms May 12, 2010... allocation algorithms to choose from when executing incoming electronic orders. The menu format allows the Exchange to utilize different allocation algorithms on a class-by-class basis. The menu includes, among...
Automatic arrival time detection for earthquakes based on Modified Laplacian of Gaussian filter
NASA Astrophysics Data System (ADS)
Saad, Omar M.; Shalaby, Ahmed; Samy, Lotfy; Sayed, Mohammed S.
2018-04-01
Precise identification of an earthquake's onset time is imperative for correctly determining the earthquake's location and the other parameters that are utilized for building seismic catalogues. The P-wave arrival of weak events or micro-earthquakes cannot be precisely determined due to background noise. In this paper, we propose a novel approach based on a Modified Laplacian of Gaussian (MLoG) filter to detect the onset time even at very weak signal-to-noise ratios (SNRs). The proposed algorithm utilizes a denoising-filter algorithm to smooth the background noise. In the proposed algorithm, we employ the MLoG mask to filter the seismic data. Afterward, we apply a dual-threshold comparator to detect the onset time of the event. The results show that the proposed algorithm can detect the onset time of micro-earthquakes accurately, down to an SNR of -12 dB. The proposed algorithm achieves an onset-time picking accuracy of 93% with a standard deviation error of 0.10 s for 407 field seismic waveforms. We also compare the results with the short-term/long-term average (STA/LTA) algorithm and the Akaike Information Criterion (AIC), and the proposed algorithm outperforms both.
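A hedged sketch of the two-stage idea (LoG filtering followed by a dual-threshold comparator), using SciPy's plain Laplacian-of-Gaussian rather than the authors' modified mask; thresholds and sigma are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def pick_onset(trace, fs, sigma=20, hi=3.0, lo=1.5):
    """LoG-filter the trace into a characteristic function, trigger on a
    high threshold, then back-track to a low threshold for the onset."""
    cf = np.abs(gaussian_laplace(trace.astype(float), sigma))
    cf /= np.median(cf) + 1e-12            # noise-normalized amplitude
    above = np.flatnonzero(cf > hi)
    if above.size == 0:
        return None                        # no trigger
    onset = above[0]
    while onset > 0 and cf[onset] > lo:    # walk back to the low threshold
        onset -= 1
    return onset / fs                      # onset time in seconds
```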
Multimodal Estimation of Distribution Algorithms.
Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun
2016-02-15
Taking advantage of estimation of distribution algorithms (EDAs) in preserving high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternately using these two distributions; such utilization can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
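The Gaussian/Cauchy alternation reduces to swapping the sampling distribution around each niche's statistics; a minimal sketch (the even/odd alternation rule is an assumption for illustration, not the paper's exact schedule):

```python
import numpy as np

def niche_offspring(mean, std, generation):
    """Sample offspring around a niche: Gaussian on even generations
    (exploitation), Cauchy on odd ones (heavy tails aid exploration)."""
    mean, std = np.asarray(mean), np.asarray(std)
    if generation % 2 == 0:
        return np.random.normal(mean, std)
    return mean + std * np.random.standard_cauchy(mean.shape)
```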
40 CFR 1065.372 - NDUV analyzer HC and H2O interference verification.
Code of Federal Regulations, 2010 CFR
2010-07-01
... compensation algorithms that utilize measurements of other gases to meet this interference verification, simultaneously conduct such measurements to test the algorithms during the analyzer interference verification. (c...
Development of advanced acreage estimation methods
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr. (Principal Investigator)
1982-01-01
The development of an accurate and efficient algorithm for analyzing the structure of MSS data, the application of the Akaike information criterion to mixture models, and a research plan to delineate some of the technical issues and associated tasks in the area of rice scene radiation characterization are discussed. The AMOEBA clustering algorithm is refined and documented.
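Applying the Akaike information criterion to mixture models amounts to penalized model selection over the number of components; a sketch on synthetic one-dimensional data using scikit-learn (the data and component range are invented for illustration):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0.0, 1.0, 300),
                    rng.normal(5.0, 0.5, 200)]).reshape(-1, 1)

# Fit mixtures of increasing order; AIC trades likelihood against model size
aic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).aic(X)
       for k in range(1, 6)}
best_k = min(aic, key=aic.get)   # expected to favor the 2-component model
print(best_k, aic)
```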
An Image Encryption Algorithm Utilizing Julia Sets and Hilbert Curves
Sun, Yuanyuan; Chen, Lina; Xu, Rudan; Kong, Ruiqing
2014-01-01
Image encryption is an important and effective technique to protect image security. In this paper, a novel image encryption algorithm combining Julia sets and Hilbert curves is proposed. The algorithm utilizes Julia sets’ parameters to generate a random sequence as the initial keys and gets the final encryption keys by scrambling the initial keys through the Hilbert curve. The final cipher image is obtained by modulo arithmetic and diffuse operation. In this method, it needs only a few parameters for the key generation, which greatly reduces the storage space. Moreover, because of the Julia sets’ properties, such as infiniteness and chaotic characteristics, the keys have high sensitivity even to a tiny perturbation. The experimental results indicate that the algorithm has large key space, good statistical property, high sensitivity for the keys, and effective resistance to the chosen-plaintext attack. PMID:24404181
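A toy sketch of the key-generation and diffusion idea: a Julia-map orbit is quantized into a byte keystream and XORed with image bytes. The Hilbert-curve scrambling and modulo-arithmetic stages of the published scheme are omitted, and the quantization rule and constants are invented:

```python
import numpy as np

def julia_keystream(c, z0, n):
    """Quantize a Julia-map orbit z <- z^2 + c into a byte keystream."""
    z, out = complex(z0), np.empty(n, dtype=np.uint8)
    for i in range(n):
        z = z * z + c
        if abs(z) > 2.0:                       # keep the orbit bounded
            z = complex(z.real % 1.0, z.imag % 1.0)
        out[i] = int(abs(z.real) * 1e6) % 256  # chaotic byte
    return out

img_bytes = np.arange(16, dtype=np.uint8)      # stand-in for flattened pixels
cipher = img_bytes ^ julia_keystream(-0.8 + 0.156j, 0.3 + 0.1j, img_bytes.size)
```

A tiny perturbation of c or z0 yields an entirely different keystream, which is the key-sensitivity property the abstract highlights.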
NASA Technical Reports Server (NTRS)
Taylor, Robert P.; Luck, Rogelio
1995-01-01
The view factors which are used in diffuse-gray radiation enclosure calculations are often computed by approximate numerical integrations. These approximately calculated view factors will usually not satisfy the important physical constraints of reciprocity and closure. In this paper several view-factor rectification algorithms are reviewed and a rectification algorithm based on a least-squares numerical filtering scheme is proposed, in both weighted and unweighted forms. A Monte-Carlo investigation is undertaken to study the propagation of view-factor and surface-area uncertainties into the heat transfer results of the diffuse-gray enclosure calculations. It is found that the weighted least-squares algorithm is vastly superior to the other rectification schemes for the reduction of the heat-flux sensitivities to view-factor uncertainties. In a sample problem, which has proven to be very sensitive to uncertainties in view factor, the heat transfer calculations with weighted least-squares rectified view factors are very good with an original view-factor matrix computed to only one-digit accuracy. All of the algorithms had roughly equivalent effects on the reduction in sensitivity to area uncertainty in this case study.
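To make the two constraints concrete, a crude alternating-projection rectifier is sketched below: it repeatedly symmetrizes A_i F_ij (reciprocity) and renormalizes rows (closure). This heuristic is not the paper's weighted least-squares filter, only an illustration of the constraints being enforced:

```python
import numpy as np

def rectify_view_factors(F, areas, iters=50):
    """Alternately enforce reciprocity (A_i F_ij = A_j F_ji) and
    closure (each row of F sums to 1) on an approximate F matrix."""
    A = np.asarray(areas, dtype=float)
    F = np.array(F, dtype=float)
    for _ in range(iters):
        S = A[:, None] * F
        S = 0.5 * (S + S.T)                   # symmetrize A_i * F_ij
        F = S / A[:, None]
        F /= F.sum(axis=1, keepdims=True)     # re-impose closure
    return F
```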
A SAT Based Effective Algorithm for the Directed Hamiltonian Cycle Problem
NASA Astrophysics Data System (ADS)
Jäger, Gerold; Zhang, Weixiong
The Hamiltonian cycle problem (HCP) is an important combinatorial problem with applications in many areas. While thorough theoretical and experimental analyses have been made on the HCP in undirected graphs, little is known for the HCP in directed graphs (DHCP). The contribution of this work is an effective algorithm for the DHCP. Our algorithm explores and exploits the close relationship between the DHCP and the Assignment Problem (AP) and utilizes a technique based on Boolean satisfiability (SAT). By combining effective algorithms for the AP and SAT, our algorithm significantly outperforms previous exact DHCP algorithms including an algorithm based on the award-winning Concorde TSP algorithm.
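The AP relationship exploited above can be illustrated directly: solving the assignment problem over the arc set yields a successor function, and if the optimal assignment decomposes into a single n-cycle of real arcs, that cycle is Hamiltonian. A sketch under that reading (sub-tour cases, which the SAT stage would resolve, just return None here):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e9  # cost that forbids absent arcs and self-loops

def ap_cycle(adj):
    """AP relaxation of the DHCP on a boolean adjacency matrix."""
    n = len(adj)
    cost = np.where(adj, 1.0, BIG)
    np.fill_diagonal(cost, BIG)
    rows, cols = linear_sum_assignment(cost)
    if cost[rows, cols].max() >= BIG:        # a forbidden arc was used
        return None
    succ = dict(zip(rows, cols))
    tour, v = [0], succ[0]
    while v != 0:
        tour.append(v)
        v = succ[v]
    return tour if len(tour) == n else None  # None: sub-tours remain

adj = np.array([[0, 1, 0, 1], [0, 0, 1, 0],
                [1, 0, 0, 1], [1, 1, 0, 0]], dtype=bool)
print(ap_cycle(adj))   # [0, 1, 2, 3]
```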
The Plane-parallel Albedo Bias of Liquid Clouds from MODIS Observations
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros; Cahalan, Robert F.; Platnick, Steven
2007-01-01
In our most advanced modeling tools for climate change prediction, namely General Circulation Models (GCMs), the schemes used to calculate the budget of solar and thermal radiation commonly assume that clouds are horizontally homogeneous at scales as large as a few hundred kilometers. However, this assumption, used for convenience, computational speed, and lack of knowledge on cloud small-scale variability, leads to erroneous estimates of the radiation budget. This paper provides a global picture of the solar radiation errors at scales of approximately 100 km due to warm (liquid phase) clouds only. To achieve this, we use cloud retrievals from the MODIS instrument on the Terra and Aqua satellites, along with atmospheric and surface information, as input into a GCM-style radiative transfer algorithm. Since the MODIS product contains information on cloud variability below 100 km we can run the radiation algorithm both for the variable and the (assumed) homogeneous clouds. The difference between these calculations for reflected or transmitted solar radiation constitutes the bias that GCMs would commit if they were able to perfectly predict the properties of warm clouds, but then assumed they were homogeneous for radiation calculations. We find that the global average of this bias is approximately 2-3 times larger in terms of energy than the additional amount of thermal energy that would be trapped if we were to double carbon dioxide from current concentrations. We should therefore make a greater effort to predict horizontal cloud variability in GCMs and account for its effects in radiation calculations.
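The sign of the bias follows from Jensen's inequality: albedo is concave in optical depth, so the albedo of the mean cloud exceeds the mean albedo of a variable cloud field. A toy numerical sketch with an invented lognormal optical-depth distribution and a two-stream-like reflectance:

```python
import numpy as np

rng = np.random.default_rng(1)
tau = rng.lognormal(mean=2.0, sigma=0.8, size=100_000)  # sub-grid optical depths

def albedo(t, g=0.85):
    """Two-stream-like reflectance; concave in optical depth."""
    return (1.0 - g) * t / (1.0 + (1.0 - g) * t)

plane_parallel = albedo(tau.mean())       # homogeneous-cloud assumption
independent_col = albedo(tau).mean()      # average over the variable field
bias = plane_parallel - independent_col   # > 0: homogeneity over-reflects
print(f"bias = {bias:.3f}")
```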
Short-term solar irradiance forecasting via satellite/model coupling
Miller, Steven D.; Rogers, Matthew A.; Haynes, John M.; ...
2017-12-01
The short-term (0-3 h) prediction of solar insolation for renewable energy production is a problem well-suited to satellite-based techniques. The spatial, spectral, temporal and radiometric resolution of instrumentation hosted on the geostationary platform allows these satellites to describe the current cloud spatial distribution and optical properties. These properties relate directly to the transient properties of the downwelling solar irradiance at the surface, which come in the form of 'ramps' that pose a central challenge to energy load balancing in a spatially distributed network of solar farms. The short-term evolution of the cloud field may be approximated to first order simply as translational, but care must be taken in how the advection is handled and where the impacts are assigned. In this research, we describe how geostationary satellite observations are used with operational cloud masking and retrieval algorithms, wind field data from Numerical Weather Prediction (NWP), and radiative transfer calculations to produce short-term forecasts of solar insolation for applications in solar power generation. The scheme utilizes retrieved cloud properties to group pixels into contiguous cloud objects whose future positions are predicted using four-dimensional (space + time) model wind fields, selecting steering levels corresponding to the cloud height properties of each cloud group. The shadows associated with these clouds are adjusted for sensor viewing parallax displacement and combined with solar geometry and terrain height to determine the actual location of cloud shadows. For mid/high-level clouds at mid-latitudes and high solar zenith angles, the combined displacements from these geometric considerations are non-negligible. The cloud information is used to initialize a radiative transfer model that computes the direct and diffuse-sky solar insolation at both shadow locations and intervening clear-sky regions. Here, we describe the formulation of the algorithm and validate its performance against Surface Radiation (SURFRAD; Augustine et al., 2000, 2005) network observations. Typical errors range from 8.5% to 17.2% depending on the complexity of cloud regimes, and an operational demonstration outperformed persistence-based forecasting of Global Horizontal Irradiance (GHI) under all conditions by ~10 W/m2.
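The first-order translational advection step reduces to shifting a cloud field by the steering-level wind over the forecast lead time; a highly simplified sketch with whole-pixel shifts and wrapped edges (real implementations track cloud objects and handle parallax, as described above):

```python
import numpy as np

def advect_cloud_mask(mask, u_ms, v_ms, lead_s, pixel_km):
    """Translate a binary cloud mask by the wind displacement over the
    forecast lead time (whole-pixel shifts; edges wrap for brevity)."""
    dx = int(round(u_ms * lead_s / (pixel_km * 1000.0)))   # eastward pixels
    dy = int(round(v_ms * lead_s / (pixel_km * 1000.0)))   # northward pixels
    return np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
```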
McCowan, Peter M; Asuni, Ganiyu; Van Uytven, Eric; VanBeek, Timothy; McCurdy, Boyd M C; Loewen, Shaun K; Ahmed, Naseer; Bashir, Bashir; Butler, James B; Chowdhury, Amitava; Dubey, Arbind; Leylek, Ahmet; Nashed, Maged
2017-04-01
To report findings from an in vivo dosimetry program implemented for all stereotactic body radiation therapy patients over a 31-month period and discuss the value and challenges of utilizing in vivo electronic portal imaging device (EPID) dosimetry clinically. From December 2013 to July 2016, 117 stereotactic body radiation therapy-volumetric modulated arc therapy patients (100 lung, 15 spine, and 2 liver) underwent 602 EPID-based in vivo dose verification events. A developed model-based dose reconstruction algorithm calculates the 3-dimensional dose distribution to the patient by back-projecting the primary fluence measured by the EPID during treatment. The EPID frame-averaging was optimized in June 2015. For each treatment, a 3%/3-mm γ comparison between our EPID-derived dose and the Eclipse AcurosXB-predicted dose to the planning target volume (PTV) and the ≥20% isodose volume were performed. Alert levels were defined as γ pass rates <85% (lung and liver) and <80% (spine). Investigations were carried out for all fractions exceeding the alert level and were classified as follows: EPID-related, algorithmic, patient setup, anatomic change, or unknown/unidentified errors. The percentages of fractions exceeding the alert levels were 22.6% for lung before frame-average optimization and 8.0% for lung, 20.0% for spine, and 10.0% for liver after frame-average optimization. Overall, mean (± standard deviation) planning target volume γ pass rates were 90.7% ± 9.2%, 87.0% ± 9.3%, and 91.2% ± 3.4% for the lung, spine, and liver patients, respectively. Results from the clinical implementation of our model-based in vivo dose verification method using on-treatment EPID images are reported. The method is demonstrated to be valuable for routine clinical use for verifying delivered dose as well as for detecting errors.
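The 3%/3-mm γ pass rate quoted above combines a dose-difference and a distance-to-agreement criterion; a simplified 1D global-gamma sketch, assuming both dose profiles share one grid (clinical tools work in 3D and interpolate the evaluated distribution more finely):

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dd_pct=3.0, dta_mm=3.0):
    """Simplified global gamma analysis on 1D profiles on a common grid."""
    x = np.arange(dose_ref.size) * spacing_mm
    dd_abs = dd_pct / 100.0 * dose_ref.max()   # global dose criterion
    gamma = np.empty(dose_ref.size)
    for i in range(dose_ref.size):
        dist_term = ((x - x[i]) / dta_mm) ** 2
        dose_term = ((dose_eval - dose_ref[i]) / dd_abs) ** 2
        gamma[i] = np.sqrt(np.min(dist_term + dose_term))
    return 100.0 * np.mean(gamma <= 1.0)       # percent of points passing
```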
A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging.
Yan, Hao; Zhen, Xin; Folkerts, Michael; Li, Yongbao; Pan, Tinsu; Cervino, Laura; Jiang, Steve B; Jia, Xun
2014-07-01
4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen area. However, clinical application of 4D-CBCT is currently limited due to the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is developed to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information, while maintaining high image quality. The whole workflow is implemented on a graphics processing unit (GPU) to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm robustness and efficiency. The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. Regarding the anatomical structure location accuracy, 0.204 mm average differences and 0.484 mm maximum difference are found for the phantom case, and maximum differences of 0.3-0.5 mm for patients 1-3 are observed. As for the image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by 12.74 and 5.12 times compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on a NVIDIA GTX590 card is 1-1.5 min per phase. High-quality 4D-CBCT imaging based on the clinically standard 1-min 3D CBCT scanning protocol is feasible via the proposed hybrid reconstruction algorithm.
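Forward-backward splitting itself is a general proximal-gradient scheme; a toy sketch on an l1-regularized least-squares problem to show the forward (gradient) and backward (proximal) alternation the paper exploits. This illustrates the splitting idea only, not the paper's reconstruction/registration subproblems:

```python
import numpy as np

def fbs_lasso(A, b, lam, step, iters):
    """Forward-backward splitting on 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)          # forward: data-fidelity gradient step
        z = x - step * g               # step must satisfy step <= 1/||A||^2
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward: l1 prox
    return x
```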
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shepard, A; Matrosic, C; Zagzebski, J
Purpose: To develop an advanced testbed that combines a 3D motion stage and ultrasound phantom to optimize and validate 2D and 3D tracking algorithms for real-time motion management during radiation therapy. Methods: A Siemens S2000 Ultrasound scanner utilizing a 9L4 transducer was coupled with the Washington University 4D Phantom to simulate patient motion. The transducer was securely fastened to the 3D stage and positioned to image three cylinders of varying contrast in a Gammex 404GS LE phantom. The transducer was placed within a water bath above the phantom in order to maintain sufficient coupling for the entire range of simulated motion. A programmed motion sequence was used to move the transducer during image acquisition and a cine video was acquired for one minute to allow for long sequence tracking. Images were analyzed using a normalized cross-correlation block matching tracking algorithm and compared to the known motion of the transducer relative to the phantom. Results: The setup produced stable ultrasound motion traces consistent with those programmed into the 3D motion stage. The acquired ultrasound images showed minimal artifacts and an image quality that was more than suitable for tracking algorithm verification. Comparisons of a block matching tracking algorithm with the known motion trace for the three features resulted in an average tracking error of 0.59 mm. Conclusion: The high accuracy and programmability of the 4D phantom allows for the acquisition of ultrasound motion sequences that are highly customizable; allowing for focused analysis of some common pitfalls of tracking algorithms such as partial feature occlusion or feature disappearance, among others. The design can easily be modified to adapt to any probe such that the process can be extended to 3D acquisition. Further development of an anatomy-specific phantom better resembling true anatomical landmarks could lead to an even more robust validation. This work is partially funded by NIH grant R01CA190298.
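Normalized cross-correlation block matching, as used above, searches a window around the previous feature position for the best-matching template; a minimal 2D sketch (exhaustive search, no sub-pixel refinement):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_block(frame, template, prev_pos, search=10):
    """Find the template's best match within a search window around
    the previous position; returns the new (row, col) and NCC score."""
    th, tw = template.shape
    r0, c0 = prev_pos
    best, best_pos = -2.0, prev_pos
    for r in range(max(0, r0 - search), min(frame.shape[0] - th, r0 + search) + 1):
        for c in range(max(0, c0 - search), min(frame.shape[1] - tw, c0 + search) + 1):
            score = ncc(frame[r:r + th, c:c + tw], template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```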
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dise, J; Liang, X; Lin, L
Purpose: To evaluate an automatic interstitial catheter digitization algorithm that reduces treatment planning time and provides a means for adaptive re-planning in HDR brachytherapy of gynecologic cancers. Methods: The semi-automatic catheter digitization tool utilizes a region growing algorithm in conjunction with a spline model of the catheters. The CT images were first pre-processed to enhance the contrast between the catheters and soft tissue. Several seed locations were selected in each catheter for the region growing algorithm. The spline model of the catheters assisted the region growing by preventing inter-catheter cross-over caused by air or metal artifacts. Source dwell positions from day-one CT scans were applied to subsequent CTs and forward calculated using the automatically digitized catheter positions. This method was applied to 10 patients who had received HDR interstitial brachytherapy on an IRB-approved image-guided radiation therapy protocol. The prescribed dose was 18.75 or 20 Gy delivered in 5 fractions, twice daily, over 3 consecutive days. Dosimetric comparisons were made between automatic and manual digitization on day-two CTs. Results: The region growing algorithm, assisted by the spline model of the catheters, was able to digitize all catheters. The difference between automatically and manually digitized positions was 0.8±0.3 mm. The digitization time ranged from 34 to 43 minutes, with a mean of 37 minutes; the bulk of the time was spent on manual selection of initial seed positions and spline parameter adjustments. There was no significant difference in dosimetric parameters between the automatic and manually digitized plans. D90% to the CTV was 91.5±4.4% for the manual digitization versus 91.4±4.4% for the automatic digitization (p=0.56). Conclusion: A region growing algorithm was developed to semi-automatically digitize interstitial catheters in HDR brachytherapy using the Syed-Neblett template. This automatic digitization tool was shown to be accurate compared to manual digitization.
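A minimal sketch of seeded region growing of the general kind described, in Python/NumPy. The spline model that guards against inter-catheter cross-over is omitted, and the intensity threshold and 6-connectivity are assumptions; seeds are assumed to lie inside one catheter.

```python
import numpy as np
from collections import deque

def grow_catheter(volume, seeds, threshold):
    """Flood-fill style region growing from user-selected seed voxels.

    volume    -- contrast-enhanced CT volume (np.ndarray)
    seeds     -- list of (z, y, x) seed voxels inside one catheter
    threshold -- minimum intensity for a voxel to join the region
    """
    region = np.zeros(volume.shape, dtype=bool)
    queue = deque(seeds)
    while queue:
        z, y, x = queue.popleft()
        if region[z, y, x]:
            continue
        region[z, y, x] = True
        # Visit the six face-connected neighbours.
        for dz, dy, dx in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]
                    and not region[nz, ny, nx]
                    and volume[nz, ny, nx] >= threshold):
                queue.append((nz, ny, nx))
    return region
```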
Organic Scintillation Detectors for Spectroscopic Radiation Portal Monitors
NASA Astrophysics Data System (ADS)
Paff, Marc Gerrit
Thousands of radiation portal monitors have been deployed worldwide to detect and deter the smuggling of nuclear and radiological materials that could be used in nefarious acts. Radiation portal monitors are often installed at bottlenecks through which large numbers of people or goods must pass. Examples of use include scanning cargo containers at shipping ports, vehicles at border crossings, and people at high-profile functions and events. Traditional radiation portal monitors contain separate detectors for passively measuring neutron and gamma-ray count rates. 3He tubes embedded in polyethylene and slabs of plastic scintillator are the most common detector materials used in radiation portal monitors. The alarm mechanism relies on measuring radiation count rates above user-defined alarm thresholds, which are set above natural background count rates. Setting these thresholds requires balancing the minimization of false alarms caused by natural background against sensitivity to weakly emitting threat sources. Current radiation portal monitor designs suffer from frequent nuisance alarms, most often caused by shipments of cargo containing large quantities of naturally occurring radioactive material, such as kitty litter, and by people who have recently undergone a nuclear medicine procedure, particularly 99mTc treatments. Current radiation portal monitors typically lack spectroscopic capabilities, so nuisance alarms must be screened out in time-intensive secondary inspections with handheld radiation detectors. Radiation portal monitors using organic liquid scintillation detectors were designed, built, and tested. A number of algorithms were developed to perform on-the-fly radionuclide identification of single and combination radiation sources moving past the portal monitor at speeds up to 2.2 m/s. The portal monitor designs were tested extensively with a variety of shielded and unshielded radiation sources, including special nuclear material, at the European Commission Joint Research Centre in Ispra, Italy. Common medical isotopes were measured at the C.S. Mott Children's Hospital and added to the radionuclide identification algorithms.
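The gross-count alarm logic described above can be illustrated with a simple Poisson threshold. The background rate, dwell time, and 5-sigma setting below are illustrative assumptions, not parameters of any deployed system.

```python
import math

def alarm_threshold(bg_rate, dwell, n_sigma=5.0):
    """Gross-count alarm threshold above natural background.

    bg_rate -- mean background count rate (counts/s)
    dwell   -- integration time while a vehicle occupies the portal (s)
    n_sigma -- threshold in Poisson standard deviations; raising it cuts
               false alarms, lowering it improves sensitivity to weak sources
    """
    mean = bg_rate * dwell
    return mean + n_sigma * math.sqrt(mean)

# Example: 250 cps background, 1 s dwell, 5-sigma threshold
print(alarm_threshold(250.0, 1.0))  # alarm if counts exceed ~329
```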
Single-Pass Serial Scheduling Heuristic for Eglin AFB Range Services Division Schedule
2009-06-01
… scheduling tool for this RCPSP. Research on a schedule improvement metaheuristic and coding of the complete algorithm is required before it can be … a schedule better by applying metaheuristic improvement algorithms to a feasible schedule after it is created. … Greedy Algorithm: The … next available position, the algorithm will not utilize all the available range time and manpower. An improvement metaheuristic is required to …
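The record's fragments describe a single-pass serial schedule generation scheme for a resource-constrained project scheduling problem (RCPSP). A toy sketch of that greedy scheme, assuming a single renewable resource and tasks supplied in a precedence-feasible order, might look like this (not the thesis's actual algorithm):

```python
def serial_sgs(durations, successors, capacity, demand):
    """Single-pass serial schedule generation for a toy RCPSP.

    durations  -- {task: duration}
    successors -- {task: [tasks that must start after it finishes]}
    capacity   -- renewable resource capacity per time unit
    demand     -- {task: resource usage per time unit}
    Greedy: place each task at the earliest time that respects
    precedence and never exceeds capacity (no backtracking).
    """
    finish, usage = {}, {}
    for task in durations:  # assumed precedence-feasible order
        # Earliest start allowed by already-scheduled predecessors.
        start = max([finish[p] for p, succ in successors.items()
                     if task in succ and p in finish], default=0)
        # Slide right until the resource profile admits the task.
        while any(usage.get(t, 0) + demand[task] > capacity
                  for t in range(start, start + durations[task])):
            start += 1
        for t in range(start, start + durations[task]):
            usage[t] = usage.get(t, 0) + demand[task]
        finish[task] = start + durations[task]
    return finish
```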
An arrhythmia classification algorithm using a dedicated wavelet adapted to different subjects.
Kim, Jinkwon; Min, Se Dong; Lee, Myoungho
2011-06-27
Numerous studies of heartbeat classification algorithms have been conducted over the past several decades, and many algorithms have been designed for robust performance, since biosignals vary widely among individuals. Various methods have been proposed to reduce the differences arising from personal characteristics, but these also amplify the differences caused by arrhythmia. In this paper, an arrhythmia classification algorithm using a dedicated wavelet adapted to individual subjects is proposed. We reduced the performance variation by using dedicated wavelets matched to the ECG morphologies of the individual subjects. The proposed algorithm utilizes morphological filtering and a continuous wavelet transform with a dedicated wavelet. Principal component analysis and linear discriminant analysis were utilized to compress the morphological data transformed by the dedicated wavelets. An extreme learning machine was used as the classifier. A performance evaluation was conducted with the MIT-BIH arrhythmia database. The results showed a high sensitivity of 97.51%, specificity of 85.07%, accuracy of 97.94%, and a positive predictive value of 97.26%. The proposed algorithm achieves better accuracy than other state-of-the-art algorithms with no intrasubject overlap between the training and evaluation datasets, and it significantly reduces the amount of intervention needed by physicians.
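Of the pipeline's stages, the extreme learning machine is the least standard. Below is a minimal ELM of the usual form (random hidden layer, closed-form least-squares output weights); the hidden-layer size and tanh activation are assumptions, not the paper's settings.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden layer, least-
    squares output weights (a sketch, not the paper's exact setup)."""

    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y_onehot):
        n_features = X.shape[1]
        # Hidden weights and biases are random and never trained.
        self.W = self.rng.standard_normal((n_features, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)          # random feature map
        self.beta = np.linalg.pinv(H) @ y_onehot  # closed-form solve
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)
```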
HEMODOSE: A Set of Multi-parameter Biodosimetry Tools
NASA Technical Reports Server (NTRS)
Hu, Shaowen; Blakely, William F.; Cucinotta, Francis A.
2014-01-01
After the events of September 11, 2001 and recent events at the Fukushima reactors in Japan, there is increasing concern that nuclear or radiological terrorism or accidents may produce large numbers of casualties in densely populated areas. To guide medical personnel in their clinical decisions for effective medical management and treatment of exposed individuals, biological markers are usually applied to examine radiation-induced changes at different biological levels. Among these, peripheral blood cell counts are widely used to assess the extent of radiation-induced injury, because the hematopoietic system is the part of the human body most vulnerable to radiation damage. In particular, lymphocytes, granulocytes, and platelets are the most radiosensitive of the blood elements, and monitoring their changes after exposure is regarded as the most practical and best laboratory test to estimate radiation dose. The HEMODOSE web tools are built upon solid physiological and pathophysiological understanding of mammalian hematopoietic systems, and rigorous coarse-grained biomathematical modeling and validation. Using single or serial granulocyte, lymphocyte, leukocyte, or platelet counts after exposure, these tools can estimate absorbed doses of adult victims rapidly and accurately. Patient data from historical accidents are used as examples to demonstrate the capabilities of these tools as a rapid point-of-care diagnostic or centralized high-throughput assay system in a large-scale radiological disaster scenario. Unlike previous dose prediction algorithms, the HEMODOSE web tools establish robust correlations between absorbed dose and the victim's various blood cell counts not only in the early time window (1 or 2 days), but also in the very late phase (up to 4 weeks) after exposure.
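For illustration only: a toy dose estimate from serial lymphocyte counts under a hypothetical single-exponential depletion model, in the spirit of classic lymphocyte-kinetics dosimetry. The baseline count and rate constant are placeholder values, and the actual HEMODOSE models are substantially more detailed.

```python
import numpy as np

def estimate_dose(times_h, lymph_counts, l0=2.45, k_per_gy=0.05):
    """Illustrative dose estimate from serial lymphocyte counts.

    Assumes a simple exponential depletion model
        L(t) = L0 * exp(-k * D * t)
    with L0 the pre-exposure count and k a hypothetical rate constant
    per Gy. Least-squares fit of D on log-transformed counts.
    """
    t = np.asarray(times_h, dtype=float)
    y = np.log(np.asarray(lymph_counts, dtype=float) / l0)
    # Slope of y vs t through the origin is -k*D  =>  D = -slope / k.
    slope = (t @ y) / (t @ t)
    return max(-slope / k_per_gy, 0.0)

# Example with made-up counts (10^9 cells/L) at 12, 24, 48 h post-exposure
print(estimate_dose([12, 24, 48], [1.9, 1.5, 0.9]))
```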
NASA Astrophysics Data System (ADS)
Documet, Jorge R.; Liu, Brent; Le, Anh; Law, Maria
2008-03-01
During the last 2 years we have been working on developing a DICOM-RT (Radiation Therapy) ePR (Electronic Patient Record) with decision support to assist physicists and radiation oncologists during their decision-making process. This ePR allows offline treatment dose calculations and plan evaluation, while at the same time comparing and quantifying treatment planning algorithms using DICOM-RT objects. The ePR framework permits the addition of visualization, processing, and analysis tools, which, combined with the core functionality of reporting, importing, and exporting medical studies, creates a powerful application that can improve efficiency when planning cancer treatments. A Radiation Oncology department typically has disparate and complex data generated by the RT modalities, as well as data scattered across RT Information/Management systems, Record & Verify systems, and Treatment Planning Systems (TPS); this can compromise the efficiency of the clinical workflow, since data crucial for a clinical decision may be time-consuming to retrieve, temporarily missing, or even lost. To address these shortcomings, the ACR-NEMA Standards Committee extended its DICOM (Digital Imaging & Communications in Medicine) standard from Radiology to RT by ratifying seven DICOM-RT objects starting in 1997 [1,2]. However, these are not yet broadly used by the RT community in daily clinical operations. In the past, the research focus of RT departments has primarily been developing new protocols and devices to improve treatment processes and outcomes for cancer patients, with minimal effort dedicated to the integration of imaging and information systems. Our aim is to show a proof of concept that a DICOM-RT ePR system can be developed as a foundation for medical imaging informatics research in developing decision-support tools and a knowledge base for future data mining applications.
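For readers experimenting with DICOM-RT objects today, a dose grid can be read with the pydicom package; this is a modern illustration and an assumption on our part, not part of the ePR system described. The `DoseGridScaling` attribute converts the stored integer pixel values to dose.

```python
import pydicom  # assumes the pydicom package is installed

def load_rt_dose(path):
    """Read a DICOM RT Dose object and return the dose grid in Gy."""
    ds = pydicom.dcmread(path)
    if ds.Modality != "RTDOSE":
        raise ValueError(f"expected RTDOSE, got {ds.Modality}")
    # Stored pixel values must be scaled by DoseGridScaling to get dose.
    return ds.pixel_array * float(ds.DoseGridScaling)

# Usage (hypothetical file name):
# dose = load_rt_dose("plan_dose.dcm")
```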
Heat Transfer in High-Temperature Fibrous Insulation
NASA Technical Reports Server (NTRS)
Daryabeigi, Kamran
2002-01-01
The combined radiation/conduction heat transfer in high-porosity, high-temperature fibrous insulations was investigated experimentally and numerically. The effective thermal conductivity of fibrous insulation samples was measured over the temperature range of 300-1300 K and the environmental pressure range of 1.33 x 10(exp -5)-101.32 kPa. The fibrous insulation samples tested had nominal densities of 24, 48, and 72 kilograms per cubic meter and thicknesses of 13.3, 26.6, and 39.9 millimeters. Seven samples were tested such that the applied heat flux vector was aligned with the local gravity vector to eliminate natural convection as a mode of heat transfer. Two samples were tested in the reverse orientation to investigate natural convection effects. It was determined that for the fibrous insulation densities and thicknesses investigated, no heat transfer takes place through natural convection. A finite volume numerical model was developed to solve the governing combined radiation and conduction heat transfer equations. Various methods of modeling the gas/solid conduction interaction in fibrous insulations were investigated. The radiation heat transfer was modeled using the modified two-flux approximation, assuming anisotropic scattering and a gray medium. A genetic-algorithm-based parameter estimation technique was utilized with this model to determine the relevant radiative properties of the fibrous insulation over the temperature range of 300-1300 K. The parameter estimation was performed by least-squares minimization of the difference between measured and predicted values of effective thermal conductivity at a density of 24 kilograms per cubic meter and nominal pressures of 1.33 x 10(exp -4) and 99.98 kPa. The numerical model was validated by comparison with steady-state effective thermal conductivity measurements at other densities and pressures, and by comparison with a transient thermal test simulating reentry aerodynamic heating conditions.
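A compact sketch of genetic-algorithm least-squares parameter estimation of the kind described: a simplified GA with truncation selection and Gaussian mutation (no crossover), applied to an arbitrary residual function. Population size, generation count, and mutation scale are assumptions, not the study's settings.

```python
import numpy as np

def ga_least_squares(residual, bounds, pop=40, gens=60, seed=0):
    """Toy genetic algorithm minimizing a sum-of-squares residual.

    residual -- callable: parameter vector -> array of (model - data)
    bounds   -- list of (low, high) for each parameter
    """
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    population = rng.uniform(lo, hi, size=(pop, len(bounds)))
    for _ in range(gens):
        cost = np.array([(residual(p) ** 2).sum() for p in population])
        order = np.argsort(cost)
        parents = population[order[: pop // 2]]  # truncation selection
        # Offspring: copy random parents, perturb with Gaussian mutation.
        kids = parents[rng.integers(0, len(parents), pop - len(parents))]
        kids = kids + 0.05 * (hi - lo) * rng.standard_normal(kids.shape)
        population = np.clip(np.vstack([parents, kids]), lo, hi)
    cost = np.array([(residual(p) ** 2).sum() for p in population])
    return population[np.argmin(cost)]
```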
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kieselmann, J; Bartzsch, S; Oelfke, U
Purpose: Microbeam Radiation Therapy is a preclinical method in radiation oncology that modulates radiation fields on a micrometre scale. Dose calculation is challenging due to the resulting dose gradients and the therapeutically important dose ranges. Monte Carlo (MC) simulations, often used as the gold standard, are computationally expensive and hence too slow for the optimisation of treatment parameters in future clinical applications. On the other hand, conventional kernel-based dose calculation leads to inaccurate results close to material interfaces. The purpose of this work is to overcome these inaccuracies while keeping computation times low. Methods: A point kernel superposition algorithm is modified to account for tissue inhomogeneities. Instead of conventional ray tracing approaches, methods from differential geometry are applied and the space around the primary photon interaction is locally warped. The performance of this approach is compared to MC simulations and a simple convolution algorithm (CA) for two different phantoms and photon spectra. Results: While the peak doses of all dose calculation methods agreed to within 4%, the proposed approach surpassed the simple convolution algorithm in scatter-dose accuracy by a factor of up to 3. In a treatment geometry similar to possible future clinical situations, differences between Monte Carlo and the differential geometry algorithm were less than 3%, while the calculation time did not exceed 15 minutes. Conclusion: With the developed method it was possible to improve on the CA dose calculation with respect to accuracy, especially at sharp tissue boundaries. While the calculation is more extensive than for the CA method and depends on field size, the typical calculation time for a 20×20 mm² field on a 3.4 GHz processor with 8 GB of RAM remained below 15 minutes. Parallelisation and optimisation of the algorithm could lead to further significant reductions in calculation time.
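For contrast with the inhomogeneity-aware method, the homogeneous point-kernel superposition baseline reduces to a convolution; a minimal FFT-based sketch follows. The local space-warping that is the paper's contribution is deliberately omitted, and the 2-D setting is an illustrative simplification.

```python
import numpy as np

def kernel_superposition_dose(terma, kernel):
    """Homogeneous point-kernel superposition via FFT convolution.

    terma  -- total energy released per unit mass at primary-interaction
              sites (2-D np.ndarray)
    kernel -- energy-deposition point kernel, same shape, centred
    Note: this is a circular convolution; pad in practice to avoid
    wrap-around. The paper's local space-warping is omitted.
    """
    T = np.fft.rfft2(terma)
    K = np.fft.rfft2(np.fft.ifftshift(kernel))  # shift centre to origin
    return np.fft.irfft2(T * K, s=terma.shape)
```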
Nuclear Power and the Environment--Questions and Answers.
ERIC Educational Resources Information Center
Campana, Robert J.; Langer, Sidney
This booklet has been developed to help the layman understand and evaluate the various efforts being undertaken to utilize nuclear power for the benefit of mankind. The question and answer format is utilized. Among the topics discussed are: Our Needs for Electricity; Sources of Radiation; Radiation from Nuclear Power Plants; Biological Effects of…
Method and apparatus for measuring electromagnetic radiation
NASA Technical Reports Server (NTRS)
Been, J. F. (Inventor)
1973-01-01
An apparatus and method are described in which the capacitance of a semiconductor junction subjected to an electromagnetic radiation field is utilized to indicate the intensity or strength of the radiation.
Imaging radiation detector with gain
Morris, C.L.; Idzorek, G.C.; Atencio, L.G.
1982-07-21
A radiation imaging device which has application in x-ray imaging. The device can be utilized in CAT scanners and other devices which require high sensitivity and low x-ray fluxes. The device utilizes cumulative multiplication of charge carriers on the anode plane and the collection of positive ion charges to image the radiation intensity on the cathode plane. Parallel and orthogonal cathode wire arrays are disclosed as well as a two-dimensional grid pattern for collecting the positive ions on the cathode.
Generation and assessment of turntable SAR data for the support of ATR development
NASA Astrophysics Data System (ADS)
Cohen, Marvin N.; Showman, Gregory A.; Sangston, K. James; Sylvester, Vincent B.; Gostin, Lamar; Scheer, C. Ruby
1998-10-01
Inverse synthetic aperture radar (ISAR) imaging on a turntable-tower test range permits convenient generation of high-resolution two-dimensional images of radar targets under controlled conditions, for testing SAR image processing and for supporting automatic target recognition (ATR) algorithm development. However, turntable ISAR images are often obtained under near-field geometries and hence may suffer geometric distortions not present in airborne SAR images. In this paper, turntable data collected at Georgia Tech's Electromagnetic Test Facility are used to begin to assess the utility of two-dimensional ISAR imaging algorithms in forming images to support ATR development. The imaging algorithms considered include a simple 2-D discrete Fourier transform (DFT), a 2-D DFT with geometric correction based on image-domain resampling, and a computationally intensive geometric matched filter solution. Images formed with the various algorithms are used to develop ATR templates, which are then compared with an eye toward utilization in an ATR algorithm.
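The simplest of the three imaging algorithms, the plain 2-D DFT, can be sketched as follows. The windowing and axis conventions are assumptions, the data layout (frequency samples by azimuth angles) assumes a stepped-frequency turntable collection, and no near-field geometric correction is applied.

```python
import numpy as np

def isar_image_2dfft(phase_history, window=True):
    """Form a turntable ISAR image with a plain 2-D DFT.

    phase_history -- complex array, frequency samples x azimuth angles
    Valid under the far-field, small-angle approximation; the
    near-field distortions discussed above are not corrected.
    """
    data = phase_history
    if window:  # amplitude weighting reduces sidelobes
        wf = np.hamming(data.shape[0])[:, None]
        wa = np.hamming(data.shape[1])[None, :]
        data = data * wf * wa
    # Range profile from the frequency axis, cross-range from the
    # angle axis; fftshift centres the target in the image.
    return np.fft.fftshift(np.fft.ifft2(data))
```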
Adams, Bradley J; Aschheim, Kenneth W
2016-01-01
Comparison of antemortem and postmortem dental records is a leading method of victim identification, especially for incidents involving a large number of decedents. This process may be expedited with computer software that provides a ranked list of best possible matches. This study compares the conventional coding and sorting algorithms most commonly used in the United States (WinID3) with a simplified coding format that utilizes an optimized sorting algorithm. The simplified system consists of seven basic codes and an optimized algorithm based largely on the percentage of matches. To perform this research, a large reference database of approximately 50,000 antemortem and postmortem records was created. For most disaster scenarios, the proposed simplified codes, paired with the optimized algorithm, performed better than WinID3, which uses more complex codes. The detailed coding system does show better performance with extremely large numbers of records and/or significant body fragmentation. © 2015 American Academy of Forensic Sciences.
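A percentage-of-matches score of the general kind described can be sketched in a few lines. The tooth numbering and condition codes here are illustrative placeholders, not the study's actual seven-code scheme or WinID3's logic.

```python
def match_percentage(am_codes, pm_codes):
    """Score an antemortem/postmortem record pair by percent agreement.

    am_codes, pm_codes -- dicts mapping tooth number -> condition code
    Only teeth recorded in both records are compared.
    """
    shared = set(am_codes) & set(pm_codes)
    if not shared:
        return 0.0
    hits = sum(1 for tooth in shared if am_codes[tooth] == pm_codes[tooth])
    return 100.0 * hits / len(shared)

# Rank all antemortem records against one postmortem record:
# ranked = sorted(am_db.items(),
#                 key=lambda kv: match_percentage(kv[1], pm_record),
#                 reverse=True)
```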
NASA Astrophysics Data System (ADS)
Abou-Khousa, M. A.; Zoughi, R.
2007-03-01
Non-invasive monitoring of dielectric slab thickness is of great interest in various industrial applications. This paper focuses on estimating the thickness of dielectric slabs, and consequently monitoring their variations, utilizing wideband microwave signals and the MUltiple SIgnal Characterization (MUSIC) algorithm. The performance of the proposed approach is assessed by validating simulation results with laboratory experiments. The results clearly indicate the utility of this overall approach for accurate dielectric slab thickness evaluation.
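A minimal MUSIC pseudospectrum over candidate time delays, as one might apply it to wideband reflection measurements: the covariance estimate from snapshots and the delay-domain steering vector are the standard ingredients, but the data layout, the mapping from delay to thickness, and all parameters are assumptions rather than the paper's implementation.

```python
import numpy as np

def music_delay_spectrum(snapshots, n_sources, delays, freqs):
    """MUSIC pseudospectrum over candidate round-trip time delays.

    snapshots -- complex matrix (n_freqs x n_snapshots) of wideband
                 reflection measurements
    n_sources -- assumed number of dominant reflections
    delays    -- candidate round-trip delays to scan (s)
    freqs     -- measurement frequencies (Hz)
    Slab thickness then follows from the delay between front- and
    back-surface reflections and the slab's permittivity.
    """
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # covariance
    eigval, eigvec = np.linalg.eigh(R)        # eigenvalues ascending
    En = eigvec[:, : len(freqs) - n_sources]  # noise subspace
    spectrum = []
    for tau in delays:
        a = np.exp(-2j * np.pi * freqs * tau)  # delay steering vector
        proj = En.conj().T @ a
        spectrum.append(1.0 / np.real(proj.conj() @ proj))
    return np.array(spectrum)  # peaks at the reflection delays
```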
Fpack and Funpack Utilities for FITS Image Compression and Uncompression
NASA Technical Reports Server (NTRS)
Pence, W.
2008-01-01
Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs, except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://heasarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images, which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.
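The tiled image compression convention that fpack implements is also exposed in Python through astropy.io.fits; a short example follows, assuming the astropy package is installed. Astropy's `CompImageHDU` defaults to row-by-row tiling, matching fpack's default.

```python
import numpy as np
from astropy.io import fits  # assumes the astropy package is installed

# Write an image using the tiled-compression convention fpack uses:
# the image is tiled and stored, Rice-compressed, in a binary table.
image = np.random.poisson(100, size=(1024, 1024)).astype(np.int32)
hdu = fits.CompImageHDU(data=image, compression_type='RICE_1')
hdu.writeto('compressed.fits', overwrite=True)

# Reading transparently uncompresses, much as funpack would:
with fits.open('compressed.fits') as hdul:
    restored = hdul[1].data  # the compressed HDU is an extension
```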
ALARA radiation considerations for the AP600 reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lau, F.L.
1995-03-01
The radiation design of the AP600 reactor plant is based on an average annual occupational radiation exposure (ORE) of 100 man-rem. As a design goal we have established a lower value of 70 man-rem per year, and with our current design process we expect to achieve annual exposures well below this goal. To accomplish our goal we have established a process that provides criteria, guidelines, and customer involvement. The criteria and guidelines provide the shield designer, as well as the systems and plant layout designers, with information that will lead to an integrated plant design that minimizes personnel exposure, yet is not burdened with complicated shielding or unnecessary component access limitations. Customer involvement is provided in the form of utility input, design reviews, and information exchange. Cooperative programs with utilities in the development of specific systems or processes also support an ALARA design. The result is a plant design that incorporates ALARA radiation considerations as an integral part, with a lower plant ORE. It is anticipated that a further reduction in plant personnel exposures will result from good radiological practices by the plant operators. The information in place to support and direct the plant designers includes the Utility Requirements Document (URD), Federal Regulations, ALARA guidelines, radiation design information, and radiation and shielding design criteria. This information, along with utility input, design reviews, and information feedback, will contribute to reducing plant radiation exposure levels below the stated goals.
SU-E-T-465: Dose Calculation Method for Dynamic Tumor Tracking Using a Gimbal-Mounted Linac
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugimoto, S; Inoue, T; Kurokawa, C
Purpose: Dynamic tumor tracking using the gimbal-mounted linac (Vero4DRT, Mitsubishi Heavy Industries, Ltd., Japan) is available for cases where respiratory motion is significant, and the irradiation accuracy of dynamic tumor tracking has been reported to be excellent. In addition to irradiation accuracy, a fast and accurate dose calculation algorithm is needed to validate the dose distribution in the presence of respiratory motion, because its multiple phases must be considered. A modification of the dose calculation algorithm is necessary for the gimbal-mounted linac due to the degrees of freedom of the gimbal swing. The dose calculation algorithm for the gimbal motion was implemented using linear transformations between coordinate systems. Methods: The linear transformation matrices between the coordinate systems with and without gimbal swings were constructed from combinations of translation and rotation matrices, adopting the coordinate system in which the radiation source is at the origin and the beam axis lies along the z axis. The transformation can be divided into a translation from the radiation source to the gimbal rotation center, two rotations around the center corresponding to the gimbal swings, and a translation from the gimbal center back to the radiation source. After applying the transformation matrix to the phantom or patient image, the dose calculation can be performed as in the case of no gimbal swing. The algorithm was implemented in the treatment planning system PlanUNC (University of North Carolina, NC), using the convolution/superposition algorithm. Dose calculations with and without gimbal swings were performed for a 3 × 3 cm² field with a grid size of 5 mm. Results: The calculation time was about 3 minutes per beam, with no significant additional time due to the gimbal swing. Conclusions: A dose calculation algorithm for finite gimbal swings was implemented, with moderate calculation time.
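The coordinate transformation described can be sketched as a composition of 4×4 homogeneous matrices. The assignment of the two gimbal swings to rotations about the x and y axes below is an assumption about the gimbal geometry for illustration, not a Vero4DRT specification.

```python
import numpy as np

def gimbal_transform(src_to_center, pan_rad, tilt_rad):
    """Compose the beam-coordinate transform for a gimbal swing.

    Mirrors the decomposition in the abstract: translate from the
    radiation source to the gimbal rotation centre, apply the two
    gimbal rotations, then translate back. Uses 4x4 homogeneous
    matrices in the source-at-origin, beam-along-z coordinate system.
    """
    def translation(v):
        T = np.eye(4)
        T[:3, 3] = v
        return T

    def rot_x(a):  # gimbal tilt (assumed axis)
        R = np.eye(4)
        R[1, 1], R[1, 2] = np.cos(a), -np.sin(a)
        R[2, 1], R[2, 2] = np.sin(a), np.cos(a)
        return R

    def rot_y(a):  # gimbal pan (assumed axis)
        R = np.eye(4)
        R[0, 0], R[0, 2] = np.cos(a), np.sin(a)
        R[2, 0], R[2, 2] = -np.sin(a), np.cos(a)
        return R

    d = np.asarray(src_to_center, dtype=float)
    # Translate to the rotation centre, rotate twice, translate back.
    return (translation(d) @ rot_y(pan_rad) @ rot_x(tilt_rad)
            @ translation(-d))
```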