From Earth to Mars, Radiation Intensities in Interplanetary Space
NASA Astrophysics Data System (ADS)
O'Brien, Keran
2007-10-01
The radiation field in interplanetary space between Earth and Mars is rather intense. Using a modified version of the ATROPOS Monte Carlo code combined with a modified version of the deterministic code PLOTINUS, the effective dose rate to crew members in a spacecraft hull shielded with a shell of 2 g/cm^2 of aluminum and 20 g/cm^2 of polyethylene was calculated to be 51 rem/y. The total dose during the solar-particle event of September 29, 1989 (GLE 42) was calculated to be 50 rem. The dose in a ``storm cellar'' of 100 g/cm^2 of polyethylene equivalent during this event was calculated to be 5 rem. The calculations were for conditions corresponding to a recent solar minimum.
The Continuous Intercomparison of Radiation Codes (CIRC): Phase I Cases
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros; Mlawer, Eli; Delamere, Jennifer; Shippert, Timothy; Turner, David D.; Miller, Mark A.; Minnis, Patrick; Clough, Shepard; Barker, Howard; Ellingson, Robert
2007-01-01
CIRC aspires to be the successor to ICRCCM (Intercomparison of Radiation Codes in Climate Models). It is envisioned as an evolving and regularly updated reference source for GCM-type radiative transfer (RT) code evaluation, with the principal goal of contributing to the improvement of RT parameterizations. CIRC is jointly endorsed by DOE's Atmospheric Radiation Measurement (ARM) program and the GEWEX Radiation Panel (GRP). CIRC's goal is to provide test cases for which GCM RT algorithms should be performing at their best, i.e., well-characterized clear-sky and homogeneous, overcast cloudy cases. What distinguishes CIRC from previous intercomparisons is that its pool of cases is based on observed datasets. The bulk of the atmospheric and surface input, as well as the radiative fluxes, comes from ARM observations as documented in the Broadband Heating Rate Profile (BBHRP) product. BBHRP also provides reference calculations from AER's RRTM RT algorithms that can be used to select an optimal set of cases and to provide a first-order estimate of our ability to achieve radiative flux closure given the limitations in our knowledge of the atmospheric state.
Shielding evaluation for solar particle events using MCNPX, PHITS and OLTARIS codes
NASA Astrophysics Data System (ADS)
Aghara, S. K.; Sriprisan, S. I.; Singleterry, R. C.; Sato, T.
2015-01-01
Detailed analyses of Solar Particle Events (SPE) were performed to calculate primary and secondary particle spectra behind aluminum shielding, at various depths in water. The simulations were based on the Monte Carlo (MC) radiation transport codes MCNPX 2.7.0 and PHITS 2.64, and on the space radiation analysis website OLTARIS (On-Line Tool for the Assessment of Radiation in Space), version 3.4, which uses the deterministic code HZETRN for transport. The study investigates the transport of SPE spectra through a 10 or 20 g/cm2 Al shield followed by a 30 g/cm2 water slab. Four historical SPE events were selected and used as input source spectra; particle differential spectra for protons, neutrons, and photons are presented, along with the total particle fluence as a function of depth. In addition to particle flux, dose and dose equivalent values are calculated and compared between the codes and with other published results. Overall, the particle fluence spectra from all three codes show good agreement, with the MC codes agreeing more closely with each other than with the OLTARIS results. The neutron fluence from OLTARIS is lower than that from the MC codes at lower energies (E < 100 MeV). Based on mean-square-difference analysis, the results from MCNPX and PHITS agree better with each other for fluence, dose, and dose equivalent than with the OLTARIS results.
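As a rough illustration of how the compared quantities relate, the sketch below converts a particle fluence spectrum into dose and dose equivalent via an LET-dependent quality factor. All numbers (the fluence bins and LET values) are illustrative placeholders, not outputs of MCNPX, PHITS, or OLTARIS; the Q(L) relation is the standard ICRP 60 piecewise form.

```python
# Sketch: fluence spectrum -> dose (Gy) and dose equivalent (Sv).
# Illustrative numbers only; not results from the codes in the abstract.
import numpy as np

MEV_PER_G_TO_GY = 1.602e-10  # 1 MeV/g deposited = 1.602e-10 Gy

def icrp60_quality_factor(let):
    """ICRP 60 quality factor Q(L); LET in keV/um."""
    if let < 10.0:
        return 1.0
    if let < 100.0:
        return 0.32 * let - 2.2
    return 300.0 / np.sqrt(let)

def dose_and_dose_eq(fluence, let):
    """fluence: particles/cm^2 per bin; let: keV/um per bin.
    In water (rho = 1 g/cm^3), 1 keV/um corresponds to 10 MeV*cm^2/g."""
    stopping = let * 10.0                               # MeV*cm^2/g
    dose_bins = fluence * stopping * MEV_PER_G_TO_GY    # Gy per bin
    q = np.array([icrp60_quality_factor(l) for l in let])
    return dose_bins.sum(), (q * dose_bins).sum()       # (Gy, Sv)

# Hypothetical fluence spectrum behind a shield, binned by LET.
fluence = np.array([1e7, 5e6, 1e6])   # particles/cm^2
let = np.array([0.5, 5.0, 25.0])      # keV/um
d, h = dose_and_dose_eq(fluence, let)  # dose equivalent exceeds dose (Q >= 1)
```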
Development of a GPU Compatible Version of the Fast Radiation Code RRTMG
NASA Astrophysics Data System (ADS)
Iacono, M. J.; Mlawer, E. J.; Berthiaume, D.; Cady-Pereira, K. E.; Suarez, M.; Oreopoulos, L.; Lee, D.
2012-12-01
The absorption of solar radiation and emission/absorption of thermal radiation are crucial components of the physics that drive Earth's climate and weather. Therefore, accurate radiative transfer calculations are necessary for realistic climate and weather simulations. Efficient radiation codes have been developed for this purpose, but their accuracy requirements still necessitate that as much as 30% of the computational time of a GCM is spent computing radiative fluxes and heating rates. The overall computational expense constitutes a limitation on a GCM's predictive ability if it becomes an impediment to adding new physics to or increasing the spatial and/or vertical resolution of the model. The emergence of Graphics Processing Unit (GPU) technology, which will allow the parallel computation of multiple independent radiative calculations in a GCM, will lead to a fundamental change in the competition between accuracy and speed. Processing time previously consumed by radiative transfer will now be available for the modeling of other processes, such as physics parameterizations, without any sacrifice in the accuracy of the radiative transfer. Furthermore, fast radiation calculations can be performed much more frequently and will allow the modeling of radiative effects of rapid changes in the atmosphere. The fast radiation code RRTMG, developed at Atmospheric and Environmental Research (AER), is utilized operationally in many dynamical models throughout the world. We will present the results from the first stage of an effort to create a version of the RRTMG radiation code designed to run efficiently in a GPU environment. This effort will focus on the RRTMG implementation in GEOS-5. 
RRTMG has an internal pseudo-spectral vector of length of order 100 that, when combined with the much greater length of the global horizontal grid vector from which the radiation code is called in GEOS-5, makes RRTMG/GEOS-5 particularly suited to achieving a significant speed improvement through GPU technology. This large number of independent cases will allow us to take full advantage of the computational power of the latest GPUs, ensuring that all thread cores in the GPU remain active, a key criterion for obtaining significant speedup. The CUDA (Compute Unified Device Architecture) Fortran compiler developed by PGI and Nvidia will allow us to construct this parallel implementation on the GPU while remaining in the Fortran language. This implementation will scale very well across various CUDA-supported GPUs such as the recently released Fermi Nvidia cards. We will present the computational speed improvements of the GPU-compatible code relative to the standard CPU-based RRTMG with respect to a very large and diverse suite of atmospheric profiles. This suite will also be utilized to demonstrate the minimal impact of the code restructuring on the accuracy of radiation calculations. The GPU-compatible version of RRTMG will be directly applicable to future versions of GEOS-5, but it is also likely to provide significant associated benefits for other GCMs that employ RRTMG.
Recent Upgrades to the NASA Ames Mars General Circulation Model: Applications to Mars' Water Cycle
NASA Astrophysics Data System (ADS)
Hollingsworth, Jeffery L.; Kahre, M. A.; Haberle, R. M.; Montmessin, F.; Wilson, R. J.; Schaeffer, J.
2008-09-01
We report on recent improvements to the NASA Ames Mars general circulation model (GCM), a robust 3D climate-modeling tool that is state-of-the-art in terms of its physics parameterizations and subgrid-scale processes, and which can be applied to investigate physical and dynamical processes of the present (and past) Mars climate system. The most recent version (gcm2.1, v.24) of the Ames Mars GCM utilizes a more generalized radiation code (based on a two-stream approximation with correlated k's); an updated transport scheme (van Leer formulation); a cloud microphysics scheme that assumes a log-normal particle size distribution whose first two moments are treated as atmospheric tracers, and which includes the nucleation, growth and sedimentation of ice crystals. Atmospheric aerosols (e.g., dust and water-ice) can either be radiatively active or inactive. We apply this version of the Ames GCM to investigate key aspects of the present water cycle on Mars. Atmospheric dust is partially interactive in our simulations; namely, the radiation code "sees" a prescribed distribution that follows the MGS thermal emission spectrometer (TES) year-one measurements with a self-consistent vertical depth scale that varies with season. The cloud microphysics code interacts with a transported dust tracer column whose surface source is adjusted to maintain the TES distribution. The model is run from an initially dry state with a better representation of the north residual cap (NRC) which accounts for both surface-ice and bare-soil components. A seasonally repeatable water cycle is obtained within five Mars years. Our sub-grid scale representation of the NRC provides for a more realistic flux of moisture to the atmosphere and a much drier water cycle consistent with recent spacecraft observations (e.g., Mars Express PFS, corrected MGS/TES) compared to models that assume a spatially uniform and homogeneous north residual polar cap.
Comparison of Transport Codes, HZETRN, HETC and FLUKA, Using 1977 GCR Solar Minimum Spectra
NASA Technical Reports Server (NTRS)
Heinbockel, John H.; Slaba, Tony C.; Tripathi, Ram K.; Blattnig, Steve R.; Norbury, John W.; Badavi, Francis F.; Townsend, Lawrence W.; Handler, Thomas; Gabriel, Tony A.; Pinsky, Lawrence S.;
2009-01-01
The HZETRN deterministic radiation transport code is one of several tools developed to analyze the effects of harmful galactic cosmic rays (GCR) and solar particle events (SPE) on mission planning, astronaut shielding and instrumentation. This paper is a comparison study involving the two Monte Carlo transport codes, HETC-HEDS and FLUKA, and the deterministic transport code, HZETRN. Each code is used to transport ions from the 1977 solar minimum GCR spectrum impinging upon a 20 g/cm2 Aluminum slab followed by a 30 g/cm2 water slab. This research is part of a systematic effort of verification and validation to quantify the accuracy of HZETRN and determine areas where it can be improved. Comparisons of dose and dose equivalent values at various depths in the water slab are presented in this report. This is followed by a comparison of the proton fluxes, and the forward, backward and total neutron fluxes at various depths in the water slab. Comparisons of the secondary light ion 2H, 3H, 3He and 4He fluxes are also examined.
NASA Technical Reports Server (NTRS)
HARSHVARDHAN
1990-01-01
Broad-band parameterizations for atmospheric radiative transfer were developed for clear and cloudy skies. These were in the shortwave and longwave regions of the spectrum. These models were compared with other models in an international effort called ICRCCM (Intercomparison of Radiation Codes for Climate Models). The radiation package developed was used for simulations of a General Circulation Model (GCM). A synopsis is provided of the research accomplishments in the two areas separately. Details are available in the published literature.
Meson Production and Space Radiation
NASA Astrophysics Data System (ADS)
Norbury, John; Blattnig, Steve; Norman, Ryan; Aghara, Sukesh
Protecting astronauts from the harmful effects of space radiation is an important priority for long-duration space flight. The National Council on Radiation Protection (NCRP) has recently recommended that pions and other mesons be included in space radiation transport codes, especially in connection with the Martian atmosphere. In an interesting accident of nature, the galactic cosmic ray spectrum has its peak intensity near the pion production threshold. The Boltzmann transport equation is structured in such a way that particle production cross sections are multiplied by particle flux. Therefore, the peak of the incident galactic cosmic ray flux is more important than other regions of the spectrum, and cross sections near the peak are enhanced. This happens with pion cross sections. The MCNPX Monte Carlo transport code now has the capability of transporting heavy ions, and by using a galactic cosmic ray spectrum as input, recent work has shown that pions contribute about twenty percent of the dose from galactic cosmic rays behind a shield of 20 g/cm2 aluminum and 30 g/cm2 water. It is therefore important to include pion and other hadron production in transport codes designed for space radiation studies, such as HZETRN. The status of experimental hadron production data for energies relevant to space radiation will be reviewed, as well as the predictive capabilities of current theoretical hadron production cross section and space radiation transport models. Charged pions decay into muons and neutrinos, and neutral pions decay into photons. An electromagnetic cascade is produced as these particles build up in a material. The cascade and transport of pions, muons, electrons and photons will be discussed as they relate to space radiation. The importance of other hadrons, such as kaons, eta mesons and antiprotons, will be considered as well.
Efficient methods for calculating cross sections for meson production in nucleon-nucleon and nucleus-nucleus reactions will be presented. The NCRP has also recommended that more attention be paid to neutron and light ion transport. The coupling of neutrons, light ions, mesons and other hadrons will be discussed.
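The near-coincidence of the GCR intensity peak and the pion production threshold noted in the abstract can be checked with simple relativistic kinematics. A minimal sketch (standard fixed-target threshold formula; the GCR peak energy quoted in the comment is approximate):

```python
# Sketch: lab-frame kinetic-energy threshold for single-pion production,
# p + p -> p + p + pi0, from invariant-mass kinematics:
#   T_th = 2*m_pi + m_pi**2 / (2*m_p)
M_P = 938.272    # proton mass, MeV
M_PI0 = 134.977  # neutral pion mass, MeV

def pion_threshold_kinetic_energy(m_pi=M_PI0, m_p=M_P):
    """Threshold kinetic energy (MeV) of a proton on a proton at rest."""
    return 2.0 * m_pi + m_pi**2 / (2.0 * m_p)

t_th = pion_threshold_kinetic_energy()
# t_th is roughly 280 MeV; the GCR proton spectrum peaks at a few hundred
# MeV per nucleon (approximate), i.e. right around this threshold, so pion
# production cross sections are weighted by the most intense part of the flux.
```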
Use of Fluka to Create Dose Calculations
NASA Technical Reports Server (NTRS)
Lee, Kerry T.; Barzilla, Janet; Townsend, Lawrence; Brittingham, John
2012-01-01
Monte Carlo codes provide an effective means of modeling three-dimensional radiation transport; however, their use is both time- and resource-intensive. The creation of a lookup table or parameterization from Monte Carlo simulation allows users to work with Monte Carlo results without replicating lengthy calculations. The FLUKA Monte Carlo transport code was used to develop lookup tables and parameterizations for data resulting from the penetration of layers of aluminum, polyethylene, and water with areal densities ranging from 0 to 100 g/cm^2. Heavy charged-ion radiation, including ions from Z=1 to Z=26 with energies from 0.1 to 10 GeV/nucleon, was simulated. Dose, dose equivalent, and fluence as a function of particle identity, energy, and scattering angle were examined at various depths. Calculations were compared against well-known results and against the results of other deterministic and Monte Carlo codes. Results will be presented.
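The lookup-table idea described above can be sketched in a few lines: tabulate a Monte Carlo result on a depth grid once, then interpolate per query. The table values below are invented placeholders, not FLUKA output, and a real table would be indexed by ion species and energy as well as depth.

```python
# Sketch: replacing repeated transport runs with interpolation in a
# precomputed table. Values are illustrative placeholders, not FLUKA data.
import numpy as np

# Areal-density grid (g/cm^2) and hypothetical dose-per-unit-fluence values
# at those depths, as would be extracted from a set of Monte Carlo runs.
depths = np.array([0.0, 10.0, 25.0, 50.0, 100.0])      # g/cm^2
dose_table = np.array([1.00, 0.72, 0.55, 0.41, 0.28])  # arbitrary units

def dose_lookup(areal_density):
    """Linear interpolation in the table: microseconds per query instead
    of hours of Monte Carlo."""
    return float(np.interp(areal_density, depths, dose_table))

d30 = dose_lookup(30.0)  # lies between the 25 and 50 g/cm^2 table entries
```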
The Response of a Spectral General Circulation Model to Refinements in Radiative Processes.
NASA Astrophysics Data System (ADS)
Ramanathan, V.; Pitcher, Eric J.; Malone, Robert C.; Blackmon, Maurice L.
1983-03-01
We present here results and analyses of a series of numerical experiments performed with a spectral general circulation model (GCM). The purpose of the GCM experiments is to examine the role of radiation/cloud processes in the general circulation of the troposphere and stratosphere. The experiments were primarily motivated by the significant improvements in the GCM zonal mean simulation as refinements were made in the model treatment of clear-sky radiation and cloud-radiative interactions. The GCM with the improved cloud/radiation model is able to reproduce many observed features, such as: a clear separation between the wintertime tropospheric jet and the polar night jet; winter polar stratospheric temperatures of about 200 K; and interhemispheric and seasonal asymmetries in the zonal winds. In a set of sensitivity experiments, we stripped the cloud/radiation model of its improvements, the result being a significant degradation of the zonal mean simulations by the GCM. Through these experiments we have been able to identify the processes responsible for the improved GCM simulations: (i) careful treatment of the upper boundary condition for O3 solar heating; (ii) temperature dependence of longwave cooling by the CO2 15 μm bands; (iii) a vertical distribution of H2O that minimizes the lower stratospheric H2O longwave cooling; and (iv) dependence of cirrus emissivity on cloud liquid water content. Comparison of the GCM simulations with and without the cloud/radiation improvements reveals the nature and magnitude of the following radiative-dynamical interactions: (i) the temperature decrease (due to errors in radiative heating) within the winter polar stratosphere is much larger than can be accounted for by purely radiative adjustment; (ii) the role of dynamics in maintaining the winter polar stratospheric thermal structure is greatly diminished in the GCM with the degraded treatment of radiation; (iii) the radiative and radiative-dynamical response times of the atmosphere vary from periods of less than two weeks in the lower troposphere to roughly three months in the polar lower stratosphere; and (iv) within the stratosphere, the radiative response times vary significantly with temperature, with winter polar values larger than summer polar values by as much as a factor of 2.5. Cirrus clouds, if their emissivities are arbitrarily prescribed to be black, unrealistically enhance the radiative cooling of the polar troposphere above 8 km. This results in a meridional temperature gradient much stronger than observed. We employ a more realistic parameterization that accounts for the non-blackness of cirrus, and we describe the resulting improvements in the model simulation of zonal winds, temperatures, and the radiation budget.
NASA Technical Reports Server (NTRS)
Cucinotta, F. A.; Wilson, J. W.; Shinn, J. L.; Badavi, F. F.; Badhwar, G. D.
1996-01-01
We present calculations of linear energy transfer (LET) spectra in low Earth orbit from galactic cosmic rays and trapped protons using the HZETRN/BRYNTRN computer code. The emphasis of our calculations is on the analysis of the effects of secondary nuclei produced through target fragmentation in the spacecraft shield or detectors. Recent improvements in the HZETRN/BRYNTRN radiation transport computer code are described. Calculations show that at large values of LET (> 100 keV/micrometer) the LET spectra seen in free space and low Earth orbit (LEO) are dominated by target fragments and not by the primary nuclei. Although the evaluation of microdosimetric spectra is not considered here, calculations of LET spectra support the conclusion that large lineal energy (y) events are dominated by target fragments. Finally, we discuss the situation for interplanetary exposures to galactic cosmic rays and show that current radiation transport codes predict that, in the region of high LET values, the LET spectra at significant shield depths (> 10 g/cm2 of Al) are greatly modified by target fragments. These results suggest that studies of track structure and biological response to space radiation should place emphasis on short tracks of medium-charge fragments produced in the human body by high-energy protons and neutrons.
Thick Galactic Cosmic Radiation Shielding Using Atmospheric Data
NASA Technical Reports Server (NTRS)
Youngquist, Robert C.; Nurge, Mark A.; Starr, Stanley O.; Koontz, Steven L.
2013-01-01
NASA is concerned with protecting astronauts from the effects of galactic cosmic radiation and has expended substantial effort in the development of computer models to predict the shielding obtained from various materials. However, these models were only developed for shields up to about 120 g/cm2 in thickness and have predicted that shields of this thickness are insufficient to provide adequate protection for extended deep space flights. Consequently, effort is underway to extend the range of these models to thicker shields, and experimental data are required to help confirm the resulting code. In this paper, empirically obtained effective dose measurements from aircraft flights in the atmosphere are used to obtain the radiation shielding function of the Earth's atmosphere, a very thick shield. Obtaining this result required solving an inverse problem, and the method for solving it is presented. The results are shown to be in agreement with current code in the ranges where they overlap. These results are then checked and used to predict the radiation dosage under thick shields such as planetary regolith and the atmosphere of Venus.
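The inverse problem of recovering a dose-versus-depth shielding function from measurements at several atmospheric depths can be illustrated, under a deliberately simplified single-exponential attenuation assumption, with a linear least-squares fit in log space. This is a sketch of the idea only, with synthetic data, not the paper's method:

```python
# Sketch: fit a shielding function D(x) = D0 * exp(-lam * x) to dose-rate
# "measurements" at several atmospheric depths. Data are synthetic.
import numpy as np

depths = np.array([200.0, 300.0, 500.0, 700.0, 1000.0])  # g/cm^2
true_d0, true_lam = 10.0, 1.0 / 250.0
doses = true_d0 * np.exp(-true_lam * depths)  # noiseless synthetic doses

# Linearize: log(D) = log(D0) - lam * depth, then solve by least squares.
A = np.vstack([np.ones_like(depths), -depths]).T
coef, *_ = np.linalg.lstsq(A, np.log(doses), rcond=None)
d0_fit, lam_fit = np.exp(coef[0]), coef[1]  # recovers D0 and lam
```

With real aircraft data the fit would be noisy and the single-exponential form too crude, which is why the paper treats the recovery as a proper inverse problem.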
Comparison of Radiation Transport Codes, HZETRN, HETC and FLUKA, Using the 1956 Webber SPE Spectrum
NASA Technical Reports Server (NTRS)
Heinbockel, John H.; Slaba, Tony C.; Blattnig, Steve R.; Tripathi, Ram K.; Townsend, Lawrence W.; Handler, Thomas; Gabriel, Tony A.; Pinsky, Lawrence S.; Reddell, Brandon; Clowdsley, Martha S.;
2009-01-01
Protection of astronauts and instrumentation from galactic cosmic rays (GCR) and solar particle events (SPE) in the harsh environment of space is of prime importance in the design of personal shielding, spacecraft, and mission planning. Early entry of radiation constraints into the design process enables optimal shielding strategies, but demands efficient and accurate tools that can be used by design engineers in every phase of an evolving space project. The radiation transport code HZETRN is an efficient tool for analyzing the shielding effectiveness of materials exposed to space radiation. In this paper, HZETRN is compared to the Monte Carlo codes HETC-HEDS and FLUKA for a shield/target configuration comprised of a 20 g/cm^2 aluminum slab in front of a 30 g/cm^2 slab of water exposed to the February 1956 SPE, as modeled by the Webber spectrum. Neutron and proton fluence spectra, as well as dose and dose equivalent values, are compared at various depths in the water target. This study shows that there are many regions where HZETRN agrees with both HETC-HEDS and FLUKA for this shield/target configuration and SPE environment. However, there are also regions with appreciable differences between the three computer codes.
Longwave Band-by-band Cloud Radiative Effect and its Application in GCM Evaluation
NASA Technical Reports Server (NTRS)
Huang, Xianglei; Cole, Jason N. S.; He, Fei; Potter, Gerald L.; Oreopoulos, Lazaros; Lee, Dongmin; Suarez, Max; Loeb, Norman G.
2012-01-01
The cloud radiative effect (CRE) of each longwave (LW) absorption band of a GCM's radiation code is uniquely valuable for GCM evaluation because (1) comparing band-by-band CRE avoids the compensating biases of a broadband CRE comparison, and (2) the fractional contribution of each band to the LW broadband CRE (f(sub CRE)) is sensitive to cloud top height but largely insensitive to cloud fraction, thus providing a diagnostic metric to separate these two macroscopic cloud properties. Recent studies led by the first author have established methods to derive such band-by-band quantities from collocated AIRS and CERES observations. We present here a study that compares the observed band-by-band CRE over the tropical oceans with those simulated by three different atmospheric GCMs (GFDL AM2, NASA GEOS-5, and CCCma CanAM4) forced by observed SST. The models agree with observation on the annual-mean LW broadband CRE over the tropical oceans to within +/-1 W/sq m. However, the differences among the three GCMs in some bands can be as large as or larger than +/-1 W/sq m. Observed seasonal cycles of f(sub CRE) in major bands are shown to be consistent with the seasonal cycle of cloud top pressure in both amplitude and phase. However, while the three simulated seasonal cycles of f(sub CRE) agree with observations on the phase, the amplitudes are underestimated. Simulated interannual anomalies from GFDL AM2 and CCCma CanAM4 are in phase with observed anomalies. The spatial distribution of f(sub CRE) highlights the discrepancies between models and observation over low-cloud regions and the compensating biases from different bands.
The role of global cloud climatologies in validating numerical models
NASA Technical Reports Server (NTRS)
HARSHVARDHAN
1991-01-01
Reliable estimates of the components of the surface radiation budget are important in studies of ocean-atmosphere interaction, land-atmosphere interaction, ocean circulation and in the validation of radiation schemes used in climate models. The methods currently under consideration must necessarily make certain assumptions regarding both the presence of clouds and their vertical extent. Because of the uncertainties in assumed cloudiness, all these methods involve perhaps unacceptable uncertainties. Here, a theoretical framework that avoids the explicit computation of cloud fraction and the location of cloud base in estimating the surface longwave radiation is presented. Estimates of the global surface downward fluxes and the oceanic surface net upward fluxes were made for four months (April, July, October and January) in 1985 to 1986. These estimates are based on a relationship between cloud radiative forcing at the top of the atmosphere and at the surface obtained from a general circulation model. The radiation code is the version used in the UCLA/GLA general circulation model (GCM). The longwave cloud radiative forcing at the top of the atmosphere as obtained from Earth Radiation Budget Experiment (ERBE) measurements is used to compute the forcing at the surface by means of the GCM-derived relationship. This, along with clear-sky fluxes from the computations, yields maps of the downward longwave fluxes and net upward longwave fluxes at the surface. The calculated results are discussed and analyzed. The results are consistent with current meteorological knowledge and explainable on the basis of previous theoretical and observational work; therefore, it can be concluded that this method is applicable as one of the ways to obtain the surface longwave radiation fields from currently available satellite data.
Early Results from the Advanced Radiation Protection Thick GCR Shielding Project
NASA Technical Reports Server (NTRS)
Norman, Ryan B.; Clowdsley, Martha; Slaba, Tony; Heilbronn, Lawrence; Zeitlin, Cary; Kenny, Sean; Crespo, Luis; Giesy, Daniel; Warner, James; McGirl, Natalie;
2017-01-01
The Advanced Radiation Protection Thick Galactic Cosmic Ray (GCR) Shielding Project leverages experimental and modeling approaches to validate a predicted minimum in the radiation exposure versus shielding depth curve. Preliminary results of space radiation models indicate that a minimum in the dose equivalent versus aluminum shielding thickness may exist in the 20-30 g/cm2 region. For greater shield thickness, dose equivalent increases due to secondary neutron and light-particle production. This result goes against the long-held belief in the space radiation shielding community that increasing shielding thickness will decrease risk to crew health. A comprehensive modeling effort was undertaken to verify the preliminary modeling results using multiple Monte Carlo and deterministic space radiation transport codes. These results verified the preliminary finding of a minimum and helped drive the design of the experimental component of the project. In first-of-their-kind experiments performed at the NASA Space Radiation Laboratory, neutrons and light ions were measured between large thicknesses of aluminum shielding. Both an upstream and a downstream shield were incorporated into the experiment to represent the radiation environment inside a spacecraft. These measurements are used to validate the Monte Carlo codes and derive uncertainty distributions for exposure estimates behind thick shielding similar to that provided by spacecraft on a Mars mission. Preliminary results for all aspects of the project will be presented.
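The predicted minimum in dose equivalent versus shield thickness is easy to illustrate with a toy curve: exponentially attenuated primaries plus a growing secondary-particle term. The functional forms and constants below are invented for illustration only; the project's actual curves come from the transport codes named in the abstract.

```python
# Sketch: a synthetic dose-equivalent vs shielding-depth curve with a
# minimum, mimicking primary attenuation plus secondary buildup.
# All constants are illustrative, not model output.
import numpy as np

depth = np.linspace(0.0, 100.0, 201)              # aluminum depth, g/cm^2
primary = np.exp(-depth / 10.0)                   # attenuated primaries
secondary = 0.5 * (1.0 - np.exp(-depth / 60.0))   # neutron/light-ion buildup
dose_eq = primary + secondary                     # arbitrary units

i_min = int(np.argmin(dose_eq))
depth_min = float(depth[i_min])  # with these constants, roughly 30 g/cm^2
# Beyond the minimum, added shielding *increases* dose equivalent because
# secondary production outpaces primary attenuation.
```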
A one-dimensional interactive soil-atmosphere model for testing formulations of surface hydrology
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Eagleson, Peter S.
1990-01-01
A model representing a soil-atmosphere column in a GCM is developed for off-line testing of GCM soil hydrology parameterizations. Repeating three representative GCM sensitivity experiments with this one-dimensional model demonstrates that, to first order, the model reproduces a GCM's sensitivity to imposed changes in parameterization and therefore captures the essential physics of the GCM. The experiments also show that by allowing feedback between the soil and atmosphere, the model improves on off-line tests that rely on prescribed precipitation, radiation, and other surface forcing.
Modeling radiative transfer with the doubling and adding approach in a climate GCM setting
NASA Astrophysics Data System (ADS)
Lacis, A. A.
2017-12-01
The nonlinear dependence of multiply scattered radiation on particle size, optical depth, and solar zenith angle makes accurate treatment of multiple scattering in the climate GCM setting problematic, primarily because of computational cost. The accurate multiple scattering methods that are available are far too computationally expensive for climate GCM applications, while two-stream-type radiative transfer approximations may be fast enough, but at the cost of reduced accuracy. We describe here a parameterization of the doubling/adding method used in the GISS climate GCM, an adaptation of the doubling/adding formalism configured to operate with a look-up table using a single Gauss quadrature point with an extra-angle formulation. It is designed to closely reproduce the accuracy of full-angle doubling and adding for the multiple scattering effects of clouds and aerosols in a realistic atmosphere as a function of particle size, optical depth, and solar zenith angle. With an additional inverse look-up table, this single-Gauss-point doubling/adding approach can be adapted to model fractional cloud cover for any GCM grid box in the independent pixel approximation as a function of the fractional cloud particle sizes, optical depths, and solar zenith angle.
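The doubling/adding formalism that this parameterization approximates can be sketched in scalar form: two layers combine through a geometric series over inter-layer bounces, and a homogeneous layer is built up by repeatedly combining a thin layer with itself. A minimal sketch with scalar reflectance/transmittance only (the real method operates on angular quadrature, and the numbers here are illustrative):

```python
# Sketch: scalar "adding" of two layers and "doubling" of a thin layer.
def add_layers(r1, t1, r2, t2):
    """Combine layer 1 (on top) with layer 2; the 1/(1 - r1*r2) factor
    sums the geometric series of inter-layer bounces."""
    denom = 1.0 - r1 * r2
    r = r1 + t1 * r2 * t1 / denom   # total reflectance from above
    t = t1 * t2 / denom             # total transmittance
    return r, t

def double(r, t, n):
    """Doubling: combine a layer with an identical copy n times,
    yielding a layer 2**n times as thick."""
    for _ in range(n):
        r, t = add_layers(r, t, r, t)
    return r, t

# Start from an optically thin, slightly absorbing layer (r + t < 1)
# and double 10 times to build a 1024-layer slab.
r_thin, t_thin = 0.001, 0.998
r_tot, t_tot = double(r_thin, t_thin, 10)
# Reflectance grows and transmittance falls as optical depth builds up,
# while absorption keeps r_tot + t_tot below 1.
```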
A review of recent research on improvement of physical parameterizations in the GLA GCM
NASA Technical Reports Server (NTRS)
Sud, Y. C.; Walker, G. K.
1990-01-01
A systematic assessment of the effects of a series of improvements in the physical parameterizations of the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) is summarized. The implementation of the Simple Biosphere Model (SiB) in the GCM is followed by a comparison of SiB-GCM simulations with those of the earlier slab soil hydrology version of the GCM (SSH-GCM). In the Sahelian context, the biogeophysical component of desertification was analyzed in SiB-GCM simulations. Cumulus parameterization is found to be the primary determinant of the organization of the simulated tropical rainfall of the GLA GCM using the Arakawa-Schubert cumulus parameterization. A comparison of model simulations with station data revealed excessive shortwave radiation accompanied by excessive drying and heating of the land. Perpetual July simulations with and without interactive soil moisture show that 30- to 40-day oscillations may be a natural mode of the simulated earth-atmosphere system.
NASA Technical Reports Server (NTRS)
Stephens, Graeme L.; Randall, David A.; Wittmeyer, Ian L.; Dazlich, Donald A.; Tjemkes, Stephen
1993-01-01
The ability of the Colorado State University general circulation model (GCM) to simulate interactions between the hydrological cycle and radiative processes on Earth was examined by comparing various sensitivity relationships established by the model with those observed on Earth, along with the observed and calculated seasonal cycles of the greenhouse effect and cloud radiative forcing. Results showed that, although the GCM simulated some aspects of the observed sensitivities well, there were many serious quantitative differences, including problems in the simulation of the column vapor in the tropics and an excessively strong clear-sky greenhouse effect in the mid-latitudes. These differences led the model to underestimate the sensitivity of the clear-sky greenhouse effect to changes in sea surface temperature.
NASA Technical Reports Server (NTRS)
Sato, N.; Sellers, P. J.; Randall, D. A.; Schneider, E. K.; Shukla, J.; Kinter, J. L., III; Hou, Y.-T.; Albertazzi, E.
1989-01-01
The Simple Biosphere Model (SiB) of Sellers et al. (1986) was designed to simulate the interactions between the Earth's land surface and the atmosphere by treating the vegetation explicitly and realistically, thereby incorporating biophysical controls on the exchanges of radiation, momentum, and sensible and latent heat between the two systems. The steps taken to implement SiB in a modified version of the National Meteorological Center's spectral GCM are described. The coupled model (SiB-GCM) was used to produce summer and winter simulations, and the same GCM with a conventional hydrological model (Ctl-GCM) was used to produce comparable 'control' summer and winter simulations. It was found that SiB-GCM produced a more realistic partitioning of energy at the land surface than Ctl-GCM. Generally, SiB-GCM produced more sensible heat flux and less latent heat flux over vegetated land than did Ctl-GCM, which resulted in the development of a much deeper daytime planetary boundary layer and reduced precipitation rates over the continents in SiB-GCM. In the summer simulation, the 200 mb jet stream and the wind speed at 850 mb were slightly weakened in SiB-GCM relative to the Ctl-GCM results and equivalent analyses from observations.
The Impact of Desert Dust Aerosol Radiative Forcing on Global and West African Precipitation
NASA Astrophysics Data System (ADS)
Jordan, A.; Zaitchik, B. F.; Gnanadesikan, A.; Dezfuli, A. K.
2015-12-01
Desert dust aerosols exert a radiative forcing on the atmosphere, influencing atmospheric temperature structure and modifying radiative fluxes at the top of the atmosphere (TOA) and surface. As dust aerosols perturb radiative fluxes, the atmosphere responds by altering both energy and moisture dynamics, with potentially significant impacts on regional and global precipitation. Global Climate Model (GCM) experiments designed to characterize these processes have yielded a wide range of results, owing both to the complex nature of the system and to differences across models. Most model results show a general decrease in global precipitation, but regional results vary. Here, we compare simulations from GFDL's CM2Mc GCM with multiple other model experiments from the literature in order to investigate mechanisms of radiative impact and reasons for GCM differences on global and regional scales. We focus on West Africa, a region of high interannual rainfall variability that is itself a source of dust and that neighbors major Sahara Desert dust sources. As such, changes in West African climate due to the radiative forcing of desert dust aerosol have serious implications for desertification feedbacks. Our CM2Mc results show net cooling of the planet at TOA and the surface, net warming of the atmosphere, and significant increases in precipitation over West Africa during the summer rainy season. These results differ from some previous GCM studies, prompting comparative analysis of desert dust parameters across models. This presentation will offer quantitative analysis of differences in dust aerosol parameters, aerosol optical properties, and overall particle burden across GCMs, and will characterize the contribution of model differences to the uncertainty of forcing and climate response affecting West Africa.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stull, R.B.
1993-08-27
This document is a progress report to the USDOE Atmospheric Radiation and Measurement Program (ARM). The overall project goal is to relate subgrid-cumulus-cloud formation, coverage, and population characteristics to statistical properties of surface-layer air, which in turn are modulated by heterogeneous land-usage within GCM-grid-box-size regions. The motivation is to improve the understanding and prediction of climate change by more accurately describing radiative and cloud processes.
NASA Technical Reports Server (NTRS)
Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Suarez, Max; Sawyer, William; Govindaraju, Ravi C.
1999-01-01
The results obtained with the variable-resolution stretched-grid (SG) GEOS GCM (Goddard Earth Observing System General Circulation Model) are discussed, with emphasis on the regional down-scaling effects and their dependence on the stretched-grid design and parameters. A variable-resolution SG-GCM and SG-DAS using a global stretched grid with fine resolution over an area of interest is a viable new approach to regional and subregional climate studies and applications. The stretched-grid approach is an ideal tool for representing regional-to-global scale interactions. It is an alternative to the widely used nested-grid approach introduced a decade ago as a pioneering step in regional climate modeling. The GEOS SG-GCM is used for simulations of the anomalous U.S. climate events of the 1988 drought and 1993 flood, with enhanced regional resolution. The height, low-level jet, precipitation, and other diagnostic patterns are successfully simulated and show efficient down-scaling over the area of interest, the U.S. An imitation of the nested-grid approach is performed using the developed SG-DAS (Data Assimilation System), which incorporates the SG-GCM. The SG-DAS is run while withholding data over the area of interest. The design imitates the nested-grid framework with boundary conditions provided from analyses. No boundary-condition buffer is needed in this case because of the global domain of integration used for the SG-GCM and SG-DAS. Experiments based on the newly developed versions of the GEOS SG-GCM and SG-DAS, with finer 0.5-degree (and higher) regional resolution, are briefly discussed. The major aspects of parallelization of the SG-GCM code are outlined.
The key objectives of the study are: 1) obtaining efficient down-scaling over the area of interest with fine and very fine resolution; 2) providing consistent interactions between regional and global scales, including consistent representation of regional energy and water balances; and 3) providing high computational efficiency for future SG-GCM and SG-DAS versions using parallel codes.
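The idea of a global grid that smoothly concentrates resolution over an area of interest can be sketched in one dimension. The mapping below is a hypothetical Schmidt-like stretching, not the actual GEOS SG-GCM grid generator, and the center longitude and stretching factor are illustrative.

```python
import math

# Hypothetical 1-D stretched grid: map a uniform computational coordinate to
# longitude so that spacing is finest near lon_c (here ~260 deg E, roughly the
# continental U.S.) and coarsest on the opposite side of the globe.
def stretched_lons(n, lon_c=260.0, c=3.0):
    lons = []
    for i in range(n):
        x = -math.pi + 2.0 * math.pi * i / n        # uniform coordinate
        y = 2.0 * math.atan(math.tan(x / 2.0) / c)  # compresses angles toward 0
        lons.append((lon_c + math.degrees(y)) % 360.0)
    return lons

lons = stretched_lons(72)
gaps = sorted(b - a for a, b in zip(sorted(lons), sorted(lons)[1:]))
# gaps[0] (near lon_c) is much smaller than the uniform spacing 360/72 = 5 deg,
# and gaps[-1] (far side) is much larger -- resolution is redistributed, not added.
```

Because the grid remains global, no lateral boundary conditions are needed, which is the key contrast with nested-grid regional models noted in the abstract.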
A GCM simulation of the earth-atmosphere radiation balance for winter and summer
NASA Technical Reports Server (NTRS)
Wu, M. L. C.
1979-01-01
The radiation balance of the earth-atmosphere system simulated by the general circulation model (GCM) of the Goddard Laboratory for Atmospheric Sciences (GLAS) is examined with regard to its geographical distribution, zonally averaged distribution, and global mean. Most of the main features of the radiation balance at the top of the atmosphere are reasonably simulated, with some differences in the detailed structure of the patterns and intensities for both summer and winter in comparison with values derived from Nimbus and NOAA (National Oceanic and Atmospheric Administration) satellite observations. Both the capabilities and defects of the model are discussed.
Analytical-HZETRN Model for Rapid Assessment of Active Magnetic Radiation Shielding
NASA Technical Reports Server (NTRS)
Washburn, S. A.; Blattnig, S. R.; Singleterry, R. C.; Westover, S. C.
2014-01-01
The use of active radiation shielding designs has the potential to reduce the radiation exposure received by astronauts on deep-space missions at a significantly lower mass penalty than designs utilizing only passive shielding. Unfortunately, the determination of the radiation exposure inside these shielded environments often involves lengthy and computationally intensive Monte Carlo analysis. In order to evaluate the large trade space of design parameters associated with a magnetic radiation shield design, an analytical model was developed for the determination of flux inside a solenoid magnetic field due to the Galactic Cosmic Radiation (GCR) radiation environment. This analytical model was then coupled with NASA's radiation transport code, HZETRN, to account for the effects of passive/structural shielding mass. The resulting model can rapidly obtain results for a given configuration and can therefore be used to analyze an entire trade space of potential variables in less time than is required for even a single Monte Carlo run. Analyzing this trade space for a solenoid magnetic shield design indicates that active shield bending powers greater than 15 Tm and passive/structural shielding thicknesses greater than 40 g/cm2 have a limited impact on reducing dose equivalent values. Also, it is shown that higher magnetic field strengths are more effective than thicker magnetic fields at reducing dose equivalent.
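The saturation with bending power reported above can be illustrated with a back-of-envelope rigidity-cutoff model. This is a sketch only, not the paper's analytical-HZETRN model: the spectral shape is a toy function and the cutoff rule is the standard rule-of-thumb R[GV] ≈ 0.3 B[T] r[m].

```python
import math

# A solenoid of bending power B*L [T m] roughly excludes particles of rigidity
# below R_cut ~ 0.3 * (B*L) GV (from r_gyro[m] ~ 3.336 * R[GV] / B[T]).
def cutoff_rigidity_GV(bending_power_Tm):
    return 0.3 * bending_power_Tm

# Proton rigidity from kinetic energy T [GeV], with proton mass m = 0.938 GeV.
def proton_rigidity_GV(T_GeV, m=0.938):
    return math.sqrt(T_GeV**2 + 2.0 * T_GeV * m)

def blocked_fraction(bending_power_Tm):
    # Toy GCR proton spectrum (illustrative shape only): peak near ~0.5 GeV
    # with a power-law tail, sampled on a 0.05-20 GeV grid.
    Rc = cutoff_rigidity_GV(bending_power_Tm)
    Ts = [0.05 * k for k in range(1, 400)]
    flux = [T**1.5 / (1.0 + T)**4.2 for T in Ts]
    blocked = sum(f for T, f in zip(Ts, flux) if proton_rigidity_GV(T) < Rc)
    return blocked / sum(flux)

# Diminishing returns: doubling the bending power from 15 to 30 T m blocks
# only the sparse high-energy tail that remains.
f15, f30 = blocked_fraction(15.0), blocked_fraction(30.0)
```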
From GCM grid cell to agricultural plot: scale issues affecting modelling of climate impact
Baron, Christian; Sultan, Benjamin; Balme, Maud; Sarr, Benoit; Traore, Seydou; Lebel, Thierry; Janicot, Serge; Dingkuhn, Michael
2005-01-01
General circulation models (GCM) are increasingly capable of making relevant predictions of seasonal and long-term climate variability, thus improving prospects of predicting impact on crop yields. This is particularly important for semi-arid West Africa where climate variability and drought threaten food security. Translating GCM outputs into attainable crop yields is difficult because GCM grid boxes are of larger scale than the processes governing yield, involving partitioning of rain among runoff, evaporation, transpiration, drainage and storage at plot scale. This study analyses the bias introduced to crop simulation when climatic data is aggregated in space or time, resulting in loss of relevant variation. A detailed case study was conducted using historical weather data for Senegal, applied to the crop model SARRA-H (version for millet). The study was then extended to a 10°N–17°N climatic gradient and a 31 year climate sequence to evaluate yield sensitivity to the variability of solar radiation and rainfall. Finally, a down-scaling model called LGO (Lebel–Guillot–Onibon), generating local rain patterns from grid cell means, was used to restore the variability lost by aggregation. Results indicate that forcing the crop model with spatially aggregated rainfall causes yield overestimations of 10–50% in dry latitudes, but nearly none in humid zones, due to a biased fraction of rainfall available for crop transpiration. Aggregation of solar radiation data caused significant bias in wetter zones where radiation was limiting yield. Where climatic gradients are steep, these two situations can occur within the same GCM grid cell. Disaggregation of grid cell means into a pattern of virtual synoptic stations having high-resolution rainfall distribution removed much of the bias caused by aggregation and gave realistic simulations of yield.
It is concluded that coupling of GCM outputs with plot level crop models can cause large systematic errors due to scale incompatibility. These errors can be avoided by transforming GCM outputs, especially rainfall, to simulate the variability found at plot level. PMID:16433096
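The aggregation bias described above is essentially Jensen's inequality: yield responds concavely to rainfall (runoff and drainage losses grow with intensity), so the yield of the mean rainfall exceeds the mean of the yields. A toy response function, not SARRA-H itself, makes the point:

```python
# Toy water-limited yield response (saturating and concave); the cap and
# half-saturation constant are hypothetical, for illustration only.
def yield_response(rain_mm):
    usable = min(rain_mm, 400.0)          # storage/transpiration cap
    return usable / (usable + 150.0)      # relative yield, 0..1

plots = [50.0, 120.0, 300.0, 700.0]       # hypothetical plot-scale rainfall, mm
mean_rain = sum(plots) / len(plots)       # the "grid cell mean" a GCM supplies

plot_scale = sum(yield_response(r) for r in plots) / len(plots)
aggregated = yield_response(mean_rain)    # forcing the model with the mean
# aggregated > plot_scale: forcing with aggregated rain overestimates yield,
# which is the bias the LGO disaggregation step removes.
```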
NASA Astrophysics Data System (ADS)
Stanfield, Ryan Evan
Global circulation/climate models (GCMs) remain an invaluable tool for predicting potential future climate change. To best advise policy makers, assessing and increasing the accuracy of climate models is paramount. The treatment of clouds, radiation, and precipitation in climate models, and their associated feedbacks, has long been one of the largest sources of uncertainty in predicting any potential future climate changes. Three versions of the NASA GISS ModelE GCM (the frozen CMIP5 version [C5], a post-CMIP5 version with modifications to cumulus and boundary layer turbulence parameterizations [P5], and the most recent version of the GCM, which builds on the post-CMIP5 version with further modifications to convective cloud ice and cold pool parameterizations [E5]) have been compared with various satellite observations to analyze how recent modifications to the GCM have impacted cloud, radiation, and precipitation properties. In addition to global comparisons, two areas are showcased in regional analyses: the Eastern Pacific Northern ITCZ (EP-ITCZ), and Indonesia and the Western Pacific (INDO-WP). Changes to the cumulus and boundary layer turbulence parameterizations in the P5 version of the GCM have improved cloud and radiation estimations in areas of descending motion, such as the Southern Mid-Latitudes. Ice particle size and fall speed modifications in the E5 version of the GCM have decreased ice cloud water contents and cloud fractions globally while increasing precipitable water vapor in the model. Comparisons of IWC profiles show that the GCM-simulated IWCs increase with height and peak in the upper portions of the atmosphere, while 2C-ICE observations peak in the lower levels of the atmosphere and decrease with height, effectively the opposite of each other.
Profiles of CF peak at lower heights in the E5 simulation, which will potentially increase outgoing longwave radiation because of higher cloud-top temperatures; this counterbalances the decrease in reflected shortwave associated with lower CFs and the thinner optical depths that accompany decreased IWC and LWC in E5. Vertical motion within the newest E5 simulation is greatly weakened over the EP-ITCZ region, potentially due to atmospheric loading from enhanced ice particle fall speeds. Comparatively, E5-simulated upward motion in the INDO-WP is stronger than in its predecessors. Changes in the E5 simulation have resulted in stronger/weaker upward motion over the ocean/land in the INDO-WP region in comparison with both the C5 and P5 predecessors. Multimodel precipitation analysis shows that most of the GCMs tend to produce a wider ITCZ with stronger precipitation compared to GPCP and TRMM precipitation products. E5-simulated precipitation decreases and shifts southward over the Eastern Pacific ITCZ, which warrants further investigation into meridional heat transport and radiation fields.
Testing the Role of Impacts in Warming Early Mars: Comparisons Between 1-D and GCM Results
NASA Astrophysics Data System (ADS)
Steakley, K.; Kahre, M. A.; Murphy, J. R.; Haberle, R. M.; Kling, A.
2017-12-01
Comet and asteroid impacts have been explored as a potential mechanism for producing warmer and wetter conditions on early Mars and possibly contributing to valley network formation. However, criticisms have been raised regarding the timing of large impacts relative to valley network activity, and the ability of such impacts to induce long-lasting climate change and to produce the appropriate amount of precipitation. We test the impact heating hypothesis for the late Noachian Mars atmosphere by revisiting the scenarios described in Segura et al. (2008, JGR Planets 113, E11007) with a 3D global climate model (GCM). Segura et al. (2008) showed with a 1-D model that impactors ranging from 30 to 100 km in diameter could, in certain cases, induce months to years of above-freezing temperatures and tens of cm to meters of rainfall in atmospheres with 150-mbar, 1-bar, or 2-bar surface pressures. We impose the same initial conditions in the Ames Research Center Mars GCM with updated water cycle physics that includes bulk cloud formation, sedimentation, precipitation (liquid or snow), a Manabe moist convection scheme, and the radiative effects of both liquid and ice clouds. Initial conditions in the GCM match those described in Segura et al. (2008) as closely as possible and include a hot post-impact debris layer, a warm atmosphere, and water vapor profiles consistent with the water abundances mobilized by the impact. Scenarios with 30-, 50-, and 100-km impactors in 150-mbar, 1-bar, and 2-bar surface pressure cases are explored both with and without radiatively active water clouds. Our goals are to determine how global rainfall totals and global surface temperatures from the GCM compare with the simpler 1-D Segura et al. (2008) model, to examine what rainfall patterns emerge in the GCM and how they compare to the observed valley network distribution, and to more carefully assess the role of cloud microphysics and radiative effects on the duration and intensity of post-impact climates.
The role of global cloud climatologies in validating numerical models
NASA Technical Reports Server (NTRS)
HARSHVARDHAN
1991-01-01
The net upward longwave surface radiation is exceedingly difficult to measure from space. A hybrid method using General Circulation Model (GCM) simulations and satellite data from the Earth Radiation Budget Experiment (ERBE) and the International Satellite Cloud Climatology Project (ISCCP) was used to produce global maps of this quantity over oceanic areas. An advantage of this technique is that no independent knowledge or assumptions regarding cloud cover for a particular month are required. The only information required is a relationship between the cloud radiation forcing (CRF) at the top of the atmosphere and that at the surface, which is obtained from the GCM simulation. A flow diagram of the technique and results are given.
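The hybrid closure described above can be sketched in a few lines. The linear relationship and its coefficients below are hypothetical placeholders for the GCM-derived TOA-to-surface CRF relationship; the flux values are illustrative, not results from the paper.

```python
# Hybrid method sketch: the GCM supplies a relation between cloud radiative
# forcing (CRF) at TOA and at the surface; satellite data (ERBE) supply the
# TOA CRF; combining them gives surface net LW without knowing cloud cover.
def surface_crf_from_toa(crf_toa_lw, a=0.55, b=2.0):
    # Hypothetical GCM-derived linear fit: CRF_sfc ~ a * CRF_toa + b  [W/m^2]
    return a * crf_toa_lw + b

def net_lw_surface(clear_sky_net_lw_up, crf_toa_lw):
    # Cloud forcing reduces the net upward LW loss at the surface.
    return clear_sky_net_lw_up - surface_crf_from_toa(crf_toa_lw)

# Example oceanic grid box: clear-sky net upward LW of 70 W/m^2 and an
# observed TOA LW CRF of 30 W/m^2 give 70 - (0.55*30 + 2) = 51.5 W/m^2.
net = net_lw_surface(70.0, 30.0)
```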
Shielding of relativistic protons.
Bertucci, A; Durante, M; Gialanella, G; Grossi, G; Manti, L; Pugliese, M; Scampoli, P; Mancusi, D; Sihver, L; Rusek, A
2007-06-01
Protons are the most abundant component of the galactic cosmic radiation, and their energy spectrum peaks around 1 GeV. Shielding of relativistic protons is therefore a key problem in the radiation protection strategy for crewmembers involved in long-term missions in deep space. Hydrogen ions were accelerated up to 1 GeV at the NASA Space Radiation Laboratory, Brookhaven National Laboratory, New York. The proton beam was also shielded with thick (about 20 g/cm^2) blocks of lucite (PMMA) or aluminium (Al). We found that the dose rate was increased 40-60% by the shielding and decreased as a function of the distance along the axis. Simulations using the General-Purpose Particle and Heavy-Ion Transport code System (PHITS) show that the dose increase is mostly caused by secondary protons emitted by the target. The modified radiation field behind the shield was characterized for its biological effectiveness by measuring chromosomal aberrations in human peripheral blood lymphocytes exposed just behind the shield block, or to the direct beam, in the dose range 0.5-3 Gy. Notwithstanding the increased dose per incident proton, the fraction of aberrant cells at the same dose in the sample position was not significantly modified by the shield. The PHITS code simulations show that, although secondary protons are slower than incident nuclei, the LET spectrum is still contained in the low-LET range (<10 keV/micrometer), which explains the near-unity value measured for the relative biological effectiveness.
Kartashov, D A; Shurshakov, V A
2015-01-01
The paper presents the results of calculating doses from space ionizing radiation for a modeled orbital station cabin outfitted with an additional shield aimed at reducing radiation loads on cosmonauts. The shield is a layer with a mass thickness of ~6 g/cm^2 (mean density ~0.62 g/cm^3) that covers the outer cabin wall and consists of wet tissues and towels used by cosmonauts for hygienic purposes. A tissue-equivalent anthropomorphic phantom imitates the human body. Doses were calculated for the standard orbit of the International Space Station (ISS) with consideration of the longitudinal and transverse phantom orientation relative to the wall, with and without the additional shield. Calculation of the dose distribution in the human body improves prediction of radiation loads. The additional shield reduces radiation exposure of human critical organs by ~20%, depending on their depth and body spatial orientation in the ISS compartment.
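A quick consistency check of the quoted numbers with a single-exponential attenuation picture; this is illustrative arithmetic, not the paper's transport calculation, and the effective attenuation coefficient is inferred, not measured.

```python
import math

# If ~6 g/cm^2 of water-equivalent shielding cuts organ dose by ~20%,
# the implied effective mass-attenuation coefficient is:
mu_eff = -math.log(0.8) / 6.0                   # ~0.037 cm^2/g

# Under the same simple model, doubling the shield to 12 g/cm^2 gives
reduction_12 = 1.0 - math.exp(-mu_eff * 12.0)   # 1 - 0.8**2 = 36%, not 40%
# i.e. attenuation compounds exponentially, so returns diminish with thickness.
```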
Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation
NASA Astrophysics Data System (ADS)
Räisänen, Petri; Barker, W. Howard
2004-07-01
The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is simply to restrict all of McICA's samples to cloudy regions. This avoids wasting precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeatedly sampling, and averaging, those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of ~3.
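The first variance-reduction idea can be demonstrated with a toy Monte Carlo estimator. This is not the GCM radiation scheme: the cloud fraction, fluxes, and sample counts are invented for illustration.

```python
import random

# Domain-mean flux is F = (1-c)*F_clr + c*mean(F_cld). Sampling subcolumns
# from the whole domain wastes samples on identical clear skies; computing
# the clear part exactly and sampling only cloudy subcolumns removes that
# component of the noise.
random.seed(1)

c = 0.3                      # cloud fraction
F_clr = 100.0                # clear-sky flux (cheap to compute exactly)
cloudy_fluxes = [60.0 + 20.0 * random.random() for _ in range(1000)]

def estimate_naive(n):
    # sample n subcolumns from the whole domain
    total = 0.0
    for _ in range(n):
        if random.random() < c:
            total += random.choice(cloudy_fluxes)
        else:
            total += F_clr
    return total / n

def estimate_cloudy_only(n):
    # clear part exact, Monte Carlo only over cloudy subcolumns
    mc = sum(random.choice(cloudy_fluxes) for _ in range(n)) / n
    return (1.0 - c) * F_clr + c * mc

# Repeating each estimator many times shows the cloudy-only version has a
# much smaller spread for the same number of samples.
naive = [estimate_naive(20) for _ in range(200)]
restricted = [estimate_cloudy_only(20) for _ in range(200)]
```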
NASA Technical Reports Server (NTRS)
Suarez, Max J. (Editor); Yang, Wei-Yu; Todling, Ricardo; Navon, I. Michael
1997-01-01
A detailed description of the development of the tangent linear model (TLM) and its adjoint model of the Relaxed Arakawa-Schubert moisture parameterization package used in the NASA GEOS-1 C-Grid GCM (Version 5.2) is presented. The notational conventions used in the TLM and its adjoint codes are described in detail.
NASA Technical Reports Server (NTRS)
Liao, Hong; Seinfeld, John H.; Adams, Peter J.; Mickley, Loretta J.
2008-01-01
Global simulations of sea salt and mineral dust aerosols are integrated into a previously developed unified general circulation model (GCM), the Goddard Institute for Space Studies (GISS) GCM II', that simulates coupled tropospheric ozone-NOx-hydrocarbon chemistry and sulfate, nitrate, ammonium, black carbon, primary organic carbon, and secondary organic carbon aerosols. The fully coupled gas-aerosol unified GCM allows one to evaluate the extent to which global burdens, radiative forcing, and eventually climate feedbacks of ozone and aerosols are influenced by gas-aerosol chemical interactions. Estimated present-day global burdens of sea salt and mineral dust are 6.93 and 18.1 Tg with lifetimes of 0.4 and 3.9 days, respectively. The GCM is applied to estimate current top of atmosphere (TOA) and surface radiative forcing by tropospheric ozone and all natural and anthropogenic aerosol components. The global annual mean value of the radiative forcing by tropospheric ozone is estimated to be +0.53 W m^-2 at TOA and +0.07 W m^-2 at the Earth's surface. Global, annual average TOA and surface radiative forcing by all aerosols are estimated as -0.72 and -4.04 W m^-2, respectively. While the predicted highest aerosol cooling and heating at TOA are -10 and +12 W m^-2, respectively, surface forcing can reach values as high as -30 W m^-2, mainly caused by the absorption by black carbon, mineral dust, and OC. We also estimate the effects of chemistry-aerosol coupling on forcing estimates based on currently available understanding of heterogeneous reactions on aerosols. Through altering the burdens of sulfate, nitrate, and ozone, heterogeneous reactions are predicted to change the global mean TOA forcing of aerosols by 17% and influence global mean TOA forcing of tropospheric ozone by 15%.
The radiation environment on the Moon from galactic cosmic rays in a lunar habitat.
Jia, Y; Lin, Z W
2010-02-01
We calculated how the radiation environment in a habitat on the surface of the Moon would have depended on the thickness of the habitat in the 1977 galactic cosmic-ray environment. The Geant4 Monte Carlo transport code was used, and a hemispherical dome made of lunar regolith was used to simulate the lunar habitat. We investigated the effective dose from primary and secondary particles including nuclei from protons up to nickel, neutrons, charged pions, photons, electrons and positrons. The total effective dose showed a strong decrease with the thickness of the habitat dome. However, the effective dose values from secondary neutrons, charged pions, photons, electrons and positrons all showed a strong increase followed by a gradual decrease with the habitat thickness. The fraction of the summed effective dose from these secondary particles in the total effective dose increased with the habitat thickness, from approximately 5% for the no-habitat case to about 47% for the habitat with an areal thickness of 100 g/cm^2.
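The buildup-then-decay behavior of the secondary dose can be captured by a simple two-component attenuation model. The attenuation lengths and conversion factor below are invented for illustration; the paper's numbers come from Geant4 transport, not from this formula.

```python
import math

# Primaries attenuate with depth; each absorbed primary feeds a secondary
# field that builds up before it, too, attenuates (with a longer length).
def doses(x, lam_p=70.0, lam_s=160.0, k=0.9):
    # x: areal thickness [g/cm^2]; lam_p, lam_s: attenuation lengths; k: yield
    primary = math.exp(-x / lam_p)
    # secondary source integrated analytically for this two-exponential model
    secondary = k * lam_s / (lam_s - lam_p) * (
        math.exp(-x / lam_s) - math.exp(-x / lam_p))
    return primary, secondary

p0, s0 = doses(0.0)
p100, s100 = doses(100.0)
frac0 = s0 / (p0 + s0)            # 0: no secondaries with no shielding
frac100 = s100 / (p100 + s100)    # secondaries carry a large share at depth
# Total dose still falls with thickness even as the secondary fraction grows.
```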
NASA Technical Reports Server (NTRS)
Parkinson, C. L.; Herman, G. F.
1980-01-01
The GLAS General Circulation Model (GCM) was applied to a four-month simulation of the thermodynamic part of the Parkinson-Washington sea ice model using GCM-derived atmospheric boundary conditions. The sea ice thickness and distribution were predicted for the January 1-April 30 period using the GCM fields of solar and infrared radiation, specific humidity and air temperature at the surface, and snow accumulation; the sensible heat and evaporative surface fluxes were consistent with the ground temperatures produced by the ice model and the air temperatures determined by the atmospheric model. It was concluded that the Parkinson-Washington sea ice model produces acceptable ice concentrations and thicknesses when used with the GLAS GCM for the January-April period, suggesting the feasibility of fully coupled ice-atmosphere simulations with these two models.
Assessment of the global energy budget of Mars and comparison to the Earth
NASA Astrophysics Data System (ADS)
Madeleine, J.; Head, J. W.; Forget, F.; Wolff, M. J.
2012-12-01
The energy balance of a planet depends on its radiative environment and internal energy production. In the case of present-day Mars, the climate system is controlled almost entirely by solar radiation rather than internal heat. Over the last hundreds of millions of years, changes in the orbital parameters and insolation pattern have induced various climatic excursions, during which the energy transfers within the atmosphere differed from today's. Over the longer term, i.e., the last billions of years, the energy budget differed even more, as a result of the larger geothermal flux and the heat provided by volcanic eruptions and impacts. Viewing the climate of Mars from an energy-budget perspective provides a framework for understanding the key processes, as well as for constraining climate models. The goal of this research is thus to characterize and analyze the energy budget of Mars. The first step, described in this communication, consists of quantifying the different components of the Mars radiation budget using the LMD (Laboratoire de Météorologie Dynamique) GCM (Global Climate Model). The LMD/GCM has been developed for more than 20 years and has now reached a level of detail that allows us to quantify the different contributions of CO2 gas, dust, and clouds to the radiation budget. The general picture of the radiation budget as simulated by the GCM can be summarized as follows. First, the global-mean shortwave (SW) flux incident at the top of the Martian atmosphere is 148.5 W m^-2. Whereas most of the incoming solar radiation is absorbed by atmospheric gases on Earth, on Mars most of the sunlight is absorbed by dust particles. Our simulations show that around 15% of the incoming solar radiation is absorbed by dust particles, whereas 2.5% is reflected by them. Water-ice clouds also reflect around 1.5% of the solar radiation, much less than the amount reflected by clouds on Earth (around 20%).
The Martian atmosphere is even more transparent in the long-wave (LW) domain. Only 7% of the infrared radiation emitted by the surface is absorbed by the atmosphere. Most of this absorption (around 4% of the total outgoing infrared radiation) is due to dust particles. Water-ice clouds also play a significant role, and absorb approximately half as much LW radiation as the dust particles. The distribution of energy among the different atmospheric processes (release of latent heat by condensing CO2, atmospheric motions, etc.) can also be analyzed with the GCM and is being further documented. The next steps include analyzing the available observations of the radiation budget, using them to better constrain the GCM, simulating the energy budget during past climatic excursions, and further comparing the fluxes to those of terrestrial glacial regions. The analysis of the integrated SW and LW fluxes has been done using instruments such as TES onboard Mars Global Surveyor, but only in the polar regions. Indeed, measuring the energy budget requires a good spatial and temporal sampling that is better achieved in the polar regions (most Martian satellites have a sun-synchronous polar orbit). Now that GCMs can simulate the SW and LW radiation fields accurately, simulations can be used to fill the temporal gaps in non-polar regions and explore the measurements on a global scale.
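The quoted fluxes can be cross-checked with simple arithmetic. The Earth solar-constant value used for comparison (~1361 W/m^2) is a standard figure, not from the abstract.

```python
# The global-mean incident SW at the top of the atmosphere is the solar flux
# at the planet's distance divided by 4 (sphere area vs intercepted disk), so
# the 148.5 W/m^2 quoted above implies a solar constant near 594 W/m^2 at
# Mars, versus ~1361 W/m^2 at Earth.
S_mars = 148.5 * 4.0

incoming = 148.5
absorbed_by_dust = 0.15 * incoming      # ~22 W/m^2 absorbed by dust
reflected_by_dust = 0.025 * incoming    # ~3.7 W/m^2 reflected by dust
reflected_by_clouds = 0.015 * incoming  # ~2.2 W/m^2, vs ~20% by Earth clouds
```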
Visualization of particle flux in the human body on the surface of Mars
NASA Technical Reports Server (NTRS)
Saganti, Premkumar B.; Cucinotta, Francis A.; Wilson, John W.; Schimmerling, Walter
2002-01-01
For a given galactic cosmic ray (GCR) environment, information on the particle flux of protons, alpha particles, and heavy ions, which varies with topographical altitude on the Martian surface, is needed for planning exploration missions to Mars. The Mars Global Surveyor (MGS) mission, with its Mars Orbiter Laser Altimeter (MOLA) instrument, has provided a precise topographical map of the Martian surface. With these topographical data, the particle flux at the Martian surface level through the CO2 atmospheric shielding is calculated for solar minimum and solar maximum conditions. These particle fluxes are then transported first through the anticipated shielding of a conceptual shelter, with several water-equivalent shield values (up to 50 g/cm^2 of water in steps of 5 g/cm^2) considered to represent a surface habitat, and then into the human body. Model calculations are accomplished utilizing the HZETRN, QMSFRG, and SUM-MARS codes. Particle flux calculations for 12 different locations in the human body were considered, from skin depth to the internal organs, including the blood-forming organs (BFO). Visualization of particle flux in the human body at different altitudes on the Martian surface behind a known shielding is anticipated to provide guidance for assessing radiation environment risk on the Martian surface for future human missions.
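The altitude dependence above is driven by the overhead atmospheric column mass, which is just pressure divided by gravity. The scale height and reference pressure below are common textbook values used for illustration; the paper's actual transport through that column uses HZETRN, not this formula.

```python
import math

# Areal shielding of the CO2 atmosphere: sigma [g/cm^2] = P / g, with the
# surface pressure falling off exponentially with MOLA altitude.
def co2_shielding_g_per_cm2(altitude_km, P0=610.0, H=10.8, g=3.71):
    P = P0 * math.exp(-altitude_km / H)   # pressure at altitude [Pa]
    sigma_si = P / g                      # column mass [kg/m^2]
    return sigma_si * 0.1                 # 1 kg/m^2 = 0.1 g/cm^2

datum = co2_shielding_g_per_cm2(0.0)      # ~16 g/cm^2 at the MOLA datum
basin = co2_shielding_g_per_cm2(-7.0)     # deep basins (e.g. Hellas) shield more
summit = co2_shielding_g_per_cm2(21.0)    # high summits shield far less
```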
Faster and more accurate transport procedures for HZETRN
NASA Astrophysics Data System (ADS)
Slaba, T. C.; Blattnig, S. R.; Badavi, F. F.
2010-12-01
The deterministic transport code HZETRN was developed for research scientists and design engineers studying the effects of space radiation on astronauts and instrumentation protected by various shielding materials and structures. In this work, several aspects of code verification are examined. First, a detailed derivation of the light particle (A ⩽ 4) and heavy ion (A > 4) numerical marching algorithms used in HZETRN is given. References are given for components of the derivation that already exist in the literature, and discussions are given for details that may have been absent in the past. The present paper provides a complete description of the numerical methods currently used in the code and is identified as a key component of the verification process. Next, a new numerical method for light particle transport is presented, and improvements to the heavy ion transport algorithm are discussed. A summary of round-off error is also given, and the impact of this error on previously predicted exposure quantities is shown. Finally, a coupled convergence study is conducted by refining the discretization parameters (step-size and energy grid-size). From this study, it is shown that past efforts in quantifying the numerical error in HZETRN were hindered by single precision calculations and computational resources. It is determined that almost all of the discretization error in HZETRN is caused by the use of discretization parameters that violate a numerical convergence criterion related to charged target fragments below 50 AMeV. Total discretization errors are given for the old and new algorithms to 100 g/cm2 in aluminum and water, and the improved accuracy of the new numerical methods is demonstrated. Run time comparisons between the old and new algorithms are given for one, two, and three layer slabs of 100 g/cm2 of aluminum, polyethylene, and water.
The new algorithms are found to be almost 100 times faster for solar particle event simulations and almost 10 times faster for galactic cosmic ray simulations.
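The coupled convergence study described above can be illustrated with a minimal sketch: refine the marching step size, recompute a marched quantity, and check that the discretization error shrinks against a fine-grid reference. The attenuation kernel and target depth below are placeholders, not HZETRN physics.

```python
# Hypothetical sketch of a step-size convergence study: halve the step
# repeatedly and watch the computed quantity settle toward a fine-grid value.

def exposure(step):
    """Trapezoid-rule stand-in for a marched transport solution."""
    f = lambda x: 1.0 / (1.0 + x * x)   # placeholder attenuation kernel
    depth, total = 0.0, 0.0
    while depth < 100.0:                # march to 100 g/cm2 of shielding
        h = min(step, 100.0 - depth)    # clamp the final partial step
        total += 0.5 * h * (f(depth) + f(depth + h))
        depth += h
    return total

# Refine the discretization and estimate the error against the finest grid.
steps = [4.0, 2.0, 1.0, 0.5, 0.25]
values = [exposure(s) for s in steps]
reference = exposure(0.01)
errors = [abs(v - reference) for v in values]

# Discretization error should shrink monotonically as the step is refined.
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))
```

The same pattern, applied jointly to step size and energy grid size, is what a coupled convergence study means: refine both parameters until the answer stops changing within a stated tolerance.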
Investigation of Lithium Metal Hydride Materials for Mitigation of Deep Space Radiation
NASA Technical Reports Server (NTRS)
Rojdev, Kristina; Atwell, William
2016-01-01
Radiation exposure to crew, electronics, and non-metallic materials is one of many concerns with long-term, deep space travel. Mitigating this exposure is approached via a multi-faceted methodology focusing on multi-functional materials, vehicle configuration, and operational or mission constraints. In this research, we focus on new multi-functional materials that may have advantages over traditional shielding materials, such as polyethylene. Metal hydride materials are of particular interest for deep space radiation shielding due to their ability to store hydrogen, a low-Z material known to be an excellent radiation mitigator and a potential fuel source. We have previously investigated 41 different metal hydrides for their radiation mitigation potential. Of these metal hydrides, we found a set of lithium hydrides to be of particular interest due to their excellent shielding of galactic cosmic radiation. Given these results, we continue our investigation of lithium hydrides by expanding our data set to include dose equivalent and to further understand why these materials outperformed polyethylene in a heavy ion environment. For this study, we used HZETRN 2010, a one-dimensional transport code developed by NASA Langley Research Center, to simulate radiation transport through the lithium hydrides. We focused on the 1977 solar minimum Galactic Cosmic Radiation environment and thicknesses of 1, 5, 10, 20, 30, 50, and 100 g/cm2 to stay consistent with our previous studies. The details of this work and the subsequent results are discussed in this paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smalyuk, V. A.; Atherton, L. J.; Benedetti, L. R.
2013-10-19
Radiation-driven, low-adiabat, cryogenic DT-layered plastic capsule implosions were carried out on the National Ignition Facility (NIF) to study the sensitivity of performance to peak power and drive duration. An implosion with an extended drive at a reduced peak power of 350 TW achieved the highest compression, with a fuel areal density of ~1.3±0.1 g/cm2, representing a significant step from the previously measured ~1.0 g/cm2 toward the goal of 1.5 g/cm2. Future experiments will focus on understanding and mitigating hydrodynamic instabilities and mix, and on improving the symmetry required to reach the threshold for thermonuclear ignition on NIF.
First Global Estimates of Anthropogenic Shortwave Forcing by Methane
NASA Astrophysics Data System (ADS)
Collins, William; Feldman, Daniel; Kuo, Chaincy
2017-04-01
Although the primary well-mixed greenhouse gases (WMGHGs) absorb both shortwave and longwave radiation, to date assessments of the effects from human-induced increases in atmospheric concentrations of WMGHGs have focused almost exclusively on quantifying the longwave radiative forcing of these gases. However, earlier studies have shown that the shortwave effects of WMGHGs are comparable to those of many less important longwave forcing agents routinely included in these assessments, for example the effects of aircraft contrails, stratospheric anthropogenic methane, and stratospheric water vapor from the oxidation of this methane. These earlier studies include the Radiative Transfer Model Intercomparison Project (RTMIP; Collins et al. 2006) conducted using line-by-line radiative transfer codes as well as the radiative parameterizations from most of the global climate models (GCMs) assembled for the Coupled Model Intercomparison Project (CMIP-3). In this talk, we discuss the first global estimates of the shortwave radiative forcing by methane due to the anthropogenic increase in CH4 between pre-industrial and present-day conditions. This forcing is a balance between reduced heating due to absorption of downwelling sunlight in the stratosphere and increased heating due to absorption of upwelling sunlight reflected from the surface as well as from clouds and aerosols in the troposphere. These estimates are produced using the Observing System Simulation Experiment (OSSE) framework we have developed for NASA's upcoming Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission. The OSSE is designed to compute the monthly mean shortwave radiative forcing based upon global gridded atmospheric and surface conditions extracted from either the meteorological reanalyses collected for the Analysis for MIPs (Ana4MIPs) or the CMIP-5 multi-GCM archive analyzed in the Fifth Assessment Report (AR-5) of the Intergovernmental Panel on Climate Change (IPCC).
The OSSE combines these atmospheric conditions with an observationally derived prescription for the Earth's spectral surface albedo as inputs to the MODerate resolution atmospheric TRANsmission (MODTRAN) code. MODTRAN is designed to model atmospheric propagation of electromagnetic radiation for the 100-50,000 1/cm (0.2 to 100 micrometers) spectral range. This covers the spectrum from middle ultraviolet to visible light to far infrared. The most recently released version of the code, MODTRAN6, provides a spectral resolution of 0.2 1/cm using its 0.1 1/cm band model algorithm.
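The quoted spectral range can be checked with the standard wavenumber-to-wavelength conversion, lambda [micrometers] = 10,000 / nu [1/cm]; a minimal sketch:

```python
# Wavenumber (1/cm) to wavelength (micrometers): lambda_um = 10000 / nu.
# Verifies that the quoted 100-50,000 1/cm range spans 0.2-100 micrometers.

def wavenumber_to_um(nu_cm: float) -> float:
    """Convert a wavenumber in 1/cm to a wavelength in micrometers."""
    return 1.0e4 / nu_cm

assert wavenumber_to_um(50_000) == 0.2    # middle-ultraviolet edge
assert wavenumber_to_um(100) == 100.0     # far-infrared edge
```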
A Multi-scale Modeling System: Developments, Applications and Critical Issues
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Chern, Jiundar; Atlas, Robert; Randall, David; Lin, Xin; Khairoutdinov, Marat; Li, Jui-Lin; Waliser, Duane E.; Hou, Arthur; Peters-Lidard, Christa;
2006-01-01
A multi-scale modeling framework (MMF), which replaces the conventional cloud parameterizations with a cloud-resolving model (CRM) in each grid column of a GCM, constitutes a new and promising approach. The MMF can provide global coverage and two-way interactions between the CRMs and their parent GCM. The GCM allows global coverage, and the CRM allows explicit simulation of cloud processes and their interactions with radiation and surface processes. A new MMF has been developed that is based on the Goddard finite volume GCM (fvGCM) and the Goddard Cumulus Ensemble (GCE) model. This Goddard MMF produces many features similar to those of another MMF developed at Colorado State University (CSU), such as an improved surface precipitation pattern, better cloudiness, improved diurnal variability over both oceans and continents, and a stronger, propagating Madden-Julian oscillation (MJO) compared to their parent GCMs using conventional cloud parameterizations. Both MMFs also produce a precipitation bias in the western Pacific during Northern Hemisphere summer. However, there are also notable differences between the two MMFs. For example, the CSU MMF simulates less rainfall over land than its parent GCM, which is why it simulates less overall global rainfall than its parent GCM. The Goddard MMF overestimates global rainfall because of its oceanic component. Some critical issues associated with the Goddard MMF are presented in this paper.
NASA Technical Reports Server (NTRS)
Chertock, Beth; Sud, Y. C.
1993-01-01
A global, 7-year satellite-based record of ocean surface solar irradiance (SSI) is used to assess the realism of ocean SSI simulated by the nine-layer Goddard Laboratory for Atmospheres (GLA) General Circulation Model (GCM). January and July climatologies of net SSI produced by the model are compared with corresponding satellite climatologies for the world oceans between 54 deg N and 54 deg S. This comparison of climatologies indicates areas of strengths and weaknesses in the GCM treatment of cloud-radiation interactions, the major source of model uncertainty. Realism of ocean SSI is also important for applications such as incorporating the GLA GCM into a coupled ocean-atmosphere GCM. The results show that the GLA GCM simulates too much SSI in the extratropics and too little in the tropics, especially in the summer hemisphere. These discrepancies reach magnitudes of 60 W/sq m and more. The discrepancies are particularly large in the July case off the western coast of North America. Positive and negative discrepancies in SSI are shown to be consistent with discrepancies in planetary albedo.
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros; Lee, Dongmin; Norris, Peter; Yuan, Tianle
2011-01-01
It has been shown that the details of how cloud fraction overlap is treated in GCMs has substantial impact on shortwave and longwave fluxes. Because cloud condensate is also horizontally heterogeneous at GCM grid scales, another aspect of cloud overlap should in principle also be assessed, namely the vertical overlap of hydrometeor distributions. This type of overlap is usually examined in terms of rank correlations, i.e., linear correlations between hydrometeor amount ranks of the overlapping parts of cloud layers at specific separation distances. The cloud fraction overlap parameter and the rank correlation of hydrometeor amounts can be both expressed as inverse exponential functions of separation distance characterized by their respective decorrelation lengths (e-folding distances). Larger decorrelation lengths mean that hydrometeor fractions and probability distribution functions have high levels of vertical alignment. An analysis of CloudSat and CALIPSO data reveals that the two aspects of cloud overlap are related and their respective decorrelation lengths have a distinct dependence on latitude that can be parameterized and included in a GCM. In our presentation we will contrast the Cloud Radiative Effect (CRE) of the GEOS-5 atmospheric GCM (AGCM) when the observationally-based parameterization of decorrelation lengths is used to represent overlap versus the simpler cases of maximum-random overlap and globally constant decorrelation lengths. The effects of specific overlap representations will be examined for both diagnostic and interactive radiation runs in GEOS-5 and comparisons will be made with observed CREs from CERES and CloudSat (2B-FLXHR product). Since the radiative effects of overlap depend on the cloud property distributions of the AGCM, the availability of two different cloud schemes in GEOS-5 will give us the opportunity to assess a wide range of potential cloud overlap consequences on the model's climate.
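The inverse-exponential overlap model described above is simple to sketch. The decorrelation lengths below are illustrative values, not the ones derived from CloudSat/CALIPSO:

```python
import math

# Overlap parameter (or rank correlation) decays with the separation
# distance dz between cloud layers over a decorrelation (e-folding)
# length L: exp(-dz/L). 1 = maximally aligned, 0 = random overlap.

def overlap_param(dz_km: float, decorr_km: float) -> float:
    """Inverse-exponential overlap: exp(-dz/L)."""
    return math.exp(-dz_km / decorr_km)

# Illustrative (made-up) decorrelation lengths for cloud fraction and
# for hydrometeor-amount rank correlation, in km.
L_cf, L_cw = 2.0, 1.0

# At one e-folding distance the parameter drops to 1/e (~0.37).
assert abs(overlap_param(2.0, L_cf) - 1.0 / math.e) < 1e-12
# A shorter decorrelation length means faster loss of vertical alignment.
assert overlap_param(2.0, L_cw) < overlap_param(2.0, L_cf)
```

In a GCM cloud generator, this parameter typically blends maximum overlap and random overlap: the maximum-overlap arrangement is weighted by exp(-dz/L) and the random arrangement by the remainder.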
NASA Astrophysics Data System (ADS)
Stanfield, Ryan Evan
Past, current, and future climates have been simulated by the National Aeronautics and Space Administration (NASA) Goddard Institute for Space Studies (GISS) ModelE Global Circulation Model (GCM) and summarized in the Intergovernmental Panel on Climate Change Fourth Assessment Report (IPCC AR4). New simulations from the updated CMIP5 version of the NASA GISS ModelE GCM were released to the public community during the summer of 2011 and will be included in the upcoming IPCC AR5 ensemble of simulations. Because these simulations are so recent, they have not yet been extensively validated against observations. To assess the NASA GISS-E2-R GCM, model-simulated clouds and cloud properties are compared to observational cloud properties derived from the Clouds and Earth's Radiant Energy System (CERES) project using MODerate Resolution Imaging Spectroradiometer (MODIS) data for the period of March 2000 through December 2005. Over the 6-year period, the global average modeled cloud fractions are within 1% of observations. However, further study shows large regional biases between the GCM simulations and CERES-MODIS observations. The southern mid-latitudes (SML) were chosen as a focus region due to model errors across multiple GCMs within the recent phase 5 of the Coupled Model Intercomparison Project (CMIP5). Over the SML, the GISS GCM undersimulates total cloud fraction by over 20% but oversimulates total water path by 2 g m-2. Simulated vertical cloud distributions over the SML, when compared to both CERES-MODIS and CloudSat/CALIPSO observations, show a drastic undersimulation of low-level clouds by the GISS GCM but higher fractions of thicker clouds. To assess the impact of GISS-simulated clouds on the TOA radiation budgets, the modeled TOA radiation budgets are compared to CERES EBAF observations.
Because modeled low-level cloud fraction is much lower than observed over the SML, modeled reflected shortwave (SW) flux at the TOA is 13 W m-2 lower and outgoing longwave radiation (OLR) is 3 W m-2 higher than observations. Finally, cloud radiative effects (CRE) are calculated and compared with observations to fully assess the impact of clouds on the TOA radiation budgets. The difference in clear-sky reflected SW flux between model and observation is only +4 W m-2, while the SW CRE difference is up to 17 W m-2, indicating that most of the bias in SW CRE results from the all-sky bias between the model and observation. A sizeable negative bias of 10 W m-2 in simulated clear-sky OLR has been found, due to a dry bias in calculating observed clear-sky OLR and a lack of upper-level water vapor at the 100-mb level in the model. The dry bias impacts LW CRE, with the model undersimulating it by 13 W m-2. The NET CRE difference is only 5 W m-2 due to the cancellation of SW and LW CRE biases.
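The CRE bookkeeping implied by these numbers can be sketched with the usual TOA sign conventions (SW CRE from clear-sky minus all-sky reflected flux; LW CRE from clear-sky minus all-sky OLR). The bias values are taken from the abstract; the sign conventions are a standard choice, not quoted from the paper:

```python
# CRE sign conventions used here (one common choice):
#   SW CRE = reflected_clear - reflected_allsky   (clouds reflect more: negative)
#   LW CRE = OLR_clear - OLR_allsky               (clouds trap LW: positive)
# Model-minus-observation biases (W m-2) propagate linearly through these
# differences, so CRE biases follow directly from the flux biases.

def cre_sw(reflected_clear: float, reflected_allsky: float) -> float:
    return reflected_clear - reflected_allsky

def cre_lw(olr_clear: float, olr_allsky: float) -> float:
    return olr_clear - olr_allsky

bias_refl_allsky = -13.0   # modeled reflected SW too low over the SML
bias_refl_clear = +4.0     # clear-sky reflected SW bias
bias_olr_allsky = +3.0     # modeled OLR too high
bias_olr_clear = -10.0     # clear-sky OLR dry bias

bias_cre_sw = cre_sw(bias_refl_clear, bias_refl_allsky)   # +17 W m-2
bias_cre_lw = cre_lw(bias_olr_clear, bias_olr_allsky)     # -13 W m-2
```

The sum of the two CRE biases (+17 and -13 W m-2) is consistent, up to rounding of the quoted inputs, with the ~5 W m-2 net difference reported.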
NASA Technical Reports Server (NTRS)
Fast, Kelly E.; Kostiuk, T.; Annen, J.; Hewagama, T.; Delgado, J.; Livengood, T. A.; Lefevre, F.
2008-01-01
We present the comparison of measured infrared heterodyne line shapes of ozone on Mars with those produced by radiative transfer modeling of ozone profiles predicted by general circulation models (GCMs), and with contemporaneous column abundances measured by Mars Express SPICAM. Ozone is an important tracer of photochemistry in Mars' atmosphere, serving as an observable with which to test predictions of photochemistry-coupled GCMs. Infrared heterodyne spectroscopy at 9.5 microns with spectral resolving power >1,000,000 is the only technique that can directly measure fully resolved line shapes of Martian ozone features from the surface of the Earth. Measurements were made with Goddard Space Flight Center's Heterodyne Instrument for Planetary Wind And Composition (HIPWAC) at the NASA Infrared Telescope Facility (IRTF) on Mauna Kea, Hawaii on February 21-24, 2008 UT at Ls=35deg, on or near the MEX orbital path. The HIPWAC observations were used to test GCM predictions. For example, a GCM-generated ozone profile for 60degN 112degW was scaled so that a radiative transfer calculation of its absorption line shape matched an observed HIPWAC absorption feature at the same areographic position, local time, and season. The RMS deviation of the model from the data was slightly smaller for the GCM-generated profile than for a line shape produced by a constant-with-height profile, even though the total column abundances were the same, showing potential for testing and constraining GCM ozone profiles. The ozone column abundance resulting from matching the model to the HIPWAC line shape was 60% higher than that observed by SPICAM at the same areographic position one day earlier and 2.5 hours earlier in local time. This could be due to day-to-day, diurnal, or north polar region variability, or to measurement sensitivity to the ozone column and its distribution; these possibilities will be explored. This work was supported by NASA's Planetary Astronomy Program.
Land-Atmosphere Coupling in the Multi-Scale Modelling Framework
NASA Astrophysics Data System (ADS)
Kraus, P. M.; Denning, S.
2015-12-01
The Multi-Scale Modeling Framework (MMF), in which cloud-resolving models (CRMs) are embedded within general circulation model (GCM) gridcells to serve as the model's cloud parameterization, has offered a number of benefits to GCM simulations. The coupling of these cloud-resolving models directly to land surface model instances, rather than passing averaged atmospheric variables to a single instance of a land surface model, the logical next step in model development, has recently been accomplished. This new configuration offers conspicuous improvements to estimates of precipitation and canopy through-fall, but overall the model exhibits warm surface temperature biases and low productivity. This work presents modifications to a land-surface model that take advantage of the new multi-scale modeling framework and accommodate the change in spatial scale from a typical GCM range of ~200 km to the CRM grid scale of 4 km. A parameterization is introduced to apportion modeled surface radiation into direct-beam and diffuse components. The diffuse component is then distributed among the land-surface model instances within each GCM cell domain. This substantially reduces the number of excessively low light values provided to the land-surface model when cloudy conditions are modeled in the CRM, associated with its 1-D radiation scheme. The small spatial scale of the CRM, ~4 km, as compared with the typical ~200 km GCM scale, provides much more realistic estimates of precipitation intensity; this permits the elimination of a model parameterization of canopy through-fall. However, runoff at such scales can no longer be considered an immediate flow to the ocean.
Allowing sub-surface water flow between land-surface instances within the GCM domain affords better realism and also reduces temperature and productivity biases. The MMF affords a number of opportunities to land-surface modelers, providing both the advantages of direct simulation at the 4 km scale and a much reduced conceptual gap between model resolution and parameterized processes.
Aerosol Particle Shape and Radiative Coupling in a Three Dimensional Titan GCM
NASA Astrophysics Data System (ADS)
Larson, Erik J.; Toon, O. B.; Friedson, A. J.; West, R. A.
2010-10-01
Understanding the aerosols on Titan is imperative for understanding the atmosphere as a whole. The aerosols affect the albedo, optical depth, as well as heating and cooling rates, which in turn affect the circulation on Titan, leading to feedback with the aerosol distribution. Correctly representing the aerosols in atmospheric models is crucial to understanding this atmosphere. Friedson et al. (2009, A global climate model of Titan's atmosphere and surface. Planet. Space Sci. 57, 1931-1949) produced a three-dimensional model for Titan using the NCAR CAM3 model, to which we coupled the aerosol microphysics model CARMA. We have also made the aerosols produced by CARMA interactive with the radiation code in CAM. We compare simulations with radiatively interactive aerosols with those using a prescribed aerosol radiative effect. Preliminary results show that this model is capable of reproducing the seasonal changes in aerosols on Titan and many of the associated phenomena. For instance, the radiatively interactive aerosols are lofted by winds more in the summer hemisphere than the non-radiatively interactive aerosols, which is necessary to reproduce the observed seasonal cycle of the albedo. We compare simulations using spherical particles to simulations using fractal aggregate particles, which are expected from laboratory and observational data. Fractal particles have higher absorption in the UV, slower fall velocities, and faster coagulation rates than equivalent-mass spherical particles. We compare model simulations with observational data from the Cassini and Huygens missions.
Improving NASA's Multiscale Modeling Framework for Tropical Cyclone Climate Study
NASA Technical Reports Server (NTRS)
Shen, Bo-Wen; Nelson, Bron; Cheung, Samson; Tao, Wei-Kuo
2013-01-01
One of the current challenges in tropical cyclone (TC) research is how to improve our understanding of TC interannual variability and the impact of climate change on TCs. Recent advances in global modeling, visualization, and supercomputing technologies at NASA show potential for such studies. In this article, the authors discuss recent scalability improvement to the multiscale modeling framework (MMF) that makes it feasible to perform long-term TC-resolving simulations. The MMF consists of the finite-volume general circulation model (fvGCM), supplemented by a copy of the Goddard cumulus ensemble model (GCE) at each of the fvGCM grid points, giving 13,104 GCE copies. The original fvGCM implementation has a 1D data decomposition; the revised MMF implementation retains the 1D decomposition for most of the code, but uses a 2D decomposition for the massive copies of GCEs. Because the vast majority of computation time in the MMF is spent computing the GCEs, this approach can achieve excellent speedup without incurring the cost of modifying the entire code. Intelligent process mapping allows differing numbers of processes to be assigned to each domain for load balancing. The revised parallel implementation shows highly promising scalability, obtaining a nearly 80-fold speedup by increasing the number of cores from 30 to 3,335.
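The quoted scaling figures can be restated as a parallel efficiency, i.e., the measured speedup divided by the factor increase in core count; a minimal sketch:

```python
# Back-of-envelope check on the scaling quoted above: an ~80x speedup
# when growing from 30 to 3,335 cores. Parallel efficiency is speedup
# divided by the ratio of core counts.

def parallel_efficiency(speedup: float, cores_small: int, cores_large: int) -> float:
    """Fraction of ideal linear scaling actually achieved."""
    return speedup / (cores_large / cores_small)

eff = parallel_efficiency(80.0, 30, 3335)
# 80 / (3335/30) is roughly 0.72, i.e. about 72% of ideal scaling.
assert 0.70 < eff < 0.75
```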
Surface Variability of Short-wavelength Radiation and Temperature on Exoplanets around M Dwarfs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xin; Tian, Feng; Wang, Yuwei
2017-03-10
It is a common practice to use 3D General Circulation Models (GCMs) with spatial resolution of a few hundred kilometers to simulate the climate of Earth-like exoplanets. The enhanced albedo effect of clouds is especially important for exoplanets in the habitable zones around M dwarfs, which likely have fixed substellar regions and substantial cloud coverage. Here, we carry out mesoscale model simulations with 3 km spatial resolution, driven by the initial and boundary conditions of a 3D GCM, and find that the GCM could significantly underestimate the spatial variability of both the incident short-wavelength radiation and the temperature at the planet surface. Our findings suggest that mesoscale models with cloud-resolving capability be considered for future studies of exoplanet climate.
Cho, Kang Hee
2015-01-01
Objective To investigate intrinsic viscoelastic changes using shear wave velocities (SWVs) of spastic lower extremity muscles in patients with early spinal cord injury (SCI) via acoustic radiation force impulse (ARFI) imaging and to evaluate the correlation between SWV values and spasticity. Methods Eighteen patients with SCI within 3 months and 10 healthy adults participated. We applied the ARFI technique to measure the SWV of the gastrocnemius muscle (GCM) and the long head of the biceps femoris muscle. Spasticity of the ankle and knee joint was assessed by the original Ashworth Scale. Results Ten patients with SCI had spasticity. Patients with spasticity had significantly faster SWV for the GCM and biceps femoris muscle than those without spasticity (Mann-Whitney U test, p=0.007 and p=0.008) and normal controls (p=0.011 and p=0.037, respectively). The SWV values of the GCM correlated with ankle spasticity (Spearman rank test, p=0.026). There was a significant correlation between the SWV values for the long head of the biceps femoris muscle and knee spasticity (Spearman rank test, p=0.022). Conclusion ARFI demonstrated a difference in muscle stiffness in the GCM between patients with spastic SCI and those without spasticity. This finding suggests that the stiffness of muscles increases in the spastic lower extremities of early SCI patients. ARFI imaging is a valuable tool for noninvasive assessment of the stiffness of spastic muscle and has the potential to identify pathomechanical changes of the tissue associated with SCI. PMID:26161345
NASA Astrophysics Data System (ADS)
Goren, Tom; Muelmenstaedt, Johannes; Rosenfeld, Daniel; Quaas, Johannes
2017-04-01
Marine stratocumulus clouds (MSC) occur in two main cloud regimes, open and closed cells, that differ significantly in their cloud cover. Closed cells gradually get cleansed of high CCN concentrations in a process that involves the initiation of drizzle, which breaks the full cloud cover into open cells. The drizzle creates downdrafts that organize the convection along converging gust fronts, which in turn produce stronger updrafts that can sustain more cloud water, compensating for the depletion of cloud water by rain. In addition, stronger updrafts allow the clouds to grow relatively deep before rain starts to deplete their cloud water. Therefore, lower droplet concentrations and stronger rain would lead to lower cloud fraction, but not necessarily to lower liquid water path (LWP). The fundamental relationships between these key variables derived from global climate model (GCM) simulations are analyzed with respect to observations in order to determine whether the GCM parameterizations can represent well the governing physical mechanisms of MSC regime transitions. The results are used to evaluate the feasibility of GCMs for estimating aerosol cloud-mediated radiative forcing upon MSC regime transitions, which are responsible for the largest aerosol cloud-mediated radiative forcing.
Influence of Ice Particle Surface Roughening on the Global Cloud Radiative Effect
NASA Technical Reports Server (NTRS)
Yi, Bingqi; Yang, Ping; Baum, Bryan A.; LEcuyer, Tristan; Oreopoulos, Lazaros; Mlawer, Eli J.; Heymsfield, Andrew J.; Liou, Kuo-Nan
2013-01-01
Ice clouds influence the climate system by changing the radiation budget and large-scale circulation. Therefore, climate models need to have an accurate representation of ice clouds and their radiative effects. In this paper, new broadband parameterizations for ice cloud bulk scattering properties are developed for severely roughened ice particles. The parameterizations are based on a general habit mixture that includes nine habits (droxtals, hollow/solid columns, plates, solid/hollow bullet rosettes, aggregate of solid columns, and small/large aggregates of plates). The scattering properties for these individual habits incorporate recent advances in light-scattering computations. The influence of ice particle surface roughness on the ice cloud radiative effect is determined through simulations with the Fu-Liou and the GCM version of the Rapid Radiative Transfer Model (RRTMG) codes and the National Center for Atmospheric Research Community Atmosphere Model (CAM, version 5.1). The differences in shortwave (SW) and longwave (LW) radiative effect at both the top of the atmosphere and the surface are determined for smooth and severely roughened ice particles. While the influence of particle roughening on the single-scattering properties is negligible in the LW, the results indicate that ice crystal roughness can change the SW forcing locally by more than 10 W m-2 over a range of effective diameters. The global-averaged SW cloud radiative effect due to ice particle surface roughness is estimated to be roughly 1-2 W m-2. The CAM results indicate that ice particle roughening can result in a large regional SW radiative effect and a small but nonnegligible increase in the global LW cloud radiative effect.
Evidence for Limited Indirect Aerosol Forcing in Stratocumulus
NASA Technical Reports Server (NTRS)
Ackerman, Andrew S.; Toon, O. B.; Stevens, D. E.
2003-01-01
Increases in cloud cover and condensed water contribute more than half of the indirect aerosol effect in an ensemble of general circulation model (GCM) simulations estimating the global radiative forcing of anthropogenic aerosols. We use detailed simulations of marine stratocumulus clouds and airborne observations of ship tracks to show that increases in cloud cover and condensed water in reality are far less than represented by the GCM ensemble. Our results offer an explanation for recent simplified inverse climate calculations indicating that indirect aerosol effects are greatly exaggerated in GCMs.
NASA Astrophysics Data System (ADS)
Vignon, Etienne; Hourdin, Frédéric; Genthon, Christophe; Madeleine, Jean-Baptiste; Cheruy, Frédérique; Gallée, Hubert; Bazile, Eric; Lefebvre, Marie-Pierre; Van de Wiel, Bas J. H.
2017-04-01
In a General Circulation Model (GCM), the turbulent mixing parametrization of the atmospheric boundary layer (ABL) over the Antarctic Plateau is critical since it affects the continental-scale temperature inversion, the katabatic winds and finally the Southern Hemisphere circulation. The aim of this study is to evaluate the representation of the Antarctic Plateau ABL in the Laboratoire de Météorologie Dynamique-Zoom (LMDZ) GCM, the atmospheric component of the IPSL Earth System Model, in preparation for the sixth Coupled Models Intercomparison Project. We carry out 1D simulations on the fourth Gewex Atmospheric Boundary Layers Study (GABLS4) case, and 3D simulations with the 'zooming capability' of the horizontal grid and with nudging. Simulations are evaluated and validated using in-situ measurements obtained at Dome C, East Antarctic Plateau, and satellite data. Sensitivity tests to surface parameters, vertical grid and turbulent mixing parametrizations led to significant improvements of the model and to a new configuration better adapted for Antarctic conditions. In particular, we point out the need to remove minimum turbulence thresholds to correctly reproduce very steep temperature and wind speed gradients in the stable ABL. We then assess the ability of the GCM to represent the two distinct stable ABL regimes and very strong near-surface temperature inversions, which are fascinating and critical features of the Dome C climate. This leads us to investigate the competition between radiative and turbulent coupling between the ABL and the snow surface in the model. Our results show that the new configuration of LMDZ reproduces reasonably well the Dome C climatology and is able to model strong temperature inversions and radiatively-dominated ABL. However, they also reveal a strong sensitivity of the modeling of the different regimes to the radiative scheme and vertical resolution.
The present work finally hints at future developments to better and more physically represent the polar ABL in a GCM.
Updates on Modeling the Water Cycle with the NASA Ames Mars Global Climate Model
NASA Technical Reports Server (NTRS)
Kahre, M. A.; Haberle, R. M.; Hollingsworth, J. L.; Montmessin, F.; Brecht, A. S.; Urata, R.; Klassen, D. R.; Wolff, M. J.
2017-01-01
Global Circulation Models (GCMs) have made steady progress in simulating the current Mars water cycle. It is now widely recognized that clouds are a critical component that can significantly affect the nature of the simulated water cycle. Two processes in particular are key to implementing clouds in a GCM: the microphysical processes of formation and dissipation, and their radiative effects on heating/cooling rates. Together, these processes alter the thermal structure, change the dynamics, and regulate inter-hemispheric transport. We have made considerable progress representing these processes in the NASA Ames GCM, particularly in the presence of radiatively active water ice clouds. We present the current state of our group's water cycle modeling efforts, show results from selected simulations, highlight some of the issues, and discuss avenues for further investigation.
Constraints on Cumulus Parameterization from Simulations of Observed MJO Events
NASA Technical Reports Server (NTRS)
Del Genio, Anthony; Wu, Jingbo; Wolf, Audrey B.; Chen, Yonghua; Yao, Mao-Sung; Kim, Daehyun
2015-01-01
Two recent activities offer an opportunity to test general circulation model (GCM) convection and its interaction with large-scale dynamics for observed Madden-Julian oscillation (MJO) events. This study evaluates the sensitivity of the Goddard Institute for Space Studies (GISS) GCM to entrainment, rain evaporation, downdrafts, and cold pools. Single Column Model versions that restrict weakly entraining convection produce the most realistic dependence of convection depth on column water vapor (CWV) during the Atmospheric Radiation Measurement MJO Investigation Experiment at Gan Island. Differences among models are primarily at intermediate CWV where the transition from shallow to deeper convection occurs. GCM 20-day hindcasts during the Year of Tropical Convection that best capture the shallow–deep transition also produce strong MJOs, with significant predictability compared to Tropical Rainfall Measuring Mission data. The dry anomaly east of the disturbance on hindcast day 1 is a good predictor of MJO onset and evolution. Initial CWV there is near the shallow–deep transition point, implicating premature onset of deep convection as a predictor of a poor MJO simulation. Convection weakly moistens the dry region in good MJO simulations in the first week; weakening of large-scale subsidence over this time may also affect MJO onset. Longwave radiation anomalies are weakest in the worst model version, consistent with previous analyses of cloud/moisture greenhouse enhancement as the primary MJO energy source. The authors’ results suggest that both cloud-/moisture-radiative interactions and convection–moisture sensitivity are required to produce a successful MJO simulation.
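The column water vapor diagnostic central to this analysis is a simple vertical integral, CWV = (1/g) ∫ q dp, numerically equal to millimeters of precipitable water. The sketch below illustrates the calculation; it is not the GISS model's code, and the humidity profile is an invented moist-tropical example for illustration only.

```python
# Column water vapor (CWV) as the mass-weighted vertical integral of
# specific humidity: CWV = (1/g) * integral of q dp, in kg m^-2, which
# is numerically equal to mm of precipitable water.
G = 9.81  # gravitational acceleration, m s^-2

def column_water_vapor(q, p):
    """q: specific humidity (kg/kg) on pressure levels p (Pa, surface first).
    Trapezoidal integration over layers; returns CWV in mm."""
    cwv = 0.0
    for i in range(len(p) - 1):
        dp = p[i] - p[i + 1]             # layer thickness in Pa (positive)
        q_mid = 0.5 * (q[i] + q[i + 1])  # mean humidity in the layer
        cwv += q_mid * dp / G            # kg m^-2 (== mm of liquid water)
    return cwv

# Illustrative moist-tropical profile, surface to 200 hPa (invented values)
p_levels = [100000.0, 85000.0, 70000.0, 50000.0, 30000.0, 20000.0]
q_profile = [0.018, 0.012, 0.007, 0.003, 0.0008, 0.0002]
print(f"CWV = {column_water_vapor(q_profile, p_levels):.1f} mm")  # → CWV = 52.0 mm
```

Values of this order (tens of mm) are typical of the tropical range over which the shallow-to-deep convective transition is diagnosed.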
NASA Technical Reports Server (NTRS)
Boville, B. A.; Kiehl, J. T.; Briegleb, B. P.
1988-01-01
The possible effect of the Antarctic ozone hole on the evolution of the polar vortex during late winter and spring is examined using a general circulation model (GCM). The GCM is a version of the NCAR Community Climate Model whose domain extends from the surface to the mesosphere and is similar to that described in Boville and Randel (1986). Ozone is not a predicted variable in the model. A zonally averaged ozone distribution is specified as a function of latitude, pressure and month for the radiation parameterization. Rather than explicitly addressing the reasons for the formation of the ozone hole, the researchers postulate its existence and ask what effect it has on the subsequent evolution of the vortex. The evolution of the model when an ozone hole is imposed is then discussed.
Evaluation of regional climate simulations for air quality modelling purposes
NASA Astrophysics Data System (ADS)
Menut, Laurent; Tripathi, Om P.; Colette, Augustin; Vautard, Robert; Flaounas, Emmanouil; Bessagnet, Bertrand
2013-05-01
In order to evaluate the future potential benefits of emission regulation on regional air quality, while taking into account the effects of climate change, off-line air quality projection simulations are driven using weather forcing taken from regional climate models. These regional models are themselves driven by simulations carried out using global climate models (GCMs) and economic scenarios. Uncertainties and biases in climate models introduce an additional "climate modeling" source of uncertainty that is added to all the other types of uncertainty in air quality modeling for policy evaluation. In this article we evaluate the changes in air quality-related weather variables induced by replacing reanalysis-forced with GCM-forced regional climate simulations. As an example we use GCM simulations carried out in the framework of the ERA-Interim programme and of the CMIP5 project using the Institut Pierre-Simon Laplace climate model (IPSLcm), driving regional simulations performed in the framework of the EURO-CORDEX programme. In summer, we found compensating deficiencies acting on photochemistry: an overestimation by GCM-driven weather due to a positive bias in short-wave radiation, a negative bias in wind speed, too many stagnant episodes, and a negative temperature bias. In winter, air quality is mostly driven by dispersion, and we could not identify significant differences in either wind or planetary boundary layer height statistics between GCM-driven and reanalysis-driven regional simulations. However, precipitation appears largely overestimated in GCM-driven simulations, which could significantly affect the simulation of aerosol concentrations. The identification of these biases will help in interpreting the results of future air quality simulations using these data. Despite these biases, we conclude that the identified differences should not lead to major difficulties in using GCM-driven regional climate simulations for air quality projections.
R-spondin3 is required for mouse placental development.
Aoki, Motoko; Mieda, Michihiro; Ikeda, Toshio; Hamada, Yoshio; Nakamura, Harukazu; Okamoto, Hitoshi
2007-01-01
Mouse R-spondin3 (Rspo3) is a member of the R-spondin protein family, which is characterized by furin-like cysteine-rich domains and a thrombospondin type 1 repeat. Rspo3 has been proposed to function as a secretory molecule that promotes the Wnt/beta-catenin signaling pathway. We generated mice bearing a mutant Rspo3 allele in which a lacZ-coding region replaced the coding region of the first exon. The homozygous mutant mice died at about embryonic day 10, due to impaired formation of the labyrinthine layer of the placenta. Rspo3 was expressed in the allantoic component of the labyrinth. In the homozygous mutant placentas, fetal blood vessels did not penetrate into the chorion, and expression of Gcm1, encoding the transcription factor glial cells missing-1 (Gcm1), was dramatically reduced in the chorionic trophoblast cells. These findings suggest a critical role for Rspo3 in the interaction between chorion and allantois in labyrinthine development.
[Effect of outer space factors on lettuce seeds (Lactuca sativa) flown on "Kosmos" biosatellites].
Nevzgodina, L V; Maksimova, E N; Akatov, Iu A; Kaminskaia, E V; Marennyĭ, A M
1990-01-01
The effect of cosmic radiation on air-dry lettuce (Lactuca sativa) seeds was investigated. An attempt was made to discriminate between the effects of cosmic ionizing radiation per se and its combination with solar light. It was found that the number of aberrant cells in the seeds exposed to solar light was smaller than that in cells shielded with 0.0008 to 0.0035 g/cm2 foil, which could be attributed to photoreactivation.
3D General Circulation Model of the Middle Atmosphere of Jupiter
NASA Astrophysics Data System (ADS)
Zube, Nicholas Gerard; Zhang, Xi; Li, Cheng; Le, Tianhao
2017-10-01
The characteristics of Jupiter’s large-scale stratospheric circulation remain largely unknown. Detailed distributions of temperature and photochemical species have been provided by recent observations [1], but have not yet been accurately reproduced by middle-atmosphere general circulation models (GCMs). Jupiter’s stratosphere and upper troposphere are influenced by radiative forcing from solar insolation and infrared cooling from hydrogen and hydrocarbons, as well as by waves propagating from the underlying troposphere [2]. The relative significance of radiative and mechanical forcing on stratospheric circulation is still being debated [3]. Here we present a 3D GCM of Jupiter’s atmosphere with a correlated-k radiative transfer scheme. The simulation results are compared with observations. We analyze the impact of model parameters on the stratospheric temperature distribution and dynamical features. Finally, we discuss future tracer transport and gravity wave parameterization schemes that may be able to accurately simulate the middle-atmosphere dynamics of Jupiter and other giant planets. [1] Kunde et al. 2004, Science 305, 1582. [2] Zhang et al. 2013a, EGU General Assembly, EGU2013-5797-2. [3] Conrath 1990, Icarus, 83, 255-281.
NASA Astrophysics Data System (ADS)
Moise Famien, Adjoua; Janicot, Serge; Delfin Ochou, Abe; Vrac, Mathieu; Defrance, Dimitri; Sultan, Benjamin; Noël, Thomas
2018-03-01
The objective of this paper is to present a new dataset of bias-corrected CMIP5 global climate model (GCM) daily data over Africa. This dataset was obtained using the cumulative distribution function transform (CDF-t) method, a method that has been applied to several regions and contexts but never to Africa. Here CDF-t has been applied over the period 1950-2099, combining Historical runs and climate change scenarios, for six variables that are critical for agricultural purposes: precipitation, mean near-surface air temperature, near-surface maximum air temperature, near-surface minimum air temperature, surface downwelling shortwave radiation, and wind speed. WFDEI has been used as the reference dataset to correct the GCMs. Evaluation of the results over West Africa has been carried out on a list of priority user-based metrics that were discussed and selected with stakeholders, including simulated yield using a crop model of maize growth. These bias-corrected GCM data have been compared with another available dataset of bias-corrected GCMs that uses the WATCH Forcing Data as the reference dataset. The impact of the WFD, WFDEI, and EWEMBI reference datasets has also been examined in detail. It is shown that CDF-t is very effective at removing the biases and reducing the high inter-GCM scatter. Differences with other bias-corrected GCM data are mainly due to the differences among the reference datasets. This is particularly true for surface downwelling shortwave radiation, which has a significant impact on simulated maize yields. Projections of future yields over West Africa are quite different depending on the bias-correction method used. However, all these projections show a similar relative decreasing trend over the 21st century.
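The bias-correction idea underlying CDF-t can be illustrated with its simpler relative, empirical quantile mapping: a model value is passed through the model's CDF and then the inverse of the reference CDF. The sketch below is a generic illustration with made-up temperature samples, not the CDF-t implementation used in the paper (CDF-t additionally transforms the calibration-period CDFs toward the projection period).

```python
import bisect

def quantile_map(x, model_hist, ref_hist):
    """Map x from the model's climatology onto the reference distribution:
    x -> F_ref^-1(F_model(x)), using sorted samples as empirical CDFs."""
    model_sorted = sorted(model_hist)
    ref_sorted = sorted(ref_hist)
    # Empirical non-exceedance probability of x under the model climatology
    prob = bisect.bisect_left(model_sorted, x) / len(model_sorted)
    # Invert the reference CDF at that probability
    idx = min(int(prob * len(ref_sorted)), len(ref_sorted) - 1)
    return ref_sorted[idx]

# Illustrative: a model running 2 K warmer than the reference climatology
model_temps = [288.0 + 0.1 * i for i in range(100)]  # "model" samples, K
ref_temps = [286.0 + 0.1 * i for i in range(100)]    # "reference" samples, K
corrected = quantile_map(293.0, model_temps, ref_temps)
print(round(corrected, 1))  # the ~2 K warm bias is removed
```

Because the mapping acts quantile by quantile, it corrects not only the mean bias but the whole distribution, which matters for threshold-sensitive applications such as crop-model forcing.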
CISM: Modeling the Sun-Earth Connection
NASA Astrophysics Data System (ADS)
Hughes, W. J.; Team, T. C.
2003-12-01
The Center for Integrated Space Weather Modeling (CISM), an NSF Science and Technology Center that is a consortium of ten institutions headed by Boston University, has as its primary goal the development of a series of ever-improving versions of a comprehensive physics-based simulation model that describes the space environment from the Sun to the Earth. CISM will do this by coupling existing models of components of the system. In this paper we review our progress to date and summarize our plans. We discuss results of the initial coupling of MHD models of the corona and solar wind, and of a global magnetospheric MHD model with a global ionosphere/thermosphere model, a radiation belt model, and a ring current particle model. Coupling the SAIC coronal MHD model and the U. Colorado/SEC solar wind MHD code allows us to track CMEs from the base of the corona to 1 AU. The results show how shocks form and develop in the heliosphere, and how a CME flattens into a pancake shape by the time it reaches Earth. Coupling the Lyon/Fedder/Mobarry global MHD model with the Rice Convection Model and the NCAR TIE-GCM/TING model allows full dynamic coupling between the magnetosphere, the ionosphere/thermosphere, and the hot plasma in the inner magnetosphere. Including the Dartmouth radiation belt model shows how the radiation belts evolve in a realistic magnetosphere.
NASA Astrophysics Data System (ADS)
Storelvmo, Trude; Sagoo, Navjit; Tan, Ivy
2016-04-01
Despite the growing effort to improve the cloud microphysical schemes in GCMs, little of this effort has focused on the ability of GCMs to accurately simulate phase partitioning in mixed-phase clouds. Getting the relative proportion of liquid droplets and ice crystals in clouds right in GCMs is critical for the representation of cloud radiative forcings and cloud-climate feedbacks. Here, we first present satellite observations of cloud phase obtained by NASA's CALIOP instrument, and report on robust statistical relationships between cloud phase and several aerosol species that have been demonstrated to act as ice nuclei (IN) in laboratory studies. We then report on results from model intercomparison projects that reveal that GCMs generally underestimate the amount of supercooled liquid in clouds. For a selected GCM (NCAR's CAM5), we thereafter show that the underestimate can be attributed to two main factors: i) the presence of IN in the mixed-phase temperature range, and ii) the Wegener-Bergeron-Findeisen process, which converts liquid to ice once ice crystals have formed. Finally, we show that adjusting these two processes so that the GCM's cloud phase agrees with the observations has a substantial impact on the simulated radiative forcing due to IN perturbations, as well as on the cloud-climate feedbacks and ultimately the climate sensitivity simulated by the GCM.
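The phase-partitioning quantity at issue is commonly summarized as the supercooled liquid fraction (SLF), the liquid share of total condensed water, typically binned by temperature (isotherm). A minimal sketch with invented water-content values, not CALIOP retrievals:

```python
# Supercooled liquid fraction (SLF): liquid / (liquid + ice) condensed water.
# SLF = 1 means all liquid, SLF = 0 means all ice. Sample values below are
# invented for illustration and only mimic the observed decline with cooling.
def supercooled_liquid_fraction(lwc, iwc):
    """lwc, iwc: liquid and ice water content (same units, e.g. g m^-3)."""
    total = lwc + iwc
    return lwc / total if total > 0.0 else 0.0

# (temperature [deg C], liquid water content, ice water content) in g m^-3
samples = [(-5, 0.20, 0.02), (-15, 0.12, 0.08), (-25, 0.05, 0.15), (-35, 0.01, 0.19)]
for t, lwc, iwc in samples:
    print(f"{t:4d} C  SLF = {supercooled_liquid_fraction(lwc, iwc):.2f}")
```

Comparing model and satellite SLF on such isotherms is the kind of diagnostic that exposes the underestimate of supercooled liquid described above.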
NASA Technical Reports Server (NTRS)
Randall, David A.; Fowler, Laura D.; Lin, Xin
1998-01-01
In order to improve our understanding of the interactions between clouds, radiation, and the hydrological cycle simulated in the Colorado State University General Circulation Model (CSU GCM), we focused our research on the analysis of the diurnal cycle of precipitation, top-of-the-atmosphere and surface radiation budgets, and cloudiness, using 10-year-long Atmospheric Model Intercomparison Project (AMIP) simulations. Comparisons of the simulated diurnal cycle were made against the diurnal cycle of Earth Radiation Budget Experiment (ERBE) radiation budget and International Satellite Cloud Climatology Project (ISCCP) cloud products. This report summarizes our major findings over the Amazon Basin.
Measurements on radiation shielding efficacy of Polyethylene and Kevlar in the ISS (Columbus)
Di Fino, L.; Larosa, M.; Zaconte, V.; Casolino, M.; Picozza, P.; Narici, L.
2014-01-01
The study and optimization of material effectiveness for radiation shielding is a mandatory step toward human space exploration. Passive radiation shielding is one of the most important elements in the entire radiation countermeasures package. Crewmembers will never experience direct exposure to space radiation; they will be either inside some shelter (the spacecraft, a ‘base’) or in an EVA (Extra Vehicular Activity) suit. Understanding the radiation shielding features of materials is therefore an important step toward an optimization of shelter and suit construction in the quest for an integrated solution for radiation countermeasures. Materials are usually tested for their radiation shielding effectiveness first with Monte Carlo simulations, then on the ground, using particle accelerators and a number of specific ions known to be abundant in space, and finally in space. Highly hydrogenated materials perform best as radiation shields. Polyethylene is currently seen as the material that combines a high level of hydrogenation, ease of handling and machining, and an affordable cost, and it is often referred to as a sort of ‘standard’ against which other materials' effectiveness is compared. Kevlar has recently shown very interesting radiation shielding properties; it is also known to have important characteristics for debris shielding, and can be used, for example, in space suits. We have measured on the ISS the effectiveness of polyethylene and Kevlar using three detectors of the ALTEA system [1-3], from 8 June 2012 to 13 November 2012, in Express Rack 3 in Columbus. These active detectors are able to provide the radiation quality parameters in any orbital region; being identical, they are also suitable for use in parallel (one for the unshielded baseline, two measuring radiation behind two different amounts of the same material: 5 and 10 g/cm2). A strong similarity in the shielding behavior of polyethylene and Kevlar is documented.
We measured shielding providing as much as ∼40% reduction for high-Z ions. In Fig. 1, the integrated behavior (3 ≤ LET ≤ 350 keV/µm) is shown (ratios with the baseline measurements with no shield), both for polyethylene and Kevlar, in flux, dose and dose equivalent. The measured reductions in dose behind the 10 g/cm2 shields at high LET (>50 keV/µm, not shown in the figure) are in agreement with what was found in accelerator measurements (Fe, 1 GeV) [4]. The thinner shielding (5 g/cm2) in our measurements performs ∼2% better (per unit areal density). Fig. 1. Integrated behavior (3 ≤ LET ≤ 350 keV/μm) of flux, dose and dose equivalent. The ratios with the baseline measurements with no shield are shown, both for Kevlar and polyethylene, as measured with the two different material thicknesses.
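The comparison described above reduces to simple ratios against the unshielded baseline, optionally normalized by areal density to compare shields of different thickness. A small sketch of that arithmetic, with illustrative numbers rather than the published ALTEA measurements:

```python
# Shielding effectiveness as the fractional reduction relative to the
# unshielded baseline: a shielded/baseline ratio of 0.6 is a 40% reduction.
# Dividing by areal density (g/cm^2) gives a per-thickness figure of merit.
# All dose values below are illustrative, not the published ALTEA results.
def reduction(shielded, baseline):
    """Fractional dose reduction relative to the unshielded baseline."""
    return 1.0 - shielded / baseline

baseline_dose = 100.0   # arbitrary units, detector with no shield
dose_5gcm2 = 82.0       # behind 5 g/cm^2 of material (illustrative)
dose_10gcm2 = 68.0      # behind 10 g/cm^2 of material (illustrative)

for thickness, dose in [(5.0, dose_5gcm2), (10.0, dose_10gcm2)]:
    r = reduction(dose, baseline_dose)
    print(f"{thickness:4.0f} g/cm^2: {100 * r:.0f}% reduction, "
          f"{100 * r / thickness:.1f}% per g/cm^2")
```

In this invented example the thinner shield achieves less total reduction but more reduction per unit areal density, the same qualitative behavior the measurements report.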
Analysis of TIMED/GUVI Dayglow Ultraviolet Oxygen Images
NASA Astrophysics Data System (ADS)
Christensen, A. B.; Crowley, G.; Meier, R.
2016-12-01
Analysis of the atomic oxygen resonance transition at 130.4 nm and the intercombination transition at 135.6 nm measured by the TIMED/GUVI mission demonstrates the state of knowledge of these important dayglow emission features and the degree to which current models can simulate their global properties. The complete modeling framework comprises several models, including the Thermosphere-Ionosphere-Mesosphere Electrodynamics General Circulation Model (TIME-GCM), Assimilative Mapping of Ionospheric Electrodynamics (AMIE), a partial-frequency-redistribution resonance scattering model (usually called REDISTER) needed to compute the optically thick radiative transfer of the 130.4 nm emission, the airglow emission models GLOW and AURIC, and other procedures. Observations for four different days, collected under different geophysical conditions of magnetic activity and solar cycle, show very good agreement with the calculated emission brightness and geographic distribution for both emissions. The differences between the airglow codes for the 135.6 nm emission will be discussed in connection with the photoelectron energy loss cross sections, as well as the excitation cross sections used in the various models.
Extending the NASA Ames Mars General Circulation Model to Explore Mars’ Middle Atmosphere
NASA Astrophysics Data System (ADS)
Brecht, Amanda; Hollingsworth, J.; Kahre, M.; Schaeffer, J.
2013-10-01
The NASA Ames Mars General Circulation Model (MGCM) upper boundary has been extended to ~120 km altitude (p ~ 10^-5 mbar). The extension of the MGCM upper boundary makes it possible to study the connection between the lower and upper atmosphere of Mars through the middle atmosphere (70-120 km). Moreover, it provides the opportunity to support future missions (e.g., the 2013 MAVEN mission). A major factor in this extension is the incorporation of Non-Local Thermodynamic Equilibrium (NLTE) heating (visible) and cooling (infrared). This modification to the radiative transfer forcing (i.e., the RT code) has been extensively tested in a 1D vertical column and has now been ported to the full 3D Mars GCM. Initial results clearly show the effects of NLTE in the upper middle atmosphere. Diagnostics of seasonal mean fields and large-scale wave activity will be shown, with insight into circulation patterns in the middle atmosphere. Furthermore, sensitivity tests on the resolution of the pressure and temperature grids on which the k-coefficients are calculated have been performed in the 1D RT code. Our progress on this research will be presented. Brecht is supported by NASA’s Postdoctoral Program at the Ames Research Center, administered by Oak Ridge Associated Universities through a contract with NASA.
Modelling of Titan's middle atmosphere with the IPSL climate model
NASA Astrophysics Data System (ADS)
Vatant d'Ollone, Jan; Lebonnois, Sébastien; Guerlet, Sandrine
2017-04-01
Titan's 3-dimensional Global Climate Model developed at the Institut Pierre-Simon Laplace has already demonstrated its ability to reproduce and interpret many features of the Saturnian moon's climate (e.g. Lebonnois et al., 2012). However, it suffered from limitations at the top of the model, with temperatures far warmer than the observations and no stratopause simulated. To interpret Cassini's overall observations of seasonal effects in the middle atmosphere (e.g. Vinatier et al., 2015), a satisfactory modelling of the temperature profile in this region was first required. The latest developments in the GCM now enable a correct modelling of the temperature profile in the middle atmosphere. In particular, a new, more flexible, radiative transfer scheme based on the correlated-k method has been set up, using up-to-date spectroscopic data. Special emphasis is put on the too-warm upper stratospheric temperatures in the former model, which were due to the absence of the infrared ν4 methane band (7.7 μm) in the radiative transfer. While it is usually neglected in tropospheric radiative models, this band has a strong cooling effect in Titan's stratospheric conditions and cannot be neglected. In this new version of the GCM, the microphysical model is temporarily switched off and we use a mean profile for haze opacity (Lavvas et al., 2010). The circulation in the middle atmosphere is significantly improved by this new radiative transfer. The new 3-D simulations also show an interesting feature in the modeled vertical profile of the zonal wind, as the minimum in the low stratosphere is now closer to the observations. Work in progress, such as the vertical extension of the model and the computation of the radiative effects of the seasonal variations of trace components, will also be presented. - Lavvas P. et al., 2010. Titan's vertical aerosol structure at the Huygens landing site: Constraints on particle size, density, charge, and refractive index. Icarus 210, 832-842. - Lebonnois S. et al., 2012.
Titan Global Climate Model: new 3-dimensional version of the IPSL Titan GCM. Icarus 218, 707-722. - Vinatier S. et al., 2015. Seasonal variations in Titan's middle atmosphere during the northern spring derived from Cassini/CIRS observations. Icarus 250, 95-115.
Observed and Simulated Radiative and Microphysical Properties of Tropical Convective Storms
NASA Technical Reports Server (NTRS)
DelGenio, Anthony D.; Hansen, James E. (Technical Monitor)
2001-01-01
Increases in the ice content, albedo and cloud cover of tropical convective storms in a warmer climate produce a large negative contribution to cloud feedback in the GISS GCM. Unfortunately, the physics of convective upward water transport, detrainment, and ice sedimentation, and the relationship of microphysical to radiative properties, are all quite uncertain. We apply a clustering algorithm to TRMM satellite microwave rainfall retrievals to identify contiguous deep precipitating storms throughout the tropics. Each storm is characterized according to its size, albedo, OLR, rain rate, microphysical structure, and presence/absence of lightning. A similar analysis is applied to ISCCP data during the TOGA/COARE experiment to identify optically thick deep cloud systems and relate them to large-scale environmental conditions just before storm onset. We examine the statistics of these storms to understand the relative climatic roles of small and large storms and the factors that regulate convective storm size and albedo. The results are compared to GISS GCM simulated statistics of tropical convective storms to identify areas of agreement and disagreement.
NASA Technical Reports Server (NTRS)
Sud, Y.; Molod, A.
1988-01-01
The Goddard Laboratory for Atmospheres GCM is used to study the sensitivity of the simulated July circulation to modifications in the parameterization of dry and moist convection, evaporation from falling raindrops, and cloud-radiation interaction. It is shown that the Arakawa-Schubert (1974) cumulus parameterization and a more realistic dry convective mixing calculation yielded a better intertropical convergence zone over North Africa than the previous convection scheme. It is found that the physical mechanism for the improvement was the upward mixing of PBL moisture by vigorous dry convective mixing. A modified rain-evaporation parameterization which accounts for raindrop size distribution, the atmospheric relative humidity, and a typical spatial rainfall intensity distribution for convective rain was developed and implemented. This scheme led to major improvements in the monthly mean vertical profiles of relative humidity and temperature, convective and large-scale cloudiness, rainfall distributions, and mean relative humidity in the PBL.
NASA Astrophysics Data System (ADS)
Bertrand, Tanguy; Forget, Francois
2016-04-01
To interpret New Horizons observations and simulate the Pluto climate system, we have developed a Global Climate Model (GCM) of Pluto's atmosphere. In addition to a 3D "dynamical core" which solves the equations of meteorology, the model takes into account N2 condensation and sublimation and its thermal and dynamical effects, vertical turbulent mixing, radiative transfer through methane and carbon monoxide, molecular thermal conduction, and a detailed surface thermal model with different thermal inertias for the various timescales (diurnal, seasonal). The GCM also includes a detailed model of the CH4 and CO cycles, taking into account their transport by the atmospheric circulation and turbulence, as well as their condensation and sublimation on the surface and in the atmosphere, possibly forming methane ice clouds. The GCM consistently predicts the 3D methane abundance in the atmosphere, which is used as an input for our radiative transfer calculation. In a second phase, we also developed a volatile transport model, derived from the GCM, which can be run over thousands of years in order to reach consistent initial states for the GCM runs and better explore the seasonal processes on Pluto. Results obtained with the volatile transport model show that the distribution of N2, CH4 and CO ices primarily depends on the seasonal thermal inertia used for the different ices, and is affected by the assumed topography as well. As observed, it is possible to form a large and permanent nitrogen glacier with CO and CH4 ice deposits in an equatorial basin corresponding to Sputnik Planum, while having a surface pressure evolution consistent with stellar occultations and New Horizons data. In addition, most of the methane ice is sequestered with N2 ice in the basin, but seasonal polar caps of CH4 frost also form, explaining the bright polar caps observed with Hubble in the 1980s, in line with New Horizons observations.
Using such a balanced combination of surface and subsurface conditions as the initial state, we run the GCM from 1975 to 2015, so that the model becomes insensitive to the assumed atmospheric initial state (which is not constrained by the volatile transport model). The simulated thermal structure and waves can be compared to the New Horizons occultation measurements. As observed, the horizontal variability is very limited, for fundamental reasons. In addition, we have developed a 3D model of the formation of organic hazes within the GCM. It includes the different steps of aerosol formation as understood on Titan: photolysis of CH4 in the upper atmosphere by Lyman-alpha radiation, production of various gaseous precursor species, conversion into solid particles through chemistry and aggregation processes, and gravitational sedimentation. Significant amounts of haze particles are found to be present at all latitudes up to 100 km. However, when N2 ice is condensing in the polar night, the majority of the haze particles tend to accumulate there because of the transport of the haze precursors and aerosols by the condensation flow.
NASA Technical Reports Server (NTRS)
Olejarski, Michael; Appleton, Amy; Deltorchio, Stephen
2009-01-01
The Group Capability Model (GCM) is a software tool that allows an organization, from first-line management to senior executive, to monitor and track the health (capability) of various groups in performing their contractual obligations. GCM calculates a Group Capability Index (GCI) by comparing actual head counts, certifications, and/or skills within a group. The model can also be used to simulate the effects of employee usage, training, and attrition on the GCI. A universal tool and common method was required due to the high risk of losing skills necessary to complete the Space Shuttle Program and meet the needs of the Constellation Program. During this transition from one space vehicle to another, the uncertainty among the critical skilled workforce is high and attrition has the potential to be unmanageable. GCM allows managers to establish requirements for their group in the form of head counts, certification requirements, or skills requirements. GCM then calculates a Group Capability Index (GCI), where a score of 1 indicates that the group is at the appropriate level; anything less than 1 indicates a potential for improvement. This shows the health of a group, both currently and over time. GCM accepts as input head count, certification needs, critical needs, competency needs, and competency critical needs. In addition, team members are categorized by years of experience, percentage of contribution, ex-members and their skills, availability, function, and in-work requirements. Outputs are several reports, including actual vs. required head count, actual vs. required certificates, GCI change over time (by month), and more. The program stores historical data for summary and historical reporting, which is done via an Excel spreadsheet that is color-coded to show health statistics at a glance. GCM has provided the Shuttle Ground Processing team with a quantifiable, repeatable approach to assessing and managing the skills in their organization.
They now have a common frame of reference across NASA/contractor lines to communicate and mitigate any critical skills concerns.
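As described above, the GCI compares actual against required head counts, certifications, and skills, with a score of 1 meaning the group is at the appropriate level. The abstract does not specify how the categories are aggregated, so the minimum-ratio rule in the sketch below is an assumption for illustration only:

```python
# Sketch of a Group Capability Index (GCI) calculation. The abstract states
# only that actual values are compared against requirements and that 1.0
# means "at the appropriate level"; taking the minimum of the per-category
# ratios (capped at 1.0) is an assumed aggregation, chosen so that any one
# shortfall pulls the index below 1.
def group_capability_index(actual, required):
    """actual, required: dicts keyed by category (head count, certs, skills)."""
    ratios = [min(actual[k] / required[k], 1.0)
              for k in required if required[k] > 0]
    return min(ratios) if ratios else 1.0

# Hypothetical group: slightly understaffed and short on critical skills
required = {"head_count": 40, "certifications": 25, "critical_skills": 10}
actual = {"head_count": 38, "certifications": 25, "critical_skills": 8}
print(f"GCI = {group_capability_index(actual, required):.2f}")  # GCI = 0.80
```

Under this rule the index is limited by the weakest category (critical skills here), which matches the tool's stated purpose of flagging groups with a potential for improvement.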
NASA Astrophysics Data System (ADS)
Khouider, B.; Majda, A.; Deng, Q.; Ravindran, A. M.
2015-12-01
Global climate models (GCMs) are large computer codes based on the discretization of the equations of atmospheric and oceanic motion coupled to various processes of transfer of heat, moisture and other constituents between land, atmosphere, and oceans. Because of computing power limitations, typical GCM grid resolution is on the order of 100 km, and the effects on the climate system of many physical processes occurring on smaller scales are represented through various closure recipes known as parameterizations. The parameterization of convective motions and of many processes associated with cumulus clouds, such as the exchange of latent heat and cloud radiative forcing, is believed to be behind much of the uncertainty in GCMs. Based on a lattice interacting particle system, the stochastic multicloud model (SMCM) provides a novel and efficient representation of the unresolved variability in GCMs due to organized tropical convection and cloud cover. It is widely recognized that stratiform heating contributes significantly to tropical rainfall and to the dynamics of tropical convective systems by inducing a front-to-rear tilt in the heating profile. Stratiform anvils forming in the wake of deep convection play a central role in the dynamics of tropical mesoscale convective systems. Here, aquaplanet simulations with a warm-pool-like surface forcing, based on a coarse-resolution GCM (~170 km grid mesh) coupled with the SMCM, are used to demonstrate the importance of stratiform heating for the organization of convection on planetary and intraseasonal scales. When some key model parameters are set to produce higher stratiform heating fractions, the model produces low-frequency, planetary-scale Madden-Julian oscillation (MJO)-like wave disturbances, while lower to moderate stratiform heating fractions yield mainly synoptic-scale convectively coupled Kelvin-like waves.
Rooted in the stratiform instability, it is conjectured here that the strength and extent of stratiform downdrafts are key contributors to the scale selection of convective organization, perhaps through mechanisms essentially similar to those of mesoscale convective systems.
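The lattice-based stochastic idea behind the SMCM can be illustrated with a toy single-site Markov chain in which stratiform sites are born only in the wake of deep convection. The states and transition rates below are invented for illustration; they are not the calibrated SMCM rates, and the real model conditions its rates on large-scale predictors such as CAPE and mid-level dryness:

```python
import random

# States of one lattice site: 0 clear, 1 congestus, 2 deep, 3 stratiform.
# Illustrative rates only (per unit time), NOT the published SMCM values.
RATES = {
    (0, 2): 0.10,   # clear -> deep convection
    (2, 3): 0.25,   # deep -> stratiform: anvil forms in the wake of the tower
    (2, 0): 0.05,   # deep decays directly to clear
    (3, 0): 0.10,   # stratiform anvil decays
}

def step(state, dt, rng):
    """One forward-Euler step of the single-site Markov chain."""
    for (src, dst), rate in RATES.items():
        if state == src and rng.random() < rate * dt:
            return dst
    return state

def stratiform_fraction(n_sites, n_steps, dt, rng):
    """Fraction of lattice sites in the stratiform state after n_steps."""
    sites = [0] * n_sites
    for _ in range(n_steps):
        sites = [step(s, dt, rng) for s in sites]
    return sites.count(3) / n_sites
```

Raising the deep-to-stratiform rate relative to the decay rates raises the equilibrium stratiform area fraction, which is the knob the abstract describes turning.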
Sensitivity of Precipitation in Coupled Land-Atmosphere Models
NASA Technical Reports Server (NTRS)
Neelin, David; Zeng, N.; Suarez, M.; Koster, R.
2004-01-01
The project objective was to understand mechanisms by which atmosphere-land-ocean processes impact precipitation in the mean climate and interannual variations, focusing on tropical and subtropical regions. A combination of modeling tools was used: an intermediate complexity land-atmosphere model developed at UCLA known as the QTCM and the NASA Seasonal-to-Interannual Prediction Program general circulation model (NSIPP GCM). The intermediate complexity model was used to develop hypotheses regarding the physical mechanisms and theory for the interplay of large-scale dynamics, convective heating, cloud radiative effects and land surface feedbacks. The theoretical developments were to be confronted with diagnostics from the more complex GCM to validate or modify the theory.
Numerical simulation of the circulation of the atmosphere of Titan
NASA Technical Reports Server (NTRS)
Hourdin, F.; Levan, P.; Talagrand, O.; Courtin, Regis; Gautier, Daniel; Mckay, Christopher P.
1992-01-01
A three-dimensional General Circulation Model (GCM) of Titan's atmosphere is described. Initial results obtained with an economical two-dimensional (2D) axisymmetric version of the model showed a strong superrotation in the upper stratosphere. Because of this result, a more general numerical study of superrotation was started with a somewhat different version of the GCM. It appears that for a slowly rotating planet which strongly absorbs solar radiation, the circulation is dominated by a global equator-to-pole Hadley circulation and strong superrotation. The theoretical study of this superrotation is discussed. It is also shown that 2D simulations systematically lead to instabilities which make 2D models poorly adapted to numerical simulation of Titan's (or Venus's) atmosphere.
NASA Technical Reports Server (NTRS)
Shindell, Drew T.; Grenfell, J. Lee; Rind, David; Price, Colin; Grewe, Volker; Hansen, James E. (Technical Monitor)
2001-01-01
A tropospheric chemistry module has been developed for use within the Goddard Institute for Space Studies (GISS) general circulation model (GCM) to study interactions between chemistry and climate change. The model uses a simplified chemistry scheme based on CO-NOx-CH4 chemistry, and also includes a parameterization for emissions of isoprene, the most important non-methane hydrocarbon. The model reproduces present day annual cycles and mean distributions of key trace gases fairly well, based on extensive comparisons with available observations. Examining the simulated change between present day and pre-industrial conditions, we find that the model has a similar response to that seen in other simulations. It shows a 45% increase in the global tropospheric ozone burden, within the 25% - 57% range seen in other studies. Annual average zonal mean ozone increases by more than 125% at Northern Hemisphere middle latitudes near the surface. Comparison of model runs that allow the calculated ozone to interact with the GCM's radiation and meteorology with those that do not shows only minor differences for ozone. The common usage of ozone fields that are not calculated interactively seems to be adequate to simulate both the present day and the pre-industrial ozone distributions. However, use of coupled chemistry does alter the change in tropospheric oxidation capacity, enlarging the overall decrease in OH concentrations from the pre-industrial to the present by about 10% (-5.3% global annual average in uncoupled mode, -5.9% in coupled mode). This indicates that there may be systematic biases in the simulation of the pre-industrial to present day decrease in the oxidation capacity of the troposphere (though a 10% difference is well within the total uncertainty). Global annual average radiative forcing from pre-industrial to present day ozone change is 0.32 W/sq m. The forcing seems to be increased by about 10% when the chemistry is coupled to the GCM. 
Forcing values greater than 0.8 W/sq m are seen over large areas of the United States, Southern Europe, North Africa, the Middle East, Central Asia, and the Arctic. Radiative forcing is greater than 1.5 W/sq m over parts of these areas during Northern summer. Though there are local differences, the radiative forcing is overall in good agreement with the results of other modeling studies in both its magnitude and spatial distribution, demonstrating that the simplified chemistry is adequate for climate studies.
NASA Astrophysics Data System (ADS)
Zhao, Wenjie; Peng, Yiran; Wang, Bin; Yi, Bingqi; Lin, Yanluan; Li, Jiangnan
2018-05-01
A newly implemented Baum-Yang scheme for simulating ice cloud optical properties is compared with existing schemes (the Mitchell and Fu schemes) in a standalone radiative transfer model and in the global climate model (GCM) Community Atmosphere Model Version 5 (CAM5). This study systematically analyzes the effect of different ice cloud optical schemes on global radiation and climate through a series of simulations with a simplified standalone radiative transfer model, the atmospheric GCM CAM5, and a comprehensive coupled climate model. Results from the standalone radiative model show that the Baum-Yang scheme yields generally weaker effects of ice cloud on temperature profiles in both the shortwave and longwave spectrum. CAM5 simulations indicate that the Baum-Yang scheme, in place of the Mitchell/Fu schemes, tends to cool the upper atmosphere and strengthen the thermodynamic instability in low and middle latitudes, which could intensify the Hadley circulation and dehydrate the subtropics. When CAM5 is coupled with a slab ocean model to include simplified air-sea interaction, the reduced downward longwave flux to the surface in the Baum-Yang scheme mitigates the ice-albedo feedback in the Arctic as well as the water vapor and cloud feedbacks in low and middle latitudes, resulting in an overall temperature decrease of 3.0/1.4 °C globally compared with the Mitchell/Fu schemes, respectively. The radiative effects and climate feedbacks of the three ice cloud optical schemes documented in this study can serve as a reference for future improvements to the ice cloud simulation in CAM5.
NASA Technical Reports Server (NTRS)
Fowler, Laura D.; Wielicki, Bruce A.; Randall, David A.; Branson, Mark D.; Gibson, Gary G.; Denn, Fredrick M.
2000-01-01
Collocated in time and space, top-of-the-atmosphere measurements of the Earth radiation budget (ERB) and cloudiness from passive scanning radiometers, and lidar- and radar-in-space measurements of multilayered cloud systems, are the required combination to improve our understanding of the role of clouds and radiation in climate. Experiments to fly multiple satellites "in formation" to measure simultaneously the radiative and optical properties of overlapping cloud systems are being designed. Because satellites carrying ERB experiments and satellites carrying lidars or radars in space have different orbital characteristics, the number of simultaneous measurements of radiation and clouds is reduced relative to the number of measurements made by each satellite independently. Monthly averaged coincident observations of radiation and cloudiness are biased when compared against more frequently sampled observations owing, in particular, to the undersampling of their diurnal cycle. Using the Colorado State University General Circulation Model (CSU GCM), the goal of this study is to measure the impact of using simultaneous observations from the Earth Observing System (EOS) platform and companion satellites flying lidars or radars on monthly averaged diagnostics of longwave radiation, cloudiness, and cloud optical properties. To do so, the hourly varying geographical distributions of coincident locations between the afternoon EOS (EOS-PM) orbit and the orbit of the ICESAT satellite, set to fly at an altitude of 600 km, and between the EOS-PM orbit and the orbits of the PICASSO satellite, proposed to fly at altitudes of 485 km (PICA485) or 705 km (PICA705), are simulated in the CSU GCM for a 60-month period starting at the idealized July 1, 2001, launch date.
Monthly averaged diagnostics of the top-of-the-atmosphere, atmospheric, and surface longwave radiation budgets and clouds accumulated over grid boxes corresponding to satellite overpasses are compared against monthly averaged diagnostics obtained from hourly samplings over the entire globe. Results show that differences between irregularly (satellite) and regularly (true) sampled diagnostics of the longwave net radiative budgets are the greatest at the surface and the smallest in the atmosphere and at the top-of-the-atmosphere, under both cloud-free and cloudy conditions. In contrast, differences between the satellite and the true diagnostics of the longwave cloud radiative forcings are the largest in the atmosphere and at the top-of-the-atmosphere, and the smallest at the surface. A poorer diurnal sampling of the surface temperature in the satellite simulations relative to the true simulation contributes a major part to sampling biases in the longwave net radiative budgets, while a poorer diurnal sampling of cloudiness and its optical properties directly affects diagnostics of the longwave cloud radiative forcings. A factor of 8 difference in the number of satellite overpasses between PICA705 and PICA485 and ICESAT leads to a systematic factor of 3 difference in the spatial standard deviations of all radiative and cloudiness diagnostics.
NASA Astrophysics Data System (ADS)
Davis, A. B.; Cahalan, R. F.
2001-05-01
The Intercomparison of 3D Radiation Codes (I3RC) is an on-going initiative involving an international group of over 30 researchers engaged in the numerical modeling of three-dimensional radiative transfer as applied to clouds. Because of their strong variability and extreme opacity, clouds are indeed a major source of uncertainty in the Earth's local radiation budget (at GCM grid scales). Also, 3D effects (at satellite pixel scales) invalidate the standard plane-parallel assumption made in routine cloud-property remote sensing at NASA and NOAA. Accordingly, the test cases used in I3RC are based on inputs and outputs which relate to cloud effects in atmospheric heating rates and in real-world remote sensing geometries. The main objectives of I3RC are to (1) enable participants to improve their models, (2) publish results as a community, (3) archive source code, and (4) educate. We will survey the status of I3RC and its plans for the near future with a special emphasis on the mathematical models and computational approaches. We will also describe some of the prime applications of I3RC's efforts in climate models, cloud-resolving models, and remote-sensing observations of clouds, or that of the surface in their presence. In all these application areas, the main concern is computational efficiency, not accuracy. One of I3RC's main goals is to document the performance of as wide a variety as possible of three-dimensional radiative transfer models for a small but representative number of ``cases.'' However, the project is dominated by modelers working at the level of linear transport theory (i.e., they solve the radiative transfer equation), and an overwhelming majority of these participants use slow-but-robust Monte Carlo techniques. This means that only a small portion of the efficiency vs. accuracy vs. flexibility domain is currently populated by I3RC participants.
To balance this natural clustering, the present authors have organized a systematic outreach towards modelers who have used approximate methods in radiation transport. In this context, different, presumably simpler, equations (such as diffusion) are used in order to make a significant gain on the efficiency axis. We will describe in some detail the most promising approaches to approximate 3D radiative transfer in clouds. Somewhat paradoxically, and in spite of its importance in the above-mentioned applications, approximate radiative transfer modeling lags significantly behind its exact counterpart because the required mathematical and computational culture is essentially alien to the native atmospheric radiation community. I3RC is receiving enough funding from NASA/HQ and DOE/ARM for its essential operations out of NASA/GSFC. However, this does not cover the time and effort of any of the participants, so only existing models were entered. At present, none of the inherently approximate methods are represented, only severe truncations of some exact methods. We therefore welcome the Math/Geo initiative at NSF, which should enable the proper consortia of experts in atmospheric radiation and in applied mathematics to fill an important niche.
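The "slow-but-robust" Monte Carlo technique most I3RC participants use can be sketched, in heavily simplified 1D plane-parallel form, as photons taking exponentially distributed free paths through a scattering and absorbing slab. This is an illustrative sketch only; actual I3RC codes handle 3D cloud fields, realistic phase functions, and variance reduction:

```python
import math
import random

def mc_transmittance(tau, ssa, n_photons, rng):
    """Monte Carlo transmittance of a plane-parallel slab of optical depth
    tau with isotropic scattering and single-scattering albedo ssa.
    A deliberately minimal sketch, not an I3RC participant code."""
    transmitted = 0
    for _ in range(n_photons):
        z, mu = 0.0, 1.0                       # start at top, heading down
        while True:
            # exponentially distributed free path (in optical-depth units)
            z += mu * -math.log(1.0 - rng.random())
            if z >= tau:
                transmitted += 1               # escaped through the bottom
                break
            if z < 0.0:
                break                          # escaped back through the top
            if rng.random() > ssa:
                break                          # absorbed
            mu = 2.0 * rng.random() - 1.0      # isotropic re-scatter
    return transmitted / n_photons
```

With scattering switched off (ssa = 0) the estimate converges to the Beer-Lambert result exp(-tau), which is a convenient sanity check; the robustness (and the cost) of the method lies in how little of this structure changes when full 3D geometry is added.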
NASA Astrophysics Data System (ADS)
Jing, Xianwen; Zhang, Hua; Peng, Jie; Li, Jiangnan; Barker, Howard W.
2016-03-01
Vertical decorrelation length (Lcf), as used to determine the overlap of cloudy layers in GCMs, was obtained from CloudSat/CALIPSO measurements made between 2007 and 2010 and analyzed in terms of monthly means. Global distributions of Lcf were produced for several cross-sectional lengths. Results show that: Lcf over tropical convective regions typically exceeds 2 km and shifts meridionally with season; the smallest Lcf (< 1 km) tends to occur in regions dominated by marine stratiform clouds; Lcf for mid-to-high-latitude continents of the Northern Hemisphere (NH) ranges from 5-6 km during winter to 2-3 km during summer; and there are marked differences between continental and oceanic values of Lcf in the mid-latitudes of the NH. These monthly gridded, observationally based values of Lcf were then used by the Monte Carlo Independent Column Approximation (McICA) radiation routines within the Beijing Climate Center's GCM (BCC_AGCM2.0.1). Additionally, the GCM was run with two other descriptions of Lcf: one varied with latitude only, and the other was simply 2 km everywhere at all times. It is shown that using the observationally based Lcf in the GCM led to local and seasonal changes in total cloud fraction and shortwave (longwave) cloud radiative effects that serve mostly to reduce model biases. This indicates that values of Lcf that vary according to location and time have the potential to improve climate simulations.
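The role the decorrelation length plays in combining the cloud fractions of two layers can be sketched with the standard exponential-random overlap blend. This is a generic formulation in the Hogan-Illingworth spirit, not necessarily the exact BCC_AGCM2.0.1 implementation:

```python
import math

def combined_cloud_fraction(c1, c2, dz_km, lcf_km):
    """Combined cover of two cloudy layers under exponential-random overlap.

    The overlap parameter alpha decays exponentially with layer separation:
    alpha = 1 recovers maximum overlap, alpha = 0 recovers random overlap.
    A hedged illustration of how Lcf enters the radiation scheme.
    """
    alpha = math.exp(-dz_km / lcf_km)
    c_max = max(c1, c2)              # maximum-overlap limit
    c_ran = c1 + c2 - c1 * c2       # random-overlap limit
    return alpha * c_max + (1.0 - alpha) * c_ran
```

A larger Lcf pushes the combination toward maximum overlap (less total cloud cover, weaker shortwave cloud radiative effect), which is why a globally fixed 2 km value and a geographically varying observational field can produce noticeably different radiation budgets.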
Allowing for Horizontally Heterogeneous Clouds and Generalized Overlap in an Atmospheric GCM
NASA Technical Reports Server (NTRS)
Lee, D.; Oreopoulos, L.; Suarez, M.
2011-01-01
While fully accounting for 3D effects in Global Climate Models (GCMs) does not appear realistic at the present time for a variety of reasons, such as computational cost and the unavailability of 3D cloud structure in the models, incorporation into radiation schemes of subgrid cloud variability described by one-point statistics is now considered feasible and is being actively pursued. This development gained momentum once it was demonstrated that the CPU-intensive, spectrally explicit Independent Column Approximation (ICA) can be substituted by stochastic Monte Carlo ICA (McICA) calculations, in which spectral integration is accomplished in a manner that produces relatively benign random noise. The McICA approach has been implemented in Goddard's GEOS-5 atmospheric GCM as part of the implementation of the RRTMG radiation package. GEOS-5 with McICA and RRTMG can handle horizontally variable clouds, which can be set via a cloud generator to overlap arbitrarily within the full spectrum between maximum and random, both in terms of cloud fraction and layer condensate distributions. In our presentation we will show radiative and other impacts of the combined horizontal and vertical cloud variability on multi-year simulations of an otherwise untuned GEOS-5 with fixed SSTs. Introducing cloud horizontal heterogeneity without changing the mean amounts of condensate reduces reflected solar radiation and increases thermal radiation to space, but disproportionate changes may increase the radiative imbalance at TOA. The net radiation at TOA can be modulated by allowing the parameters of the generalized overlap and heterogeneity scheme to vary, a dependence whose behavior we will discuss. The sensitivity of the cloud radiative forcing to the parameters of cloud horizontal heterogeneity, along with comparisons with CERES-derived forcing, will be shown.
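The McICA idea, in which each spectral g-point sees a single randomly sampled cloudy subcolumn rather than the full ICA ensemble, can be sketched as follows. The generator below uses plain random overlap and a toy flux function for brevity; the GEOS-5/RRTMG implementation uses a generalized maximum-random overlap generator and a real radiative transfer solver:

```python
import random

def make_subcolumns(cloud_frac, n_sub, rng):
    """Generate binary cloudy/clear subcolumns from layer cloud fractions.
    Random overlap only, for simplicity; a real generator honors the
    model's overlap assumptions."""
    return [[rng.random() < f for f in cloud_frac] for _ in range(n_sub)]

def mcica_mean(cloud_frac, g_weights, flux_fn, rng, n_sub=100):
    """McICA estimate of a band flux: each g-point sees ONE randomly drawn
    subcolumn, so the cost is one column per g-point rather than n_sub
    columns per g-point as in the full ICA. The sampling introduces the
    'benign random noise' the abstract refers to.
    flux_fn(mask) stands in for a monochromatic RT solve (hypothetical)."""
    subcols = make_subcolumns(cloud_frac, n_sub, rng)
    return sum(w * flux_fn(rng.choice(subcols)) for w in g_weights)

# Toy "flux": transmission halves for each cloudy layer in the sampled column.
flux = lambda mask: 0.5 ** sum(mask)
```

Averaged over many model time steps and grid boxes, the sampling noise cancels, which is the demonstrated trade that made McICA affordable in GCMs.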
NASA Astrophysics Data System (ADS)
Harrach, Robert J.; Rogers, Forest J.
1981-09-01
Two equation-of-state (EOS) models for multiply ionized matter are evaluated for the case of an aluminum plasma in the temperature range from about one eV to several hundred eV, spanning conditions of weak to strong ionization. Specifically, the simple analytical model of Zel'dovich and Raizer and the more comprehensive model comprised by Rogers' plasma physics activity expansion code (ACTEX) are used to calculate the specific internal energy ɛ and average degree of ionization Z̄* as functions of temperature T and density ρ. In the absence of experimental data, these results are compared against each other, covering almost five orders of magnitude of variation in ɛ and the full range of Z̄*. We find generally good agreement between the two sets of results, especially for low densities and for temperatures near the upper end of the range. Calculated values of ɛ(T) agree to within ±30% over nearly the full range in T for densities below about 1 g/cm³. Similarly, the two models predict values of Z̄*(T) which track each other fairly well; above 20 eV the discrepancy is less than ±20% for ρ ≲ 1 g/cm³. Where the calculations disagree, we expect the ACTEX code to be more accurate than Zel'dovich and Raizer's model, by virtue of its more detailed physics content.
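Neither the Zel'dovich-Raizer fit nor the ACTEX expansion is reproduced here, but the qualitative behavior of the average charge state with temperature can be illustrated with the textbook single-stage Saha balance:

```python
import math

def saha_ionization_fraction(T_eV, n_cm3, chi_eV, g_ratio=1.0):
    """Single-stage Saha ionization fraction x for ion density n (cm^-3),
    temperature T (eV), and ionization potential chi (eV).

    Generic textbook balance, shown only to illustrate how the average
    charge state climbs with temperature; it is NOT the Zel'dovich-Raizer
    model or the ACTEX expansion evaluated in the paper.

        x^2 / (1 - x) = S / n,
        S = 2 g_ratio (2 pi m_e k T / h^2)^{3/2} exp(-chi / kT)
    """
    # (2 pi m_e k T / h^2)^{3/2} in cm^-3 with T in eV is ~ 3.0e21 * T^1.5
    a = 2.0 * g_ratio * 3.0e21 * T_eV ** 1.5 * math.exp(-chi_eV / T_eV) / n_cm3
    # solve x^2 + a x - a = 0 for the physical root in [0, 1]
    return (-a + math.sqrt(a * a + 4.0 * a)) / 2.0
```

For a hydrogen-like potential at fixed density, the fraction climbs from nearly zero at 1 eV to near unity by a few eV, the same weak-to-strong-ionization transition the two EOS models span (a real aluminum calculation must of course chain many ionization stages).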
NASA Astrophysics Data System (ADS)
Spiga, Aymeric; Guerlet, Sandrine; Meurdesoif, Yann; Indurain, Mikel; Millour, Ehouarn; Sylvestre, Melody; Dubos, Thomas; Fouchet, Thierry
2016-10-01
A mission as richly instrumented as Cassini has brought a new impulse to studies of Saturn's atmospheric fluid dynamics, to be further extended to Jupiter by the Juno mission. We recently built an innovative Global Climate Model (GCM) for giant planets by coupling our complete seasonal radiative model [Guerlet Icarus 2014] with a new hydrodynamical solver using an original icosahedral mapping of the planetary sphere to ensure excellent conservation and scalability properties on massively parallel computing resources [Dubos GMD 2015]. Here we describe the insights gained from GCM simulations for Saturn with unprecedented horizontal resolutions (reference at 1/2° latitude/longitude, and tests at 1/4° and 1/8°), integration time (up to ten simulated Saturn years), and vertical extent (from the troposphere to the stratosphere). Starting from a windless initial state, our 10-year-long GCM simulation for Saturn reproduces alternating tropospheric mid-latitude jets bearing similarities with the observed jet system (numbering, intensity, width). We demonstrate that those jets are eddy-driven, with a conversion rate from eddies to mean flow in agreement with Cassini estimates. Before reaching equilibrium, mid-latitude jets experience poleward migration, which can be ascribed to a self-destabilization of the jets by barotropic and baroclinic instabilities. Our Saturn GCM also predicts at the equator the presence of eastward-propagating Rossby-gravity (Yanai) and westward-propagating Rossby waves, reminiscent of similar waves in the terrestrial tropics. Furthermore, our GCM simulations exhibit a stratospheric meridional circulation from one tropic to the other, with a seasonal reversal, which allows us to investigate the possible dynamical control on the observed variations of hydrocarbon species. In contrast to observations, in our GCM simulations the equatorial jet is only weakly super-rotating and the polar jet is strongly destabilized by meandering.
Moreover, in spite of predicting stacked stratospheric eastward and westward jets, our GCM does not reproduce the propagation of the equatorial oscillation observed by Cassini. We will discuss how to address those remaining challenges in future simulations.
Research on thermal conductivity of HGMs at vacuum in room temperature
NASA Astrophysics Data System (ADS)
Wang, Ping; Liao, Bin; An, Zhenguo; Yan, Kaiqi; Zhang, Jingjie
2018-05-01
Hollow glass microspheres (HGMs) can be used as thermal insulation materials owing to their hollow structure, which brings excellent thermal insulation and low density. At present, most research on the thermal conductivity of HGMs focuses on polymer-matrix/HGM composite materials; the thermal conductivity of HGMs under vacuum at room temperature has rarely been investigated. In this work, the thermal conductivity of six types of HGMs (T17 (0.17 g/cm³), T20 (0.20 g/cm³), T22 (0.22 g/cm³), T25 (0.25 g/cm³), T32 (0.32 g/cm³), and T40 (0.40 g/cm³)) under vacuum at room temperature was calculated from solid-conduction and radiative heat transfer. The calculations showed that the thermal conductivity of HGMs decreases by an order of magnitude compared with the no-vacuum case. To verify the calculation and study the vacuum thermal insulation properties of HGMs, the thermal conductivity of the above-mentioned HGMs at no vacuum and at high vacuum at room temperature was measured by a self-made apparatus based on the transient plane source (TPS) method. The measured thermal conductivities were in the range of 4.2030E-02 to 6.3300E-02 W/m·K (no vacuum) and 3.8160E-03 to 4.9660E-03 W/m·K (high vacuum), consistent with the calculations; both decreased by a factor of 8-13 under vacuum compared with no vacuum. In addition, the relationship between the physical properties and thermal conductivity of HGMs is discussed in detail. In conclusion, HGMs possess excellent thermal insulation performance under high vacuum at room temperature and have potential for further reduction of thermal conductivity under the same conditions.
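The superposition of solid conduction and radiation that governs evacuated-HGM conductivity can be sketched with a Rosseland-type radiative term. The formula and the numbers in the usage note are a generic illustration under assumed parameters, not the authors' actual calculation:

```python
def hgm_conductivity(k_solid, T, extinction_m, sigma=5.670e-8):
    """Effective conductivity (W/m.K) of an evacuated HGM bed as the sum of
    a solid-conduction term and a Rosseland radiative term,

        k_rad = 16 * sigma * T^3 / (3 * beta),

    with beta the extinction coefficient (1/m) and sigma the
    Stefan-Boltzmann constant. A hedged sketch of the superposition used
    in such calculations; under vacuum the gas-conduction term that
    normally dominates is absent, which is why k drops so sharply."""
    k_rad = 16.0 * sigma * T ** 3 / (3.0 * extinction_m)
    return k_solid + k_rad
```

With an assumed solid-skeleton term of 3e-3 W/m·K and an assumed extinction coefficient of 1e4 /m at 300 K, the sketch lands in the few-mW/m·K range, the same order as the high-vacuum TPS measurements quoted above; the cubic temperature dependence of the radiative term also shows why such beds insulate best when kept cool.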
Modeling Climate Change in the Absence of Climate Change Data. Editorial Comment
NASA Technical Reports Server (NTRS)
Skiles, J. W.
1995-01-01
Practitioners of climate change prediction base many of their future climate scenarios on General Circulation Models (GCM's), each model with differing assumptions and parameter requirements. For representing the atmosphere, GCM's typically contain equations for calculating motion of particles, thermodynamics and radiation, and continuity of water vapor. Hydrology and heat balance are usually included for continents, and sea ice and heat balance are included for oceans. The current issue of this journal contains a paper by Van Blarcum et al. (1995) that predicts runoff from nine high-latitude rivers under a doubled CO2 atmosphere. The paper is important since river flow is an indicator variable for climate change. The authors show that precipitation will increase under the imposed perturbations and that owing to higher temperatures earlier in the year that cause the snow pack to melt sooner, runoff will also increase. They base their simulations on output from a GCM coupled with an interesting water routing scheme they have devised. Climate change models have been linked to other models to predict deforestation.
Data management and scientific integration within the Atmospheric Radiation Measurement Program
NASA Technical Reports Server (NTRS)
Gracio, Deborah K.; Hatfield, Larry D.; Yates, Kenneth R.; Voyles, Jimmy W.; Tichler, Joyce L.; Cederwall, Richard T.; Laufersweiler, Mark J.; Leach, Martin J.; Singley, Paul
1995-01-01
The Atmospheric Radiation Measurement (ARM) Program has been developed by the U.S. Department of Energy with the goal to improve the predictive capabilities of General Circulation Models (GCM's) in their treatment of clouds and radiative transfer effects. To achieve this goal, three experimental testbeds were designed for the deployment of instruments that will collect atmospheric data used to drive the GCM's. Each site, known as a Cloud and Radiation Testbed (CART), consists of a highly available, redundant data system for the collection of data from a variety of instrumentation. The first CART site was deployed in April 1992 in the Southern Great Plains (SGP), Lamont, Oklahoma, with the other two sites to follow in September 1995 in the Tropical Western Pacific and in 1997 on the North Slope of Alaska. Approximately 400 MB of data are transferred per day via the Internet from the SGP site to the ARM Experiment Center at Pacific Northwest Laboratory in Richland, Washington. The Experiment Center is central to the ARM data path and provides for the collection, processing, analysis, and delivery of ARM data. Data are received from the CART sites from a variety of instrumentation, observational systems, and external data sources. The Experiment Center processes these data streams on a continuous basis to provide derived data products to the ARM Science Team in near real-time while providing a three-month running archive of data. A primary requirement of the ARM Program is to preserve and protect all data produced or acquired. This function is performed at Oak Ridge National Laboratory, where leading-edge technology is employed for the long-term storage of ARM data. The ARM Archive provides access to data for participants outside of the ARM Program. The ARM Program involves a collaborative effort by teams from various DOE National Laboratories, providing multi-disciplinary areas of expertise.
This paper will discuss the collaborative methods in which the ARM teams translate the scientific goals of the Program into data products. By combining atmospheric scientists, systems engineers, and software engineers, the ARM Program has successfully designed and developed an environment where advances in understanding the parameterizations of GCM's can be made.
Equilibrium Atmospheric Response to North Atlantic SST Anomalies.
NASA Astrophysics Data System (ADS)
Kushnir, Yochanan; Held, Isaac M.
1996-06-01
The equilibrium general circulation model (GCM) response to sea surface temperature (SST) anomalies in the western North Atlantic region is studied. A coarse resolution GCM, with realistic lower boundary conditions including topography and climatological SST distribution, is integrated in perpetual January and perpetual October modes, distinguished from one another by the strength of the midlatitude westerlies. An SST anomaly with a maximum of 4°C is added to the climatological SST distribution of the model with both positive and negative polarity. These anomaly runs are compared to one another, and to a control integration, to determine the atmospheric response. In all cases warming (cooling) of the midlatitude ocean surface yields a warming (cooling) of the atmosphere over and to the east of the SST anomaly center. The atmospheric temperature change is largest near the surface and decreases upward. Consistent with this simple thermal response, the geopotential height field displays a baroclinic response with a shallow anomalous low somewhat downstream from the warm SST anomaly. The equivalent barotropic, downstream response is weak and not robust. To help interpret the results, the realistic GCM integrations are compared with parallel idealized model runs. The idealized model has full physics and a similar horizontal and vertical resolution, but an all-ocean surface with a single, permanent zonal asymmetry. The idealized and realistic versions of the GCM display compatible response patterns that are qualitatively consistent with stationary, linear, quasigeostrophic theory. However, the idealized model response is stronger and more coherent. The differences between the two model response patterns can be reconciled based on the size of the anomaly, the model treatment of cloud-radiation interaction, and the static stability of the model atmosphere in the vicinity of the SST anomaly. Model results are contrasted with other GCM studies and observations.
NASA Astrophysics Data System (ADS)
Hsu, Juno; Prather, Michael J.; Cameron-Smith, Philip; Veidenbaum, Alex; Nicolau, Alex
2017-07-01
Solar-J is a comprehensive radiative transfer model for the solar spectrum that addresses the needs of both solar heating and photochemistry in Earth system models. Solar-J is a spectral extension of Cloud-J, a standard in many chemical models that calculates photolysis rates in the 0.18-0.8 µm region. The Cloud-J core consists of an eight-stream scattering, plane-parallel radiative transfer solver with corrections for sphericity. Cloud-J uses cloud quadrature to accurately average over correlated cloud layers. It uses the scattering phase function of aerosols and clouds expanded to eighth order and thus avoids isotropic-equivalent approximations prevalent in most solar heating codes. The spectral extension from 0.8 to 12 µm enables calculation of both scattered and absorbed sunlight and thus aerosol direct radiative effects and heating rates throughout the Earth's atmosphere. The Solar-J extension adopts the correlated-k gas absorption bins, primarily water vapor, from the shortwave Rapid Radiative Transfer Model for general circulation model (GCM) applications (RRTMG-SW). Solar-J successfully matches RRTMG-SW's tropospheric heating profile in a clear-sky, aerosol-free, tropical atmosphere. We compare both codes in cloudy atmospheres with a liquid-water stratus cloud and an ice-crystal cirrus cloud. For the stratus cloud, both models use the same physical properties, and we find a systematic low bias of about 3 % in planetary albedo across all solar zenith angles caused by RRTMG-SW's two-stream scattering.
Discrepancies with the cirrus cloud using any of RRTMG-SW's three different parameterizations are as large as about 20-40 % depending on the solar zenith angles and occur throughout the atmosphere. Effectively, Solar-J has combined the best components of RRTMG-SW and Cloud-J to build a high-fidelity module for the scattering and absorption of sunlight in the Earth's atmosphere, for which the three major components - wavelength integration, scattering, and averaging over cloud fields - all have comparably small errors. More accurate solutions with Solar-J come with increased computational costs, about 5 times that of RRTMG-SW for a single atmosphere. There are options for reduced costs or computational acceleration that would bring costs down while maintaining improved fidelity and balanced errors.
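The correlated-k treatment that Solar-J adopts from RRTMG-SW replaces line-by-line spectral integration with a short quadrature sum over g-points. A minimal sketch follows, with arbitrary stand-in weights and absorption coefficients rather than RRTMG-SW's actual tables:

```python
import math

def ck_transmission(g_weights, k_vals, absorber_path):
    """Correlated-k band transmission: instead of integrating over
    thousands of spectral lines, sum a handful of g-points, each with a
    representative absorption coefficient k_g and quadrature weight w_g:

        T(u) = sum_g w_g * exp(-k_g * u)

    The weights/coefficients here are illustrative stand-ins, not
    RRTMG-SW's tabulated values."""
    assert abs(sum(g_weights) - 1.0) < 1e-9   # weights span the band
    return sum(w * math.exp(-k * absorber_path)
               for w, k in zip(g_weights, k_vals))
```

The cost of a band scales with the number of g-points rather than the number of lines, which is why extending Cloud-J with RRTMG-SW's bins keeps the heating calculation affordable while remaining spectrally faithful.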
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsu, Juno; Prather, Michael J.; Cameron-Smith, Philip
Solar-J is a comprehensive radiative transfer model for the solar spectrum that addresses the needs of both solar heating and photochemistry in Earth system models. Solar-J is a spectral extension of Cloud-J, a standard in many chemical models that calculates photolysis rates in the 0.18–0.8 µm region. The Cloud-J core consists of an eight-stream scattering, plane-parallel radiative transfer solver with corrections for sphericity. Cloud-J uses cloud quadrature to accurately average over correlated cloud layers. It uses the scattering phase function of aerosols and clouds expanded to eighth order and thus avoids isotropic-equivalent approximations prevalent in most solar heating codes. The spectral extension from 0.8 to 12 µm enables calculation of both scattered and absorbed sunlight and thus aerosol direct radiative effects and heating rates throughout the Earth's atmosphere. Furthermore, the Solar-J extension adopts the correlated-k gas absorption bins, primarily water vapor, from the shortwave Rapid Radiative Transfer Model for general circulation model (GCM) applications (RRTMG-SW). Solar-J successfully matches RRTMG-SW's tropospheric heating profile in a clear-sky, aerosol-free, tropical atmosphere. Here, we compare both codes in cloudy atmospheres with a liquid-water stratus cloud and an ice-crystal cirrus cloud. For the stratus cloud, both models use the same physical properties, and we find a systematic low bias of about 3 % in planetary albedo across all solar zenith angles caused by RRTMG-SW's two-stream scattering. Discrepancies with the cirrus cloud using any of RRTMG-SW's three different parameterizations are as large as about 20–40 % depending on the solar zenith angles and occur throughout the atmosphere.
Effectively, Solar-J has combined the best components of RRTMG-SW and Cloud-J to build a high-fidelity module for the scattering and absorption of sunlight in the Earth's atmosphere, for which the three major components – wavelength integration, scattering, and averaging over cloud fields – all have comparably small errors. More accurate solutions with Solar-J come with increased computational costs, about 5 times that of RRTMG-SW for a single atmosphere. There are options for reduced costs or computational acceleration that would bring costs down while maintaining improved fidelity and balanced errors.
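The correlated-k treatment adopted from RRTMG-SW can be illustrated with a minimal sketch: within a band, transmission is a weighted sum of Beer-law exponentials over a few g-points, and the energy absorbed in each layer follows from the flux divergence. All weights, absorption coefficients, and paths below are invented for illustration and are not taken from RRTMG-SW or Solar-J.

```python
import numpy as np

# Toy correlated-k band: direct-beam flux attenuated through layers.
# All numbers are illustrative, not RRTMG-SW or Solar-J values.
g_weights = np.array([0.6, 0.3, 0.1])          # quadrature weights (sum to 1)
k_g = np.array([0.02, 0.5, 8.0])               # absorption coefficients, m^2/kg
f0 = 1000.0                                    # incident band flux, W/m^2

du = np.full(20, 1.0)                          # absorber path per layer, kg/m^2
u = np.concatenate(([0.0], np.cumsum(du)))     # cumulative path at layer edges

# Band flux at each edge: weighted sum of monochromatic Beer-law transmissions.
flux = f0 * (g_weights[None, :] * np.exp(-np.outer(u, k_g))).sum(axis=1)

# Layer heating is proportional to the flux absorbed in the layer.
absorbed = flux[:-1] - flux[1:]                # W/m^2 per layer
```

A handful of g-points reproduces what would otherwise require integrating over thousands of individual absorption lines, which is the efficiency the abstract's k-distribution bins provide.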
Effective Dose Equivalent due to Cosmic Ray Particles and Their Secondary Particles on the Moon
NASA Astrophysics Data System (ADS)
Hayatsu, Kanako; Hareyama, Makoto; Kobayashi, Shingo; Karouji, Yuzuru; Sakurai, K.; Sihver, Lembit; Hasebe, N.
Estimating the radiation dose on and beneath the lunar surface is essential for human activity on the Moon and for the construction of future lunar bases. The radiation environment on the Moon is very different from that on Earth. Galactic cosmic rays (GCRs) and solar energetic particles (SEPs) penetrate directly to the lunar surface because the Moon has neither an atmosphere nor a magnetic field. They then generate many secondary particles, such as neutrons, gamma rays and other charged particles, through nuclear interactions with soils and regolith breccias beneath the lunar surface. The estimation of the radiation dose from these particles, both at the surface and underground, is therefore essential for safe human activity. In this study, the effective dose equivalents at the surface and at various depths of the Moon were estimated using the latest cosmic-ray observations and a newly developed calculation code. The largest contribution to the dose at the surface comes from the primary charged particles of GCRs and SEPs, whereas underground, secondary neutrons dominate. In particular, the dose from neutrons reaches a maximum at a depth of 70-80 g/cm2 in lunar soil, because fast neutrons of about 1.0 MeV are mostly produced at this depth and deliver the largest dose. On the lunar surface, doses from large SEP events are very hazardous. We estimated the effective dose equivalents due to such large SEP events and the shielding effect of aluminum on the human body for a large flare. In the presentation, we summarize and discuss the improved calculations of radiation doses due to GCR particles and their secondary particles in the lunar subsurface. These results will provide useful data for the future exploration of the Moon.
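The subsurface dose maximum described above can be sketched with a toy buildup-attenuation shape: secondary-neutron production grows with path length while attenuation removes neutrons, so the dose peaks at an intermediate depth. The functional form and the relaxation depth L = 75 g/cm2 are assumptions chosen only to mimic the 70-80 g/cm2 maximum reported in the abstract, not fitted transport-code output.

```python
import numpy as np

# Illustrative secondary-neutron dose vs depth: D(x) = C * x * exp(-x/L)
# peaks at x = L. L = 75 g/cm^2 is an assumed value mimicking the
# 70-80 g/cm^2 maximum reported above; units are arbitrary.
L = 75.0                                   # relaxation depth, g/cm^2
x = np.linspace(0.0, 300.0, 601)           # depth in lunar soil, g/cm^2
dose_shape = x * np.exp(-x / L)            # production * attenuation

x_peak = x[np.argmax(dose_shape)]          # depth of maximum dose
```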
Dose equivalent on the Moon contributed from cosmic rays and their secondary particles
NASA Astrophysics Data System (ADS)
Hayatsu, K.; Hareyama, Makoto; Hasebe, N.; Kobayashi, S.; Yamashita, N.
Estimating the radiation dose on and beneath the lunar surface is essential for human activity on the Moon and in future lunar bases. The radiation environment on the Moon is very different from that on Earth. Galactic cosmic rays and solar energetic particles penetrate directly to the lunar surface because the Moon has neither an atmosphere nor a magnetic field. They then generate many secondary particles, such as gamma rays, neutrons and other charged particles, through interactions with the soil beneath the lunar surface. The estimation of the radiation dose from these particles, both at the surface and underground, is therefore essential for safe human activity. In this study, the ambient dose equivalent in the ICRU sphere at the surface and at various depths of the Moon is estimated from the latest galactic cosmic-ray spectrum and the secondary particles it generates, calculated with the Geant4 code. At the surface, the dominant contribution to the dose comes not from protons and helium nuclei but from heavy components of the galactic cosmic rays such as iron, whereas underground, secondary neutrons dominate. In particular, the dose from neutrons reaches a maximum at 50 - 100 g/cm2 of lunar soil depth, because fast neutrons of about 1.0 MeV are mostly produced at this depth and deliver a large dose. At the surface, the dose originating from GCRs is quite sensitive to the solar activity cycle, while that from secondary neutrons is not. Conversely, beneath the surface, the dose from neutrons is highly sensitive to solar activity through its control of the galactic cosmic-ray flux. This difference should be considered when shielding against cosmic radiation for human activity on the Moon.
NASA Astrophysics Data System (ADS)
Stanfield, R. E.; Dong, X.; Xi, B.; Del Genio, A. D.; Minnis, P.; Doelling, D.; Loeb, N. G.
2011-12-01
To better advise policymakers, it is necessary for climate models to provide credible predictions of future climates. Meeting this goal requires climate models to successfully simulate the present and past climates. The past, current and future Earth climate has been simulated by the NASA GISS ModelE climate model and has been summarized in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC, AR4, 2007). New simulations from the updated AR5 version of the NASA GISS ModelE GCM have been released to the public community and will be included in the IPCC AR5 ensemble of simulations. Due to the recent nature of these simulations, however, they have yet to be extensively validated against observations. To evaluate the GISS AR5 simulated global clouds and TOA radiation budgets, we have collected and processed the NASA CERES and MODIS observations during the period 2000-2005. In detail, the 1°x1° resolution monthly averaged SYN1 product has been used with combined observations from both Terra and Aqua satellites, and degraded to a 2°x2.5° grid box to match the GCM spatial resolution. These observations are temporally interpolated and fit to data from geostationary satellites to provide time continuity. The GISS AR5 products were downloaded from the CMIP5 (Coupled Model Intercomparison Project Phase 5) archive for the IPCC-AR5. Preliminary comparisons between GISS AR5 simulations and CERES-MODIS observations have shown that although their annual and seasonal mean CFs agree within a few percent, there are significant differences in several climatic regions. For example, the modeled CFs have positive biases in the Arctic, Antarctic, Tropics, and Sahara Desert, but negative biases over the southern middle latitudes (30-65°S). The OLR, albedo and NET radiation comparisons are similar to the CF comparison.
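The degrading of the 1°x1° product to the coarser GCM grid amounts to an area-weighted average over blocks of fine cells. The sketch below uses an integer 2x2 block with cosine-latitude weights purely for illustration; a real 1°x1° to 2°x2.5° regrid needs fractional-overlap weights in longitude, and the function name is hypothetical.

```python
import numpy as np

# Minimal coarse-graining sketch: average a fine lat-lon field into larger
# boxes with cos(latitude) area weights. Integer 2x2 blocks only; a true
# 2°x2.5° target grid would require fractional-overlap weighting.
def block_mean(field, lats, n=2):
    """Area-weighted mean over n x n blocks; lats in degrees, one per row."""
    w = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(field)
    ny, nx = field.shape
    f = (field * w).reshape(ny // n, n, nx // n, n).sum(axis=(1, 3))
    wsum = w.reshape(ny // n, n, nx // n, n).sum(axis=(1, 3))
    return f / wsum

lats = np.linspace(-89.5, 89.5, 180)            # 1-degree band centres
field = np.ones((180, 360))                     # uniform test field
coarse = block_mean(field, lats)                # (90, 180) coarse field
```

A uniform field must survive the averaging unchanged, which is a quick correctness check for any regridding weight scheme.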
Validation of Land-Surface Mosaic Heterogeneity in the GEOS DAS
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Molod, Andrea; Houser, Paul R.; Schubert, Siegfried
1999-01-01
The Mosaic Land-surface Model (LSM) has been included in the current GEOS Data Assimilation System (DAS). The LSM uses a more advanced representation of physical processes than previous versions of the GEOS DAS, including the representation of sub-grid heterogeneity of the land-surface through the Mosaic approach. As a first approximation, Mosaic assumes that all similar surface types within a grid-cell can be lumped together as a single 'tile'. Within one GCM grid-cell, there might be 1 - 5 different tiles or surface types. All tiles are subjected to the grid-scale forcing (radiation, air temperature and specific humidity, and precipitation), and the sub-grid variability is a function of the tile characteristics. In this paper, we validate the LSM sub-grid scale variability (tiles) using a variety of surface observing stations from the Southern Great Plains (SGP) site of the Atmospheric Radiation Measurement (ARM) Program. One of the primary goals of SGP ARM is to study the variability of atmospheric radiation within a GCM grid-cell. Enough surface data has been collected by ARM to extend this goal to sub-grid variability of the land-surface energy and water budgets. The time period of this study is the summer of 1998 (June 1 - September 1). The ARM site data consist of surface meteorology, energy flux (eddy correlation and Bowen ratio), and soil water observations spread over an area similar to the size of a GCM grid-cell. Various ARM stations are described as wheat and alfalfa crops, pasture and range land. The LSM tiles considered at the grid-space (2° x 2.5°) nearest the ARM site include grassland, deciduous forests, bare soil and dwarf trees. Surface energy and water balances for each tile type are compared with observations. Furthermore, we will discuss the land-surface sub-grid variability of both the ARM observations and the DAS.
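The Mosaic approach reduces, at the grid scale, to a tile-fraction-weighted sum: every tile sees the same grid-scale forcing, but each responds with its own surface fluxes. The tile fractions and flux values below are hypothetical numbers for illustration, not GEOS DAS output; only the tile types are taken from the abstract.

```python
# Grid-cell flux as a tile-fraction-weighted sum (the Mosaic idea).
# Fractions and fluxes are hypothetical illustration values.
tiles = {                      # fraction of cell, sensible heat flux (W/m^2)
    "grassland":        (0.50, 120.0),
    "deciduous forest": (0.30,  90.0),
    "bare soil":        (0.15, 160.0),
    "dwarf trees":      (0.05,  80.0),
}
assert abs(sum(frac for frac, _ in tiles.values()) - 1.0) < 1e-12

cell_flux = sum(frac * flux for frac, flux in tiles.values())
# 0.5*120 + 0.3*90 + 0.15*160 + 0.05*80 = 115 W/m^2
```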
NASA Technical Reports Server (NTRS)
Oreopoulos, L.; Chou, M.-D.; Khairoutdinov, M.; Barker, H. W.; Cahalan, R. F.
2003-01-01
We test the performance of the shortwave (SW) and longwave (LW) Column Radiation Models (CORAMs) of Chou and collaborators with heterogeneous cloud fields from a global single-day dataset produced by NCAR's Community Atmospheric Model with a 2-D CRM installed in each gridbox. The original SW version of the CORAM performs quite well compared to reference Independent Column Approximation (ICA) calculations for boundary fluxes, largely due to the success of a combined overlap and cloud scaling parameterization scheme. The absolute magnitude of errors relative to ICA is even smaller for the LW CORAM, which applies similar overlap. The vertical distribution of heating and cooling within the atmosphere is also simulated quite well, with daily-averaged zonal errors always below 0.3 K/d for SW heating rates and 0.6 K/d for LW cooling rates. The SW CORAM's performance improves by introducing a scheme that accounts for cloud inhomogeneity. These results suggest that previous studies demonstrating the inaccuracy of plane-parallel models may have unfairly focused on worst-case scenarios, and that current radiative transfer algorithms of General Circulation Models (GCMs) may be more capable than previously thought of estimating realistic spatial and temporal averages of radiative fluxes, as long as they are provided with correct mean cloud profiles. However, even if the errors of the particular CORAMs are small, they seem to be systematic, and the impact of the biases can be fully assessed only with GCM climate simulations.
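The ICA reference used above averages the radiative result over subcolumns rather than averaging the cloud first. Because albedo is a concave function of optical depth, applying a plane-parallel solver to the mean cloud overestimates the ICA albedo (Jensen's inequality) — the classic plane-parallel albedo bias. The albedo function a(tau) = tau/(tau + c) below is a stand-in for a real two-stream solver, and the lognormal cloud field is synthetic.

```python
import numpy as np

# ICA vs plane-parallel: average of results vs result of averages.
# a(tau) is a concave stand-in for a two-stream albedo, not a real solver.
def albedo(tau, c=7.0):
    return tau / (tau + c)

rng = np.random.default_rng(0)
tau = rng.lognormal(mean=2.0, sigma=1.0, size=10_000)   # heterogeneous cloud

ica = albedo(tau).mean()            # average of results  (ICA)
pp = albedo(tau.mean())             # result of averages  (plane-parallel)
assert pp > ica                     # the plane-parallel albedo bias
```

The overlap-and-scaling schemes credited in the abstract work precisely by reducing this gap without running every subcolumn.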
NASA Astrophysics Data System (ADS)
Storelvmo, T.
2015-12-01
Substantial improvements have been made to the cloud microphysical schemes used in the latest generation of global climate models (GCMs); however, an outstanding weakness of these schemes lies in the arbitrariness of their tuning parameters. Despite the growing effort to improve cloud microphysical schemes in GCMs, most of this effort has not focused on improving the ability of GCMs to accurately simulate phase partitioning in mixed-phase clouds. Getting the relative proportion of liquid droplets and ice crystals in clouds right in GCMs is critical for the representation of cloud radiative forcings and cloud-climate feedbacks. Here, we first present satellite observations of cloud phase obtained by NASA's CALIOP instrument, and report on robust statistical relationships between cloud phase and several aerosol species that have been demonstrated to act as ice nuclei (IN) in laboratory studies. We then report on results from model intercomparison projects that reveal that GCMs generally underestimate the amount of supercooled liquid in clouds. For a selected GCM (NCAR's CAM5), we thereafter show that the underestimate can be attributed to two main factors: i) the presence of IN in the mixed-phase temperature range, and ii) the Wegener-Bergeron-Findeisen process, which converts liquid to ice once ice crystals have formed. Finally, we show that adjusting these two processes so that the GCM's cloud phase agrees with the observations has a substantial impact on the simulated radiative forcing due to IN perturbations, as well as on the cloud-climate feedbacks and ultimately the climate sensitivity simulated by the GCM.
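A common way to summarize the phase partitioning discussed above is a supercooled-liquid-fraction curve over the mixed-phase range (0 to about -38 C, below which droplets freeze homogeneously). The logistic ramp and its midpoint below are assumptions for illustration, not CALIOP retrievals or CAM5 output.

```python
import numpy as np

# Illustrative supercooled-liquid-fraction curve for the mixed-phase range.
# Ramp shape, midpoint, and width are assumed; only the 0 C and -38 C
# endpoints reflect standard cloud physics.
def liquid_fraction(t_c, t_mid=-20.0, width=6.0):
    """Fraction of condensate that is liquid at temperature t_c (deg C)."""
    f = 1.0 / (1.0 + np.exp(-(t_c - t_mid) / width))
    return np.where(t_c >= 0.0, 1.0, np.where(t_c <= -38.0, 0.0, f))

t = np.array([0.0, -10.0, -20.0, -30.0, -40.0])
slf = liquid_fraction(t)     # monotonically more ice as it cools
```

In the abstract's terms, adding IN or strengthening the Wegener-Bergeron-Findeisen conversion shifts such a curve toward warmer midpoints, i.e., less supercooled liquid.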
Global Modeling, Field Campaigns, Upscaling and Ray Desjardins
NASA Technical Reports Server (NTRS)
Sellers, P. J.; Hall, F. G.
2012-01-01
In the early 1980's, it became apparent that land surface radiation and energy budgets were unrealistically represented in global circulation models (GCMs). Shortly thereafter, it became clear that the land carbon budget was also poorly represented in Earth System Models (ESMs). A number of scientific communities, including GCM/ESM modelers, micrometeorologists, satellite data specialists and plant physiologists, came together to design field experiments that could be used to develop and validate the contemporary prototype land surface models. These experiments were designed to measure land surface fluxes of radiation, heat, water vapor and CO2 using a network of flux towers and other plot-scale techniques, coincident with satellite measurements of related state variables. The interdisciplinary teams involved in these experiments quickly became aware of the scale gap between plot-scale measurements (approx. 10 - 100 m), satellite measurements (100 m - 10 km), and GCM grid areas (10 - 200 km). At the time, there was no established flux measurement capability to bridge these scale gaps. Then a Canadian science team led by Ray Desjardins began to participate actively in the design and execution of the experiments, with airborne eddy correlation providing the radically innovative bridge across the scale gaps. In a succession of brilliantly executed field campaigns followed up by convincing scientific analyses, they demonstrated that airborne eddy correlation allied with satellite data was the most powerful upscaling tool available to the community. The rest is history: the realism and credibility of weather and climate models have improved enormously over the last 25 years, with immense benefits to the public and policymakers.
NASA Technical Reports Server (NTRS)
Hourdin, Frederic; Forget, Francois; Talagrand, O.
1993-01-01
We have been developing a General Circulation Model (GCM) of the martian atmosphere since 1989. The model has been described rather extensively elsewhere and only the main characteristics are given here. The dynamical part of the model, adapted from the LMD terrestrial climate model, is based on a finite-difference formulation of the classical 'primitive equations of meteorology.' The radiative transfer code includes absorption and emission by CO2 (carefully validated by comparison to line-by-line calculations) and dust in the thermal range and absorption and scattering by dust in the visible range. Other physical parameterizations are included: modeling of vertical turbulent mixing, dry convective adjustment (in order to prevent vertical unstable temperature profiles), and a multilayer model of the thermal conduction in the soil. Finally, the condensation-sublimation of CO2 is introduced through specification of a pressure-dependent condensation temperature. The atmospheric and surface temperatures are prevented from falling below this critical temperature by condensation and direct precipitation onto the surface of atmospheric CO2. The only prespecified spatial fields are the surface thermal inertia, albedo, and topography.
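The CO2 condensation scheme described above amounts to clamping the temperature at the local, pressure-dependent condensation temperature. The sketch below uses one commonly quoted Clausius-Clapeyron-type fit for the CO2 frost point (pressure in hPa); the abstract does not give the constants the LMD model actually uses, so treat them as approximate illustration values.

```python
import numpy as np

# Pressure-dependent CO2 condensation temperature and the clamp used when
# atmospheric CO2 condenses out. Fit constants are approximate literature
# values (p in hPa), not necessarily those of the LMD model.
A, B = 23.3494, 3182.48

def t_cond(p_hpa):
    """CO2 condensation temperature (K) at pressure p_hpa (hPa)."""
    return B / (A - np.log(p_hpa))

def apply_co2_condensation(temp, p_hpa):
    """Prevent temperatures from falling below the condensation point."""
    return np.maximum(temp, t_cond(p_hpa))

p = np.array([6.1, 1.0])                 # hPa: Mars-like surface, upper level
temp = np.array([140.0, 120.0])          # K, artificially cold profile
capped = apply_co2_condensation(temp, p)
```

In the full model the clamped energy deficit becomes CO2 frost precipitated onto the surface rather than simply discarded heat.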
Observational and Modeling Studies of Clouds and the Hydrological Cycle
NASA Technical Reports Server (NTRS)
Somerville, Richard C. J.
1997-01-01
Our approach involved validating parameterizations directly against measurements from field programs, and using this validation to tune existing parameterizations and to guide the development of new ones. We used a single-column model (SCM) to make the link between observations and parameterizations of clouds, including explicit cloud microphysics (e.g., prognostic cloud liquid water used to determine cloud radiative properties). Surface and satellite radiation measurements were used to provide an initial evaluation of the performance of the different parameterizations. The results of this evaluation were then used to develop improved cloud and cloud-radiation schemes, which were tested in GCM experiments.
Status of Middle Atmosphere-Climate Models: Results SPARC-GRIPS
NASA Technical Reports Server (NTRS)
Pawson, Steven; Kodera, Kunihiko
2003-01-01
The middle atmosphere is an important component of the climate system, primarily because of the radiative forcing of ozone. Middle atmospheric ozone can change over long times because of changes in the abundance of anthropogenic pollutants that catalytically destroy it, and because of the temperature sensitivity of kinetic reaction rates. There is thus a complex interaction between ozone and climate, involving chemical and climatic mechanisms. One question of interest is how ozone will change over the next decades, as the "greenhouse-gas cooling" of the middle atmosphere increases but the concentrations of chlorine species decrease (because of policy changes). A major obstacle when addressing this question concerns the climate biases in current middle atmosphere-climate models, especially their ability to simulate the correct seasonal cycle at high latitudes, and the existence of temperature biases in the global mean. This paper will present a summary of recent results from the "GCM-Reality Intercomparison Project for SPARC" (GRIPS) initiative. A set of middle atmosphere-climate models has been compared, identifying common biases. Mechanisms for these biases are being studied in some detail, including off-line assessments of the radiation transfer codes and coordinated studies of the impacts of gravity wave drag due to sub-grid-scale processes. Results from the ensemble of models will be presented, along with numerical experiments undertaken with one or more models, designed to investigate the mechanisms at work in the atmosphere. The discussion will focus on dynamical and radiative mechanisms in the current climate, but implications for coupled ozone chemistry and the future climate will be assessed.
Spacecraft shielding for a Mars mission
NASA Astrophysics Data System (ADS)
O'Brien, K.
Calculations of the effective radiation dose due to cosmic rays in the interplanetary medium between Earth and Mars show that, as in the atmosphere above the Pfotzer maximum, the dose rate increases with increasing wall thickness. An unshielded space crew member would receive almost 70 rem (0.70 Sv) a year. A typically proposed composite spacecraft hull of aluminum and polyethylene would increase the dose rate by a few percent. However, 100 g/cm2 of almost any light material would more than double the cosmic radiation exposure of the crew.
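The counterintuitive result above — more shielding, more dose — can be illustrated with a toy buildup-attenuation model in which secondary-particle production (a buildup factor growing with areal density) outpaces the weak attenuation of GCR primaries. The parameters are illustrative choices, not fitted to the reported doses; only the unshielded 70 rem/y figure is taken from the abstract.

```python
import numpy as np

# Toy model: dose rate behind a shield of areal density x (g/cm^2).
# Buildup (1 + a*x) from secondaries vs weak primary attenuation
# exp(-x/lam). a and lam are illustrative, not fitted values.
def dose_rate(x, d0=70.0, a=0.02, lam=500.0):
    """Annual dose rate (rem/y); d0 is the unshielded rate from the abstract."""
    return d0 * (1.0 + a * x) * np.exp(-x / lam)

x = np.array([0.0, 22.0, 100.0])       # none, composite hull, storm cellar
d = dose_rate(x)                       # monotonically increasing here
```

For heavy charged primaries with long interaction lengths, the secondary cascade dominates until far greater depths than any practical hull provides, which is the regime this sketch caricatures.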
Monitoring Carbon Fluxes from Shallow Surface Soils in the Critical Zone
NASA Astrophysics Data System (ADS)
Stielstra, C. M.; Brooks, P. D.; Chorover, J.
2011-12-01
The critical zone (CZ) is the Earth's porous near-surface layer, characterized by the integrated processes that occur between the bedrock and the atmospheric boundary layer. Within this zone, water, atmosphere, ecosystems, and soils interact on a geomorphic and geologic template. We hypothesize that CZ systems organize and evolve in response to open-system fluxes of energy and mass, including meteoric inputs of radiation, water, and carbon, which can be quantified at point to watershed scales. The goal of this study is to link above-ground and below-ground carbon processes by quantifying carbon pools and fluxes from near-surface soils. Soil CO2 efflux and dissolved organic carbon (DOC) are monitored over a two-year period across bedrock type and vegetation type at two seasonally snow-covered subalpine catchments in Arizona and New Mexico. We measure the amount of DOC present in surface soils, and install ion exchange resins at the A/B soil horizon interface to capture DOC leachate mobilized during snowmelt and summer rainfall. Throughout the summer rain and spring snowmelt seasons we monitor soil respiration of CO2. Preliminary results show that rates of gaseous carbon flux are significantly higher (p<0.05) from soils with schist bedrock (2.5 ± 0.2 gC/m2/d) than from granite bedrock (1.3 ± 0.1 gC/m2/d), and higher from healthy mixed conifer forests (1.9 ± 0.3 gC/m2/d) than from mixed conifer forests impacted by spruce budworm (1.4 ± 0.1 gC/m2/d). DOC leached from soil samples does not vary significantly with bedrock type; however, spruce budworm impacted forests have significantly higher levels of leachable DOC in surface soils (22.8 ± 4.5 gC/m2) than are found in the soils of healthy forests (10.0 ± 1.5 gC/m2) or subalpine meadows (9.1 ± 0.5 gC/m2).
The results of this study will allow us to evaluate the variability of carbon fluxes with vegetation and soil type within a shallow soil carbon pool and help constrain the contributions of soil organic carbon to net carbon balance in CZO catchments with seasonal precipitation regimes.
Sensitivity simulations of superparameterised convection in a general circulation model
NASA Astrophysics Data System (ADS)
Rybka, Harald; Tost, Holger
2015-04-01
Cloud Resolving Models (CRMs), covering horizontal grid spacings from a few hundred meters up to a few kilometers, have been used to explicitly resolve small-scale and mesoscale processes. Special attention has been paid to realistically representing cloud dynamics and cloud microphysics involving cloud droplets, ice crystals, graupel and aerosols. The entire variety of small-scale physical processes interacts with the larger-scale circulation and has to be parameterised on the coarse grid of a general circulation model (GCM). For more than a decade, an approach has been developed to connect these two types of models, which act on different scales, in order to resolve cloud processes and their interactions with the large-scale flow. The concept is to use an ensemble of CRM grid cells in a 2D or 3D configuration in each grid cell of the GCM to explicitly represent small-scale processes, avoiding the use of convection and large-scale cloud parameterisations, which are a major source of uncertainty regarding clouds. The idea is commonly known as superparameterisation or cloud-resolving convection parameterisation. This study presents different simulations of an adapted Earth System Model (ESM) connected to a CRM which acts as a superparameterisation. Simulations have been performed with the ECHAM/MESSy atmospheric chemistry (EMAC) model, comparing conventional GCM runs (including convection and large-scale cloud parameterisations) with the superparameterised EMAC (SP-EMAC), modeling one year with prescribed sea surface temperatures and sea ice content. The sensitivity of atmospheric temperature, precipitation patterns, and cloud amount and types is examined by changing the embedded CRM representation (orientation, width, number of CRM cells, 2D vs. 3D). Additionally, we also evaluate the radiation balance with the new model configuration, and systematically analyse the impact of tunable parameters on the radiation budget and hydrological cycle.
Furthermore, the subgrid variability (individual CRM cell output) is analysed in order to illustrate the importance of a highly varying atmospheric structure inside a single GCM grid box. Finally, the convective transport of radon is examined by comparing different transport procedures and their influence on the vertical tracer distribution.
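The superparameterisation coupling loop can be skeletonized as follows: each GCM column hosts a small ensemble of CRM columns; the CRM is substepped over the GCM time step, and the drift of its horizontal mean is returned to the GCM as the "cloud" tendency. The CRM physics below is a placeholder (random perturbations) standing in for real cloud-resolving dynamics; all names and values are hypothetical.

```python
import numpy as np

# Skeleton of one superparameterisation coupling step. The embedded "CRM"
# here is a stand-in with random perturbations, not a real cloud model.
rng = np.random.default_rng(1)

def crm_substep(state, n_sub=10):
    """Advance the embedded CRM columns; placeholder dynamics."""
    for _ in range(n_sub):
        state = state + rng.normal(0.0, 0.01, size=state.shape)
    return state

def superparam_tendency(gcm_value, crm_state, dt=1800.0):
    """CRM-mean drift over the GCM step, returned as a large-scale tendency."""
    crm_state = crm_substep(crm_state)
    tendency = (crm_state.mean() - gcm_value) / dt
    return tendency, crm_state

crm = np.full(32, 290.0)                 # 32 CRM columns in one GCM cell
tend, crm = superparam_tendency(290.0, crm)
```

The sensitivity experiments in the abstract (orientation, width, number of cells, 2D vs. 3D) all amount to changing the shape and size of the `crm` array and the embedded dynamics while keeping this coupling pattern fixed.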
NASA Astrophysics Data System (ADS)
Grosheintz, Luc; Mendonça, João; Käppeli, Roger; Lukas Grimm, Simon; Mishra, Siddhartha; Heng, Kevin
2015-12-01
In this talk, I will present THOR, the first fully conservative, GPU-accelerated exo-GCM (general circulation model) on a nearly uniform, global grid that treats shocks and is non-hydrostatic. THOR will be freely available to the community as a standard tool. Unlike most GCMs, THOR solves the full, non-hydrostatic Euler equations instead of the primitive equations. The equations are solved on a global three-dimensional icosahedral grid by a second-order Finite Volume Method (FVM). Icosahedral grids are nearly uniform refinements of an icosahedron. We've implemented three different versions of this grid. FVM conserves the prognostic variables (density, momentum and energy) exactly and doesn't require a diffusion term (artificial viscosity) in the Euler equations to stabilize our solver. Historically, FVM was designed to treat discontinuities correctly. Hence it excels at resolving shocks, including those present in hot exoplanetary atmospheres. Atmospheres are generally in near-hydrostatic equilibrium. We therefore implement a well-balancing technique recently developed at ETH Zurich. This well-balancing ensures that our FVM maintains hydrostatic equilibrium to machine precision. Better yet, it is able to resolve pressure perturbations from this equilibrium as small as one part in 100,000. It is important to realize that these perturbations are significantly smaller than the truncation error of the same scheme without well-balancing. If during the course of the simulation (due to forcing) the atmosphere becomes non-hydrostatic, our solver continues to function correctly. THOR just passed an important milestone: we've implemented the explicit part of the solver. The explicit solver is useful for studying instabilities or local problems on relatively short time scales. I'll show some nice properties of the explicit THOR. An explicit solver is not appropriate for climate study because the time step is limited by the sound speed.
Therefore, we are working on the first fully implicit GCM. By ESS3, I hope to present results for the advection equation. THOR is part of the Exoclimes Simulation Platform (ESP), a set of open-source community codes for simulating and understanding the atmospheres of exoplanets. The ESP also includes tools for radiative transfer and retrieval (HELIOS), an opacity calculator (HELIOS-K), and a chemical kinetics solver (VULCAN). We expect to publicly release an initial version of THOR in 2016 on www.exoclime.org.
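The "nearly uniform refinements of an icosahedron" mentioned above have simple counting rules: each refinement splits every triangle into four, adding a vertex at each edge midpoint, so V' = V + E, E' = 2E + 3F, F' = 4F, giving the closed form V_l = 10 * 4^l + 2 vertices at level l. This is standard geodesic-grid geometry, not a detail of THOR's particular implementation.

```python
# Counts for a recursively refined icosahedral grid. Each refinement:
# V' = V + E (new vertex per edge), E' = 2E + 3F, F' = 4F.
def icosahedral_counts(level):
    v, e, f = 12, 30, 20                  # the icosahedron itself
    for _ in range(level):
        v, e, f = v + e, 2 * e + 3 * f, 4 * f
    assert v - e + f == 2                 # Euler's formula as a sanity check
    return v, e, f

for lvl in range(6):
    v, _, _ = icosahedral_counts(lvl)
    assert v == 10 * 4**lvl + 2           # closed-form vertex count
```

The near-uniform cell sizes of this family are what let a GCM avoid the polar clustering (and tiny CFL-limited time steps) of latitude-longitude grids.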
A long-term simulation of forest carbon fluxes over the Qilian Mountains
NASA Astrophysics Data System (ADS)
Yan, Min; Tian, Xin; Li, Zengyuan; Chen, Erxue; Li, Chunmei; Fan, Wenwu
2016-10-01
In this work, we integrated a remote-sensing-based model (the MODIS MOD_17 Gross Primary Productivity (GPP) model (MOD_17)) and a process-based model (the Biome-BioGeochemical Cycles (Biome-BGC) model) in order to estimate long-term (2000 to 2012) forest carbon fluxes over the Qilian Mountains in northwest China, a cold and arid forest ecosystem. Our goal was to obtain an accurate and quantitative simulation of spatial GPP patterns using the MOD_17 model and a temporal description of forest processes using the Biome-BGC model. The original MOD_17 model was first optimized using a biome-specific parameter, observed meteorological data, and reproduced fPAR at the eddy covariance site. The optimized MOD_17 model performed much better (R2 = 0.91, RMSE = 5.19 gC/m2/8d) than the original model (R2 = 0.47, RMSE = 20.27 gC/m2/8d). The Biome-BGC model was then calibrated using GPP for 30 representative forest plots selected from the optimized MOD_17 model. The calibrated Biome-BGC model was then driven in order to estimate forest GPP, net primary productivity (NPP), and net ecosystem exchange (NEE). GPP and NEE were validated against two years (2010 and 2011) of EC measurements (R2 = 0.79, RMSE = 1.15 gC/m2/d for GPP; and R2 = 0.69, RMSE = 1.087 gC/m2/d for NEE). NPP estimates from 2000 to 2012 were then compared to dendrochronological measurements (R2 = 0.73, RMSE = 24.46 gC/m2/yr). Our results indicated that the integration of the two models can be used for estimating carbon fluxes with good accuracy and a high temporal and spatial resolution. Overall, NPP displayed a downward trend, at an average rate of 0.39 gC/m2/yr, from 2000 to 2012 over the Qilian Mountains. Simulated average annual NPP yielded higher values for the southeast as compared to the northwest. The climatic factor most positively correlated with average annual NPP was downward shortwave radiation. Vapor pressure deficit, mean temperature and precipitation were negatively correlated with average annual NPP.
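The MOD_17 model calibrated above follows a light-use-efficiency structure: a maximum efficiency is downscaled by minimum-temperature and vapour-pressure-deficit ramps and applied to absorbed PAR. The sketch below shows that structure; the threshold and efficiency values are placeholders, not the biome-specific parameters the authors calibrated for the Qilian Mountains.

```python
import numpy as np

def ramp(x, lo, hi):
    """Linear 0-1 scalar between thresholds lo and hi."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

# MOD_17-style light-use-efficiency GPP. All parameter values below are
# placeholders standing in for the calibrated biome-specific lookup table.
def gpp_mod17(sw_rad, fpar, tmin_c, vpd_pa,
              lue_max=0.001165,           # kgC per MJ APAR (placeholder)
              tmin_lo=-8.0, tmin_hi=9.0,  # deg C ramp (placeholder)
              vpd_lo=650.0, vpd_hi=3100.0):
    par = 0.45 * sw_rad                   # PAR as ~45% of shortwave, MJ/m^2/d
    scalar = ramp(tmin_c, tmin_lo, tmin_hi) * (1.0 - ramp(vpd_pa, vpd_lo, vpd_hi))
    return lue_max * scalar * fpar * par  # kgC/m^2/d

g = gpp_mod17(sw_rad=25.0, fpar=0.7, tmin_c=5.0, vpd_pa=1000.0)
```

The negative VPD correlation reported in the abstract enters directly through the (1 - ramp) term: drier air closes stomata and cuts the realized light-use efficiency.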
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, David L.
It is well known that cirrus clouds play a major role in regulating the Earth's climate, but the details of how this works are just beginning to be understood. This project targeted the main property of cirrus clouds that influences climate processes: the ice fall speed. That is, this project improves the representation of the mass-weighted ice particle fall velocity, Vm, in climate models used to predict future climate on global and regional scales. Prior to 2007, the dominant sizes of ice particles in cirrus clouds were poorly understood, making it virtually impossible to predict how cirrus clouds interact with sunlight and thermal radiation. Due to several studies investigating the performance of optical probes used to measure the ice particle size distribution (PSD), as well as the remote sensing results from our last ARM project, it is now well established that the anomalously high concentrations of small ice crystals often reported prior to 2007 were measurement artifacts. Advances in the design and data processing of optical probes have greatly reduced these ice artifacts that resulted from the shattering of ice particles on the probe tips and/or inlet tube, and PSD measurements from one of these improved probes (the 2-dimensional Stereo or 2D-S probe) are utilized in this project to parameterize Vm for climate models. Our original plan in the proposal was to parameterize the ice PSD (in terms of temperature and ice water content) and ice particle mass and projected area (in terms of mass- and area-dimensional power laws or m-D/A-D expressions), since these are the microphysical properties that determine Vm, and then proceed to calculate Vm from these parameterized properties. But the 2D-S probe directly measures ice particle projected area and indirectly estimates ice particle mass for each size bin.
It soon became apparent that the original plan would introduce more uncertainty in the Vm calculations than simply using the 2D-S measurements to calculate Vm directly. By calculating Vm directly from the measured PSD, ice particle projected area, and estimated mass, more accurate estimates of Vm are obtained. These Vm values were then parameterized for climate models by relating them to (1) sampling temperature and ice water content (IWC) and (2) the effective diameter (De) of the ice PSD. Parameterization (1) is appropriate for climate models having single-moment microphysical schemes, whereas (2) is appropriate for double-moment microphysical schemes and yields more accurate Vm estimates. These parameterizations were developed for tropical cirrus, Arctic cirrus, mid-latitude synoptic cirrus, and mid-latitude anvil cirrus clouds based on field campaigns in these regions. An important but unexpected result of this research was the discovery of microphysical evidence indicating the mechanisms by which ice crystals are produced in cirrus clouds. This evidence, derived from PSD measurements, indicates that homogeneous freezing ice nucleation dominates in mid-latitude synoptic cirrus clouds, whereas heterogeneous ice nucleation processes dominate in mid-latitude anvil cirrus. Based on these findings, De was parameterized in terms of temperature (T) for conditions dominated by (1) homogeneous and (2) heterogeneous ice nucleation. From this, an experiment was designed for global climate models (GCMs). The net radiative forcing from cirrus clouds may be affected by the means by which ice is produced (homo- or heterogeneously), and this net forcing contributes to climate sensitivity (i.e., the change in mean global surface temperature resulting from a doubling of CO2). The objective of this GCM experiment was to determine how a change in ice nucleation mode affects the predicted global radiation balance.
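As a sketch of the direct calculation described above, the mass-weighted fall speed can be computed bin-by-bin from the PSD as Vm = Σ mᵢNᵢvᵢ / Σ mᵢNᵢ. The bin values below are illustrative placeholders, not 2D-S data:

```python
import numpy as np

# Hypothetical size-bin data: bin mass m_i (kg), number concentration
# N_i (m^-3), and terminal fall speed v_i (m/s). Values are illustrative.
m = np.array([1e-11, 5e-11, 2e-10, 8e-10])
N = np.array([2e4,   1e4,   3e3,   5e2])
v = np.array([0.05,  0.15,  0.40,  0.90])

def mass_weighted_fall_speed(m, N, v):
    """Mass-weighted fall velocity: Vm = sum(m N v) / sum(m N)."""
    return np.sum(m * N * v) / np.sum(m * N)

Vm = mass_weighted_fall_speed(m, N, v)
```

Because the weighting is by mass rather than number, Vm is pulled toward the fall speeds of the larger, rarer particles that carry most of the condensed water.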
In the first simulation (Run 1), the De-T relationship for homogeneous nucleation is used at all latitudes, while in the second simulation (Run 2), the De-T relationship for heterogeneous nucleation is used at all latitudes. For both runs, Vm is calculated from De. Two GCMs were used: the Community Atmosphere Model version 5 (CAM5) and a European GCM known as ECHAM5 (thanks to our European colleagues who collaborated with us). Similar results were obtained from both GCMs in the Northern Hemisphere mid-latitudes, with a net cooling of ~1.0 W m-2 due to heterogeneous nucleation, relative to Run 1. The mean global net cooling was 2.4 W m-2 for the ECHAM5 GCM, while CAM5 produced a mean global net cooling of about 0.8 W m-2. This dependence of the radiation balance on nucleation mode is substantial when one considers that the direct radiative forcing from a CO2 doubling is 4 W m-2. The differences between the GCMs in mean global net cooling estimates may demonstrate a need for improving the representation of cirrus clouds in GCMs, including the coupling between microphysical and radiative properties. Unfortunately, after completing this GCM experiment, we learned from the company that provided the 2D-S microphysical data that the data were corrupted by a coding error in a processing program. The microphysical data therefore had to be reprocessed and reanalyzed, and the GCM experiments were redone under our current ASR project using an improved experimental design.
NASA Technical Reports Server (NTRS)
Dudek, Michael P.; Wang, Wei-Chyung; Liang, Xin-Zhong; Li, Zhu
1994-01-01
The total ozone mapping spectrometer (TOMS) and stratospheric aerosol and gas experiment (SAGE) measurements show a significant reduction in stratospheric ozone over the middle and high latitudes of both hemispheres between the years 1979 and 1991 (WMO, 1992). This change in ozone will affect both the solar and longwave radiation, with climate implications. However, recent studies (Ramaswamy et al., 1992; WMO, 1992) indicate that the net effect depends not only on latitude and season, but also on the response of the lower stratospheric temperature. In this study we use a general circulation model (GCM) to calculate the climatic effect of stratospheric ozone depletion and compare it with that of observed increases in the trace gases CO2, CH4, N2O, and CFCs for the period 1980-1990. In the simulations, we use the observed changes in ozone derived from the TOMS data. The GCM used is a version of the NCAR community climate model referenced in Wang et al. (1991). For the present study we run the model in perpetual January and perpetual July modes, in which the incoming solar radiation and climatological sea surface temperatures are held constant.
Observed Reduction In Surface Solar Radiation - Aerosol Forcing Versus Cloud Feedback?
NASA Astrophysics Data System (ADS)
Liepert, B.
The solar radiation reaching the ground is a key parameter of the climate system. It drives the hydrological cycle and numerous biological processes. Surface solar radiation declined by an estimated 7 W/m2, or 4%, at sites worldwide from 1961 to 1990. The strongest decline occurred at the United States sites, with 19 W/m2 or 10%. Increasing air pollution, and hence the direct and indirect aerosol effects, can, as we know today, explain only part of the reduction in solar radiation. Increasing cloud optical thickness, possibly due to global warming, is a more likely explanation for the observed reduction in solar radiation in the United States. The analysis of surface solar radiation data will be shown and compared with GCM results for the direct and indirect aerosol effects. It will be argued that the residual declines in surface solar radiation are likely due to cloud feedback.
NASA Astrophysics Data System (ADS)
Gu, B.; Yang, P.; Kuo, C. P.; Mlawer, E. J.
2017-12-01
Evaluation of RRTMG and Fu-Liou RTM Performance against LBLRTM-DISORT Simulations and CERES Data in terms of Ice Cloud Radiative Effects. Boyan Gu(1), Ping Yang(1), Chia-Pang Kuo(1), Eli J. Mlawer(2); (1) Department of Atmospheric Sciences, Texas A&M University, College Station, TX 77843, USA; (2) Atmospheric and Environmental Research (AER), Lexington, MA 02421, USA. Ice clouds play an important role in the climate system, especially in the Earth's radiation balance and hydrological cycle. However, the representation of ice cloud radiative effects (CRE) remains subject to significant uncertainty, because the scattering properties of ice clouds are not well treated in general circulation models (GCMs). We analyze the strengths and weaknesses of the Rapid Radiative Transfer Model for GCM Applications (RRTMG) and the Fu-Liou Radiative Transfer Model (RTM) against rigorous LBLRTM-DISORT (a combination of the Line-By-Line Radiative Transfer Model and the Discrete Ordinate Radiative Transfer Model) calculations and CERES (Clouds and the Earth's Radiant Energy System) flux observations. In total, 6 US standard atmospheric profiles and 42 atmospheric profiles from Atmospheric and Environmental Research (AER) are used to evaluate the RRTMG and Fu-Liou RTM against LBLRTM-DISORT calculations from 0 to 3250 cm-1. Ice cloud radiative effect simulations with RRTMG and the Fu-Liou RTM are initialized using the ice cloud properties from MODIS Collection-6 products. Simulations of single-layer ice cloud CRE by RRTMG and LBLRTM-DISORT show that RRTMG, neglecting scattering, overestimates the TOA flux by about 0-15 W/m2 depending on the cloud particle size and optical depth; the most significant overestimation occurs when the particle effective radius is small (around 10 μm) and the cloud optical depth is intermediate (about 1-10). The overestimation is reduced significantly when the similarity rule is applied to RRTMG.
We combine ice cloud properties from MODIS Collection-6 and atmospheric profiles from the Modern-Era Retrospective Analysis for Research and Applications-2 (MERRA2) reanalysis to simulate ice cloud CRE, which is compared with CERES observations.
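The similarity-rule adjustment mentioned above rescales cloud optical depth so that an absorption-only longwave solver mimics a scattering layer. The exact scaling differs between implementations; the sketch below uses the standard two-stream similarity form with illustrative ice-cloud values, not the actual RRTMG treatment:

```python
import math

def similarity_scaled_tau(tau, omega0, g):
    """Effective absorption optical depth for a scattering layer, via the
    two-stream similarity relation: tau' = sqrt((1-w0)(1-w0*g)) * tau.
    omega0 is the single-scattering albedo, g the asymmetry factor."""
    return math.sqrt((1.0 - omega0) * (1.0 - omega0 * g)) * tau

# Illustrative (invented) ice-cloud values: tau=5, omega0=0.5, g=0.9.
tau_eff = similarity_scaled_tau(5.0, 0.5, 0.9)
```

With no scattering (omega0 = 0) the scaling leaves the optical depth unchanged, as it should.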
NASA Technical Reports Server (NTRS)
Chen, Wei-Ting; Liao, Hong; Seinfeld, John H.
2007-01-01
Long-lived greenhouse gases (GHGs) are the most important driver of climate change over the next century. Aerosols and tropospheric ozone (O3) are expected to induce significant perturbations to the GHG-forced climate. To distinguish the equilibrium climate responses to changes in direct radiative forcing of anthropogenic aerosols, tropospheric ozone, and GHGs between the present day and year 2100, four 80-year equilibrium climates are simulated using a unified tropospheric chemistry-aerosol model within the Goddard Institute for Space Studies (GISS) general circulation model (GCM) II'. Concentrations of sulfate, nitrate, primary organic aerosol (POA), secondary organic aerosol (SOA), and black carbon (BC), as well as tropospheric ozone, for the present day and year 2100 are obtained a priori from coupled chemistry-aerosol GCM simulations, with emissions of aerosols, ozone, and precursors based on the Intergovernmental Panel on Climate Change (IPCC) Special Report on Emissions Scenarios (SRES) A2. Changing anthropogenic aerosols, tropospheric ozone, and GHGs from the present day to year 2100 is predicted to perturb the global annual mean radiative forcing by +0.18 (considering aerosol direct effects only), +0.65, and +6.54 W m-2 at the tropopause, and to induce an equilibrium global annual mean surface temperature change of +0.14, +0.32, and +5.31 K, respectively, with the largest temperature response occurring at northern high latitudes. Anthropogenic aerosols, through their direct effect, are predicted to alter the Hadley circulation owing to an increasing interhemispheric temperature gradient, leading to changes in tropical precipitation. When changes in both aerosols and tropospheric ozone are considered, the predicted patterns of change in global circulation and the hydrological cycle are similar to those induced by aerosols alone.
GHG-induced climate changes, such as amplified warming over high latitudes, weakened Hadley circulation, and increasing precipitation over the Tropics and high latitudes, are consistent with predictions of a number of previous GCM studies. Finally, direct radiative forcing of anthropogenic aerosols is predicted to induce strong regional cooling over East and South Asia. Wintertime rainfall over southeastern China and the Indian subcontinent is predicted to decrease because of the increased atmospheric stability and decreased surface evaporation, while the geographic distribution of precipitation is also predicted to be altered as a result of aerosol-induced changes in wind flow.
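A quick consistency check on the numbers quoted above is the implied climate sensitivity parameter λ = ΔT/ΔF for each forcing agent. This back-of-envelope division is our illustration, not a calculation from the study itself:

```python
# Forcing (W m^-2) and equilibrium temperature response (K) pairs quoted
# in the abstract above; lambda = dT / dF in K per W m^-2.
pairs = {
    "aerosols (direct)": (0.18, 0.14),
    "tropospheric O3":   (0.65, 0.32),
    "GHG":               (6.54, 5.31),
}
sensitivity = {name: dT / dF for name, (dF, dT) in pairs.items()}
```

The three agents imply broadly similar, though not identical, sensitivity parameters, which is the usual finding for forcings with different spatial patterns.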
NASA Technical Reports Server (NTRS)
Schwemmer, Geary K.; Miller, David O.
2005-01-01
Clouds have a powerful influence on atmospheric radiative transfer and hence are crucial to understanding and interpreting the exchange of radiation between the Earth's surface, the atmosphere, and space. Because clouds are highly variable in space, time, and physical makeup, it is important to be able to observe them in three dimensions (3-D) with sufficient resolution that the data can be used to generate and validate parameterizations of cloud fields at the resolution scale of global climate models (GCMs). Simulations of photon transport in three-dimensionally inhomogeneous cloud fields show that spatial inhomogeneities tend to decrease cloud reflection and absorption and increase direct and diffuse transmission. It is therefore an important task to characterize cloud spatial structures in three dimensions on the scale of GCM grid elements. In order to validate cloud parameterizations that represent the ensemble, or mean and variance, of cloud properties within a GCM grid element, measurements of the parameters must be obtained on a much finer scale so that the statistics on those measurements are truly representative. High spatial sampling resolution is required, on the order of 1 km or less. Since the radiation fields respond almost instantaneously to changes in the cloud field, and cloud changes occur on scales of seconds and less when viewed on scales of approximately 100 m, the temporal resolution of cloud properties should be measured and characterized on second time scales. GCM time steps are typically on the order of an hour, but in order to obtain sufficient statistical representations of cloud properties in the parameterizations that are used as model inputs, averaged values of cloud properties should be calculated on time scales on the order of 10-100 s.
The Holographic Airborne Rotating Lidar Instrument Experiment (HARLIE) provides exceptional temporal (100 ms) and spatial (30 m) resolution measurements of aerosol and cloud backscatter in three dimensions. HARLIE was used in a ground-based configuration in several recent field campaigns. Principal data products include aerosol backscatter profiles, boundary layer heights, entrainment zone thickness, cloud fraction as a function of altitude and horizontal wind vector profiles based on correlating the motions of clouds and aerosol structures across portions of the scan. Comparisons will be made between various cloud detecting instruments to develop a baseline performance metric.
Comparison of space radiation calculations for deterministic and Monte Carlo transport codes
NASA Astrophysics Data System (ADS)
Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo
For space radiation protection of astronauts or electronic equipment, it is necessary to develop and use accurate radiation transport codes. Radiation transport codes include deterministic codes, such as HZETRN from NASA and UPROP from the Naval Research Laboratory, and Monte Carlo codes such as FLUKA, the Geant4 toolkit, and HETC-HEDS. The two approaches complement each other in that deterministic codes are very fast while Monte Carlo codes are more detailed. It is therefore important to investigate how well the results of deterministic codes compare with those of Monte Carlo transport codes and where they differ. In this study we evaluate these codes in their space radiation applications by comparing their outputs for the same given space radiation environments, shielding geometry, and material. Typical space radiation environments such as the 1977 solar minimum galactic cosmic ray environment are used as the well-defined input, and simple geometries made of aluminum, water, and/or polyethylene are used to represent the shielding material. We then compare various outputs of these codes, such as the dose-depth curves and the flux spectra of different fragments and other secondary particles. These comparisons enable us to learn more about the main differences between these space radiation transport codes. At the same time, they help us to identify the qualitative and quantitative features that these transport codes have in common.
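A minimal sketch of the kind of code-to-code comparison described above: given dose-depth curves from a deterministic and a Monte Carlo code on a common depth grid, compute the point-wise relative differences. All numbers below are invented for illustration, not output of the codes named in the abstract:

```python
import numpy as np

# Hypothetical dose-depth curves behind a slab shield; values illustrative.
depth    = np.array([0.0, 5.0, 10.0, 20.0, 50.0])   # areal density, g/cm^2
dose_det = np.array([60.0, 54.0, 50.0, 45.0, 38.0]) # deterministic code
dose_mc  = np.array([62.0, 55.0, 49.5, 44.0, 36.5]) # Monte Carlo code

def relative_difference(a, b):
    """Point-wise relative difference (a - b) / b, in percent."""
    return 100.0 * (a - b) / b

rel = relative_difference(dose_det, dose_mc)
max_abs_rel = np.max(np.abs(rel))
```

In a real intercomparison the same summary would be produced per particle species and per shield material, but the bookkeeping is identical.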
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koll, Daniel D. B.; Abbot, Dorian S., E-mail: dkoll@uchicago.edu
Next-generation space telescopes will observe the atmospheres of rocky planets orbiting nearby M-dwarfs. Understanding these observations will require well-developed theory in addition to numerical simulations. Here we present theoretical models for the temperature structure and atmospheric circulation of dry, tidally locked rocky exoplanets with gray radiative transfer and test them using a general circulation model (GCM). First, we develop a radiative-convective (RC) model that captures the surface temperatures of slowly rotating and cool atmospheres. Second, we show that the atmospheric circulation acts as a global heat engine, which places strong constraints on large-scale wind speeds. Third, we develop an RC-subsiding model which extends our RC model to hot and thin atmospheres. We find that rocky planets develop large day-night temperature gradients at a ratio of wave-to-radiative timescales up to two orders of magnitude smaller than the value suggested by work on hot Jupiters. The small ratio is due to the heat engine inefficiency and the asymmetry between updrafts and subsidence in convecting atmospheres. Fourth, we show, using GCM simulations, that rotation has a strong effect on temperature structure only if the atmosphere is hot or thin. Our models let us map out atmospheric scenarios for planets such as GJ 1132b and show how thermal phase curves could constrain them. Measuring phase curves of short-period planets will require similar amounts of time on the James Webb Space Telescope as detecting molecules via transit spectroscopy, so future observations should pursue both techniques.
NASA Astrophysics Data System (ADS)
Ding, Y. H.; Hu, S. X.
2017-10-01
Beryllium has been considered a superior ablator material for inertial confinement fusion target designs. Based on density-functional-theory calculations, we have established a wide-range beryllium equation-of-state (EOS) table covering densities ρ = 0.001 to 500 g/cm3 and temperatures T = 2,000 to 10^8 K. Our first-principles equation-of-state (FPEOS) table is in better agreement with the widely used SESAME EOS table (SESAME 2023) than with the average-atom INFERNO and Purgatorio models. For the principal Hugoniot, our FPEOS prediction shows 10% stiffer behavior than the latter two models at maximum compression. Comparisons between FPEOS and SESAME for off-Hugoniot conditions show that both the pressure and internal energy differences are within 20% between the two EOS tables. By implementing the FPEOS table into the 1-D radiation-hydrodynamics code LILAC, we studied the EOS effects on beryllium target-shell implosions. The FPEOS simulation predicts up to a 15% higher neutron yield compared to the simulation using the SESAME 2023 EOS table. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
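Tabular EOS data like FPEOS or SESAME is typically consumed by a hydrodynamics code through interpolation on a (density, temperature) grid. The sketch below shows generic bilinear interpolation in log space on a toy grid; the grid values are invented and the real lookup schemes in codes like LILAC are more sophisticated:

```python
import numpy as np

# Toy EOS grid: log10 pressure tabulated on (log10 rho, log10 T) axes.
log_rho = np.array([-3.0, 0.0, 3.0])         # log10 density (g/cm^3)
log_T   = np.array([3.3, 5.0, 8.0])          # log10 temperature (K)
log_P   = np.array([[ 2.0,  5.0,  9.0],      # shape (len(log_rho), len(log_T))
                    [ 5.0,  8.0, 12.0],
                    [ 8.0, 11.0, 15.0]])

def lookup_log_pressure(lr, lt):
    """Bilinear interpolation of log10 P at (log10 rho, log10 T)."""
    i = np.clip(np.searchsorted(log_rho, lr) - 1, 0, len(log_rho) - 2)
    j = np.clip(np.searchsorted(log_T, lt) - 1, 0, len(log_T) - 2)
    fr = (lr - log_rho[i]) / (log_rho[i + 1] - log_rho[i])
    ft = (lt - log_T[j]) / (log_T[j + 1] - log_T[j])
    return ((1 - fr) * (1 - ft) * log_P[i, j] + fr * (1 - ft) * log_P[i + 1, j]
            + (1 - fr) * ft * log_P[i, j + 1] + fr * ft * log_P[i + 1, j + 1])
```

Interpolating in log space keeps the lookup well behaved over the many decades of density and temperature such tables span.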
Can We Use Single-Column Models for Understanding the Boundary Layer Cloud-Climate Feedback?
NASA Astrophysics Data System (ADS)
Dal Gesso, S.; Neggers, R. A. J.
2018-02-01
This study explores how to drive Single-Column Models (SCMs) with existing data sets of General Circulation Model (GCM) outputs, with the aim of studying the boundary layer cloud response to climate change in the marine subtropical trade wind regime. The EC-EARTH SCM is driven with the large-scale tendencies and boundary conditions as derived from two different data sets, consisting of high-frequency outputs of GCM simulations. SCM simulations are performed near Barbados Cloud Observatory in the dry season (January-April), when fair-weather cumulus is the dominant low-cloud regime. This climate regime is characterized by a near equilibrium in the free troposphere between the long-wave radiative cooling and the large-scale advection of warm air. In the SCM, this equilibrium is ensured by scaling the monthly mean dynamical tendency of temperature and humidity such that it balances that of the model physics in the free troposphere. In this setup, the high-frequency variability in the forcing is maintained, and the boundary layer physics acts freely. This technique yields representative cloud amount and structure in the SCM for the current climate. Furthermore, the cloud response to a sea surface warming of 4 K as produced by the SCM is consistent with that of the forcing GCM.
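The balancing step described above, scaling the monthly-mean dynamical tendency so that it cancels the physics tendency in the free troposphere, can be sketched as follows. The profile values are invented and the actual EC-EARTH SCM setup is more involved:

```python
import numpy as np

# Monthly-mean temperature tendencies on free-tropospheric levels (K/day).
dyn_tend  = np.array([ 0.8,  1.0,  1.2,  0.5])   # large-scale advective warming
phys_tend = np.array([-0.9, -1.1, -1.0, -0.6])   # model physics (radiative cooling)

# One scalar factor applied to the dynamical tendency so that the two
# free-tropospheric means balance; high-frequency variability is preserved
# because the whole time series is multiplied by the same factor.
scale = -np.mean(phys_tend) / np.mean(dyn_tend)
dyn_scaled = scale * dyn_tend

residual = np.mean(dyn_scaled) + np.mean(phys_tend)  # ~0 after scaling
```

Because only the mean is constrained, the boundary layer physics remains free to respond to the unscaled high-frequency forcing, which is the point of the technique.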
Atmospheric Teleconnection over Eurasia Induced by Aerosol Radiative Forcing During Boreal Spring
NASA Technical Reports Server (NTRS)
Kim, Maeng-Ki; Lau, K. M.; Chin, Mian; Kim, Kyu-Myong; Sud, Y. C.; Walker, Greg K.
2005-01-01
The direct effects of aerosols on global and regional climate during boreal spring are investigated based on simulations using the NASA Global Modeling and Assimilation Office (GMAO) finite-volume general circulation model (fvGCM) with the Microphysics of Clouds with the Relaxed Arakawa-Schubert Scheme (McRAS). The aerosol loadings are prescribed from three-dimensional monthly distributions of tropospheric aerosols, viz., sulfate, black carbon, organic carbon, soil dust, and sea salt, from output of the Goddard Ozone Chemistry Aerosol Radiation and Transport model (GOCART). The aerosol extinction coefficient, single scattering albedo, and asymmetry factor are computed as wavelength-dependent radiative forcing in the radiative transfer scheme of the fvGCM, as a function of the aerosol loading and ambient relative humidity. We find that anomalous atmospheric heat sources induced by absorbing aerosols (dust and black carbon) excite a planetary-scale teleconnection pattern in sea level pressure, temperature, and geopotential height spanning North Africa through Eurasia to the North Pacific. Surface cooling due to the direct effects of aerosols is found in the vicinity and downstream of the aerosol source regions, i.e., South Asia, East Asia, and northern and western Africa. Additionally, atmospheric heating is found in regions with large loadings of dust (over northern Africa and the Middle East) and black carbon (over Southeast Asia). Paradoxically, the most pronounced feature in aerosol-induced surface temperature is an east-west dipole anomaly with strong cooling over the Caspian Sea and warming over central and northeastern Asia, where aerosol concentrations are low.
Analyses of circulation anomalies show that the dipole anomaly is a part of an atmospheric teleconnection driven by atmospheric heating anomalies induced by absorbing aerosols in the source regions, but the influence was conveyed globally through barotropic energy dispersion and sustained by feedback processes associated with the regional circulations.
Git as an Encrypted Distributed Version Control System
2015-03-01
options. The algorithm uses AES-256 counter mode with an IV derived from an HMAC-SHA-1 hash (this is nearly identical to the GCM mode discussed earlier...built into the internal structure of Git. Every file in a Git repository is checksummed with a SHA-1 hash, a one-way function with arbitrarily long...implementation. Git-encrypt calls OpenSSL cryptography library command line functions. The default cipher used is AES-256 Electronic Code Book (ECB), which is
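The IV derivation the snippet describes, an initialization vector taken from an HMAC-SHA-1 of the content, can be sketched with the Python standard library. The key handling and truncation below are assumptions for illustration, not git-encrypt's actual implementation:

```python
import hmac
import hashlib

def derive_iv(key: bytes, plaintext: bytes, iv_len: int = 16) -> bytes:
    """Deterministic IV: HMAC-SHA-1 of the plaintext under a secret key,
    truncated to the AES block size (SHA-1 digests are 20 bytes)."""
    digest = hmac.new(key, plaintext, hashlib.sha1).digest()
    return digest[:iv_len]

iv1 = derive_iv(b"secret-key", b"file contents")
iv2 = derive_iv(b"secret-key", b"file contents")   # same input -> same IV
iv3 = derive_iv(b"secret-key", b"other contents")  # different input -> different IV
```

Deriving the IV deterministically from the plaintext makes encryption convergent: identical file versions encrypt to identical blobs, which preserves Git's deduplication and delta machinery, at the cost of revealing when two encrypted files are equal.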
NASA Technical Reports Server (NTRS)
Salazar, Giovanni; Droba, Justin C.; Oliver, Brandon; Amar, Adam J.
2016-01-01
With the recent development of multi-dimensional thermal protection system (TPS) material response codes, the capability to account for surface-to-surface radiation exchange in complex geometries is critical. This paper presents recent efforts to implement such capabilities in the CHarring Ablator Response (CHAR) code developed at NASA's Johnson Space Center. This work also describes the different numerical methods implemented in the code to compute geometric view factors for radiation problems involving multiple surfaces. Verification of the code's radiation capabilities and results of a code-to-code comparison are presented. Finally, a demonstration case of a two-dimensional ablating cavity with enclosure radiation accounting for a changing geometry is shown.
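As an illustration of the geometric view-factor computation mentioned above, the differential view factor between two small patches follows dF = cosθ₁ cosθ₂ dA₂ / (π s²). This is a generic textbook sketch, not CHAR's implementation; the normals are assumed to be unit vectors:

```python
import numpy as np

def patch_view_factor(p1, n1, p2, n2, dA2):
    """Differential view factor dF_{1->2} between two small patches at
    positions p1, p2 with unit normals n1, n2; zero if they face away."""
    r = np.asarray(p2, float) - np.asarray(p1, float)
    s2 = np.dot(r, r)              # squared separation
    s = np.sqrt(s2)
    cos1 = np.dot(np.asarray(n1, float), r) / s
    cos2 = np.dot(np.asarray(n2, float), -r) / s
    if cos1 <= 0.0 or cos2 <= 0.0:
        return 0.0                 # no direct radiative exchange
    return cos1 * cos2 / (np.pi * s2) * dA2

# Two parallel patches 1 m apart, facing each other, receiver area 1 cm^2:
f = patch_view_factor([0, 0, 0], [0, 0, 1], [0, 0, 1], [0, 0, -1], 1e-4)
```

A surface-to-surface solver sums such contributions over facet pairs, with a visibility (shadowing) test inserted before the cosine check for concave geometries like the ablating cavity described above.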
Validating solar and solar-like star opacities
NASA Astrophysics Data System (ADS)
Le Pennec, Maëlle; TURCK-CHIEZE, Sylvaine; RIBEYRE, Xavier; DUCRET, Jean-Eric; BLANCARD, Christophe; COSSE, Philippe; MONDET, Guillaume; FAUSSURIER, Gérald
2015-08-01
The Sun, as our closest star, is a privileged place to test and validate physics. However, solar physics is not yet completely understood. Indeed, since the recent update of the solar composition, there are differences between solar models and seismic data, visible in the comparison of solar sound-speed profiles. This well-established discrepancy (Turck-Chièze et al. 2001, 2004, 2011; Christensen-Dalsgaard et al. 2009; Basu et al. 2015) could be linked to a radiative transfer issue. Two directions of investigation are proposed. One possibility to explain this gap could be that the Sun produces slightly more energy than it releases at its surface (around 5%). This energy could be transformed into macroscopic motions in the radiative zone, which are not taken into account in the solar standard model. Another explanation could be that energy transport is not correctly treated, either in the calculation of the Rosseland mean values or in the treatment of the radiative acceleration. This could have an impact on the determination of the internal solar abundances. Unfortunately, there are very few experiments to validate these calculations (Bailey et al. 2014). That is why we are proposing an opacity experiment on a high-energy laser like LMJ, in the conditions of the radiative zone (T = 2-15 x 10^6 K and ρ = 0.2-150 g/cm3). The aim is to measure the opacity of the most important contributors to the global opacity in this solar region: iron, oxygen, and silicon. For that purpose we are exploiting a technical approach called the Double Ablation Front. During the laser-plasma interaction, the plasma radiative effects allow these high temperatures and densities to be reached at LTE, in order to validate (or not) plasma effects and line widths. We will show the principle of this technique and the results of our simulations on several elements. In the meantime, we are also exploiting new opacity calculations with the OPAS code (Blancard et al. 2012) at the conditions of the solar radiative zone. We will show the impact of these calculations on the solar model.
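The Rosseland mean mentioned above is a harmonic mean of the spectral opacity weighted by ∂B_ν/∂T, so it is dominated by the most transparent frequency windows, which is why individual element contributions matter so much. A toy discrete version (numbers invented, not from OPAS):

```python
import numpy as np

# Spectral opacities per frequency group (cm^2/g) and normalized
# dB/dT weights; illustrative values only.
kappa = np.array([10.0, 100.0, 1000.0, 50.0])
w     = np.array([0.2,  0.4,   0.3,    0.1])   # sums to 1

kappa_rosseland = 1.0 / np.sum(w / kappa)   # harmonic, transmission-weighted
kappa_planck    = np.sum(w * kappa)         # arithmetic mean, for contrast
```

The harmonic mean sits far below the arithmetic one whenever a transparent window exists, so a small error in one low-opacity band shifts the Rosseland mean strongly.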
A post-New Horizons global climate model of Pluto including the N2, CH4 and CO cycles
NASA Astrophysics Data System (ADS)
Forget, F.; Bertrand, T.; Vangvichith, M.; Leconte, J.; Millour, E.; Lellouch, E.
2017-05-01
We have built a new 3D Global Climate Model (GCM) to simulate Pluto as observed by New Horizons in 2015. All key processes are parametrized on the basis of theoretical equations, including atmospheric dynamics and transport, turbulence, radiative transfer, and molecular conduction, as well as phase changes for N2, CH4 and CO. Pluto's climate and ice cycles are found to be very sensitive to model parameters and initial states. Nevertheless, a reference simulation is designed by running a fast, reduced version of the GCM with simplified atmospheric transport for 40,000 Earth years to initialize the surface ice distribution and sub-surface temperatures, from which a 28-Earth-year full GCM simulation is performed. Assuming a topographic depression in a Sputnik Planum (SP)-like crater on the anti-Charon hemisphere, a realistic Pluto is obtained, with most N2 and CO ice accumulated in the crater, methane frost covering both hemispheres except for the equatorial regions, and a surface pressure near 1.1 Pa in 2015, with an increase between 1988 and 2015, as reported from stellar occultations. Temperature profiles are in qualitative agreement with the observations. In particular, a cold atmospheric layer is obtained in the lowest kilometers above Sputnik Planum, as observed by New Horizons' REX experiment. It is shown to result from the combined effect of the topographic depression and N2 daytime sublimation. In the reference simulation, with surface N2 ice exclusively present in Sputnik Planum, the global circulation is only forced by radiative heating gradients and remains relatively weak. Surface winds are locally induced by topographic slopes and by N2 condensation and sublimation around Sputnik Planum. However, the circulation can be more intense depending on the exact distribution of surface N2 frost. This is illustrated in an alternative simulation with N2 condensing in the south polar regions and N2 frost covering latitudes between 35°N and 48°N.
A global condensation flow is then created, inducing strong surface winds everywhere, a prograde jet in the southern high latitudes, and an equatorial superrotation likely forced by barotropic instabilities in the southern jet. Using realistic parameters, the GCM predicts atmospheric concentrations of CO and CH4 in good agreement with the observations. N2 and CO do not condense in the atmosphere, but CH4 ice clouds can form during daytime at low altitude near the regions covered by N2 ice (assuming that nucleation is efficient enough). This global climate model can be used to study many aspects of the Pluto environment. For instance, organic hazes are included in the GCM and analysed in a companion paper (Bertrand and Forget, Icarus, this issue).
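The pressure increase reported between 1988 and 2015 reflects vapor-pressure equilibrium between the atmosphere and the N2 ice. A Clausius-Clapeyron sketch shows how sensitive the surface pressure is to ice temperature; the reference point and latent heat below are rough assumptions for illustration, not the GCM's values:

```python
import math

L_SUB_N2 = 2.5e5   # J/kg, approximate latent heat of N2 sublimation (assumed)
R_N2 = 296.8       # J/(kg K), specific gas constant of N2

def n2_vapor_pressure(T, p_ref=1.0, T_ref=37.0):
    """Clausius-Clapeyron saturation pressure over N2 ice, anchored at an
    assumed reference point of ~1 Pa near 37 K (Pluto-like conditions)."""
    return p_ref * math.exp(-(L_SUB_N2 / R_N2) * (1.0 / T - 1.0 / T_ref))

p_cold = n2_vapor_pressure(36.0)
p_warm = n2_vapor_pressure(38.0)
```

A change of only a couple of kelvin in the ice temperature changes the equilibrium pressure by tens of percent, which is why seasonal insolation changes over Sputnik Planum drive the observed pressure evolution.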
NASA Astrophysics Data System (ADS)
Montes, Carlo; Kiang, Nancy Y.; Ni-Meister, Wenge; Yang, Wenze; Schaaf, Crystal; Aleinov, Igor; Jonas, Jeffrey A.; Zhao, Feng; Yao, Tian; Wang, Zhuosen; Sun, Qingsong; Carrer, Dominique
2016-04-01
Processes determining biosphere-atmosphere coupling are strongly influenced by vegetation structure. Thus, ecosystem carbon sequestration and evapotranspiration affecting global carbon and water balances will depend upon the spatial extent of vegetation, its vertical structure, and its physiological variability. To represent this globally, Dynamic Global Vegetation Models (DGVMs) coupled to General Circulation Models (GCMs) make use of satellite and/or model-based vegetation classifications often composed of homogeneous communities. This work aims at developing a new Global Vegetation Structure Dataset (GVSD) by incorporating varying vegetation heights for mixed plant communities to be used as boundary conditions to the Analytical Clumped Two-Stream (ACTS) canopy radiative transfer scheme (Ni-Meister et al., 2010) incorporated into the NASA Ent Terrestrial Biosphere Model (TBM), the DGVM coupled to the NASA Goddard Institute for Space Studies (GISS) GCM. Information sources about land surface and vegetation characteristics obtained from a number of Earth observation platforms and algorithms include the Moderate Resolution Imaging Spectroradiometer (MODIS) land cover and plant functional types (PFTs) (Friedl et al., 2010), soil albedo derived from MODIS (Carrer et al., 2014), along with vegetation height from the Geoscience Laser Altimeter System (GLAS) on board ICESat (Ice, Cloud, and land Elevation Satellite) (Simard et al., 2011; Tang et al., 2014). Three widely used Leaf Area Index (LAI) products are compared as input to the GVSD and ACTS forcing in terms of vegetation albedo: Global Data Sets of Vegetation LAI3g (Zhu et al., 2013), Beijing Normal University LAI (Yuan et al., 2011), and the MODIS MOD15A2H product (Yang et al., 2006). Further PFT partitioning is performed according to a climate classification utilizing the Climate Research Unit (CRU; Harris et al., 2013) and the NOAA Global Precipitation Climatology Centre (GPCC; Schneider et al., 2014) data.
Final products are a GVSD consisting of mixed plant communities (e.g., mixed forests, savannas, mixed PFTs) following the Ecosystem Demography model (Moorcroft et al., 2001) approach, represented by multi-cohort community patches at the sub-grid level of the GCM, which are ensembles of identical individuals whose differences are represented by PFTs, canopy height, density, and vegetation structure sensitivity to allometric parameters. The performance of the Ent TBM in estimating VIS-NIR vegetation albedo with the new GVSD and ACTS is assessed first by comparison against the previous GISS GCM vegetation classification and prescribed Lambertian albedos of Matthews (1984), and secondly against MODIS global estimates and FLUXNET site-scale observations. Ultimately, this GVSD will serve as a template for community data sets and be used as boundary conditions to the Ent TBM for prediction of biomass, carbon balances, and GISS GCM climate.
A Simple Climate Model Program for High School Education
NASA Astrophysics Data System (ADS)
Dommenget, D.
2012-04-01
The future climate change projections of the IPCC AR4 are based on GCM simulations, which give a distinct global warming pattern, with an Arctic winter amplification, an equilibrium land-sea contrast, and an inter-hemispheric warming gradient. While these simulations are the most important tool of the IPCC predictions, a conceptual understanding of the predicted structures of climate change is very difficult to reach if based only on these highly complex GCM simulations, and they are not accessible to ordinary people. In the study presented here we introduce a very simple gridded, globally resolved energy balance model based on strongly simplified physical processes, which is capable of simulating the main characteristics of global warming. The model provides a bridge between 1-dimensional energy balance models and the fully coupled 4-dimensional complex GCMs. It runs on standard PC computers, computing globally resolved climate simulations at two model years per second, or 100,000 years per day, and can compute typical global warming scenarios in a few minutes. The computer code is only 730 lines long, with formulations simple enough for high school students to understand. The simple model's climate sensitivity and the spatial structure of its warming pattern are within the uncertainties of the IPCC AR4 model simulations. It is capable of simulating the Arctic winter amplification, the equilibrium land-sea contrast, and the inter-hemispheric warming gradient in good agreement with the IPCC AR4 models in amplitude and structure. The program can be used for sensitivity studies in which students change something (e.g., reduce the solar radiation, take away the clouds, or make snow black) and see how it affects the climate, or the climate response to changes in greenhouse gases. The program is available to everyone and could be the basis for high school education. Partners for a high school project are wanted!
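A zero-dimensional version of such an energy-balance model fits in a few lines and shows the idea behind the gridded program. The parameter values below are standard textbook choices, not those of the program described:

```python
# Zero-dimensional energy-balance model: C dT/dt = S0(1-a)/4 - eps*sigma*T^4,
# integrated with explicit Euler steps of one day.
S0 = 1361.0        # W/m^2, solar constant
ALBEDO = 0.30      # planetary albedo
EPS = 0.62         # effective emissivity (crude greenhouse effect)
SIGMA = 5.670e-8   # W m^-2 K^-4, Stefan-Boltzmann constant
C = 4.0e8          # J m^-2 K^-1, ocean mixed-layer heat capacity

def step(T, dt=86400.0):
    absorbed = S0 * (1.0 - ALBEDO) / 4.0
    emitted = EPS * SIGMA * T**4
    return T + dt * (absorbed - emitted) / C

T = 273.0
for _ in range(365 * 50):   # integrate 50 years; relaxes in a few years
    T = step(T)
```

Students can reproduce the sensitivity experiments mentioned above by changing ALBEDO (make snow black), EPS (change greenhouse gases), or S0 (dim the Sun) and watching the equilibrium temperature shift.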
2009-01-01
proton; PARMA: PHITS-based Analytical Radiation Model in the Atmosphere; PCAIRE: Predictive Code for Aircrew Radiation Exposure; PHITS: Particle and...radiation transport code utilized is called PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the...same dose equivalent coefficient regulations from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS (PHITS) and CARI-6
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, S; Le, Q; Mutaf, Y
2015-06-15
Purpose: To assess the dose calculation accuracy of cone-beam CT (CBCT) based treatment plans using a patient-specific stepwise CT-density conversion table, in comparison to conventional CT-based treatment plans. Methods: Unlike CT-based treatment planning, which uses a fixed CT-density table, this study used a patient-specific CT-density table to minimize the errors in reconstructed mass densities due to the effects of CBCT Hounsfield unit (HU) uncertainties. The patient-specific CT-density table was a stepwise function which maps HUs to only 6 classes of materials with different mass densities: air (0.00121 g/cm3), lung (0.26 g/cm3), adipose (0.95 g/cm3), tissue (1.05 g/cm3), cartilage/bone (1.6 g/cm3), and other (3 g/cm3). HU thresholds to define the different materials were adjusted for each CBCT via best match with the known tissue types in these images. Dose distributions were compared between CT-based plans and CBCT-based plans (IMRT/VMAT) for four types of treatment sites: head and neck (HN), lung, pancreas, and pelvis. For dosimetric comparison, the PTV mean dose in both plans was compared. A gamma analysis was also performed to directly compare dosimetry in the two plans. Results: Compared to CT-based plans, the differences in PTV mean dose were 0.1% for pelvis, 1.1% for pancreas, 1.8% for lung, and −2.5% for HN in CBCT-based plans. The gamma passing rate was 99.8% for pelvis, 99.6% for pancreas, and 99.3% for lung with 3%/3mm criteria, and 80.5% for head and neck with 5%/3mm criteria. Different dosimetry accuracy levels were observed: 1% for pelvis, 3% for lung and pancreas, and 5% for head and neck. Conclusion: By converting CBCT data to 6 classes of materials for dose calculation, 3% dose calculation accuracy can be achieved for the anatomical sites studied here, except HN, which had 5% accuracy.
CBCT-based treatment planning using a patient-specific stepwise CT-density table can facilitate the evaluation of dosimetry changes resulting from variation in patient anatomy.
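The stepwise HU-to-density mapping described above can be sketched as follows. The six density classes come from the abstract; the HU thresholds are hypothetical placeholders, since in the study they were tuned per CBCT image to match known tissue types.

```python
import bisect

DENSITIES  = [0.00121, 0.26, 0.95, 1.05, 1.6, 3.0]  # g/cm^3, from the abstract
THRESHOLDS = [-950, -200, -50, 120, 1500]           # HU cut points (hypothetical)

def hu_to_density(hu):
    """Stepwise map from a Hounsfield unit value to one of the 6 material
    mass densities (air, lung, adipose, tissue, cartilage/bone, other)."""
    return DENSITIES[bisect.bisect_right(THRESHOLDS, hu)]
```

For example, `hu_to_density(0)` falls in the tissue class (1.05 g/cm3), while very high HUs map to the "other" class at 3 g/cm3.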
Forecasting Future Sea Ice Conditions in the MIZ: A Lagrangian Approach
2013-09-30
www.mcgill.ca/meteo/people/tremblay LONG-TERM GOALS: 1- Determine the source regions for sea ice in the seasonally ice-covered zones (SIZs)...distribution of sea ice cover and transport pathways. 2- Improve our understanding of the strengths and/or limitations of GCM predictions of future...ocean currents, RGPS sea ice deformation, reanalysis surface wind, surface radiative fluxes, etc. Processing the large datasets involved is a tedious
Double-moment cloud microphysics scheme for the deep convection parameterization in the GFDL AM3
NASA Astrophysics Data System (ADS)
Belochitski, A.; Donner, L.
2014-12-01
A double-moment cloud microphysical scheme, originally developed by Morrison and Gettelman (2008) for stratiform clouds and later adapted for deep convection by Song and Zhang (2011), has been implemented into the Geophysical Fluid Dynamics Laboratory's atmospheric general circulation model AM3. The scheme treats cloud drop, cloud ice, rain, and snow number concentrations and mixing ratios as diagnostic variables and incorporates the processes of autoconversion, self-collection, collection between hydrometeor species, sedimentation, ice nucleation, drop activation, homogeneous and heterogeneous freezing, and the Bergeron-Findeisen process. Such a detailed representation of microphysical processes makes the scheme suitable for studying the interactions between aerosols and convection, as well as aerosols' indirect effects on clouds and their role in climate change. The scheme is first tested in the single-column version of the GFDL AM3 using forcing data obtained at the U.S. Department of Energy Atmospheric Radiation Measurement project's Southern Great Plains site. The scheme's impact on SCM simulations is discussed. As the next step, runs of the full atmospheric GCM incorporating the new parameterization are compared to the unmodified version of GFDL AM3. Global climatological fields and their variability are contrasted with those of the original version of the GCM. The impact on cloud radiative forcing and climate sensitivity is investigated.
X-ray Scattering Measurement of the Heat Capacity Ratio in Shock Compressed Matter
NASA Astrophysics Data System (ADS)
Fortmann, C.; Lee, H. J.; Doeppner, Tilo; Kritcher, A. L.; Landen, O. L.; Falcone, R. W.; Glenzer, S. H.
2011-10-01
We developed accurate x-ray scattering techniques to measure properties of matter under extreme conditions of density and temperature in intense laser-solid interaction experiments. We report on novel applications of x-ray scattering to measure the heat-capacity ratio γ = cp/cv of a Be plasma, which determines the equation of state of the system. Ultraintense laser radiation is focused onto both sides of a Be foil, creating two counterpropagating planar shock waves that collide in the target center. A second set of lasers produces Zn He-α radiation of 8.9 keV energy that scatters from the shock-compressed matter. We observe temperatures of 10 eV and 15 eV and mass densities of 5 g/cm3 and 11 g/cm3 before and after the shock collision. Applying the Rankine-Hugoniot relations for counterpropagating shocks, we then infer γ as a function of density using only the measured mass compression ratios. Our results agree with equation of state models and DFT simulations. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. We acknowledge support from the Alexander von Humboldt Foundation.
NASA Astrophysics Data System (ADS)
Peel, M. C.; Srikanthan, R.; McMahon, T. A.; Karoly, D. J.
2015-04-01
Two key sources of uncertainty in projections of future runoff for climate change impact assessments are uncertainty between global climate models (GCMs) and within a GCM. Within-GCM uncertainty is the variability in GCM output that occurs when a scenario is run multiple times, with each run having slightly different, but equally plausible, initial conditions. The limited number of runs available for each GCM and scenario combination within the Coupled Model Intercomparison Project phase 3 (CMIP3) and phase 5 (CMIP5) data sets limits the assessment of within-GCM uncertainty. In this second of two companion papers, the primary aim is to present a proof-of-concept approximation of within-GCM uncertainty for monthly precipitation and temperature projections and to assess the impact of within-GCM uncertainty on modelled runoff for climate change impact assessments. A secondary aim is to assess the impact of between-GCM uncertainty on modelled runoff. Here we approximate within-GCM uncertainty by developing non-stationary stochastic replicates of GCM monthly precipitation and temperature data. These replicates are input to an off-line hydrologic model to assess the impact of within-GCM uncertainty on projected annual runoff and reservoir yield. We adopt stochastic replicates of available GCM runs to approximate within-GCM uncertainty because large ensembles (hundreds of runs) for a given GCM and scenario are unavailable, other than the Climateprediction.net data set for the Hadley Centre GCM. To date within-GCM uncertainty has received little attention in the hydrologic climate change impact literature, and this analysis provides an approximation of the uncertainty in projected runoff, and reservoir yield, due to within- and between-GCM uncertainty of precipitation and temperature projections. In the companion paper, McMahon et al. 
(2015) sought to reduce between-GCM uncertainty by removing poorly performing GCMs, resulting in a selection of five better performing GCMs from CMIP3 for use in this paper. Here we present within- and between-GCM uncertainty results in mean annual precipitation (MAP), mean annual temperature (MAT), mean annual runoff (MAR), the standard deviation of annual precipitation (SDP), standard deviation of runoff (SDR) and reservoir yield for five CMIP3 GCMs at 17 worldwide catchments. Based on 100 stochastic replicates of each GCM run at each catchment, within-GCM uncertainty was assessed in relative form as the standard deviation expressed as a percentage of the mean of the 100 replicate values of each variable. The average relative within-GCM uncertainties from the 17 catchments and 5 GCMs for 2015-2044 (A1B) were MAP 4.2%, SDP 14.2%, MAT 0.7%, MAR 10.1% and SDR 17.6%. The Gould-Dincer Gamma (G-DG) procedure was applied to each annual runoff time series for hypothetical reservoir capacities of 1 × MAR and 3 × MAR and the average uncertainties in reservoir yield due to within-GCM uncertainty from the 17 catchments and 5 GCMs were 25.1% (1 × MAR) and 11.9% (3 × MAR). Our approximation of within-GCM uncertainty is expected to be an underestimate due to not replicating the GCM trend. However, our results indicate that within-GCM uncertainty is important when interpreting climate change impact assessments. Approximately 95% of values of MAP, SDP, MAT, MAR, SDR and reservoir yield from 1 × MAR or 3 × MAR capacity reservoirs are expected to fall within twice their respective relative uncertainty (standard deviation/mean). Within-GCM uncertainty has significant implications for interpreting climate change impact assessments that report future changes within our range of uncertainty for a given variable - these projected changes may be due solely to within-GCM uncertainty. 
Since within-GCM variability is amplified from precipitation to runoff and then to reservoir yield, climate change impact assessments that do not take into account within-GCM uncertainty risk providing water resources management decision makers with a sense of certainty that is unjustified.
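The relative within-GCM uncertainty measure used above (standard deviation expressed as a percentage of the mean over the stochastic replicates of a variable) can be written directly; the replicate values in the example are synthetic, not from the 17-catchment study.

```python
import statistics

def relative_uncertainty(replicates):
    """Within-GCM uncertainty in relative form: the sample standard
    deviation expressed as a percentage of the mean of the replicate
    values of a variable (e.g. 100 stochastic MAR replicates)."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# Synthetic example: three replicate values of mean annual runoff (mm)
print(relative_uncertainty([9.0, 10.0, 11.0]))  # 10.0 (%)
```

The "within twice the relative uncertainty" statement in the abstract is then the usual two-standard-deviation interval expressed in these percentage units.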
Optimal shielding thickness for galactic cosmic ray environments
NASA Astrophysics Data System (ADS)
Slaba, Tony C.; Bahadori, Amir A.; Reddell, Brandon D.; Singleterry, Robert C.; Clowdsley, Martha S.; Blattnig, Steve R.
2017-02-01
Models have been extensively used in the past to evaluate and develop material optimization and shield design strategies for astronauts exposed to galactic cosmic rays (GCR) on long duration missions. A persistent conclusion from many of these studies was that passive shielding strategies are inefficient at reducing astronaut exposure levels and the mass required to significantly reduce the exposure is infeasible, given launch and associated cost constraints. An important assumption of this paradigm is that adding shielding mass does not substantially increase astronaut exposure levels. Recent studies with HZETRN have suggested, however, that dose equivalent values actually increase beyond ∼20 g/cm2 of aluminum shielding, primarily as a result of neutron build-up in the shielding geometry. In this work, various Monte Carlo (MC) codes and 3DHZETRN are evaluated in slab geometry to verify the existence of a local minimum in the dose equivalent versus aluminum thickness curve near 20 g/cm2. The same codes are also evaluated in polyethylene shielding, where no local minimum is observed, to provide a comparison between the two materials. Results are presented so that the physical interactions driving build-up in dose equivalent values can be easily observed and explained. Variation of transport model results for light ions (Z ≤ 2) and neutron-induced target fragments, which contribute significantly to dose equivalent for thick shielding, is also highlighted and indicates that significant uncertainties are still present in the models for some particles. The 3DHZETRN code is then further evaluated over a range of related slab geometries to draw closer connection to more realistic scenarios. Future work will examine these related geometries in more detail.
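The existence of a local minimum in a dose-equivalent-versus-thickness curve, of the kind reported near 20 g/cm2 of aluminum, can be checked with a simple interior scan; the curves below are illustrative shapes only, not HZETRN or Monte Carlo output.

```python
def local_minimum(thickness, dose):
    """Return (thickness, dose) at the first interior local minimum of a
    dose-equivalent-versus-depth curve, or None if there is none (the
    monotonic behaviour the abstract reports for polyethylene)."""
    for i in range(1, len(dose) - 1):
        if dose[i] < dose[i - 1] and dose[i] < dose[i + 1]:
            return thickness[i], dose[i]
    return None

# Illustrative curves only (relative dose equivalent vs. areal density):
depths  = [0, 10, 20, 30, 40]               # g/cm^2 of shielding
al_dose = [1.00, 0.80, 0.75, 0.78, 0.82]    # aluminum-like: minimum near 20
pe_dose = [1.00, 0.70, 0.55, 0.45, 0.40]    # polyethylene-like: monotonic
```

Here `local_minimum(depths, al_dose)` returns the turning point, while the polyethylene-like curve returns None, mirroring the two-material comparison in the study.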
Synoptic Traveling Weather Systems on Mars: Effects of Radiatively-Active Water Ice Clouds
NASA Astrophysics Data System (ADS)
Hollingsworth, Jeffery L.; Kahre, Melinda A.; Haberle, Robert; Atsuki Urata, Richard
2016-10-01
Atmospheric aerosols on Mars are critical in determining the nature of its thermal structure, its large-scale circulation, and hence the overall climate of the planet. We conduct multi-annual simulations with the latest version of the NASA Ames Mars global climate model (GCM), gcm2.3+, which includes a modernized radiative-transfer package and a complex water-ice cloud microphysics package that permit the radiative effects and interactions of suspended atmospheric aerosols (e.g., water ice clouds, water vapor, dust, and their mutual interactions) to influence the net diabatic heating. Results indicate that radiatively active water ice clouds profoundly affect the seasonal and annual mean climate. The mean thermal structure and balanced circulation patterns are strongly modified near the surface and aloft. Warming of the subtropical atmosphere at altitude and cooling of the high-latitude atmosphere at low levels take place, which increases the mean pole-to-equator temperature contrast (i.e., "baroclinicity"). With radiatively active water ice clouds (RAC), compared to radiatively inert water ice clouds (nonRAC), significant changes occur in the intensity of the mean state and the forced stationary Rossby modes, both of which affect the vigor and intensity of traveling, synoptic-period weather systems. Such weather systems act as key agents not only in the transport of heat and momentum beyond the extent of the Hadley circulation, but also in the transport of trace species such as water vapor, water ice clouds, and dust. The northern hemisphere (NH) forced Rossby waves and the resultant wave train are augmented in the RAC case: the modes are more intense and the wave train is shifted equatorward. Significant changes also occur within the subtropics and tropics. The Rossby wave train, combined with the traveling synoptic-period weather systems (i.e., cyclones and anticyclones), sets up the geographic extent of storm zones (or storm tracks) within the NH. 
A variety of circulation features will be presented which indicate contrasts between the RAC and nonRAC cases, and which highlight key effects radiatively-active clouds have on physical and dynamical processes active in the current climate of Mars.
Incoming Shortwave Fluxes at the Surface--A Comparison of GCM Results with Observations.
NASA Astrophysics Data System (ADS)
Garratt, J. R.
1994-01-01
Evidence is presented that the excess surface net radiation calculated in general circulation models at continental surfaces is mostly due to excess incoming shortwave fluxes. Based on long-term observations from 22 worldwide inland stations and results from four general circulation models, the model overestimate of 20% (11 W m-2) in net radiation on an annual basis compares with 6% (9 W m-2) for shortwave fluxes at the same 22 locations, or 9% (18 W m-2) for a larger set of 93 stations (71 having shortwave fluxes only). For annual fluxes, these differences appear to be significant.
Parallel Semi-Implicit Spectral Element Atmospheric Model
NASA Astrophysics Data System (ADS)
Fournier, A.; Thomas, S.; Loft, R.
2001-05-01
The shallow-water equations (SWE) have long been used to test atmospheric-modeling numerical methods. The SWE contain the essential wave-propagation and nonlinear effects of more complete models. We present a semi-implicit (SI) improvement of the Spectral Element Atmospheric Model (SEAM; Taylor et al. 1997, Fournier et al. 2000, Thomas & Loft 2000) to solve the SWE. SE methods are h-p finite element methods combining the geometric flexibility of size-h finite elements with the accuracy of degree-p spectral methods. Our work suggests that exceptional parallel-computation performance is achievable by a General-Circulation-Model (GCM) dynamical core, even at modest climate-simulation resolutions (>1°). The code derivation involves a weak variational formulation of the SWE, Gauss(-Lobatto) quadrature over the collocation points, and Legendre cardinal interpolators. Appropriate weak variation yields a symmetric positive-definite Helmholtz operator. To meet the Ladyzhenskaya-Babuska-Brezzi inf-sup condition and avoid spurious modes, we use a staggered grid. The SI scheme combines leapfrog and Crank-Nicolson schemes for the nonlinear and linear terms, respectively. The localization of operations to elements ideally fits the method to cache-based microprocessor computer architectures: derivatives are computed as collections of small (8x8), naturally cache-blocked matrix-vector products. SEAM also has desirable boundary-exchange communication, like finite-difference models. Timings on the IBM SP and Compaq ES40 supercomputers indicate that the SI code (20-min timestep) requires 1/3 the CPU time of the explicit code (2-min timestep) at T42 resolution. Both codes scale nearly linearly out to 400 processors. We achieved single-processor performance up to 30% of peak for both codes on the 375-MHz IBM Power-3 processors. Fast computation and linear scaling lead to a useful climate-simulation dycore only if enough model time is computed per unit wall-clock time. 
An efficient SI solver is essential to substantially increase this rate. Parallel preconditioning for an iterative conjugate-gradient elliptic solver is described. We are building a GCM dycore capable of 200 GFLOPS sustained performance on clustered RISC/cache architectures using hybrid MPI/OpenMP programming.
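The semi-implicit combination described above (leapfrog on the nonlinear term, Crank-Nicolson on the linear term) can be sketched for a scalar model equation du/dt = L·u + N(u). In SEAM the implicit solve is a Helmholtz problem over the spectral elements; here L is just a scalar coefficient, so this is only a schematic of the time discretization, not the model's solver.

```python
def si_leapfrog_cn_step(u_prev, u_curr, dt, L, N):
    """One semi-implicit step for du/dt = L*u + N(u), scalar sketch:
    leapfrog (explicit, centered over 2*dt) on the nonlinear term N,
    Crank-Nicolson (implicit average of the old and new states) on the
    linear term L. Solving for the new state gives the closed form below:
        u_new - dt*L*u_new = u_prev + 2*dt*N(u_curr) + dt*L*u_prev
    """
    rhs = u_prev + 2.0 * dt * N(u_curr) + dt * L * u_prev
    return rhs / (1.0 - dt * L)
```

With N = 0 and L < 0 the step is unconditionally stable, which is the property that lets the SI code take a 20-min timestep where the explicit code needed 2 min.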
NASA Astrophysics Data System (ADS)
Largent, Billy T.
The state of matter at extremely high pressures and densities is of fundamental interest to many branches of research, including planetary science, material science, condensed matter physics, and plasma physics. Matter with pressures, or energy densities, above 1 megabar (100 gigapascals) is defined as a High Energy Density (HED) plasma. Such conditions are directly relevant to the interiors of planets such as Earth and Jupiter and to the dense fuels in Inertial Confinement Fusion (ICF) experiments. To create HED plasma conditions in laboratories, a sample may be compressed by a smoothly varying pressure ramp with minimal temperature increase, following an isentropic thermodynamic process. Isentropic compression of aluminum targets has been done using magnetic pressure produced by megaampere pulsed-power currents having 100 ns rise times. In this research project, magnetically driven, cylindrical isentropic compression has been numerically studied. In cylindrical geometry, material compression and pressure become higher than in planar geometry due to geometrical effects. Based on a semi-analytical model for the Magnetized Liner Inertial Fusion (MagLIF) concept, a code called "SA" was written to design cylindrical compression experiments on the 1.0 MA Zebra pulsed power generator at the Nevada Terawatt Facility (NTF). To test the physics models in the code, the temporal progress of rod compression and pressure was calculated with SA and compared with 1-D magnetohydrodynamic (MHD) codes. The MHD codes incorporated SESAME tables, for equation of state and resistivity, or the classical Spitzer model. A series of simulations was also run to find optimum rod diameters for 1.0 MA and 1.8 MA Zebra current pulses. For a 1.0 MA current peak and 95 ns rise time, a maximum compression of 2.35 (6.3 g/cm3) and a pressure of 900 GPa within a 100 μm radius were found for an initial diameter of 1.05 mm. 
For 1.8 MA peak simulations with the same rise time, an initial diameter of 1.3 mm was optimal, with 3.32 (9.0 g/cm3) compression.
GCM Studies on the Interactions Between Photosynthesis and Climate at Diurnal to Decadal Time Scales
NASA Technical Reports Server (NTRS)
Collatz, G. James; Bounoua, Lahouari; Sellers, Piers; Los, Sietse; Randall, David; Berry, Joseph; Tucker, Compton J.
1998-01-01
Transpiration, a major component of total evaporation from vegetated surfaces, is an unavoidable consequence of photosynthetic carbon fixation. Because of limiting soil moisture and competition for solar radiation, plants invest most of their fixed carbon into structural and hydraulic functions (roots and stems) and solar radiation absorption (leaves). These investments permit individuals to overshadow competitors and provide for transport of water from the soil to the leaves, where photosynthesis and transpiration occur. Often low soil moisture or high evaporative demand limits the supply of water to leaves, reducing photosynthesis and thus transpiration. The absorption of solar radiation for photosynthesis and the dissipation of this energy via radiation, heat, mass and momentum fluxes represent the link between photosynthesis and climate. Recognition of these relationships has led to the development of hydro/energy balance models that are based on the physiological ecology of photosynthesis. We discuss an approach to study vegetation-climate interactions using photosynthesis-centric models embedded in a GCM. The rate at which a vegetated area transpires and photosynthesizes is determined by the physiological state of the vegetation, its amount and its type. The latter two are specified from global satellite data collected since 1982. Climate simulations have been carried out to study how this simulated climate system responds to changes in radiative forcing, physiological capacity, atmospheric CO2, vegetation type and the variable vegetation cover observed from satellites during the 1980s. Results from these studies reveal significant feedbacks between vegetation activity and climate. For example, increases in vegetation cover and physiological activity cause the total latent heat flux and precipitation to increase while mean and maximum air temperatures decrease. The reverse occurs if cover or activity decreases. 
In general, the climate response of a particular region was dominated by local processes, but we also find evidence that plausible climate-vegetation scenarios lead to changes in global atmospheric circulation and, in some cases, strong non-local influences.
Comparison of Space Radiation Calculations from Deterministic and Monte Carlo Transport Codes
NASA Technical Reports Server (NTRS)
Adams, J. H.; Lin, Z. W.; Nasser, A. F.; Randeniya, S.; Tripathi, r. K.; Watts, J. W.; Yepes, P.
2010-01-01
The presentation outline includes motivation, the radiation transport codes being considered, the space radiation cases being considered, results for slab geometry, results for spherical geometry, and a summary. Keywords: main physics in radiation transport codes; HZETRN; UPROP; FLUKA; GEANT4; slab geometry; SPE; GCR.
Description of Transport Codes for Space Radiation Shielding
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Wilson, John W.; Cucinotta, Francis A.
2011-01-01
This slide presentation describes transport codes and their use for studying and designing space radiation shielding. When combined with risk projection models, radiation transport codes serve as the main tool for studying radiation and designing shielding. There are three criteria for assessing the accuracy of transport codes: (1) ground-based studies with defined beams and material layouts, (2) inter-comparison of transport code results for matched boundary conditions, and (3) comparisons to flight measurements. NASA's HZETRN/QMSFRG satisfies these criteria to a very high degree.
NASA Astrophysics Data System (ADS)
Wild, M.; Hakuba, M. Z.; Folini, D.; Ott, P.; Long, C. N.
2017-12-01
Clear-sky fluxes in the latest generation of Global Climate Models (GCMs) from CMIP5 still vary widely, particularly at the Earth's surface, covering in their global means ranges of 16 and 24 Wm-2 in the surface downward clear-sky shortwave (SW) and longwave radiation, respectively. We assess these fluxes with monthly clear-sky reference climatologies derived from more than 40 Baseline Surface Radiation Network (BSRN) sites based on Long and Ackerman (2000) and Hakuba et al. (2015). The comparison is complicated by the fact that the monthly SW clear-sky BSRN reference climatologies are inferred from measurements under true cloud-free conditions, whereas the GCM clear-sky fluxes are calculated continuously at every timestep solely by removing the clouds, while otherwise keeping the atmospheric composition (e.g. water vapor, temperature, aerosols) prevailing during the cloudy conditions. This induces the risk of biases in the GCMs simply due to the additional sampling of clear-sky fluxes calculated under atmospheric conditions representative of cloudy situations. A wet bias may therefore be expected in the GCMs compared to the observational references, which may induce spurious low biases in the downward clear-sky SW fluxes. To estimate the magnitude of these spurious biases in the available monthly mean fields from 40 CMIP5 models, we used their respective multi-century control runs and searched therein, for each month and each BSRN station, for the month with the lowest cloud cover. The deviations of the clear-sky fluxes in this month from their long-term means have then been used as indicators of the magnitude of the abovementioned sampling biases and as correction factors for an appropriate comparison with the BSRN climatologies, applied individually for each model and BSRN site. The overall correction is on the order of 2 Wm-2. 
This revises our best estimate for the global mean surface downward SW clear-sky radiation, previously 249 Wm-2 as inferred from the GCM clear-sky flux fields and their biases relative to the BSRN climatologies, to 247 Wm-2 with this additional correction. 34 out of 40 CMIP5 GCMs exceed this reference value. With a global mean surface albedo of 13% and a net TOA SW clear-sky flux of 287 Wm-2 from CERES-EBAF, this results in a global mean clear-sky surface and atmospheric SW absorption of 214 and 73 Wm-2, respectively.
Los Alamos radiation transport code system on desktop computing platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Briesmeister, J.F.; Brinkley, F.W.; Clark, B.A.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. These codes were originally developed many years ago and have undergone continual improvement. With a large initial effort and continued vigilance, the codes are easily portable from one type of hardware to another. The performance of scientific workstations (SWS) has evolved to the point that such platforms can be used routinely to perform sophisticated radiation transport calculations. As personal computer (PC) performance approaches that of the SWS, the hardware options for desktop radiation transport calculations expand considerably. The current status of the radiation transport codes within the LARTCS is described: MCNP, SABRINA, LAHET, ONEDANT, TWODANT, TWOHEX, and ONELD. Specifically, the authors discuss hardware systems on which the codes run and present code performance comparisons for various machines.
NASA Technical Reports Server (NTRS)
Salazar, Giovanni; Droba, Justin C.; Oliver, Brandon; Amar, Adam J.
2016-01-01
With the recent development of multi-dimensional thermal protection system (TPS) material response codes, the capability to account for radiative heating has become a requirement. This paper presents recent efforts to implement such capabilities in the CHarring Ablator Response (CHAR) code developed at NASA's Johnson Space Center. This work also describes the different numerical methods implemented in the code to compute view factors for radiation problems involving multiple surfaces. Furthermore, verification and validation of the code's radiation capabilities are demonstrated by comparing solutions to analytical results, to other codes, and to radiant test data.
NASA Astrophysics Data System (ADS)
Lee, W. L.; Liou, K. N.; Gu, Y.; Wang, C. C.; Wu, C. H.; Hsu, H. H.
2017-12-01
We have developed a parameterization to quantify the effect of 3-D topography on surface solar radiation, including multiple reflection and the heating difference between sunward and shaded slopes of mountains. A series of sensitivity tests using NCAR CCSM4 with and without this parameterization has been carried out to investigate this effect in climate simulations. The results indicate that missing the 3-D radiation-topography interaction could be a key factor leading to cold biases over the Tibetan Plateau in winter in all of the CMIP5 models. Consequently, the snowmelt rate in the Tibetan Plateau could be underestimated in most future projections. In addition, the topographic effect can also increase the net surface solar radiation at the southern slope of the Himalayas in summer. The temporal and spatial distribution of monsoon precipitation and circulation could also be influenced.
Effects of Cloud-Microphysics on Tropical Atmospheric Hydrologic Processes in the GEOS GCM
NASA Technical Reports Server (NTRS)
Lau, K. M.; Wu, H. T.; Sud, Y. C.; Walker, G. K.
2004-01-01
The sensitivity of tropical atmospheric hydrologic processes to cloud microphysics is investigated using the NASA GEOS GCM. Results show that a faster autoconversion rate produces more warm rain and fewer clouds at all levels. Fewer clouds enhance longwave cooling and reduce shortwave heating in the upper troposphere, while more warm rain produces increased condensation heating in the lower troposphere. This vertical heating differential destabilizes the tropical atmosphere, producing a positive feedback resulting in more rain over the tropics. The feedback is maintained via a two-cell secondary circulation. The lower cell is capped by horizontal divergence and maximum cloud detrainment near the melting/freezing level, with rising motion in the warm rain region connected to descending motion in the cold rain region. The upper cell is found above the freezing/melting level, with longwave-induced subsidence in the warm rain and dry regions coupled to forced ascent in the deep convection region. The tropical large-scale circulation is found to be very sensitive to the radiative-dynamic effects induced by changes in autoconversion rate. Reduced cloud-radiation feedback due to a faster autoconversion rate results in intermittent but more energetic eastward-propagating Madden-Julian Oscillations (MJOs). Conversely, a slower autoconversion rate, with increased cloud-radiation feedback, produces MJOs with more realistic westward-propagating transients resembling a supercloud cluster structure. Results suggest that warm rain and associated low- and mid-level clouds, i.e., cumulus congestus, may play a critical role in regulating the time intervals of deep convection and hence the fundamental time scales of the MJO.
2009-07-05
PARMA (PHITS-based Analytical Radiation Model in the Atmosphere), PCAIRE (Predictive Code for Aircrew Radiation Exposure), PHITS (Particle and Heavy...transport code utilized is called PARMA (PHITS-based Analytical Radiation Model in the Atmosphere) [36]. The particle fluxes calculated from the input...dose equivalent coefficient regulations from the ICRP-60 regulations. As a result, the transport codes utilized by EXPACS (PHITS) and CARI-6 (PARMA
NASA Astrophysics Data System (ADS)
Lamer, K.; Fridlind, A. M.; Ackerman, A. S.; Kollias, P.; Clothiaux, E. E.
2017-12-01
An important aspect of evaluating Arctic cloud representation in a general circulation model (GCM) consists of using observational benchmarks that are as equivalent as possible to model output, in order to avoid methodological bias and focus on correctly diagnosing model dynamical and microphysical misrepresentations. However, current cloud observing systems are known to suffer from biases such as limited sensitivity and stronger response to large or small hydrometeors. Fortunately, while these observational biases cannot be corrected, they are often well understood and can be reproduced in forward simulations. Here a ground-based millimeter-wavelength Doppler radar and micropulse lidar forward simulator able to interface with output from the Goddard Institute for Space Studies (GISS) ModelE GCM is presented. ModelE stratiform hydrometeor fraction, mixing ratio, mass-weighted fall speed, and effective radius are forward simulated to vertically resolved profiles of radar reflectivity, Doppler velocity, and spectrum width, as well as lidar backscatter and depolarization ratio. These forward-simulated fields are then compared to Atmospheric Radiation Measurement (ARM) North Slope of Alaska (NSA) ground-based observations to assess cloud vertical structure (CVS). Model evaluation of Arctic mixed-phase cloud would also benefit from hydrometeor phase evaluation. While phase retrieval from synergistic observations often generates large uncertainties, the same retrieval algorithm can be applied to observed and forward-simulated radar-lidar fields, thereby producing retrieved hydrometeor properties with potentially the same uncertainties. Comparing hydrometeor properties retrieved in exactly the same way aims to produce the best apples-to-apples comparisons between GCM outputs and observations.
The use of a comprehensive ground-based forward simulator coupled with a hydrometeor classification retrieval algorithm provides a new perspective for GCM evaluation of Arctic mixed-phase clouds from the ground, where low-level supercooled liquid layers are more easily observed and where additional environmental properties such as cloud condensation nuclei are quantified. This should assist in choosing between several possible diagnostic ice nucleation schemes for ModelE stratiform clouds.
NASA Astrophysics Data System (ADS)
Montes, C.; Kiang, N. Y.; Yang, W.; Ni-Meister, W.; Schaaf, C.; Aleinov, I. D.; Jonas, J.; Zhao, F. A.; Yao, T.; Wang, Z.; Sun, Q.
2015-12-01
Processes determining biosphere-atmosphere coupling are strongly influenced by vegetation structure. Thus, ecosystem carbon sequestration and evapotranspiration affecting global carbon and water balances will depend upon the spatial extent of vegetation, its vertical structure, and its physiological variability. To represent this globally, Dynamic Global Vegetation Models (DGVMs) coupled to General Circulation Models (GCMs) make use of satellite and/or model-based vegetation classifications often composed of homogeneous communities. This work aims at developing a new Global Vegetation Structure Dataset (GVSD) by incorporating varying vegetation heights for mixed plant communities to be used as input to the Ent Terrestrial Biosphere Model (TBM), the DGVM coupled to the NASA Goddard Institute for Space Studies (GISS) GCM. Information sources include the Moderate Resolution Imaging Spectroradiometer (MODIS) land cover and plant functional types (PFTs) (Friedl et al., 2010), vegetation height from the Geoscience Laser Altimeter System (GLAS) on board ICESat (Ice, Cloud, and land Elevation Satellite) (Simard et al., 2011; Tang et al., 2014), along with the Global Data Sets of Vegetation Leaf Area Index (LAI)3g (Zhu et al. 2013). Further PFT partitioning is performed according to a climate classification utilizing the Climatic Research Unit (CRU) and the NOAA Global Precipitation Climatology Centre (GPCC) data. The final product is a GVSD consisting of mixed plant communities (e.g. mixed forests, savannas, mixed PFTs) following the Ecosystem Demography model (Moorcroft et al., 2001) approach, represented by multi-cohort community patches at the sub-grid level of the GCM, which are ensembles of identical individuals whose differences are represented by PFTs, canopy height, density, and vegetation structure sensitivity to allometric parameters.
To assess the sensitivity of the GISS GCM to vegetation structure, we produce a range of estimates of Ent TBM biomass and plant densities by varying allometric specifications. Ultimately, this GVSD will serve as a template for community data sets, and be used as boundary conditions to the Ent TBM for prediction of canopy albedo in the Analytical Clumped Two-Stream canopy radiative transfer scheme, biomass, primary productivity, respiration, and GISS GCM climate.
El Nino-southern oscillation simulated in an MRI atmosphere-ocean coupled general circulation model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagai, T.; Tokioka, T.; Endoh, M.
A coupled atmosphere-ocean general circulation model (GCM) was time-integrated for 30 years to study interannual variability in the tropics. The atmospheric component is a global GCM with 5 levels in the vertical and 4° latitude x 5° longitude grids in the horizontal, including standard physical processes (e.g., interactive clouds). The oceanic component is a GCM for the Pacific with 19 levels in the vertical and 1° x 2.5° grids in the horizontal, including seasonally varying solar radiation as forcing. The model succeeded in reproducing interannual variations that resemble the El Nino-Southern Oscillation (ENSO) with realistic seasonal variations in the atmospheric and oceanic fields. The model ENSO cycle has a time scale of approximately 5 years and the model El Nino (warm) events are locked roughly in phase to the seasonal cycle. The cold events, however, are less evident in comparison with the El Nino events. The time scale of the model ENSO cycle is determined by the propagation time of signals from the central-eastern Pacific to the western Pacific and back to the eastern Pacific. Seasonal timing is also important in the ENSO time scale: wind anomalies in the central-eastern Pacific occur in summer and the atmosphere-ocean coupling in the western Pacific operates efficiently in the first half of the year.
Spectral Generation from the Ames Mars GCM for the Study of Martian Clouds
NASA Astrophysics Data System (ADS)
Klassen, David R.; Kahre, Melinda A.; Wolff, Michael J.; Haberle, Robert; Hollingsworth, Jeffery L.
2017-10-01
Studies of martian clouds come from two distinct groups of researchers: those modeling the martian system from first principles and those observing Mars from ground-based and orbital platforms. The model view begins with global circulation models (GCMs) or mesoscale models to track a multitude of state variables over a prescribed set of spatial and temporal resolutions. The state variables can then be processed into distinct maps of derived product variables, such as integrated optical depth of aerosol (e.g., water ice cloud, dust) or column-integrated water vapor, for comparison to observational results. The observer view begins, typically, with spectral images or imaging spectra, calibrated to some form of absolute units and then run through some form of radiative transfer model to also produce distinct maps of derived product variables. Both groups of researchers work to adjust model parameters and assumptions until some level of agreement in derived product variables is achieved. While this system appears to work well, it is in some sense only an implicit confirmation of the model assumptions that contribute to the work from both sides. We have begun a project of testing the NASA Ames Mars GCM and key aerosol model assumptions more directly by taking the model output and creating synthetic TES spectra from it for comparison to actual raw-reduced TES spectra. We will present some preliminary generated GCM spectra and TES comparisons.
NASA Astrophysics Data System (ADS)
Yang, P.; Ding, J.; Tang, G.; King, M. D.; Platnick, S. E.; Meyer, K.; Mlawer, E. J.
2017-12-01
Van de Hulst (1974) identified several quasi-invariant quantities in radiative transfer concerning multiple scattering. Recently, we illustrated that the aforesaid quasi-invariant quantities are useful in remote sensing of ice cloud properties from spaceborne radiometric observations (Ding et al. 2017). Specifically, the overall performance of an ice cloud optical property model can be estimated without carrying out a detailed retrieval implementation. In this presentation, we will review the radiative transfer similarity relations and some recent results, including the study by Ding et al. (2017). Furthermore, we will illustrate an application of the similarity relations to the improvement of broadband radiative flux computation. For example, the Rapid Radiative Transfer Model (RRTM, Mlawer et al., 1999) does not consider multiple scattering in the longwave spectral regime (RRTMG-LW) ("G" indicates a version suitable for GCM applications). We show that the similarity relations can be used to effectively improve the accuracy of RRTMG-LW without increasing computational effort.
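The similarity relations referenced above can be illustrated with a short sketch (a minimal illustration of the textbook similarity transformation, not the authors' or RRTMG-LW's implementation; function and variable names are ours): a layer with strongly forward-peaked scattering can be mapped onto a radiatively similar layer with isotropic scattering by rescaling its optical depth and single-scattering albedo.

```python
def similarity_scale(tau, omega, g):
    """Similarity transformation: map a layer with optical depth tau,
    single-scattering albedo omega, and asymmetry parameter g onto an
    equivalent layer with isotropic scattering (g' = 0), preserving
    the quasi-invariants (1 - omega * g) * tau and (1 - omega)/(1 - omega * g)."""
    tau_s = (1.0 - omega * g) * tau                   # scaled optical depth
    omega_s = omega * (1.0 - g) / (1.0 - omega * g)   # scaled single-scattering albedo
    return tau_s, omega_s

# Example: a strongly forward-scattering ice cloud layer
tau_s, omega_s = similarity_scale(tau=1.0, omega=0.9, g=0.85)
```

The scaled optical depth is much smaller than the original because forward-scattered photons behave almost as if unscattered, which is also why such scalings can emulate scattering effects in an absorption-only longwave scheme at little extra cost.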
Slantwise convection on fluid planets: Interpreting convective adjustment from Juno observations
NASA Astrophysics Data System (ADS)
O'Neill, M. E.; Kaspi, Y.; Galanti, E.
2016-12-01
NASA's Juno mission provides unprecedented microwave measurements that pierce Jupiter's weather layer and image the transition to an adiabatic fluid below. This region is expected to be highly turbulent and complex, but to date most models use the moist-to-dry transition as a simple boundary. We present simple theoretical arguments and GCM results to argue that columnar convection is important even in the relatively thin boundary layer, particularly in the equatorial region. We first demonstrate how surface cooling can lead to very horizontal parcel paths, using a simple parcel model. Next we show the impact of this horizontal motion on angular momentum flux in a high-resolution Jovian model. The GCM is a state-of-the-art modification of the MITgcm, with deep geometry, compressibility and interactive two-stream radiation. We show that slantwise convection primarily mixes fluid along columnar surfaces of angular momentum, and discuss the impacts this should have on lapse rate interpretation of both the Galileo probe sounding and the Juno microwave observations.
Nonlinear dynamics of global atmospheric and Earth system processes
NASA Technical Reports Server (NTRS)
Saltzman, Barry
1993-01-01
During the past eight years, we have been engaged in a NASA-supported program of research aimed at establishing the connection between satellite signatures of the earth's environmental state and the nonlinear dynamics of the global weather and climate system. Thirty-five publications and four theses have resulted from this work, which included contributions in five main areas of study: (1) cloud and latent heat processes in finite-amplitude baroclinic waves; (2) application of satellite radiation data in global weather analysis; (3) studies of planetary waves and low-frequency weather variability; (4) GCM studies of the atmospheric response to variable boundary conditions measurable from satellites; and (5) dynamics of long-term earth system changes. Significant accomplishments from the three main lines of investigation pursued during the past year are presented and include the following: (1) planetary atmospheric waves and low frequency variability; (2) GCM studies of the atmospheric response to changed boundary conditions; and (3) dynamics of long-term changes in the global earth system.
Effects of Absorbing Aerosols on Accelerated Melting of Snowpack in the Tibetan-Himalayas Region
NASA Technical Reports Server (NTRS)
Lau, William K. M.
2011-01-01
The impacts of absorbing aerosols on melting of snowpack in the Hindu-Kush-Tibetan-Himalayas (HKTH) region are studied using NASA satellite data and the GEOS-5 GCM. Results from GCM experiments show that an 8-10% increase in the rate of melting of snowpack over the western Himalayas and Tibetan Plateau can be attributed to the aerosol elevated-heat-pump (EHP) feedback effect (Lau et al. 2008), initiated by the absorption of solar radiation by absorbing aerosols accumulated over the Indo-Gangetic Plain and Himalayan foothills. On the other hand, deposition of black carbon on the snow surface was estimated to give rise to a reduction in snow surface albedo of 2-5% and an increased annual runoff of 9-24%. From case studies using satellite observations and reanalysis data, we find consistent signals of possible impacts of dust and black carbon aerosol in darkening the snow surface, in accelerating spring melting of snowpack in the HKTH, and consequently in influencing shifts in long-term Asian summer monsoon rainfall patterns.
Crystal Growth and Scintillation Properties of Eu2+ doped Cs4CaI6 and Cs4SrI6
NASA Astrophysics Data System (ADS)
Stand, L.; Zhuravleva, M.; Chakoumakos, B.; Johnson, J.; Loyd, M.; Wu, Y.; Koschan, M.; Melcher, C. L.
2018-03-01
In this work we present the crystal growth and scintillation properties of two new ternary metal halide scintillators activated with divalent europium, Cs4CaI6 and Cs4SrI6. Single crystals of each compound were grown in evacuated quartz ampoules via the vertical Bridgman technique using a two-zone transparent furnace. Single-crystal X-ray diffraction experiments showed that both crystals have a trigonal (R-3c) structure, with densities of 3.99 g/cm3 and 4.03 g/cm3, respectively. The radioluminescence and photoluminescence measurements showed typical luminescence properties due to the 5d-4f radiative transitions in Eu2+. At this early stage of development, Cs4SrI6:Eu and Cs4CaI6:Eu have shown very promising scintillation properties, with light yields and energy resolutions of 62,300 photons/MeV and 3.3%, and 51,800 photons/MeV and 3.6% at 662 keV, respectively.
NASA Technical Reports Server (NTRS)
Sud, Y. C.; Walker, G. K.
1998-01-01
A prognostic cloud scheme named McRAS (Microphysics of clouds with Relaxed Arakawa-Schubert Scheme) was developed with the aim of improving cloud microphysics and cloud-radiation interactions in GCMs. McRAS distinguishes convective, stratiform, and boundary-layer clouds. The convective clouds merge into stratiform clouds on an hourly time scale, while the boundary-layer clouds do so instantly. The cloud condensate transforms into precipitation following the autoconversion relations of Sundqvist, which contain a parametric adaptation for the Bergeron-Findeisen process of ice crystal growth and collection of cloud condensate by precipitation. All clouds convect, advect, and diffuse both horizontally and vertically with fully active cloud microphysics throughout their life cycles, while the optical properties of clouds are derived from the statistical distribution of hydrometeors and idealized cloud geometry. An evaluation of McRAS in a single column model (SCM) with the GATE Phase III data has shown that McRAS can simulate the observed temperature, humidity, and precipitation without discernible systematic errors. An evaluation with the ARM-CART SCM data in a cloud model intercomparison exercise shows a reasonable but not outstandingly accurate simulation. Such a discrepancy is common to almost all models and is related, in part, to the input data quality. McRAS was implemented in the GEOS II GCM. A 50-month integration that was initialized with the ECMWF analysis of observations for January 1, 1987 and forced with the observed sea-surface temperatures, sea-ice distribution, and vegetation properties (biomes and soils), with prognostic soil moisture, snow cover, and hydrology, showed a very realistic simulation of cloud processes, in-cloud water and ice, and cloud-radiative forcing (CRF).
The simulated ITCZ showed a realistic time-mean structure and seasonal cycle, while the simulated CRF showed sensitivity to the vertical distribution of cloud water, which can be easily altered by the choice of the time constant and in-cloud critical cloud water amount regulators for autoconversion. The CRF and its feedbacks also have a profound effect on the ITCZ. Even though somewhat weaker than observed, the McRAS-GCM simulation produces robust 30-60 day oscillations in the 200 hPa velocity potential. Two ensembles of 4-summer (July, August, September) simulations, one each for 1987 and 1988, show that the McRAS-GCM simulates realistic and statistically significant precipitation differences over India, Central America, and tropical Africa. Several seasonal simulations were performed with the McRAS-GEOS II GCM for the summer (June-July-August) and winter (December-January-February) periods to determine how the simulated clouds and CRFs would be affected by: (i) advection of clouds; (ii) cloud-top entrainment instability; (iii) cloud water inhomogeneity correction; and (iv) cloud production and dissipation in different cloud processes. The results show that each of these processes contributes to the simulated cloud fraction and CRF.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tominaga, Nozomu; Shibata, Sanshiro; Blinnikov, Sergei I., E-mail: tominaga@konan-u.ac.jp, E-mail: sshibata@post.kek.jp, E-mail: Sergei.Blinnikov@itep.ru
We develop a time-dependent, multi-group, multi-dimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM), which evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering in the publicly available SHDOM code. Our code adopts a mixed-frame approach; the source function is evaluated in the comoving frame, whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated using various test problems and comparisons with the results from a relativistic Monte Carlo code. These validations confirm that the code correctly calculates the intensity and its evolution in the computational domain. The code enables us to obtain an Eddington tensor that relates the first and third moments of intensity (energy density and radiation pressure) and is frequently used as a closure relation in radiation hydrodynamics calculations.
Cloud Simulations in Response to Turbulence Parameterizations in the GISS Model E GCM
NASA Technical Reports Server (NTRS)
Yao, Mao-Sung; Cheng, Ye
2013-01-01
The response of cloud simulations to turbulence parameterizations is studied systematically using the GISS general circulation model (GCM) E2 employed in the Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report (AR5). Without the turbulence parameterization, the relative humidity (RH) and the low cloud cover peak unrealistically close to the surface; with the dry convection or with only the local turbulence parameterization, these two quantities improve their vertical structures, but the vertical transport of water vapor is still weak in the planetary boundary layers (PBLs); with both local and nonlocal turbulence parameterizations, the RH and low cloud cover have better vertical structures at all latitudes due to more significant vertical transport of water vapor in the PBL. The study also compares the cloud and radiation climatologies obtained from an experiment using a newer version of the turbulence parameterization being developed at GISS with those obtained from the AR5 version. This newer scheme differs from the AR5 version in computing nonlocal transports, turbulent length scale, and PBL height, and shows significant improvements in cloud and radiation simulations, especially over the subtropical eastern oceans and the southern oceans. The diagnosed PBL heights appear to correlate well with the low cloud distribution over oceans. This suggests that a cloud-producing scheme needs to be constructed in a framework that also takes the turbulence into consideration.
DREAM Mediated Regulation of GCM1 in the Human Placental Trophoblast
Baczyk, Dora; Kibschull, Mark; Mellstrom, Britt; Levytska, Khrystyna; Rivas, Marcos; Drewlo, Sascha; Lye, Stephen J.; Naranjo, Jose R.; Kingdom, John C. P.
2013-01-01
The trophoblast transcription factor glial cell missing-1 (GCM1) regulates differentiation of placental cytotrophoblasts into the syncytiotrophoblast layer in contact with maternal blood. Reduced placental expression of GCM1 and abnormal syncytiotrophoblast structure are features of the hypertensive disorder of pregnancy, preeclampsia. In-silico techniques identified the calcium-regulated transcriptional repressor DREAM (Downstream Regulatory Element Antagonist Modulator) as a candidate regulator of GCM1 gene expression. Our objective was to determine if DREAM represses GCM1-regulated syncytiotrophoblast formation. EMSA and ChIP assays revealed a direct interaction between DREAM and the GCM1 promoter. siRNA-mediated DREAM silencing in cell culture and placental explant models significantly up-regulated GCM1 expression and reduced cytotrophoblast proliferation. DREAM calcium dependency was verified using ionomycin. Furthermore, the increased DREAM protein expression in preeclamptic placental villi was predominantly nuclear, coinciding with an overall increase in sumoylated DREAM and correlating inversely with GCM1 levels. In conclusion, our data reveal a calcium-regulated pathway whereby GCM1-directed villous trophoblast differentiation is repressed by DREAM. This pathway may be relevant to disease prevention via calcium supplementation. PMID:23300953
A new dynamical downscaling approach with GCM bias corrections and spectral nudging
NASA Astrophysics Data System (ADS)
Xu, Zhongfeng; Yang, Zong-Liang
2015-04-01
To improve confidence in regional projections of future climate, a new dynamical downscaling (NDD) approach with both general circulation model (GCM) bias corrections and spectral nudging is developed and assessed over North America. GCM biases are corrected by adjusting GCM climatological means and variances based on reanalysis data before the GCM output is used to drive a regional climate model (RCM). Spectral nudging is also applied to constrain RCM-based biases. Three sets of RCM experiments are integrated over a 31-year period. In the first set of experiments, the model configurations are identical except that the initial and lateral boundary conditions are derived from either the original GCM output, the bias-corrected GCM output, or the reanalysis data. The second set of experiments is the same as the first set except spectral nudging is applied. The third set of experiments includes two sensitivity runs with both GCM bias corrections and nudging where the nudging strength is progressively reduced. All RCM simulations are assessed against the North American Regional Reanalysis. The results show that NDD significantly improves the downscaled mean climate and climate variability relative to other GCM-driven RCM downscaling approaches in terms of climatological mean air temperature, geopotential height, wind vectors, and surface air temperature variability. In the NDD approach, spectral nudging introduces the effects of GCM bias corrections throughout the RCM domain rather than just limiting them to the initial and lateral boundary conditions, thereby minimizing climate drifts resulting from both the GCM and RCM biases.
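The mean-and-variance correction described above can be sketched in a few lines (an illustrative sketch under our own naming conventions, not the authors' code): the GCM field is recentered on the reanalysis climatological mean and its anomalies are rescaled by the ratio of reanalysis to GCM climatological standard deviations.

```python
import numpy as np

def bias_correct(gcm, gcm_clim_mean, gcm_clim_std, rean_clim_mean, rean_clim_std):
    """Adjust a GCM series so its climatological mean and variance match
    those of a reanalysis before it is used to drive an RCM."""
    return rean_clim_mean + (gcm - gcm_clim_mean) * (rean_clim_std / gcm_clim_std)

# Example: a synthetic GCM temperature series that runs warm and too variable
gcm = np.array([6.0, 8.0, 10.0, 12.0])
corrected = bias_correct(gcm, gcm_clim_mean=gcm.mean(), gcm_clim_std=gcm.std(),
                         rean_clim_mean=7.0, rean_clim_std=1.0)
# corrected now has the reanalysis climatological mean (7.0) and std (1.0)
```

In practice the climatological statistics would be computed per calendar month or season so that the correction preserves the GCM's own variability structure while removing its systematic offset and amplitude errors.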
NASA Astrophysics Data System (ADS)
Menzel, R.; Paynter, D.; Jones, A. L.
2017-12-01
Due to their relatively low computational cost, radiative transfer models in global climate models (GCMs) run on traditional CPU architectures generally consist of shortwave and longwave parameterizations over a small number of wavelength bands. With the rise of newer GPU and MIC architectures, however, the performance of high-resolution line-by-line radiative transfer models may soon approach that of the physical parameterizations currently employed in GCMs. Here we present an analysis of the current performance of a new line-by-line radiative transfer model under development at GFDL. Although originally designed to specifically exploit GPU architectures through the use of CUDA, the radiative transfer model has recently been extended to include OpenMP in an effort to also effectively target MIC architectures such as Intel's Xeon Phi. Using input data provided by the upcoming Radiative Forcing Model Intercomparison Project (RFMIP, as part of CMIP6), we compare model results and performance data for various model configurations and spectral resolutions run on both GPU and Intel Knights Landing architectures to analogous runs of the standard Oxford Reference Forward Model on traditional CPUs.
A Study of the Relationship Between Personality Characteristics and Ethical Sensitivity in Business.
1992-09-01
Functional Preferences and Descriptions of the Sixteen Personality Types: ISTJ ISFJ INFJ INTJ ISTP ISFP INFP INTP ESTP ESFP ENFP ENTP ESTJ ESFJ ENFJ ENTJ... AD-A258 421, AFIT/GCM/LSM/92S-9, A Study of the Relationship Between Personality Characteristics and Ethical Sensitivity in Business
NASA Astrophysics Data System (ADS)
Chen, Ying-Wen; Seiki, Tatsuya; Kodama, Chihiro; Satoh, Masaki; Noda, Akira T.
2018-02-01
Satellite observation and general circulation model (GCM) studies suggest that precipitating ice makes nonnegligible contributions to the radiation balance of the Earth. However, in most GCMs, precipitating ice is diagnosed and its radiative effects are not taken into account. Here we examine the longwave radiative impact of precipitating ice using a global nonhydrostatic atmospheric model with a double-moment cloud microphysics scheme. An off-line radiation model is employed to determine cloud radiative effects according to the amount and altitude of each type of ice hydrometeor. Results show that the snow radiative effect reaches 2 W m-2 in the tropics, which is about half the value estimated by previous studies. This effect is strongly dependent on the vertical separation of ice categories and is partially generated by differences in terminal velocities, which are not represented in GCMs with diagnostic precipitating ice. Results from sensitivity experiments that artificially change the categories and altitudes of precipitating ice show that the simulated longwave heating profile and longwave radiation field are sensitive to the treatment of precipitating ice in models. This study emphasizes the importance of incorporating appropriate treatments for the radiative effects of precipitating ice in cloud and radiation schemes in GCMs in order to capture the cloud radiative effects of upper level clouds.
NASA Astrophysics Data System (ADS)
Dhara, Chirag; Renner, Maik; Kleidon, Axel
2015-04-01
The convective transport of heat and moisture plays a key role in the climate system, but this transport is typically parameterized in models. Here, we aim at the simplest possible physical representation and treat convective heat fluxes as the result of a heat engine. We combine the well-known Carnot limit of this heat engine with the energy balances of the surface-atmosphere system, which describe how the temperature difference is affected by convective heat transport, yielding a maximum power limit of convection. This results in a simple analytic expression for convective strength that depends primarily on surface solar absorption. We compare this expression with an idealized grey-atmosphere radiative-convective (RC) model as well as with general circulation model (GCM) simulations at the grid scale. We find that our simple expression, as well as the RC model, can explain much of the geographic variation of the GCM output, resulting in strong linear correlations among the three approaches. The RC model, however, shows a lower bias than our simple expression. We identify the use of prescribed convective adjustment in RC-like models as the reason for the lower bias. The strength of our model lies in its ability to capture the geographic variation of convective strength with a parameter-free expression. On the other hand, the comparison with the RC model indicates a way to improve the formulation of radiative transfer in our simple approach. We also find that the latent heat fluxes, as well as their sensitivity to surface warming, compare very well among the approaches. Our comparison suggests that the strength of convection and its sensitivity in the climatic mean can be estimated relatively robustly by rather simple approaches.
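The maximum-power argument summarized above can be illustrated with a short numerical sketch. This is a toy model, not the authors' code: the linear dependence of the temperature difference on the convective flux and all parameter values below are assumptions. Treating the convective flux J as the working rate of a Carnot engine with power P = J·ΔT/Ts, and letting ΔT decrease linearly with J, maximizing P recovers the classic result that the optimal flux is half the flux that would erase the temperature difference.

```python
import numpy as np

def convective_power(J, dT0, J_rad, Ts=288.0):
    """Carnot power of convection, P = J * dT / Ts, with the
    surface-atmosphere temperature difference dT reduced linearly
    by the convective flux J (assumed energy-balance closure)."""
    dT = dT0 * (1.0 - J / J_rad)
    return J * dT / Ts

dT0 = 20.0     # K, temperature difference without convection (assumed)
J_rad = 160.0  # W/m^2, flux that would erase the difference (assumed)

J = np.linspace(0.0, J_rad, 10001)
P = convective_power(J, dT0, J_rad)
J_opt = J[np.argmax(P)]

# Analytic optimum of P(J) = J * dT0 * (1 - J/J_rad) / Ts is J_rad / 2
print(J_opt)                        # ~80.0 W/m^2
print(dT0 * (1 - J_opt / J_rad))    # optimal dT ~ dT0/2 = 10 K
```

The parameter-free character of the authors' expression comes from tying dT0 and J_rad to surface solar absorption; here they are simply prescribed.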
NASA Astrophysics Data System (ADS)
Salzmann, M.; Ming, Y.; Golaz, J.-C.; Ginoux, P. A.; Morrison, H.; Gettelman, A.; Krämer, M.; Donner, L. J.
2010-08-01
A new stratiform cloud scheme, including a two-moment bulk microphysics module, a cloud cover parameterization allowing ice supersaturation, and an ice nucleation parameterization, has been implemented into the recently developed GFDL AM3 general circulation model (GCM) as part of an effort to treat aerosol-cloud-radiation interactions more realistically. Unlike the original scheme, the new scheme facilitates the study of cloud-ice-aerosol interactions via the influences of dust and sulfate on ice nucleation. While the liquid and cloud ice water paths associated with stratiform clouds are similar for the new and the original scheme, column-integrated droplet numbers and global frequency distributions (PDFs) of droplet effective radii differ significantly. This difference is in part due to a difference in the implementation of the Wegener-Bergeron-Findeisen (WBF) mechanism, which leads to a larger contribution from supercooled droplets in the original scheme. Due to the WBF mechanism, clouds are more likely to be either completely glaciated or completely liquid in the new scheme. Supersaturations over ice simulated with the new scheme are in qualitative agreement with observations, and PDFs of ice numbers and effective radii appear reasonable in the light of observations. In particular, the temperature dependence of ice numbers qualitatively agrees with in-situ observations. The global average long-wave cloud forcing decreases in comparison to the original scheme, as expected when supersaturation over ice is allowed. Anthropogenic aerosols lead to a larger decrease in short-wave absorption (SWABS) in the new model setup, but outgoing long-wave radiation (OLR) decreases as well, so that the net effect of including anthropogenic aerosols on the net radiation at the top of the atmosphere (netradTOA = SWABS - OLR) is of similar magnitude for the new and the original scheme.
Organ shielding and doses in Low-Earth orbit calculated for spherical and anthropomorphic phantoms
NASA Astrophysics Data System (ADS)
Matthiä, Daniel; Berger, Thomas; Reitz, Günther
2013-08-01
Humans in space are exposed to elevated levels of radiation compared to those at ground level. Different sources contribute to the total exposure, with galactic cosmic rays being the most important component. The application of numerical and anthropomorphic phantoms in simulations allows the estimation of dose rates from galactic cosmic rays in individual organs and of whole-body quantities such as the effective dose. The male and female reference phantoms defined by the International Commission on Radiological Protection and the hermaphrodite numerical RANDO phantom are voxel implementations of anthropomorphic phantoms and contain all organs relevant for radiation risk assessment. These anthropomorphic phantoms, together with a spherical water phantom, were used in this work to translate the mean shielding of organs in the different anthropomorphic voxel phantoms into positions in the spherical phantom. This relation allows a water sphere to be used as a surrogate for the anthropomorphic phantoms in both simulations and measurements. Moreover, using spherical phantoms in the calculation of radiation exposure offers great advantages over anthropomorphic phantoms in terms of computational time. In this work, the mean shielding of organs in the different voxel phantoms exposed to isotropic irradiation is presented, as well as the corresponding depth in a water sphere. Dose rates in Low-Earth orbit from galactic cosmic rays during solar minimum conditions were calculated using the different phantoms and are compared to the results for a spherical water phantom in combination with the mean organ shielding. For the spherical water phantom, the impact of different aluminium shielding thicknesses between 1 g/cm2 and 100 g/cm2 was calculated. The dose equivalent rates were used to estimate the effective dose rate.
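The sphere-to-phantom translation described above rests on the mean shielding an interior point sees under isotropic irradiation. As a geometry sketch (my own illustration, not the authors' code; a water density of 1 g/cm3 is assumed so that depth in cm equals areal density in g/cm2, and the sphere radius is an arbitrary choice), the mean path length from a point at offset r inside a sphere of radius R can be estimated by averaging the distance to the surface over isotropic directions:

```python
import numpy as np

def mean_path_to_surface(r, R, n=200000):
    """Average distance from an interior point (offset r from the centre)
    to the surface of a sphere of radius R, over isotropic directions.
    Isotropy means the direction cosine mu = cos(theta) relative to the
    outward radial axis is uniform on [-1, 1]."""
    mu = np.random.default_rng(0).uniform(-1.0, 1.0, n)
    # ray-sphere intersection distance for a ray starting at offset r
    s = -r * mu + np.sqrt(R**2 - r**2 * (1.0 - mu**2))
    return s.mean()

R = 25.0  # cm water sphere (assumed size)
print(mean_path_to_surface(0.0, R))   # at the centre every ray traverses exactly R
print(mean_path_to_surface(20.0, R))  # off-centre points see less shielding on average
```

Matching an organ's mean shielding to a depth in the sphere then amounts to inverting this relation for r.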
Role of natural polysaccharides in radiation formation of PVA hydrogel wound dressing
NASA Astrophysics Data System (ADS)
Varshney, Lalit
2007-02-01
Radiation-processed PVA-polysaccharide hydrogels have been observed to be suitable for producing transparent, flexible, mechanically strong, biocompatible, effective and economical hydrogel dressings. The dressings were formed in a single-stage irradiation process, achieving gel formation and sterilization at a gamma radiation dose of 25-30 kGy. No synthetic plasticizers or additives were used. Different formulations containing poly(vinyl alcohol) (PVA) and polysaccharides selected from combinations of agar and carrageenan were used to make the dressings. The selected polysaccharides themselves form thermo-reversible gels and degrade on irradiation. Using polysaccharide concentrations as low as 0.5-2% increased the tensile strength from 45 g/cm2 to 411 g/cm2, the elongation from 30% to 410% and the water uptake from 25% to 157% with respect to PVA gel without polysaccharides. Besides improving mechanical strength, agar contributes more to the elongation and carrageenan to the mechanical strength of the gel dressing. PVA formulations containing the polysaccharides show significantly different pre-gel viscosity behaviour. Increasing the concentration of agar in the formulation to about 2% converts the sheet gel into a paste gel useful for filling wound cavities. The results indicate that the pre-irradiation network structure of the formulation plays an important role in determining the mechanical properties of the irradiated gel dressing. Formulations containing 7-9% PVA, 0.5-1.5% carrageenan and 0.5-1% agar gave highly effective, usable hydrogel dressings. Scanning electron micrographs show the highly porous structure of the gel. Clinical trials of the wound dressing on human patients established the safety and efficacy of the dressing. The dressing has been observed to be useful in treating burns, non-healing ulcers of diabetes and leprosy, and other external wounds. The dressings are now being marketed in India under different brand names.
NASA Astrophysics Data System (ADS)
Jha, V.; Kahre, M. A.
2017-12-01
The Mars atmosphere has low levels of dust during Northern Hemisphere (NH) spring and summer (the non-dusty season) and increased levels during NH autumn and winter (the dusty season). In the absence of regional or global storms, dust devils and local storms maintain a background minimum dust loading during the non-dusty season. While observational surveys and Global Climate Model (GCM) studies suggest that dust devils are likely to be major contributors to the background haze during NH spring and summer, a complete understanding of the relative contribution of dust devils and local dust storms has not yet been achieved. We present preliminary results from an investigation that focuses on the effects of radiatively active water ice clouds on dust lifting processes during these seasons. Water ice clouds are known to affect atmospheric temperatures directly by absorption and emission of thermal infrared radiation and indirectly through dynamical feedbacks. Our goal is to understand how clouds affect the contribution by local (wind stress) dust storms to the background dust haze during NH spring and summer. The primary tool for this work is the NASA Ames Mars GCM, which contains physical parameterizations for a fully interactive dust cycle. Three simulations that included wind stress dust lifting were executed for a period of 5 Martian years: a case that included no cloud formation, a case that included radiatively inert cloud formation and a case that included radiatively active cloud (RAC) formation. Results show that when radiatively active clouds are included, the clouds in the aphelion cloud belt radiatively heat the atmosphere aloft in the tropics (Figure 1). This heating produces a stronger overturning circulation, which in turn produces an enhanced low-level flow in the Hadley cell return branch. The stronger low-level flow drives higher surface stresses and increased dust lifting in those locations. 
We examine how realistic these simulated results are by comparing the spatial pattern of predicted wind stress lifting with a catalog of observed local storms. Better agreement is achieved in the radiatively active cloud case. These results suggest that wind stress lifting may contribute more to maintaining the background dust haze during NH spring and summer than what previous studies have shown.
Simulation of Aerosol Transport and Radiative Effects In Lmd-gcm During Indoex-ifp 1999
NASA Astrophysics Data System (ADS)
Reddy, M. S.; Boucher, O.; Léon, J.-F.; Venkataraman, C.; Pham, M.
During January-March 1999, an international collaborative field experiment, the Indian Ocean Experiment (INDOEX), was carried out to understand anthropogenic aerosol effects on radiative forcing (Ramanathan, 2001). In the present work we simulated the cycle of the multi-component aerosol (sulphate, black carbon, organic carbon, dust, sea-salt and fly-ash) in the Laboratoire de Météorologie Dynamique General Circulation Model (LMD GCM) and estimated the consequent radiative forcing. Simulations were carried out in the zoomed version of the model, focusing on the Indian subcontinent and Indian Ocean regions, for January-April 1999. To account correctly for aerosol emissions in the source regions (Indian subcontinent), we integrated a newly developed SO2 and aerosol emission inventory for India for 1999 (Reddy and Venkataraman, 2002a and b) into the global emission data set input to the model. Model performance is evaluated by comparing the simulated aerosol concentration fields against measurements at continental and oceanic stations. Model-predicted concentrations agree well at the oceanic stations but are at the lower end of measurements at the continental stations. A large plume of sulphate and other aerosols extended from the Indian subcontinent into the Indian Ocean, from surface and elevated flows, extending down to 5°S in the pristine southern Indian Ocean. Predicted spectrally resolved aerosol optical depths (AOD) will be compared with sun-photometer measurements in the region. We also present a comparison of model-predicted aerosol optical depths with satellite (Meteosat) derived AOD for the same period. An assessment of the multi-component aerosol radiative forcing will be made, and results will be discussed in the context of possible climate effects over the region. Finally, the regional source contributions to sulphate and carbonaceous aerosol loadings in the Indian Ocean will be presented.
Radiation from advanced solid rocket motor plumes
NASA Technical Reports Server (NTRS)
Farmer, Richard C.; Smith, Sheldon D.; Myruski, Brian L.
1994-01-01
The overall objective of this study was to develop an understanding of solid rocket motor (SRM) plumes in sufficient detail to accurately explain the majority of plume radiation test data. Improved flowfield and radiation analysis codes were developed to accurately and efficiently account for all the factors that affect radiative heating from rocket plumes. These codes were verified by comparing predicted plume behavior with measured NASA/MSFC ASRM test data. Upon conducting a thorough review of the current state of the art of SRM plume flowfield and radiation prediction methodology and the pertinent database, the following analyses were developed for future design use. The NOZZRAD code was developed for preliminary base-heating design and Al2O3 particle optical property data evaluation using a generalized two-flux solution to the radiative transfer equation. The IDARAD code was developed for rapid evaluation of plume radiation effects using the spherical harmonics method of differential approximation to the radiative transfer equation. The FDNS CFD code with fully coupled Euler-Lagrange particle tracking was validated by comparison to predictions made with the industry-standard RAMP code for SRM nozzle flowfield analysis. The FDNS code provides the ability to analyze not only rocket nozzle flow, but also axisymmetric and three-dimensional plume flowfields with state-of-the-art CFD methodology. Procedures for conducting meaningful thermo-vision camera studies were developed.
TORUS: Radiation transport and hydrodynamics code
NASA Astrophysics Data System (ADS)
Harries, Tim
2014-04-01
TORUS is a flexible radiation transfer and radiation-hydrodynamics code. The code has a basic infrastructure that includes an AMR mesh scheme used by several physics modules, including atomic line transfer in a moving medium, molecular line transfer, photoionization, radiation hydrodynamics and radiative equilibrium. TORUS is useful for a variety of problems, including magnetospheric accretion onto T Tauri stars, spiral nebulae around Wolf-Rayet stars, discs around Herbig AeBe stars, structured winds of O supergiants and Raman-scattered line formation in symbiotic binaries, and dust emission and molecular line formation in star-forming clusters. The code is written in Fortran 2003 and is compiled using a standard GNU makefile. The code is parallelized using both MPI and OpenMP, and can use these parallel sections either separately or in a hybrid mode.
Feuer, Alexis J; Thai, Ashley; Demmer, Ryan T; Vogiatzi, Maria
2016-12-05
Murine studies reveal that sympathetic nervous system activation leads to decreased bone mass. Stimulant medications used to treat attention-deficit/hyperactivity disorder (ADHD) increase sympathetic tone and may affect bone remodeling. Because bone mass accrual is completed by young adulthood, assessing stimulant effects on bone density in growing children is of critical importance. To investigate associations between stimulant use and bone mass in children and adolescents. This cross-sectional analysis used data collected from January 1, 2005, to December 31, 2010, from the National Health and Nutrition Examination Survey (NHANES) database. NHANES is a series of cross-sectional, nationally representative health and nutrition surveys of the US population. All children, adolescents, and young adults aged 8 to 20 years with dual-energy x-ray absorptiometry (DXA), anthropometric, demographic, and prescription medication use data were eligible for participation. Of the 6489 respondents included in the multivariable linear regression analysis, 159 were stimulant users and 6330 were nonusers. Data were analyzed from October 8, 2015, to December 31, 2016. Stimulant use, determined by questionnaires administered via interview. The association between stimulant use and total femur, femoral neck, and lumbar spine bone mineral content (BMC) and bone mineral density (BMD) was assessed using DXA. Study participants included 6489 NHANES participants with a mean (SD) age of 13.6 (3.6) years. Stimulant use was associated with lower bone mass after adjustment for covariates. Mean lumbar spine BMC was significantly lower in stimulant users vs nonusers (12.76 g; 95% CI, 12.28-13.27 g vs 13.38 g; 95% CI, 13.26-13.51 g; P = .02), as was mean lumbar spine BMD (0.90 g/cm2; 95% CI, 0.87-0.94 g/cm2 vs 0.94 g/cm2; 95% CI, 0.94-0.94 g/cm2; P = .03) and mean femoral neck BMC (4.34 g; 95% CI, 4.13-4.57 g vs 4.59 g; 95% CI, 4.56-4.62 g; P = .03). 
Mean BMD of the femoral neck (0.88 g/cm2; 95% CI, 0.84-0.91 g/cm2 vs 0.91 g/cm2; 95% CI, 0.90-0.91 g/cm2; P = .08) and total femur (0.94 g/cm2; 95% CI, 0.90-0.99 g/cm2 vs 0.99 g/cm2; 95% CI, 0.98-0.99 g/cm2; P = .05) were also lower in stimulant users vs nonusers. Participants treated with stimulants for 3 months or longer had significantly lower lumbar spine BMD (0.89 g/cm2; 95% CI, 0.85-0.93 g/cm2 vs 0.94 g/cm2; 95% CI, 0.94-0.94 g/cm2; P = .02) and BMC (12.71 g; 95% CI, 12.14-13.32 g vs 13.38 g; 95% CI, 13.25-13.51 g; P = .03) and femoral neck BMD (0.87 g/cm2; 95% CI, 0.74-0.83 g/cm2 vs 0.91 g/cm2; 95% CI, 0.83-0.84 g/cm2; P = .048) than nonusers. Children and adolescents reporting stimulant use had lower DXA measurements of the lumbar spine and femur compared with nonusers. These findings support the need for future prospective studies to examine the effects of stimulant use on bone mass in children.
Nicolucci, P; Schuch, F
2012-06-01
To use the Monte Carlo code PENELOPE to study the attenuation and tissue-equivalence properties of α-Al2O3:C for OSL dosimetry. Mass attenuation coefficients of α-Al2O3 and α-Al2O3:C with carbon weight concentrations from 1% to 150% were simulated with the PENELOPE Monte Carlo code and compared to mass attenuation coefficients of soft tissue for photon beams ranging from 50 kV to 10 MV. Also, the attenuation of primary photon beams of 6 MV and 10 MV and the generation of secondary electrons by α-Al2O3:C dosimeters positioned on the entrance surface of a water phantom were studied. A difference of up to 90% was found in the mass attenuation coefficient between the pure α-Al2O3 and the material with 150% weight concentration of dopant at 1.5 keV, corresponding to the K-edge photoelectric absorption of aluminum. However, for energies above 80 keV the concentration of carbon does not affect the mass attenuation coefficient, and the material presents tissue equivalence for the beams studied. The ratio between the mass attenuation coefficients of α-Al2O3:C and of soft tissue is less than unity due to the higher density of α-Al2O3 (2.12 g/cm3), and its tissue equivalence diminishes at lower concentrations of carbon and at lower energies due to the relation of the radiation interaction effects with atomic number. The largest attenuation of the primary photon beams by the dosimeter was 16% at 250 keV, and the maximum increase in secondary electron fluence at the entrance surface of the phantom was found to be 91% at 2 MeV. The use of OSL dosimeters in radiation therapy can be optimized by use of PENELOPE Monte Carlo simulation to provide a study of the attenuation and response characteristics of the material. © 2012 American Association of Physicists in Medicine.
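The dependence of the doped material's attenuation on carbon concentration follows from the standard elemental mixture rule, (μ/ρ)_compound = Σᵢ wᵢ (μ/ρ)ᵢ. A minimal sketch (the elemental coefficient values below are illustrative placeholders at an unspecified photon energy, not PENELOPE output):

```python
def mass_attenuation(mix, mu_rho):
    """Mixture rule: (mu/rho) of a compound is the weight-fraction-weighted
    sum of elemental mass attenuation coefficients (cm^2/g)."""
    total = sum(mix.values())
    return sum(w / total * mu_rho[el] for el, w in mix.items())

# Illustrative (placeholder) elemental coefficients, cm^2/g
mu_rho = {"Al": 0.075, "O": 0.070, "C": 0.060}

# Al2O3 by weight: 2 x 26.98 (Al) + 3 x 16.00 (O)
al2o3 = {"Al": 2 * 26.98, "O": 3 * 16.00}
# add carbon at 10% of the Al2O3 mass (hypothetical dopant loading)
doped = dict(al2o3, C=0.10 * (2 * 26.98 + 3 * 16.00))

print(mass_attenuation(al2o3, mu_rho))
print(mass_attenuation(doped, mu_rho))  # carbon pulls the coefficient down slightly
```

With real energy-dependent coefficients, the same routine reproduces the abstract's observation that the dopant matters strongly near the aluminum K-edge but little above ~80 keV.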
Feng, X; Liu, G; Chen, J M; Chen, M; Liu, J; Ju, W M; Sun, R; Zhou, W
2007-11-01
The terrestrial carbon cycle is one of the foci of global climate change research. Simulating the net primary productivity (NPP) of terrestrial ecosystems is important for carbon cycle research. In this study, China's terrestrial NPP was simulated using the Boreal Ecosystem Productivity Simulator (BEPS), a carbon-water coupled process model based on remote sensing inputs. For these purposes, a nationwide database (including leaf area index, land cover, meteorology, vegetation and soil) at 1 km resolution and a validation database were established. Using these databases and BEPS, daily maps of NPP for the entire landmass of China in 2001 were produced, and gross primary productivity (GPP) and autotrophic respiration (RA) were estimated. Using the simulated results, we explore the spatiotemporal patterns of China's terrestrial NPP and the mechanisms of its responses to various environmental factors. The total NPP and mean NPP of China's landmass were 2.235 GtC and 235.2 g C m-2 yr-1, respectively; the total GPP and mean GPP were 4.418 GtC and 465 g C m-2 yr-1; and the total RA and mean RA were 2.227 GtC and 234 g C m-2 yr-1, respectively. On average, NPP was 50.6% of GPP. In addition, a statistical analysis of the NPP of different land cover types was conducted, and spatiotemporal patterns of NPP were investigated. The responses of NPP to changes in key factors such as LAI, precipitation, temperature, solar radiation, VPD and AWC are evaluated and discussed.
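The carbon-budget identity behind these totals, NPP = GPP − RA, can be checked directly with the numbers reported in the abstract (small rounding differences between the independently reported totals are expected):

```python
GPP = 4.418  # GtC, total gross primary productivity (from the abstract)
RA  = 2.227  # GtC, total autotrophic respiration (from the abstract)
NPP = 2.235  # GtC, total net primary productivity (from the abstract)

print(GPP - RA)                    # 2.191 GtC, close to the reported NPP total
print(round(NPP / GPP * 100, 1))   # 50.6 -> matches "NPP was 50.6% of GPP"
```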
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2007-01-01
Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. Using these satellite data to improve our understanding of the physical processes responsible for variations in global and regional climate and hydrological systems requires a coupled global circulation model (GCM) and cloud-scale model (termed a superparameterization or multi-scale modeling framework, MMF). The use of a GCM enables global coverage, and the use of a CRM allows for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud-related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite-volume general circulation model (fvGCM), and it has started production runs, with results for two years (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (2ICE, several 3ICE), Goddard radiation (including explicitly calculated cloud optical properties), and the Goddard Land Information System (LIS, which includes the CLM and NOAH land surface models) into a next-generation regional scale model, WRF. In this talk, I will present: (1) a brief review of the GCE model and its applications to precipitation processes (microphysical and land processes), (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), and (3) a discussion of the Goddard WRF version (its developments and applications).
A Coupled fcGCM-GCE Modeling System: A 3D Cloud Resolving Model and a Regional Scale Model
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2005-01-01
Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. Using these satellite data to improve our understanding of the physical processes responsible for variations in global and regional climate and hydrological systems requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF). The use of a GCM enables global coverage, and the use of a CRM allows for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud-related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite-volume general circulation model (fvGCM), and it has started production runs, with results for two years (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (2ICE, several 3ICE), Goddard radiation (including explicitly calculated cloud optical properties), and the Goddard Land Information System (LIS, which includes the CLM and NOAH land surface models) into a next-generation regional scale model, WRF.
In this talk, I will present: (1) a brief review of the GCE model and its applications to precipitation processes (microphysical and land processes), (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), (3) a discussion of the Goddard WRF version (its developments and applications), and (4) the characteristics of the four-dimensional cloud datasets (or cloud library) stored at Goddard.
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2006-01-01
Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol and other data at very fine spatial and temporal scales. Using these satellite data to improve our understanding of the physical processes responsible for variations in global and regional climate and hydrological systems requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF). The use of a GCM enables global coverage, and the use of a CRM allows for better and more sophisticated physical parameterization. NASA satellite and field campaign cloud-related datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite-volume general circulation model (fvGCM), and it has started production runs, with results for two years (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (2ICE, several 3ICE), Goddard radiation (including explicitly calculated cloud optical properties), and the Goddard Land Information System (LIS, which includes the CLM and NOAH land surface models) into a next-generation regional scale model, WRF. In this talk, I will present: (1) a brief review of the GCE model and its applications to precipitation processes (microphysical and land processes), (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), and (3) a discussion of the Goddard WRF version (its developments and applications).
Quantifying the effect of varying GHG concentrations in Regional Climate Models
NASA Astrophysics Data System (ADS)
López-Romero, Jose Maria; Jerez, Sonia; Palacios-Peña, Laura; José Gómez-Navarro, Juan; Jiménez-Guerrero, Pedro; Montavez, Juan Pedro
2017-04-01
Regional Climate Models (RCMs) are driven at their boundaries by Global Circulation Models (GCMs), and in the particular case of climate change projections, such simulations are forced by varying greenhouse gas (GHG) concentrations. In hindcast simulations driven by reanalysis products, the climate change signal is usually introduced in the assimilation process as well. An interesting question arising in this context is whether GHG concentrations have to be varied within the RCM itself, or whether they can be kept constant. Some groups keep the GHG concentrations constant under the assumption that information about the climate change signal is supplied through the boundaries; sometimes certain radiation parameterization schemes do not permit such changes. Other approaches vary these concentrations, arguing that this preserves physical coherence with respect to the driving conditions for the RCM. This work aims to shed light on this topic. To this end, several regional climate simulations with the WRF model for the period 1954-2004 have been carried out on a Euro-CORDEX-compliant domain. A series of simulations with constant and varying GHGs have been performed using both GCM (ECHAM6-OM) and reanalysis (ERA-20C) data. Results indicate that there are noticeable differences when varying GHG concentrations are introduced within the RCM domain. The differences in 2-m temperature series between the experiments with varying and constant GHG concentrations depend strongly on the atmospheric conditions, showing strong interannual variability; this suggests that short-term experiments are not recommended if the aim is to assess the role of varying GHGs. In addition, and consistently in both the GCM- and reanalysis-driven experiments, the magnitude of the temperature trends, as well as the spatial pattern produced by the varying-GHG experiment, are closer to the driving dataset than in the experiments keeping the GHG concentrations constant.
These results point towards the need to include varying GHG concentrations within the RCM itself when dynamically downscaling global datasets, in both GCM-driven and hindcast simulations.
HZETRN radiation transport validation using balloon-based experimental data
NASA Astrophysics Data System (ADS)
Warner, James E.; Norman, Ryan B.; Blattnig, Steve R.
2018-05-01
The deterministic radiation transport code HZETRN (High charge (Z) and Energy TRaNsport) was developed by NASA to study the effects of cosmic radiation on astronauts and instrumentation shielded by various materials. This work presents an analysis of computed differential flux from HZETRN compared with measurement data from three balloon-based experiments over a range of atmospheric depths, particle types, and energies. Model uncertainties were quantified using an interval-based validation metric that takes into account measurement uncertainty both in the flux and the energy at which it was measured. Average uncertainty metrics were computed for the entire dataset as well as subsets of the measurements (by experiment, particle type, energy, etc.) to reveal any specific trends of systematic over- or under-prediction by HZETRN. The distribution of individual model uncertainties was also investigated to study the range and dispersion of errors beyond just single scalar and interval metrics. The differential fluxes from HZETRN were generally well-correlated with balloon-based measurements; the median relative model difference across the entire dataset was determined to be 30%. The distribution of model uncertainties, however, revealed that the range of errors was relatively broad, with approximately 30% of the uncertainties exceeding ± 40%. The distribution also indicated that HZETRN systematically under-predicts the measurement dataset as a whole, with approximately 80% of the relative uncertainties having negative values. Instances of systematic bias for subsets of the data were also observed, including a significant underestimation of alpha particles and protons for energies below 2.5 GeV/u. Muons were found to be systematically over-predicted at atmospheric depths deeper than 50 g/cm2 but under-predicted for shallower depths. 
Furthermore, a systematic under-prediction of alpha particles and protons was observed below the geomagnetic cutoff, suggesting that improvements to the light ion production cross sections in HZETRN should be investigated.
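The scalar summaries quoted above (median relative difference, fraction of errors beyond ±40%, fraction of under-predictions) are straightforward to compute once model and measured fluxes are paired. A minimal sketch, assuming the simple relative-difference definition (model − data)/data and purely illustrative flux values (these are not HZETRN or balloon numbers):

```python
from statistics import median

def relative_differences(model, measured):
    """Relative model-data differences: (model - data) / data."""
    return [(m, d) for m, d in zip(model, measured)] and \
           [(m - d) / d for m, d in zip(model, measured)]

def summarize(rel):
    """Summary metrics analogous to those quoted in the abstract."""
    return {
        "median_abs_pct": 100.0 * median(abs(r) for r in rel),
        "frac_exceeding_40pct": sum(abs(r) > 0.40 for r in rel) / len(rel),
        "frac_underpredicted": sum(r < 0.0 for r in rel) / len(rel),
    }

# Hypothetical differential fluxes; the numbers are illustrative only.
measured = [120.0, 80.0, 45.0, 20.0, 9.0]
model    = [100.0, 60.0, 40.0, 21.0, 5.0]
stats = summarize(relative_differences(model, measured))
```

A negative relative difference flags under-prediction, matching the sign convention implied in the abstract.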
Overlap Properties of Clouds Generated by a Cloud Resolving Model
NASA Technical Reports Server (NTRS)
Oreopoulos, L.; Khairoutdinov, M.
2002-01-01
In order for General Circulation Models (GCMs), one of our most important tools for predicting future climate, to correctly describe the propagation of solar and thermal radiation through the cloudy atmosphere, a realistic description of the vertical distribution of cloud amount is needed. One needs not only the cloud amounts at different levels of the atmosphere, but also how these cloud amounts are related, in other words, how they overlap. Currently GCMs make idealized assumptions about cloud overlap, for example that contiguous cloud layers overlap maximally and non-contiguous cloud layers overlap randomly. Since there are difficulties in obtaining the vertical profile of cloud amount from observations, the realism of the overlap assumptions made in GCMs has not yet been rigorously investigated. Recently, however, cloud observations from a relatively new type of ground-based radar have been used to examine the vertical distribution of cloudiness. These observations suggest that the GCM overlap assumptions are dubious. Our study uses cloud fields from sophisticated models dedicated to simulating cloud formation, maintenance, and dissipation, called Cloud Resolving Models (CRMs). These models are generally considered capable of producing realistic three-dimensional representations of cloudiness. Using numerous cloud fields produced by such a CRM, we show that the degree of overlap between cloud layers is a function of their separation distance, and is in general described by a combination of the maximum and random overlap assumptions, with random overlap dominating as separation distances increase. We show that it is possible to parameterize this behavior in a way that can eventually be incorporated in GCMs. Our results bear a significant resemblance to the results from the radar observations despite the completely different nature of the datasets. 
This consistency is encouraging and will promote development of new radiative transfer codes that will estimate the radiation effects of multi-layer cloud fields more accurately.
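The separation-dependent blend of maximum and random overlap described above is commonly written as a weighted combination whose weight decays with layer separation. A sketch assuming an exponential decay with a hypothetical decorrelation length (the functional form and parameter value are illustrative, not the paper's fitted parameterization):

```python
import math

def combined_overlap(c1, c2, separation, decorr_length):
    """Total cloud cover of two layers, blending maximum and random overlap.

    alpha -> 1 (maximum overlap) for small separations,
    alpha -> 0 (random overlap) as the separation grows.
    """
    alpha = math.exp(-separation / decorr_length)
    c_max = max(c1, c2)           # maximum overlap: smaller cloud fully hidden
    c_ran = c1 + c2 - c1 * c2     # random overlap: independent placement
    return alpha * c_max + (1.0 - alpha) * c_ran

# Two 50% cloud layers: total cover runs from 0.5 (adjacent layers)
# toward 0.75 (widely separated layers). Lengths in km, illustrative.
near = combined_overlap(0.5, 0.5, separation=0.1, decorr_length=2.0)
far  = combined_overlap(0.5, 0.5, separation=20.0, decorr_length=2.0)
```

As the abstract notes, random overlap dominates at large separations, which is exactly the alpha → 0 limit here.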
Effects of Aerosol on Atmospheric Dynamics and Hydrologic Processes During Boreal Spring and Summer
NASA Technical Reports Server (NTRS)
Lau, William K. M.; Kim, M. K.; Kim, K. M.; Chin, Mian
2005-01-01
Global and regional climate impacts of present-day aerosol loading during boreal spring are investigated using the NASA finite-volume General Circulation Model (fvGCM). Three-dimensional distributions of the loadings of five species of tropospheric aerosols, i.e., sulfate, black carbon, organic carbon, soil dust, and sea salt, are prescribed from outputs of the Goddard Ozone Chemistry Aerosol Radiation and Transport model (GOCART). The aerosol loadings are used to calculate the extinction coefficient, single scattering albedo, and asymmetry factor at eleven spectral wavelengths in the radiative transfer code. We find that aerosol radiative forcing during boreal spring excites a wavetrain-like pattern in tropospheric temperature and geopotential height that emanates from Northern Africa, through Eurasia, to the northeastern Pacific. Associated with the teleconnection is strong surface cooling over regions with large aerosol loading, i.e., China, India, and Africa. Low-to-mid tropospheric heating due to shortwave absorption is found in regions with large loadings of dust (Northern Africa and central East Asia) and black carbon (South and East Asia). In addition, pronounced surface cooling is found over the Caspian Sea and warming over Eurasia and northeastern Asia, where aerosol loadings are relatively low. These warming and cooling anomalies are components of the teleconnection pattern produced primarily by atmospheric heating from absorbing aerosols, i.e., dust from North Africa and black carbon from South and East Asia. Effects of aerosols on the atmospheric hydrologic cycle in the Asian monsoon region are also investigated. Results show that absorbing aerosols, i.e., black carbon and dust, induce a large-scale upper-level heating anomaly over the Tibetan Plateau in April and May, ushering in an early onset of the Indian summer monsoon. Absorbing aerosols also enhance lower-level heating and anomalous ascent over northern India, intensifying the Indian monsoon. 
Overall, the aerosol-induced large-scale surface temperature cooling leads to a reduction of monsoon rainfall over the East Asian continent and adjacent oceanic regions.
Measurements of cosmic-ray electrons and positrons by the Wizard/CAPRICE collaboration
NASA Astrophysics Data System (ADS)
Boezio, M.; Barbiellini, G.; Bonvicini, V.; Schiavon, P.; Vacchi, A.; Zampa, N.; Bergström, D.; Carlson, P.; Francke, T.; Grinstein, S.; Weber, N.; Suffert, M.; Hof, M.; Kremer, J.; Menn, W.; Simon, M.; Stephens, S. A.; Ambriola, M.; Bellotti, R.; Cafagna, F. S.; Ciacio, F.; Circella, M.; De Marzo, C.; Finetti, N.; Papini, P.; Piccardi, S.; Spillantini, P.; Bartalucci, S.; Ricci, M.; Grimani, C.; Casolino, M.; De Pascale, M. P.; Morselli, A.; Picozza, P.; Sparvoli, R.; Mitchell, J. W.; Ormes, J. F.; Streitmatter, R. E.; Bravar, U.; Stochaj, S. J.
Two recent balloon-borne experiments have been performed by the WiZard/CAPRICE collaboration in order to study the electron and positron components of the cosmic radiation. On 1994 August 8-9 the CAPRICE94 experiment flew from northern Canada, and on 1998 May 28-29 the CAPRICE98 experiment flew from New Mexico, USA, at altitudes corresponding to 3.9 and 5.5 g/cm^2 of average residual atmosphere, respectively. Both payloads were equipped with a Ring Imaging Cherenkov (RICH) detector, a time-of-flight system, a superconducting magnet spectrometer with a tracking system, and a 7-radiation-length silicon-tungsten imaging calorimeter. The RICH used in 1994 had a solid NaF radiator, while in 1998 the RICH had a C4F10 gaseous radiator. We report on the electron and positron spectra and the positron fraction at the top of the atmosphere from a few hundred MeV to 40 GeV as measured by these two experiments.
K-shell photoabsorption edge of strongly coupled aluminum driven by laser-converted radiation
NASA Astrophysics Data System (ADS)
Zhao, Yang; Zhang, Zhiyu; Qing, Bo; Yang, Jiamin; Zhang, Jiyan; Wei, Minxi; Yang, Guohong; Song, Tianming; Xiong, Gang; Lv, Min; Hu, Zhimin; Deng, Bo; Hu, Xin; Zhang, Wenhai; Shang, Wanli; Hou, Lifei; Du, Huabing; Zhan, Xiayu; Yu, Ruizhen
2017-03-01
The first observation of the K-shell photoabsorption edge of strongly coupled aluminum generated by intense x-ray radiation-driven shocks is reported. By using a “dog bone” gold hohlraum as an x-ray converter, colliding-shock compression and preheat shielding are achieved to generate an unexplored state with a density of 5.5 g/cm3 and a temperature of 0.43 eV (the ion-ion coupling parameter Γii is around 240). The time-resolved K-shell photoabsorption edges are measured with a crystal spectrometer using a short x-ray backlighter. The broadening and redshift of the edges are studied using slope fitting of the edge and quantum molecular dynamics calculations. This work shows that the K-edge of aluminum driven by laser-converted radiation provides a novel capability to probe warm dense matter (WDM) at extended conditions.
The effects of atmospheric cloud radiative forcing on climate
NASA Technical Reports Server (NTRS)
Randall, David A.
1989-01-01
In order to isolate the effects of atmospheric cloud radiative forcing (ACRF) on climate, the general circulation of an ocean-covered earth called 'Seaworld' was simulated using the Colorado State University GCM. Most current climate models, however, do not include an interactive ocean. The key simplifications in 'Seaworld' are the fixed boundary temperature with no land points, the lack of mountains, and the zonal uniformity of the boundary conditions. Two 90-day 'perpetual July' simulations were performed, and the last sixty days of each were analyzed. The first run included all the model's physical parameterizations, while the second omitted the effects of clouds in both the solar and terrestrial radiation parameterizations. Fixed and identical boundary temperatures were set for the two runs, so that the differences between them reveal the direct and indirect effects of the ACRF on the large-scale circulation and the parameterized hydrologic processes.
Overview of the United States Department of Energy's ARM (Atmospheric Radiation Measurement) Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stokes, G.M.; Tichler, J.L.
The Department of Energy (DOE) is initiating a major atmospheric research effort, the Atmospheric Radiation Measurement Program (ARM). The program is a key component of DOE's research strategy to address global climate change and is a direct continuation of DOE's decade-long effort to improve the ability of General Circulation Models (GCMs) to provide reliable simulations of regional and long-term climate change in response to increasing greenhouse gases. The effort is multi-disciplinary and multi-agency, involving universities, private research organizations and more than a dozen government laboratories. The objective of the ARM research is to provide an experimental testbed for the study of important atmospheric effects, particularly cloud and radiative processes, and to test parameterizations of these processes for use in atmospheric models. This effort will support the continued and rapid improvement of GCM predictive capability.
Modeling CO 2 ice clouds with a Mars Global Climate Model
NASA Astrophysics Data System (ADS)
Audouard, Joachim; Määttänen, Anni; Listowski, Constantino; Millour, Ehouarn; Forget, Francois; Spiga, Aymeric
2016-10-01
Since the first claimed detection of CO2 ice clouds during the Mariner campaign (Herr and Pimentel, 1970), more recent observations and modelling work have put new constraints on their altitude, region, time and mechanisms of formation (Clancy and Sandor, 1998; Montmessin et al., 2007; Colaprete et al., 2008; Määttänen et al., 2010; Vincendon et al., 2011; Spiga et al., 2012; Listowski et al., 2014). CO2 clouds are observed at the poles at low altitudes (< 20 km) during the winter, and at high altitudes (60-110 km) in the equatorial regions during the first half of the year. However, the variability and dynamics of Martian CO2 clouds remain somewhat elusive. Towards an understanding of Martian CO2 clouds, and especially of their precise radiative impact on the climate throughout the history of the planet, including their formation and evolution in a Global Climate Model (GCM) is necessary. Adapting the CO2 cloud microphysics modeling work of Listowski et al. (2013; 2014), we aim to implement a complete CO2 cloud scheme in the GCM of the Laboratoire de Météorologie Dynamique (LMD, Forget et al., 1999). It covers CO2 microphysics, growth, evolution and dynamics, with a methodology inspired by the water ice cloud scheme recently included in the LMD GCM (Navarro et al., 2014). Two main factors control the formation and evolution of CO2 clouds in the Martian atmosphere: sufficient supersaturation of CO2 is needed, and condensation nuclei must be available. Topography-induced gravity waves (GWs) are expected to propagate to the upper atmosphere, where they produce cold pockets of supersaturated CO2 (Spiga et al., 2012), thus allowing the formation of clouds provided enough condensation nuclei are present. 
Such supersaturations have been observed by various instruments, in situ (Schofield et al., 1997) and from orbit (Montmessin et al., 2006, 2011; Forget et al., 2009). Using a GW-induced temperature profile and the 1-D version of the GCM, we simulate the formation of CO2 clouds in the mesosphere and investigate the sensitivity of our microphysics scheme. First results and steps towards the integration in the 3-D GCM will be presented and discussed at the conference. This work is funded by the Laboratory of Excellence ESEP.
NASA Technical Reports Server (NTRS)
Chambers, Lin Hartung
1994-01-01
The theory for radiation emission, absorption, and transfer in a thermochemical nonequilibrium flow is presented. The expressions developed reduce correctly to the limit at equilibrium. To implement the theory in a practical computer code, some approximations are used, particularly the smearing of molecular radiation. Details of these approximations are presented and helpful information is included concerning the use of the computer code. This user's manual should benefit both occasional users of the Langley Optimized Radiative Nonequilibrium (LORAN) code and those who wish to use it to experiment with improved models or properties.
The Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMacIonize
NASA Astrophysics Data System (ADS)
Vandenbroucke, B.; Wood, K.
2018-04-01
We present the public Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMacIonize, which can be used to simulate the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm, which uses a complex mix of hydrogen, helium and several coolants to self-consistently solve for the ionization and temperature balance at any given time, with a standard first-order hydrodynamics scheme. The code can be run as a post-processing tool to obtain the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure used to discretize the system, allowing the code to be run both as a standard fixed-grid code and as a moving-mesh code.
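At the heart of any Monte Carlo photoionization scheme is drawing photon optical depths from the exponential attenuation law, tau = -ln(1 - u) for uniform u. A generic sketch of that sampling step (not CMacIonize's actual implementation; the opacity value is arbitrary), checked against the expected mean free path 1/kappa:

```python
import math
import random

def sample_path_length(opacity, rng):
    """Draw a photon path length: tau = -ln(1 - u), distance = tau / kappa.

    Valid for a medium of constant opacity; in a real code the optical
    depth is accumulated cell by cell along the photon's direction.
    """
    tau = -math.log(1.0 - rng.random())
    return tau / opacity

rng = random.Random(42)      # fixed seed for reproducibility
kappa = 2.0                  # inverse mean free path, arbitrary units
paths = [sample_path_length(kappa, rng) for _ in range(200000)]
mean_path = sum(paths) / len(paths)  # should approach 1/kappa = 0.5
```

The Monte Carlo estimate converges on the analytic mean free path, which is a convenient unit test for the sampling routine.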
NASA Technical Reports Server (NTRS)
Kung, E. C.
1984-01-01
The energetics characteristics of Goddard Laboratory for Atmospheric Sciences (GLAS) General Circulation Models (GCMs), as reflected in the First GARP Global Experiment (FGGE) analysis data set, are discussed. Energetics descriptions of GLAS GCM forecast experiments are presented, as well as the energetics response of GLAS GCM climate simulation experiments.
Tests of Exoplanet Atmospheric Radiative Transfer Codes
NASA Astrophysics Data System (ADS)
Harrington, Joseph; Challener, Ryan; DeLarme, Emerson; Cubillos, Patricio; Blecic, Jasmina; Foster, Austin; Garland, Justin
2016-10-01
Atmospheric radiative transfer codes are used both to predict planetary spectra and in retrieval algorithms to interpret data. Observational plans, theoretical models, and scientific results thus depend on the correctness of these calculations. Yet, the calculations are complex and the codes implementing them are often written without modern software-verification techniques. In the process of writing our own code, we became aware of several others with artifacts of unknown origin and even outright errors in their spectra. We present a series of tests to verify atmospheric radiative-transfer codes. These include: simple, single-line line lists that, when combined with delta-function abundance profiles, should produce a broadened line that can be verified easily; isothermal atmospheres that should produce analytically-verifiable blackbody spectra at the input temperatures; and model atmospheres with a range of complexities that can be compared to the output of other codes. We apply the tests to our own code, Bayesian Atmospheric Radiative Transfer (BART) and to several other codes. The test suite is open-source software. We propose this test suite as a standard for verifying current and future radiative transfer codes, analogous to the Held-Suarez test for general circulation models. This work was supported by NASA Planetary Atmospheres grant NX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.
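The isothermal blackbody test mentioned above has a convenient analytic anchor: the wavelength integral of the Planck function must equal σT⁴/π. A minimal self-check along those lines (the grid bounds, resolution, and temperature are arbitrary choices, not BART's test settings):

```python
import math

H = 6.62607015e-34      # Planck constant (J s)
C = 2.99792458e8        # speed of light (m/s)
KB = 1.380649e-23       # Boltzmann constant (J/K)
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)

def planck_lambda(wavelength, temperature):
    """Planck spectral radiance B_lambda(T) in W m^-3 sr^-1."""
    x = H * C / (wavelength * KB * temperature)
    if x > 700.0:  # exp would overflow; the contribution is negligible anyway
        return 0.0
    return (2.0 * H * C ** 2 / wavelength ** 5) / math.expm1(x)

def integrated_radiance(temperature, n=20000, lam_min=1e-8, lam_max=1e-3):
    """Trapezoidal integral of B_lambda over a log-spaced wavelength grid."""
    ratio = (lam_max / lam_min) ** (1.0 / n)
    lam, total = lam_min, 0.0
    prev = planck_lambda(lam, temperature)
    for _ in range(n):
        nxt_lam = lam * ratio
        nxt = planck_lambda(nxt_lam, temperature)
        total += 0.5 * (prev + nxt) * (nxt_lam - lam)
        lam, prev = nxt_lam, nxt
    return total

T = 1500.0                           # arbitrary isothermal test temperature
analytic = SIGMA * T ** 4 / math.pi  # blackbody radiance, W m^-2 sr^-1
numeric = integrated_radiance(T)     # should agree to ~0.1% or better
```

An RT code that fails this closed-form check on an isothermal atmosphere has a bug independent of any line-list or opacity issues, which is what makes the test useful.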
NASA Technical Reports Server (NTRS)
Pecaut, Michael J.; Haerich, Paul; Zuccarelli, Cara N.; Smith, Anna L.; Zendejas, Eric D.; Nelson, Gregory A.
2002-01-01
Two experiments were carried out to investigate the consequences of exposure to proton radiation, such as might occur for astronauts during space flight. C57BL/6 mice were exposed, either with or without 15-g/cm2 aluminum shielding, to 0-, 3-, or 4-Gy proton irradiation mimicking features of a solar particle event. Irradiation produced transient direct deficits in open-field exploratory behavior and acoustic startle habituation. Rotorod performance at 18 rpm was impaired by exposure to proton radiation; performance at 26 rpm was impaired only for mice irradiated with shielding and at the 4-Gy dose. Long-term (>2 weeks) indirect deficits in open-field activity appeared as a result of impaired experiential encoding immediately following exposure. A 2-week recovery prior to testing decreased most of the direct effects of exposure, with only rotorod performance at 26 rpm remaining impaired. These results suggest that the performance deficits may have been mediated by radiation damage to hippocampal, cerebellar and, possibly, forebrain dopaminergic function.
Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frambati, S.; Frignani, M.
2012-07-01
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)
Airborne antenna radiation pattern code user's manual
NASA Technical Reports Server (NTRS)
Burnside, Walter D.; Kim, Jacob J.; Grandchamp, Brett; Rojas, Roberto G.; Law, Philip
1985-01-01
The use of a newly developed computer code to analyze the radiation patterns of antennas mounted on an ellipsoid and in the presence of a set of finite flat plates is described. It is shown how the code allows the user to simulate a wide variety of complex electromagnetic radiation problems using the ellipsoid/plates model. The code can calculate radiation patterns around an arbitrary conical cut specified by the user. The organization of the code, definitions of the input and output data, and numerous practical examples are also presented. The analysis is based on the Uniform Geometrical Theory of Diffraction (UTD), and most of the computed patterns are compared with experimental results to show the accuracy of the solution.
Comparison of codes assessing galactic cosmic radiation exposure of aircraft crew.
Bottollier-Depois, J F; Beck, P; Bennett, B; Bennett, L; Bütikofer, R; Clairand, I; Desorgher, L; Dyer, C; Felsberger, E; Flückiger, E; Hands, A; Kindl, P; Latocha, M; Lewis, B; Leuthold, G; Maczka, T; Mares, V; McCall, M J; O'Brien, K; Rollet, S; Rühm, W; Wissmann, F
2009-10-01
The assessment of the exposure to cosmic radiation onboard aircraft is one of the preoccupations of bodies responsible for radiation protection. The cosmic particle flux is significantly higher onboard aircraft than at ground level, and its intensity depends on the solar activity. The dose is usually estimated using codes validated against experimental data. In this paper, a comparison of various codes, some of which are used routinely, to assess the dose received by aircraft crew from galactic cosmic radiation is presented. Results are provided for periods close to solar maximum and minimum and for selected flights covering major commercial routes in the world. The overall agreement between the codes, particularly those routinely used for aircraft crew dosimetry, was better than +/-20 % from the median in all but two cases. The agreement between the codes is considered fully satisfactory for radiation protection purposes.
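The "±20% from the median" statistic can be reproduced for any set of per-route dose estimates. A sketch with hypothetical dose values (illustrative only; not the paper's data, and the code names are placeholders):

```python
from statistics import median

def deviations_from_median(doses_by_code):
    """Percent deviation of each code's dose estimate from the code median."""
    med = median(doses_by_code.values())
    return {name: 100.0 * (dose - med) / med
            for name, dose in doses_by_code.items()}

# Hypothetical route doses in microsieverts; purely illustrative values.
doses = {"codeA": 52.0, "codeB": 48.0, "codeC": 50.0, "codeD": 57.0}
dev = deviations_from_median(doses)
within_20pct = all(abs(v) <= 20.0 for v in dev.values())
```

Using the median (rather than a designated reference code) avoids privileging any single model, which matches the comparison criterion quoted above.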
Space Radiation Transport Codes: A Comparative Study for Galactic Cosmic Rays Environment
NASA Astrophysics Data System (ADS)
Tripathi, Ram; Wilson, John W.; Townsend, Lawrence W.; Gabriel, Tony; Pinsky, Lawrence S.; Slaba, Tony
For long-duration and/or deep-space human missions, protection from severe space radiation exposure is a challenging design constraint and may be a potential limiting factor. The space radiation environment consists of galactic cosmic rays (GCR), solar particle events (SPE), and trapped radiation, and includes ions of all the known elements over a very broad energy range. These ions penetrate spacecraft materials, producing nuclear fragments and secondary particles that damage biological tissues, microelectronic devices, and materials. In deep space missions, where the Earth's magnetic field does not provide protection from space radiation, the GCR environment is significantly enhanced due to the absence of the geomagnetic cut-off and is a major component of radiation exposure. Accurate risk assessments critically depend on the accuracy of the input information as well as on the radiation transport codes used, so systematic verification of the codes is necessary. In this study, comparisons are made between the deterministic code HZETRN2006 and the Monte Carlo codes HETC-HEDS and FLUKA for an aluminum shield followed by a water target exposed to the 1977 solar minimum GCR spectrum. The interaction and transport of the high-charge ions present in the GCR environment provide a particularly stringent test in the comparison of the codes. Dose, dose equivalent and flux spectra are compared; details of the comparisons will be discussed, and conclusions will be drawn for future directions.
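Of the compared quantities, dose equivalent differs from absorbed dose by a LET-dependent quality factor; the piecewise ICRP Publication 60 form is the standard choice. A sketch applying it to hypothetical (dose, LET) components (the field composition below is invented for illustration, not a GCR spectrum from the study):

```python
import math

def icrp60_quality_factor(let):
    """ICRP 60 quality factor Q as a function of unrestricted LET (keV/um)."""
    if let < 10.0:
        return 1.0
    if let <= 100.0:
        return 0.32 * let - 2.2
    return 300.0 / math.sqrt(let)

def dose_equivalent(components):
    """Sum of absorbed-dose contributions (Gy) weighted by Q(LET) -> Sv."""
    return sum(dose * icrp60_quality_factor(let) for dose, let in components)

# Hypothetical (dose_Gy, LET_keV_per_um) pairs for a mixed ion field.
field = [(0.010, 0.5),    # e.g. fast protons, low LET
         (0.004, 30.0),   # intermediate-LET fragments
         (0.001, 150.0)]  # heavy ions, high LET
h_sv = dose_equivalent(field)
```

Because Q rises steeply with LET, the heavy-ion component contributes far more dose equivalent per unit absorbed dose, which is why high-charge ions stress the code comparison.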
A Goddard Multi-Scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2010-01-01
A multi-scale modeling system with unified physics has been developed at NASA Goddard Space Flight Center (GSFC). The system consists of an MMF, the coupled NASA Goddard finite-volume GCM (fvGCM) and Goddard Cumulus Ensemble model (GCE, a CRM); the state-of-the-art Weather Research and Forecasting model (WRF); and the stand-alone GCE. These models can share the same microphysical schemes, radiation (including explicitly calculated cloud optical properties), and surface models that have been developed, improved and tested for different environments. In this talk, I will present: (1) a brief review of the GCE model and its applications to the impact of aerosols on deep precipitation processes; (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs); and (3) a discussion of the Goddard WRF version (its developments and applications). We are also performing inline tracer calculations to comprehend the physical processes (i.e., boundary layer and each quadrant in the boundary layer) related to the development and structure of hurricanes and mesoscale convective systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendonça, João M.; Grimm, Simon L.; Grosheintz, Luc
We have designed and developed, from scratch, a global circulation model (GCM) named THOR that solves the three-dimensional nonhydrostatic Euler equations. Our general approach lifts the commonly used assumptions of a shallow atmosphere and hydrostatic equilibrium. We solve the “pole problem” (where converging meridians on a sphere lead to increasingly smaller time steps near the poles) by implementing an icosahedral grid. Irregularities in the grid, which lead to grid imprinting, are smoothed using the “spring dynamics” technique. We validate our implementation of spring dynamics by examining calculations of the divergence and gradient of test functions. To prevent the computational time step from being bottlenecked by having to resolve sound waves, we implement a split-explicit method together with a horizontally explicit and vertically implicit integration. We validate our GCM by reproducing the Earth and hot-Jupiter-like benchmark tests. THOR was designed to run on graphics processing units (GPUs), which allows for physics modules (radiative transfer, clouds, chemistry) to be added in the future, and is part of the open-source Exoclimes Simulation Platform (www.exoclime.org).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiaoqing Wu; Xin-Zhong Liang; Sunwook Park
2007-01-23
The work supported by this ARM project lays a solid foundation for improving the parameterization of subgrid cloud-radiation interactions in the NCAR CCSM and the climate simulations. We have made significant use of CRM simulations and concurrent ARM observations to produce long-term, consistent cloud and radiative property datasets at the cloud scale (Wu et al. 2006, 2007). With these datasets, we have investigated the mesoscale enhancement of surface heat fluxes by cloud systems (Wu and Guimond 2006), quantified the effects of cloud horizontal inhomogeneity and vertical overlap on the domain-averaged radiative fluxes (Wu and Liang 2005), and subsequently validated and improved the physically-based mosaic treatment of subgrid cloud-radiation interactions (Liang and Wu 2005). We have implemented the mosaic treatment into the CCM3. The 5-year (1979-1983) AMIP-type simulation showed significant impacts of subgrid cloud-radiation interaction on the climate simulations (Wu and Liang 2005). We have actively participated in CRM intercomparisons that foster the identification and physical understanding of common errors in cloud-scale modeling (Xie et al. 2005; Xu et al. 2005; Grabowski et al. 2005).
TRAP/SEE Code Users Manual for Predicting Trapped Radiation Environments
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
2000-01-01
TRAP/SEE is a PC-based computer code with a user-friendly interface which predicts the ionizing radiation exposure of spacecraft having orbits in the Earth's trapped radiation belts. The code incorporates the standard AP8 and AE8 trapped proton and electron models but also allows application of an improved database interpolation method. The code treats low-Earth as well as highly-elliptical Earth orbits, taking into account trajectory perturbations due to gravitational forces from the Moon and Sun, atmospheric drag, and solar radiation pressure. Orbit-average spectra, peak spectra per orbit, and instantaneous spectra at points along the orbit trajectory are calculated. Described in this report are the features, models, model limitations and uncertainties, input and output descriptions, and example calculations and applications for the TRAP/SEE code.
A Dynamical Downscaling Approach with GCM Bias Corrections and Spectral Nudging
NASA Astrophysics Data System (ADS)
Xu, Z.; Yang, Z.
2013-12-01
To reduce biases in regional climate downscaling simulations, a dynamical downscaling approach with GCM bias corrections and spectral nudging is developed and assessed over North America. Regional climate simulations are performed with the Weather Research and Forecasting (WRF) model embedded in the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM). To reduce the GCM biases, the GCM climatological means and the variances of the interannual variations are adjusted against the National Centers for Environmental Prediction-NCAR global reanalysis products (NNRP) before they are used to drive WRF, as in our previous method. In this study, we further introduce spectral nudging to reduce the RCM-based biases. Two sets of WRF experiments are performed, with and without spectral nudging; within each set, the runs are identical except that the initial and lateral boundary conditions are derived from the NNRP, the original GCM output, and the bias-corrected GCM output, respectively. The GCM-driven RCM simulations with bias corrections and spectral nudging (IDDng) are compared with those without spectral nudging (IDD) and with North American Regional Reanalysis (NARR) data to assess the additional reduction in RCM biases relative to the IDD approach. The results show that spectral nudging carries the effect of the GCM bias correction into the interior of the RCM domain, thereby minimizing the climate drift resulting from the RCM biases. The GCM bias corrections and spectral nudging significantly improve the downscaled mean climate and extreme temperature simulations. Our results suggest that both GCM bias corrections and spectral nudging are necessary to reduce the error of the downscaled climate; either one alone does not guarantee a better downscaling simulation. The new dynamical downscaling method can be applied to regional projections of future climate or to downscaling of GCM sensitivity simulations.
[Figure captions: annual mean RMSEs, computed over the verification region from monthly mean data over 1981-2010; experimental design.]
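The mean-and-variance adjustment described above can be sketched as follows; the function name and interface are illustrative, not the authors' code:

```python
import numpy as np

def bias_correct(gcm, ref_mean, ref_std):
    """Adjust a GCM time series so its climatological mean and the
    variance of its interannual variations match a reanalysis reference.

    gcm      : 1D array, e.g. one calendar month across years
    ref_mean : reanalysis climatological mean for that month
    ref_std  : reanalysis interannual standard deviation for that month
    """
    gcm = np.asarray(gcm, dtype=float)
    anom = gcm - gcm.mean()               # GCM interannual anomalies
    scale = ref_std / anom.std(ddof=0)    # match interannual variance
    return ref_mean + scale * anom        # re-centre on the reanalysis mean
```

The corrected series then drives the RCM in place of the raw GCM output.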
A serosa-searing apparatus for producing gastric ulcer in rats.
Kagoshima, M; Suzuki, T; Katagiri, S; Shimada, H
1994-12-01
Chronic ulcer models produced by the serosa-searing method are histologically very similar to the ulcer healing process occurring in humans. In an effort to produce a serosa-searing chronic ulcer model in rats, we devised a new balance-type apparatus. This searing apparatus allows both the temperature and the duration of searing to be adjusted precisely. Furthermore, the pressure that brings the searing-iron tip into contact with the stomach serosa surface can also be precisely controlled. Optimal conditions for reproducing the serosa-searing ulcer model were 65 degrees C for 5 sec. Moreover, in order to evaluate the effects of pressure, various pressure levels (A: 5 g, 17.68 g/cm2; B: 10 g, 35.37 g/cm2; C: 15 g, 53.05 g/cm2; D: 20 g, 70.74 g/cm2; E: 25 g, 88.42 g/cm2; F: 30 g, 106.10 g/cm2; G: 35 g, 123.79 g/cm2 (+/- 1 g, 0.149 g/cm2)) of 5-sec duration at 65 +/- 0.1 degrees C were used. Macroscopically, gastric mucosal lesions were most clearly observed in a pressure-related manner 7 days after the procedure. Histologically, definite deep ulcerations (UI-III or UI-IV) were observed at pressure level C (15 g, 53.05 g/cm2) or above. The highest incidence (87%) of histological gastric ulcers (UI-IV) was observed at pressure level E (25 g, 88.42 g/cm2). The healing process was observed at 40 to 60 days postoperatively. At 100 days after the procedure, recurrences were observed both macroscopically and histologically. In conclusion, this new apparatus is very useful for reproducing a chronic ulcer model for observing the healing and recurrence process.
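As a quick consistency check, the reported pressures follow from the applied masses if one assumes a circular tip of 3 mm radius; that radius is inferred from the numbers, not stated in the abstract:

```python
import math

# Assumed circular searing-iron tip of 3 mm (0.3 cm) radius
TIP_AREA_CM2 = math.pi * 0.3 ** 2   # ~0.2827 cm^2

def contact_pressure(mass_g):
    """Pressure (g/cm^2) exerted on the serosa by a weight of mass_g grams."""
    return mass_g / TIP_AREA_CM2

for mass in (5, 15, 25):
    print(f"{mass} g -> {contact_pressure(mass):.2f} g/cm^2")
```

This reproduces levels A (17.68), C (53.05) and E (88.42) from the abstract.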
NASA Astrophysics Data System (ADS)
Kumar, B. D.; Verma, S.; Wang, R.; Boucher, O.
2016-12-01
In the present study, we evaluated the aerosol constituents of the model against measurements made during the premonsoon season from the Indo-Gangetic Plain (IGP) to the Himalayan foothills. Aerosol transport simulations were carried out with the general circulation model (GCM) of the Laboratoire de Météorologie Dynamique (LMD-GCM) using three sets of emissions: Indian emissions in GCM-Indemiss, global emissions in the GCM coupled with interactive aerosol chemistry (GCM-INCA-I), and global emissions with an updated BC emission inventory over Asia in GCM-INCA-II. Among the three models, GCM-Indemiss reproduced the measured single scattering albedo (SSA) at 670 nm with a relative bias of 5%. However, it estimated only 30-50% of the measured aerosol optical depth (AOD) at 550 nm and 20-60% of the measured surface concentrations of aerosol constituents (e.g. black carbon (BC), organic carbon (OC), and sulfate) at most times over the study period. The model's inability to reproduce observed AOD changes was attributed to the paucity of emissions represented in the model. Retrieval simulations were then designed using the existing GCM-Indemiss estimates. These retrieval simulations produced better results, with constituent surface concentrations in the vicinity of the measurements (normalized mean bias (NMB) < 30%). Scatter analysis of the surface and elevated contributions of the region's emissions showed that aerosols over northern India (NI) are influenced by anthropogenic emissions from the IGP on anthropogenic days and from north-west India (NWI) on anthropogenic-with-dust days. Our analysis showed that BC emissions in the base inventory for the source-region grids influencing NI were lower by 200% compared with the modified scenario. These emissions will be implemented in an atmospheric GCM to evaluate their performance against measurement data.
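The normalized mean bias (NMB) quoted above is commonly defined as the sum of model-minus-observation differences divided by the sum of observations; the abstract does not give the exact formula used, so this is a generic sketch:

```python
import numpy as np

def normalized_mean_bias(model, obs):
    """NMB = sum(model - obs) / sum(obs), a common aerosol-model metric.
    Returns a fraction; multiply by 100 for percent."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return (model - obs).sum() / obs.sum()
```

An NMB of 0.3 corresponds to the "<30%" threshold reported for the retrieval simulations.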
Gary-Chicago-Milwaukee corridor : corridor transportation information center : system glossary
DOT National Transportation Integrated Search
1995-10-30
The following definitions, abbreviations and acronyms are generated from the System Definition Document (Document #9931.GCM), the Interface Control Specification (Document #9932.GCM), and the Requirements Specification (Document #9933.GCM). The...
Song, Xinhua; Yin, Shutao; Zhang, Enxiang; Fan, Lihong; Ye, Min; Zhang, Yong; Hu, Hongbo
2016-10-04
Glycycoumarin (GCM) is a major bioactive coumarin compound isolated from licorice, but its anti-cancer activity has not been scientifically addressed. In the present study, we tested the anti-liver-cancer activity of GCM using both in vitro and in vivo models and found, for the first time, that GCM possesses potent activity against liver cancer, evidenced by cell growth inhibition and apoptosis induction in vitro and tumor reduction in vivo. Mechanistically, GCM binds to and inactivates the oncogenic kinase T-LAK cell-originated protein kinase (TOPK), which in turn leads to activation of the p53 pathway. Our findings support GCM as a novel active compound contributing to the anti-cancer activity of licorice, and TOPK as an effective target for hepatocellular carcinoma (HCC) treatment.
Use of a computer code for dose distribution studies in a 60Co industrial irradiator
NASA Astrophysics Data System (ADS)
Piña-Villalpando, G.; Sloan, D. P.
1995-09-01
This paper presents a benchmark comparison between calculated and experimental absorbed dose values for a typical product in a 60Co industrial irradiator located at ININ, México. The irradiator is a two-level, two-layer system with an overlapping product configuration and an activity of around 300 kCi. Experimental values were obtained from routine dosimetry using red acrylic pellets. The typical product was packages of Petri dishes with an apparent density of 0.13 g/cm3; that product was chosen because of its uniform size, large quantity and low density. The minimum dose was fixed at 15 kGy. Calculated values were obtained from the QAD-CGGP code. This code uses a point-kernel technique, build-up factors fitted by geometric progression, and combinatorial geometry for the system description. The main modifications to the code concerned the source simulation: point sources were used instead of pencils, and an energy spectrum and anisotropic emission were included. For the maximum dose, the calculated value (18.2 kGy) was 8% higher than the experimental average (16.8 kGy); for the minimum dose, the calculated value (13.8 kGy) was 3% lower than the experimental average (14.3 kGy).
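The quoted percentage differences can be reproduced with a one-line relative-difference calculation:

```python
def percent_difference(calculated, experimental):
    """Relative difference of a calculated dose from the experimental
    average, in percent (positive = calculation is higher)."""
    return 100.0 * (calculated - experimental) / experimental

# Values from the benchmark above (kGy)
print(round(percent_difference(18.2, 16.8)))   # maximum-dose comparison -> 8
print(round(percent_difference(13.8, 14.3)))   # minimum-dose comparison -> -3
```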
Tsushima, Yoko; Brient, Florent; Klein, Stephen A.; ...
2017-11-27
The CFMIP Diagnostic Codes Catalogue assembles cloud metrics, diagnostics and methodologies, together with programs to diagnose them from general circulation model (GCM) outputs written by various members of the CFMIP community. This aims to facilitate use of the diagnostics by the wider community studying climate and climate change. Here, this paper describes the diagnostics and metrics which are currently in the catalogue, together with examples of their application to model evaluation studies and a summary of some of the insights these diagnostics have provided into the main shortcomings in current GCMs. Analysis of outputs from CFMIP and CMIP6 experiments will also be facilitated by the sharing of diagnostic codes via this catalogue. Any code which implements diagnostics relevant to analysing clouds – including cloud–circulation interactions and the contribution of clouds to estimates of climate sensitivity in models – and which is documented in peer-reviewed studies, can be included in the catalogue. We very much welcome additional contributions to further support community analysis of CMIP6 outputs.
2013-07-01
also simulated in the models. Data were derived from calculations using the three-dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle). MCNP is a general-purpose code designed to simulate neutron and photon transport.
X-ray opacity measurements in mid-Z dense plasmas with a new target design of indirect heating
NASA Astrophysics Data System (ADS)
Dozières, M.; Thais, F.; Bastiani-Ceccotti, S.; Blenski, T.; Fariaut, J.; Fölsner, W.; Gilleron, F.; Khaghani, D.; Pain, J.-C.; Reverdin, C.; Rosmej, F.; Silvert, V.; Soullié, G.; Villette, B.
2015-12-01
X-ray transmission spectra of copper, nickel and aluminum laser-produced plasmas were measured at the LULI2000 laser facility with an improved target design for indirect heating. Measurements were performed in plasmas close to local thermodynamic equilibrium, at temperatures around 25 eV and densities between 10^-3 g/cm^3 and 10^-2 g/cm^3. This improved design provides several advantages, which are discussed in this paper. The sample is a thin foil of mid-Z material inserted between two gold cavities heated by two 300 J, 2ω, nanosecond laser beams. A third laser beam irradiates a gold foil to create a spectrally continuous X-ray source (backlight) used to probe the sample. We investigate 2p-3d absorption structures in Ni and Cu plasmas as well as 1s-2p transitions in an additional Al plasma layer to infer the in-situ plasma temperature. Geometric and hydrodynamic calculations indicate that the improved geometry reduces spatial gradients during the transmission measurements. Experimental absorption spectra are in good agreement with calculations from the hybrid atomic physics code SCO-RCG.
NASA Technical Reports Server (NTRS)
Meyer, H. D.
1993-01-01
The Acoustic Radiation Code (ARC) is a finite element program used on the IBM mainframe to predict far-field acoustic radiation from a turbofan engine inlet. In this report, requirements for developers of internal aerodynamic codes regarding the use of their program output as input for the ARC are discussed. More specifically, the particular input needed from the Bolt, Beranek and Newman/Pratt and Whitney (turbofan source noise generation) Code (BBN/PWC) is described. In a separate analysis, a method of coupling the source and radiation models that recognizes waves crossing the interface in both directions has been derived. A preliminary version of the coupled code has been developed and used for an initial evaluation of coupling issues. Results thus far show that reflection from the inlet is sufficient to indicate that full coupling of the source and radiation fields is needed for accurate noise predictions. Also, for this contract, the ARC has been modified for use on the Sun and Silicon Graphics Iris UNIX workstations. Changes and additions involved in this effort are described in an appendix.
Pothos, Emmanuel M; Bailey, Todd M
2009-07-01
Naïve observers typically perceive some groupings for a set of stimuli as more intuitive than others. The problem of predicting category intuitiveness has historically been considered the remit of models of unsupervised categorization. In contrast, this article develops a measure of category intuitiveness from one of the most widely supported models of supervised categorization, the generalized context model (GCM). Considering different category assignments for a set of instances, the authors asked how well the GCM can predict the classification of each instance on the basis of all the other instances. The category assignment that results in the smallest prediction error is interpreted as the most intuitive for the GCM; the authors refer to this way of applying the GCM as the "unsupervised GCM." The authors systematically compared predictions of category intuitiveness from the unsupervised GCM and two models of unsupervised categorization: the simplicity model and the rational model. The unsupervised GCM compared favorably with the simplicity model and the rational model. This success of the unsupervised GCM illustrates that the distinction between supervised and unsupervised categorization may need to be reconsidered. However, no model emerged as clearly superior, indicating that there is more work to be done in understanding and modeling category intuitiveness.
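The leave-one-out scoring described above can be sketched as follows, using a one-dimensional stimulus space and an exponential similarity gradient; the parameter choices and interface are illustrative, not those of the paper:

```python
import numpy as np

def gcm_prob(item, others, labels, target, c=1.0):
    """GCM probability that `item` belongs to category `target`, given
    all other items and their labels (exemplar similarity with an
    exponential decay of city-block distance)."""
    d = np.abs(others - item).sum(axis=1)   # distance to each exemplar
    s = np.exp(-c * d)                       # similarity to each exemplar
    return s[labels == target].sum() / s.sum()

def intuitiveness_error(items, labels):
    """Leave-one-out prediction error: how poorly the GCM predicts each
    item's label from all the others. Lower = more intuitive grouping."""
    items, labels = np.asarray(items, float), np.asarray(labels)
    err = 0.0
    for i in range(len(items)):
        mask = np.arange(len(items)) != i
        p = gcm_prob(items[i], items[mask], labels[mask], labels[i])
        err += 1.0 - p
    return err
```

A grouping that respects the cluster structure of the items yields a lower error than one that cuts across it.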
High altitude smoke in the NASA GISS GCM
NASA Technical Reports Server (NTRS)
Field, Robert; Luo, M.; Fromm, M.; Voulgarakis, A.; Mangeon, S.; Worden, J.
2015-01-01
High altitude smoke plumes from large, explosive fires were discovered in the late 1990s. They can now be observed with unprecedented detail from space-borne instruments with high vertical resolution in the UTLS, such as CALIOP, MLS and ACE. These events inject large quantities of pollutants into a relatively clean and dry environment. They serve as unique natural experiments with which to understand, using chemical transport and composition-climate models, the chemical and radiative impacts of long-lived biomass burning emissions. We are currently studying the Black Saturday bushfires in Australia during February 2009.
Analysis of Systems Hardware Flown on LDEF-Results of the Systems Special Investigation Group
1992-04-01
applied, should bring calculations and data into closer agreement. A few dosimeters were placed on LDEF at shallow enough shielding locations to... Radiation absorbed dose (RAD) measurements with thermoluminescent dosimeters (TLD) from leading and trailing sides of LDEF... [Figure/table residue: dose vs. shielding thickness (g/cm2); coating samples including aluminum oxide, Au-plated Al alloys, and Au, Ir, Nb, Os, Pt, Cu, Ag films on SiO2.]
NASA Astrophysics Data System (ADS)
Cohen-Solal, E.; Le Treut, H.
We describe the initial bias of the climate simulated by a coupled ocean-atmosphere model. The atmospheric component is a state-of-the-art atmospheric general circulation model, whereas the ocean component is limited to the upper ocean and includes a mixed layer whose depth is computed by the model. As the full ocean general circulation is not computed by the model, the heat transport within the ocean is prescribed. When modifying the prescribed heat transport we also affect the initial drift of the model. We analyze here one of the experiments where this drift is very strong, in order to study the key processes relating the changes in the ocean transport and the evolution of the model's climate. In this simulation, the ocean surface temperature cools by 1.5°C in 20 y. We can distinguish two different phases. During the first period of 5 y, the sea surface temperatures become cooler, particularly in the intertropical area, but the outgoing longwave radiation at the top-of-the-atmosphere increases very quickly, in particular at the end of the period. An off-line version of the model radiative code enables us to decompose this behaviour into different contributions (cloudiness, specific humidity, air and surface temperatures, surface albedo). This partitioning shows that the longwave radiation evolution is due to a decrease of high level cirrus clouds in the intertropical troposphere. The decrease of the cloud cover also leads to a decrease of the planetary albedo and therefore an increase of the net shortwave radiation absorbed by the system. But the dominant factor is the strong destabilization by the longwave cooling, which is able to throw the system out of equilibrium. During the remainder of the simulation (second phase), the cooling induced by the destabilization at the top-of-the-atmosphere is transmitted to the surface by various processes of the climate system.
Hence, we show that small variations of ocean heat transport can force the model from a stable to an unstable state via atmospheric processes which arise when the tropics are cooling. Even if possibly overestimated by our GCM, this mechanism may be pertinent to the maintenance of present climatic conditions in the tropics. The simplifications inherent in our model's design allow us to investigate the mechanism in some detail.
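The off-line decomposition described above, which attributes a radiation change to individual fields, can be illustrated with a substitution scheme: swap one field at a time from the final state into the reference state and re-run the radiative code. The toy olr function below merely stands in for the model's radiative code and is not the authors' parameterisation:

```python
def olr(cloud, q, ts):
    # Toy proxy: OLR rises with surface temperature, falls with cloud
    # cover and specific humidity. Coefficients are arbitrary.
    return 240.0 + 2.0 * (ts - 288.0) - 50.0 * cloud - 30.0 * q

def partition(state0, state1):
    """Attribute olr(state1) - olr(state0) to each field by substituting
    one field at a time from state1 into the reference state0."""
    contrib = {}
    for i, name in enumerate(("cloud", "q", "ts")):
        perturbed = list(state0)
        perturbed[i] = state1[i]
        contrib[name] = olr(*perturbed) - olr(*state0)
    return contrib
```

Because the toy function is linear, the contributions sum exactly to the total change; in a real radiative code cross-terms make the sum only approximate.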
Burn, K W; Daffara, C; Gualdrini, G; Pierantoni, M; Ferrari, P
2007-01-01
The question of Monte Carlo simulation of radiation transport in voxel geometries is addressed. Patched versions of the MCNP and MCNPX codes are developed aimed at transporting radiation both in the standard geometry mode and in the voxel geometry treatment. The patched code reads an unformatted FORTRAN file derived from DICOM format data and uses special subroutines to handle voxel-to-voxel radiation transport. The various phases of the development of the methodology are discussed together with the new input options. Examples are given of employment of the code in internal and external dosimetry and comparisons with results from other groups are reported.
The quasi 2 day wave response in TIME-GCM nudged with NOGAPS-ALPHA
NASA Astrophysics Data System (ADS)
Wang, Jack C.; Chang, Loren C.; Yue, Jia; Wang, Wenbin; Siskind, D. E.
2017-05-01
The quasi 2 day wave (QTDW) is a traveling planetary wave that can be enhanced rapidly to large amplitudes in the mesosphere and lower thermosphere (MLT) region during the northern winter postsolstice period. In this study, we present five case studies of QTDW events during January and February 2005, 2006 and 2008-2010 by using the Thermosphere-Ionosphere-Mesosphere Electrodynamics-General Circulation Model (TIME-GCM) nudged with the Navy Operational Global Atmospheric Prediction System-Advanced Level Physics High Altitude (NOGAPS-ALPHA) Weather Forecast Model. With NOGAPS-ALPHA introducing more realistic lower atmospheric forcing in TIME-GCM, the QTDW events have successfully been reproduced in the TIME-GCM. The nudged TIME-GCM simulations show good agreement in zonal mean state with the NOGAPS-ALPHA 6 h reanalysis data and the horizontal wind model below the mesopause; however, it has large discrepancies in the tropics above the mesopause. The zonal mean zonal wind in the mesosphere has sharp vertical gradients in the nudged TIME-GCM. The results suggest that the parameterized gravity wave forcing may need to be retuned in the assimilative TIME-GCM.
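Spectral nudging of the kind applied here relaxes only the large-scale part of the model state toward the driving fields. A minimal 1D periodic sketch follows; real implementations are multidimensional with height- and wavenumber-dependent weights, so the function and its parameters are illustrative only:

```python
import numpy as np

def spectral_nudge(field, driver, alpha=0.1, kmax=3):
    """Relax the lowest wavenumbers (0..kmax) of a periodic 1D model
    field toward the driving (reanalysis) field, leaving smaller
    scales free to evolve."""
    f = np.fft.rfft(field)
    g = np.fft.rfft(driver)
    f[:kmax + 1] += alpha * (g[:kmax + 1] - f[:kmax + 1])  # large scales only
    return np.fft.irfft(f, n=len(field))
```

With alpha = 1 the retained scales snap exactly to the driver in one step; in practice alpha is small so the constraint is gentle.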
Anomalies of the Asian Monsoon Induced by Aerosol Forcings
NASA Technical Reports Server (NTRS)
Lau, William K. M.; Kim, M. K.
2004-01-01
Impacts of aerosols on the Asian summer monsoon are studied using the NASA finite-volume General Circulation Model (fvGCM), with radiative forcing derived from three-dimensional distributions of five aerosol species, i.e., black carbon, organic carbon, soil dust, and sea salt, from the Goddard Chemistry Aerosol Radiation and Transport Model (GOCART). Results show that absorbing aerosols, i.e., black carbon and dust, induce a large-scale upper-level heating anomaly over the Tibetan Plateau in April and May, ushering in an early onset of the Indian summer monsoon. Absorbing aerosols also enhance lower-level heating and anomalous ascent over northern India, intensifying the Indian monsoon. Overall, the aerosol-induced large-scale surface temperature cooling leads to a reduction of monsoon rainfall over the East Asian continent and adjacent oceanic regions.
NASA Technical Reports Server (NTRS)
Pierce, R. B.; Remsberg, Ellis E.; Fairlie, T. D.; Blackshear, W. T.; Grose, William L.; Turner, Richard E.
1992-01-01
Lagrangian area diagnostics and trajectory techniques are used to investigate the radiative and dynamical characteristics of a spontaneous sudden warming which occurred during a 2-yr Langley Research Center model simulation. The ability of the Langley Research Center GCM to simulate the major features of the stratospheric circulation during such highly disturbed periods is illustrated by comparison of the simulated warming to the observed circulation during the LIMS observation period. The apparent sink of vortex area associated with Rossby wave-breaking accounts for the majority of the reduction of the size of the vortex and also acts to offset the radiatively driven increase in the area occupied by the 'surf zone'. Trajectory analysis of selected material lines substantiates the conclusions from the area diagnostics.
Methods of treating complex space vehicle geometry for charged particle radiation transport
NASA Technical Reports Server (NTRS)
Hill, C. W.
1973-01-01
Current methods of treating complex geometry models for space radiation transport calculations are reviewed. The geometric techniques used in three computer codes are outlined. Evaluations of geometric capability and speed are provided for these codes. Although no code development work is included, several suggestions for significantly improving complex geometry codes are offered.
Experimental check of bremsstrahlung dosimetry predictions for 0.75 MeV electrons
NASA Astrophysics Data System (ADS)
Sanford, T. W. L.; Halbleib, J. A.; Beezhold, W.
Bremsstrahlung dose in CaF2 TLDs from the radiation produced by 0.75 MeV electrons incident on Ta/C targets is measured and compared with that calculated via the CYLTRAN Monte Carlo code. The comparison was made to validate the code, which is used to predict and analyze the radiation environments of flash X-ray simulators measured by TLDs. Over a wide range of Ta target thicknesses and radiation angles, the code agrees with the measurements to within their 5% uncertainty. For Ta thicknesses near those that optimize the radiation output, however, the code overestimates the radiation dose at small angles; the maximum overprediction is about 14 ± 5%. The general agreement nonetheless gives confidence in using the code at this energy and in the TLD calibration procedure. For the bulk of the measurements, a standard TLD employing a 2.2 mm thick Al equilibrator was used. In this paper we also show that this thickness can significantly attenuate the free-field dose and introduce significant photon buildup in the equilibrator.
Water Isotopes in the GISS GCM: History, Applications and Potential
NASA Astrophysics Data System (ADS)
Schmidt, G. A.; LeGrande, A. N.; Field, R. D.; Nusbaumer, J. M.
2017-12-01
Water isotopes have been incorporated in the GISS GCMs since the pioneering work of Jean Jouzel in the 1980s. Since 2005, this functionality has been maintained within the master branch of the development code and has been usable (and used) in all subsequent versions. This has allowed a wide variety of applications, across multiple time-scales and interests, to be tackled coherently. Water isotope tracers have been used to debug the atmospheric model code, tune parameterisations of moist processes, assess the isotopic fingerprints of multiple climate drivers, produce forward models for remotely sensed isotope products, and validate paleo-climate interpretations from the last millennium to the Eocene. We will present an overview of recent results involving isotope tracers, including improvements in models for the isotopic fractionation processes themselves, and demonstrate the potential for using these tracers and models more systematically in paleo-climate reconstructions and investigations of the modern hydrological cycle.
Liquid slip over gas nanofilms
NASA Astrophysics Data System (ADS)
Ramisetti, Srinivasa B.; Borg, Matthew K.; Lockerby, Duncan A.; Reese, Jason M.
2017-08-01
We propose the rarefied-gas-cushion model (r-GCM), as an extended version of the gas-cushion model (GCM), to estimate the apparent slip of water flowing over a gas layer trapped at a solid surface. Nanobubbles or gas nanofilms may manifest rarefied-gas effects and the r-GCM incorporates kinetic boundary conditions for the gas component in the slip Knudsen regime. These enable an apparent hydrodynamic slip length to be calculated given the gas thickness, the Knudsen number, and the bulk fluid viscosities. We assess the r-GCM through nonequilibrium molecular dynamics (NEMD) simulations of shear-driven liquid flow over an infinite gas nanofilm covering a solid surface, from the gas slip regime to the early transition regime, beyond which NEMD is computationally impractical. We find that, over the flow regimes examined, the r-GCM provides better predictions of the apparent liquid slip and retrieves both the GCM and the free-molecular behavior in the appropriate limits.
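For reference, the classical GCM gives the apparent slip length in closed form; the sketch below uses that textbook expression and omits the rarefaction (Knudsen-number) correction that defines the paper's r-GCM, and the property values are illustrative:

```python
def gcm_slip_length(h_gas, mu_liquid, mu_gas):
    """Classical gas-cushion-model apparent slip length for a liquid
    shearing over a gas film of thickness h_gas:
        b = h_gas * (mu_liquid / mu_gas - 1)
    The r-GCM modifies the effective gas transport for rarefaction,
    which is not included here."""
    return h_gas * (mu_liquid / mu_gas - 1.0)

# Water over a 10 nm air film (SI units; illustrative viscosities)
b = gcm_slip_length(10e-9, 1.0e-3, 1.8e-5)
print(f"apparent slip length ~ {b * 1e9:.0f} nm")
```

Because the liquid/gas viscosity ratio is large, even a nanometric film produces a slip length of hundreds of nanometres.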
The use of the SRIM code for calculation of radiation damage induced by neutrons
NASA Astrophysics Data System (ADS)
Mohammadi, A.; Hamidi, S.; Asadabad, Mohsen Asadi
2017-12-01
Materials subjected to neutron irradiation undergo structural changes driven by displacement cascades initiated by nuclear reactions. This study discusses a methodology for computing the primary knock-on atom (PKA) information that leads to radiation damage. A program, AMTRACK, has been developed for assessing PKA information. This software determines the specifications of recoil atoms (using the PTRAC card of the MCNPX code) as well as the kinematics of the interactions. A deterministic method was used to verify the results of MCNPX+AMTRACK. The SRIM (formerly TRIM) code is capable of computing neutron radiation damage. The PKA information extracted by AMTRACK can be used as input to the SRIM code for systematic analysis of primary radiation damage. Radiation damage to the reactor pressure vessel of the Bushehr Nuclear Power Plant (BNPP) is then calculated.
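Once PKA damage energies are available, a standard way to convert them to displacement counts is the Norgett-Robinson-Torrens (NRT) model; this is a generic sketch, not necessarily the convention used by SRIM or AMTRACK, and the 40 eV threshold (typical for iron) is an assumed value:

```python
def nrt_displacements(damage_energy_ev, e_d_ev=40.0):
    """NRT estimate of the number of stable displacements produced by a
    PKA with the given damage energy (eV). e_d_ev is the displacement
    threshold energy of the target material."""
    if damage_energy_ev < e_d_ev:
        return 0.0                       # too little energy: no displacement
    if damage_energy_ev < 2.0 * e_d_ev / 0.8:
        return 1.0                       # single Frenkel pair regime
    return 0.8 * damage_energy_ev / (2.0 * e_d_ev)   # cascade regime
```

Summing this over the PKA spectrum and normalizing by atom density gives the displacements-per-atom (dpa) dose.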
The Continual Intercomparison of Radiation Codes: Results from Phase I
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros; Mlawer, Eli; Delamere, Jennifer; Shippert, Timothy; Cole, Jason; Iacono, Michael; Jin, Zhonghai; Li, Jiangnan; Manners, James; Raisanen, Petri;
2011-01-01
The computer codes that calculate the energy budget of solar and thermal radiation in Global Climate Models (GCMs), our most advanced tools for predicting climate change, have to be computationally efficient in order not to impose undue computational burden on climate simulations. By using approximations to gain execution speed, these codes sacrifice accuracy compared to more accurate, but also much slower, alternatives. International efforts to evaluate the approximate schemes have taken place in the past, but they have suffered from the drawback that the accurate standards were not validated themselves for performance. The manuscript summarizes the main results of the first phase of an effort called "Continual Intercomparison of Radiation Codes" (CIRC) where the cases chosen to evaluate the approximate models are based on observations and where we have ensured that the accurate models perform well when compared to solar and thermal radiation measurements. The effort is endorsed by international organizations such as the GEWEX Radiation Panel and the International Radiation Commission and has a dedicated website (i.e., http://circ.gsfc.nasa.gov) where interested scientists can freely download data and obtain more information about the effort's modus operandi and objectives. In a paper published in the March 2010 issue of the Bulletin of the American Meteorological Society only a brief overview of CIRC was provided with some sample results. In this paper the analysis of submissions of 11 solar and 13 thermal infrared codes relative to accurate reference calculations obtained by so-called "line-by-line" radiation codes is much more detailed. We demonstrate that, while performance of the approximate codes continues to improve, significant issues still remain to be addressed for satisfactory performance within GCMs.
We hope that by identifying and quantifying shortcomings, the paper will help establish performance standards to objectively assess radiation code quality, and will guide the development of future phases of CIRC.
Using a GCM analogue model to investigate the potential for Amazonian forest dieback
NASA Astrophysics Data System (ADS)
Huntingford, C.; Harris, P. P.; Gedney, N.; Cox, P. M.; Betts, R. A.; Marengo, J. A.; Gash, J. H. C.
A combined GCM analogue model and GCM land surface representation is used to investigate the influences of climatology and land surface parameterisation on modelled Amazonian vegetation change. This modelling structure (called IMOGEN) captures the main features of the changes in surface climate as estimated by a GCM with enhanced atmospheric greenhouse gas concentrations. Advantage is taken of IMOGEN's computational speed, which allows multiple simulations to be carried out to assess the robustness of the GCM results. The timing of forest dieback is found to be sensitive to the initial ``pre-industrial'' climate, as well as to uncertainties in the representation of land-atmosphere CO2 exchange. Changing from a Q10 form for plant dark and maintenance respiration (as used in the coupled GCM runs) to a respiration proportional to maximum photosynthesis reduces the biomass lost from Amazonia in the 21st century. Replacing the GCM control climate (which has about 25% too little rain in the annual mean over Amazonia) with an observed climatology increases the CO2 concentration at which rainfall drops to critical levels, and thereby further delays the dieback. On the other hand, calibration of the canopy photosynthesis model against Amazonian flux data tends to lead to earlier forest dieback. Further advances are required in both GCM rainfall simulation and land-surface process representation before a clearer picture emerges of the timing of possible Amazonian forest dieback. However, it seems likely that these advances will overall lead to projections of later forest dieback as GCM control climates become more realistic.
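The Q10 respiration form mentioned above is a simple exponential temperature dependence: the rate multiplies by Q10 for every 10 degC of warming. A minimal sketch, with illustrative parameter values rather than those of the GCM discussed:

```python
def q10_respiration(r_ref, temp_c, t_ref_c=25.0, q10=2.0):
    """Q10 form of plant dark/maintenance respiration: the rate at
    temp_c relative to a reference rate r_ref at t_ref_c."""
    return r_ref * q10 ** ((temp_c - t_ref_c) / 10.0)
```

With q10 = 2, a 10 degC warming doubles respiration, which is why this form responds strongly to the simulated Amazonian warming.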
Method for calculating internal radiation and ventilation with the ADINAT heat-flow code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butkovich, T.R.; Montan, D.N.
1980-04-01
One objective of the spent fuel test in Climax Stock granite (SFTC) is to correctly model the thermal transport, and the changes in the stress field and accompanying displacements, from the application of the thermal loads. We have chosen the ADINA and ADINAT finite element codes to do these calculations. ADINAT is a heat transfer code compatible with the ADINA displacement and stress analysis code. The heat flow problem encountered at SFTC requires a code with conduction, radiation, and ventilation capabilities, which the present version of ADINAT does not have. We have devised a method for calculating internal radiation and ventilation with the ADINAT code. This method effectively reproduces the results from the TRUMP multi-dimensional finite difference code, which correctly models radiative heat transport between drift surfaces, conductive and convective thermal transport to and through air in the drifts, and mass flow of air in the drifts. The temperature histories for each node in the finite element mesh calculated with ADINAT using this method can be used directly in the ADINA thermal-mechanical calculation.
Simulating X-ray bursts with a radiation hydrodynamics code
NASA Astrophysics Data System (ADS)
Seong, Gwangeon; Kwak, Kyujin
2018-04-01
Previous simulations of X-ray bursts (XRBs), for example those performed with MESA (Modules for Experiments in Stellar Astrophysics), could not address the dynamical effects of strong radiation, which are important for explaining the photospheric radius expansion (PRE) phenomena seen in many XRBs. In order to study the effects of strong radiation, we propose to use SNEC (the SuperNova Explosion Code), a 1D Lagrangian open-source code that is designed to solve hydrodynamics and equilibrium-diffusion radiation transport together. Because SNEC's radiation-hydrodynamics modules can be controlled for properly mapped inputs, the radiation-dominated pressure occurring in PRE XRBs can be handled. Here we present simulation models for PRE XRBs by applying SNEC together with MESA.
Radiative transfer codes for atmospheric correction and aerosol retrieval: intercomparison study.
Kotchenova, Svetlana Y; Vermote, Eric F; Levy, Robert; Lyapustin, Alexei
2008-05-01
Results are summarized for a scientific project devoted to the comparison of four atmospheric radiative transfer codes incorporated into different satellite data processing algorithms, namely, 6SV1.1 (second simulation of a satellite signal in the solar spectrum, vector, version 1.1), RT3 (radiative transfer), MODTRAN (moderate resolution atmospheric transmittance and radiance code), and SHARM (spherical harmonics). The performance of the codes is tested against well-known benchmarks, such as Coulson's tabulated values and a Monte Carlo code. The influence of revealed differences on aerosol optical thickness and surface reflectance retrieval is estimated theoretically by using a simple mathematical approach. All information about the project can be found at http://rtcodes.ltdri.org.
CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van der Holst, B.; Toth, G.; Sokolov, I. V.
We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.
NASA Technical Reports Server (NTRS)
Adams, Thomas; VanBaalen, Mary
2009-01-01
The Radiation Health Office (RHO) determines each astronaut's cancer risk by using models that relate the radiation dose astronauts receive from spaceflight missions to risk. The baryon transport code (BRYNTRN), the high charge (Z) and energy transport code (HZETRN), and computer risk models are used to determine the effective dose received by astronauts in low Earth orbit (LEO). These codes use an approximation of the Boltzmann transport equation. The purpose of the project is to run the code for various International Space Station (ISS) flight parameters in order to gain a better understanding of how it responds to different scenarios. The project will determine how variations in one set of parameters, such as the point in the solar cycle and altitude, can affect the radiation exposure of astronauts during ISS missions. This project will benefit NASA by improving mission dosimetry.
SPAMCART: a code for smoothed particle Monte Carlo radiative transfer
NASA Astrophysics Data System (ADS)
Lomax, O.; Whitworth, A. P.
2016-10-01
We present a code for generating synthetic spectral energy distributions and intensity maps from smoothed particle hydrodynamics simulation snapshots. The code is based on the Lucy Monte Carlo radiative transfer method, i.e. it follows discrete luminosity packets as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The sources can be extended and/or embedded, and discrete and/or diffuse. The density is not mapped on to a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Second, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.
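The optical-depth sampling at the heart of such Lucy-type Monte Carlo schemes can be illustrated with a minimal 1-D, grid-based sketch. This is not SPAMCART's actual smoothed-particle implementation, which evaluates optical depths from particle kernels rather than cells; all names and values here are illustrative.

```python
import math
import random

def propagate_packet(rho, ds, kappa, rng=random.random):
    """Propagate one luminosity packet through a 1-D density field.
    rho: density per cell, ds: cell width, kappa: opacity (area/mass).
    Returns the index of the cell where the packet interacts, or None
    if it escapes the domain."""
    tau_target = -math.log(rng())   # sample optical depth from exp(-tau)
    tau = 0.0
    for i, rho_i in enumerate(rho):
        dtau = kappa * rho_i * ds   # optical depth contributed by cell i
        if tau + dtau >= tau_target:
            return i                # interaction occurs within cell i
        tau += dtau
    return None                     # packet escapes
```

In the full method, many such packets are followed and their path segments through each cell (or, in SPAMCART, past each particle) are accumulated to solve for the radiative equilibrium dust temperature.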
NASA Technical Reports Server (NTRS)
Lau, K. M.; Kim, K. M.; Sud, Y. C.; Walker, G. K.
2009-01-01
The responses of the atmospheric water cycle and climate of West Africa and the Atlantic to radiative forcing of Saharan dust are studied using the NASA finite volume general circulation model (fvGCM), coupled to a mixed layer ocean. We find evidence of an "elevated heat pump" (EHP) mechanism that underlies the responses of the atmospheric water cycle to dust forcing, as follows. During the boreal summer, as a result of large-scale atmospheric feedback triggered by absorbing dust aerosols, rainfall and cloudiness are enhanced over the West Africa/Eastern Atlantic ITCZ, and suppressed over the West Atlantic and Caribbean region. Shortwave radiation absorption by dust warms the atmosphere and cools the surface, while longwave has the opposite response. The elevated dust layer warms the air over West Africa and the eastern Atlantic. As the warm air rises, it spawns a large-scale onshore flow carrying moist air from the eastern Atlantic and the Gulf of Guinea. The onshore flow in turn enhances the deep convection over the West African land and the eastern Atlantic. The condensation heating associated with the ensuing deep convection drives and maintains an anomalous large-scale east-west overturning circulation with rising motion over West Africa/eastern Atlantic, and sinking motion over the Caribbean region. The response also includes a strengthening of the West African monsoon, manifested in a northward shift of the West African precipitation over land, increased low-level westerly flow over West Africa at the southern edge of the dust layer, and a near-surface westerly jet underneath the dust layer over the Sahara. The dust radiative forcing also leads to significant changes in surface energy fluxes, resulting in cooling of the West African land and the eastern Atlantic, and warming in the West Atlantic and Caribbean.
The EHP effect is strongest for moderately to highly absorbing dust, and becomes minimal for reflective dust with a single-scattering albedo of 0.95 or higher.
Bahadori, Amir A; Sato, Tatsuhiko; Slaba, Tony C; Shavers, Mark R; Semones, Edward J; Van Baalen, Mary; Bolch, Wesley E
2013-10-21
NASA currently uses one-dimensional deterministic transport to generate values of the organ dose equivalent needed to calculate stochastic radiation risk following crew space exposures. In this study, organ absorbed doses and dose equivalents are calculated for 50th percentile male and female astronaut phantoms using both the NASA High Charge and Energy Transport Code to perform one-dimensional deterministic transport and the Particle and Heavy Ion Transport Code System to perform three-dimensional Monte Carlo transport. Two measures of radiation risk, effective dose and risk of exposure-induced death (REID) are calculated using the organ dose equivalents resulting from the two methods of radiation transport. For the space radiation environments and simplified shielding configurations considered, small differences (<8%) in the effective dose and REID are found. However, for the galactic cosmic ray (GCR) boundary condition, compensating errors are observed, indicating that comparisons between the integral measurements of complex radiation environments and code calculations can be misleading. Code-to-code benchmarks allow for the comparison of differential quantities, such as secondary particle differential fluence, to provide insight into differences observed in integral quantities for particular components of the GCR spectrum.
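The step from organ dose equivalents to effective dose, shared by both transport approaches in this study, is a tissue-weighted sum. The weights below are an illustrative subset in the style of the ICRP formalism, not NASA's exact parameter set.

```python
def effective_dose(organ_h, weights):
    """Effective dose E = sum over tissues T of w_T * H_T (Sv), for the
    tissues present in organ_h; organ_h maps tissue -> dose equivalent H_T,
    weights maps tissue -> tissue weighting factor w_T."""
    return sum(weights[t] * h for t, h in organ_h.items())

# illustrative tissue weighting factors (subset only, ICRP-style)
W_T = {"lung": 0.12, "stomach": 0.12, "liver": 0.04, "thyroid": 0.04}
```

Because the same weighted sum is applied to organ dose equivalents from either the deterministic or the Monte Carlo transport, differences in effective dose trace directly back to differences in the computed H_T values, which is why the code-to-code benchmarks above focus on differential quantities.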
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.
2001-01-01
The purpose of this report was to analyze the heat-transfer problem posed by the determination of spacecraft temperatures and to incorporate the theoretically derived relationships in the computational code TSCALC. The basis for the code was a theoretical analysis of the thermal radiative equilibrium in space, particularly in the Solar System. Beginning with the solar luminosity, the code takes into account these key variables: (1) the spacecraft-to-Sun distance expressed in astronomical units (AU), where 1 AU represents the average Sun-to-Earth distance of 149.6 million km; (2) the angle (arc degrees) at which solar radiation is incident upon a spacecraft surface (ILUMANG); (3) the spacecraft surface temperature (a radiator or photovoltaic array) in kelvin, the surface absorptivity-to-emissivity ratio alpha/epsilon with respect to the solar radiation and (alpha/epsilon)_2 with respect to planetary radiation; and (4) the surface view factor to space F. Outputs from the code have been used to determine environmental temperatures in various Earth orbits. The code was also utilized as a subprogram in the design of power system radiators for deep-space probes.
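The radiative equilibrium balance underlying a code of this kind can be sketched as follows. This is an illustration of the physics, not the TSCALC code itself; the planetary-radiation term (alpha/epsilon)_2 is omitted for brevity and the constants are standard values.

```python
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0              # approximate solar constant at 1 AU, W m^-2

def equilibrium_temp(au, illum_angle_deg, alpha_over_eps, view_factor=1.0):
    """Radiative equilibrium temperature (K) of a passive surface,
    balancing absorbed solar flux against thermal emission to space:
        (alpha/eps) * (S0 / au**2) * cos(theta) = SIGMA * T**4 * F."""
    absorbed = alpha_over_eps * (S0 / au**2) * math.cos(math.radians(illum_angle_deg))
    return (absorbed / (SIGMA * view_factor)) ** 0.25
```

At 1 AU, normal incidence, and alpha/epsilon = 1 this gives roughly 394 K, and the temperature falls with distance as 1/sqrt(au), which is why radiator sizing for deep-space probes depends so strongly on the spacecraft-to-Sun distance.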
A simple code for use in shielding and radiation dosage analyses
NASA Technical Reports Server (NTRS)
Wan, C. C.
1972-01-01
A simple code for use in analyses of gamma radiation effects in laminated materials is described. Simple and good geometry is assumed so that all multiple collision and scattering events are excluded from consideration. The code is capable of handling laminates of up to six layers. However, for laminates of more than six layers, the same code may be used to incorporate two additional layers at a time, making use of punch-tape outputs from previous computations on all preceding layers. Spectra of attenuated radiation are obtained as both printed output and punch-tape output, as desired.
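Under the stated good-geometry assumption (no scattering buildup), attenuation through a stack of layers reduces to a product of exponentials. The sketch below illustrates that physics; it is not the punch-tape code itself, and the attenuation coefficients are placeholders.

```python
import math

def attenuated_intensity(i0, layers):
    """Uncollided gamma intensity after passing through laminate layers
    in good geometry: I = I0 * exp(-sum_i mu_i * x_i).
    layers: sequence of (mu, x) pairs; mu in 1/cm, thickness x in cm."""
    return i0 * math.exp(-sum(mu * x for mu, x in layers))
```

Because the exponents simply add, processing six layers and then feeding the result into a run over two more layers (as the abstract describes) gives the same answer as treating all the layers at once.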
GUI to Facilitate Research on Biological Damage from Radiation
NASA Technical Reports Server (NTRS)
Cucinotta, Frances A.; Ponomarev, Artem Lvovich
2010-01-01
A graphical-user-interface (GUI) computer program has been developed to facilitate research on the damage caused by highly energetic particles and photons impinging on living organisms. The program brings together, into one computational workspace, computer codes that have been developed over the years, plus codes that will be developed during the foreseeable future, to address diverse aspects of radiation damage. These include codes that implement radiation-track models, codes for biophysical models of breakage of deoxyribonucleic acid (DNA) by radiation, pattern-recognition programs for extracting quantitative information from biological assays, and image-processing programs that aid visualization of DNA breaks. The radiation-track models are based on transport models of interactions of radiation with matter and solution of the Boltzmann transport equation by use of both theoretical and numerical models. The biophysical models of breakage of DNA by radiation include biopolymer coarse-grained and atomistic models of DNA, stochastic- process models of deposition of energy, and Markov-based probabilistic models of placement of double-strand breaks in DNA. The program is designed for use in the NT, 95, 98, 2000, ME, and XP variants of the Windows operating system.
NASA Technical Reports Server (NTRS)
Suarez, Max J. (Editor); Takacs, Lawrence L.; Molod, Andrea; Wang, Tina
1994-01-01
This technical report documents Version 1 of the Goddard Earth Observing System (GEOS) General Circulation Model (GCM). The GEOS-1 GCM is being used by NASA's Data Assimilation Office (DAO) to produce multiyear data sets for climate research. This report provides a documentation of the model components used in the GEOS-1 GCM, a complete description of model diagnostics available, and a User's Guide to facilitate GEOS-1 GCM experiments.
Long-wave radiative forcing due to desert dust
NASA Astrophysics Data System (ADS)
Gunn, L. N.; Collins, W.
2011-12-01
Radiative forcing due to aerosols has been identified by the IPCC as a major contributor to the total radiative forcing uncertainty budget. Optically thick plumes of dust and pollutants extending out from Africa and Asia can be lifted into the middle troposphere and are often transported over synoptic length scales. These events can decrease the upwelling long-wave fluxes at the top of the atmosphere, especially in the mid-infrared "window". Although the long-wave effects of dust are included in model simulations, they are hard to validate in the absence of satellite-derived global estimates. Using hyperspectral satellite measurements from NASA's AIRS instrument, it is possible to estimate the effect of dust on the outgoing long-wave radiation directly from the measured spectra, by differencing the simulated clear-sky radiance spectra (calculated using ECMWF analyses) and the observed dust-filled radiance spectra. We will summarize this method and show global estimates of the dust radiative effect in the long-wave. These global estimates will be used to validate GCM model output and help us to improve our understanding of dust in the global energy budget.
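The differencing approach described here amounts to subtracting the observed dust-laden spectrum from the simulated clear-sky spectrum and integrating over channels. A schematic version follows; the function name, inputs, and weights are illustrative, not the authors' actual processing chain.

```python
import numpy as np

def dust_longwave_effect(clear_sky_sim, observed, channel_weights):
    """Channel-integrated dust effect on outgoing long-wave radiance:
    positive values mean dust reduced the upwelling radiation reaching
    the satellite relative to the simulated clear sky."""
    diff = np.asarray(clear_sky_sim) - np.asarray(observed)
    return float(np.sum(diff * np.asarray(channel_weights)))
```

In practice the clear-sky spectra would be computed with a line-by-line or fast radiative transfer model driven by ECMWF temperature and humidity profiles, and the weights would convert channel radiances into a broadband flux estimate.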
Performance of the Goddard Multiscale Modeling Framework with Goddard Ice Microphysical Schemes
NASA Technical Reports Server (NTRS)
Chern, Jiun-Dar; Tao, Wei-Kuo; Lang, Stephen E.; Matsui, Toshihisa; Li, J.-L.; Mohr, Karen I.; Skofronick-Jackson, Gail M.; Peters-Lidard, Christa D.
2016-01-01
The multiscale modeling framework (MMF), which replaces traditional cloud parameterizations with cloud-resolving models (CRMs) within a host atmospheric general circulation model (GCM), has become a new approach for climate modeling. The embedded CRMs make it possible to apply CRM-based cloud microphysics directly within a GCM. However, most such schemes have never been tested in a global environment for long-term climate simulation. The benefits of using an MMF to evaluate rigorously and improve microphysics schemes are here demonstrated. Four one-moment microphysical schemes are implemented into the Goddard MMF and their results validated against three CloudSat/CALIPSO cloud ice products and other satellite data. The new four-class (cloud ice, snow, graupel, and frozen drops/hail) ice scheme produces a better overall spatial distribution of cloud ice amount, total cloud fractions, net radiation, and total cloud radiative forcing than earlier three-class ice schemes, with biases within the observational uncertainties. Sensitivity experiments are conducted to examine the impact of recently upgraded microphysical processes on global hydrometeor distributions. Five processes dominate the global distributions of cloud ice and snow amount in long-term simulations: (1) allowing for ice supersaturation in the saturation adjustment, (2) three additional correction terms in the depositional growth of cloud ice to snow, (3) accounting for cloud ice fall speeds, (4) limiting cloud ice particle size, and (5) new size-mapping schemes for snow and graupel. Despite the cloud microphysics improvements, systematic errors associated with subgrid processes, cyclic lateral boundaries in the embedded CRMs, and momentum transport remain and will require future improvement.
NASA Astrophysics Data System (ADS)
Lefèvre, Maxence; Spiga, Aymeric; Lebonnois, Sébastien
2017-01-01
The impact of the cloud convective layer of the atmosphere of Venus on the global circulation remains unclear. The recent observations of gravity waves at the top of the cloud by the Venus Express mission provided some answers. These waves are not resolved at the scale of global circulation models (GCMs); therefore, we developed an unprecedented 3-D turbulence-resolving large-eddy simulation (LES) model of Venus using the Weather Research and Forecasting terrestrial model. The forcing consists of three different heating rates: two radiative ones for solar and infrared, and one associated with the adiabatic cooling/warming of the global circulation. The rates are extracted from the Laboratoire de Météorologie Dynamique Venus GCM using two different cloud models. Thus, we are able to characterize the convection and associated gravity waves as a function of latitude and local time. To assess the impact of the global circulation on the convective layer, we used rates from a 1-D radiative-convective model. The resolved convective layer, located between 1.0 × 10^5 and 3.8 × 10^4 Pa (48-53 km), is organized as polygonal closed cells about 10 km wide with vertical winds of several meters per second. The convection emits gravity waves both above and below the convective layer, leading to temperature perturbations of several tenths of a kelvin with vertical wavelengths between 1 and 3 km and horizontal wavelengths from 1 to 10 km. The thickness of the convective layer and the amplitudes of the waves are consistent with observations, though slightly underestimated. The global dynamics heating greatly modifies the convective layer.
NASA Astrophysics Data System (ADS)
Millar, R.; Ingram, W.; Allen, M. R.; Lowe, J.
2013-12-01
Temperature and precipitation patterns are the climate variables with the greatest impacts on both natural and human systems. Due to the small spatial scales and the many interactions involved in the global hydrological cycle, general circulation model (GCM) representations of precipitation changes are subject to considerable uncertainty. Quantifying and understanding the causes of uncertainty (and identifying robust features of predictions) in both global and local precipitation change is an essential challenge of climate science. We have used the huge distributed computing capacity of the climateprediction.net citizen science project to examine parametric uncertainty in an ensemble of 20,000 perturbed-physics versions of the HadCM3 general circulation model. The ensemble has been selected to have a control climate in top-of-atmosphere energy balance [Yamazaki et al. 2013, J.G.R.]. We force this ensemble with several idealised climate-forcing scenarios, including carbon dioxide step and transient profiles, solar radiation management geoengineering experiments with stratospheric aerosols, and short-lived climate forcing agents. We will present the results from several of these forcing scenarios under GCM parametric uncertainty. We examine the global mean precipitation energy budget to understand the robustness of a simple non-linear global precipitation model [Good et al. 2012, Clim. Dyn.] as a better explanation of precipitation changes in transient climate projections under GCM parametric uncertainty than a simple linear tropospheric energy balance model. We will also present work investigating robust conclusions about precipitation changes in a balanced ensemble of idealised solar radiation management scenarios [Kravitz et al. 2011, Atmos. Sci. Let.].
Global Climate Model Simulated Hydrologic Droughts and Floods in the Nelson-Churchill Watershed
NASA Astrophysics Data System (ADS)
Vieira, M. J. F.; Stadnyk, T. A.; Koenig, K. A.
2014-12-01
There is uncertainty surrounding the duration, magnitude and frequency of historical hydroclimatic extremes such as hydrologic droughts and floods prior to the observed record. In regions where paleoclimatic studies are less reliable, Global Climate Models (GCMs) can provide useful information about past hydroclimatic conditions. This study evaluates the use of Coupled Model Intercomparison Project 5 (CMIP5) GCMs to enhance the understanding of historical droughts and floods across the Canadian Prairie region in the Nelson-Churchill Watershed (NCW). The NCW is approximately 1.4 million km2 in size and drains into Hudson Bay in Northern Manitoba, Canada. One hundred years of observed hydrologic records show extended dry and wet periods in this region; however paleoclimatic studies suggest that longer, more severe droughts have occurred in the past. In Manitoba, where hydropower is the primary source of electricity, droughts are of particular interest as they are important for future resource planning. Twenty-three GCMs with daily runoff are evaluated using 16 metrics for skill in reproducing historic annual runoff patterns. A common 56-year historic period of 1950-2005 is used for this evaluation to capture wet and dry periods. GCM runoff is then routed at a grid resolution of 0.25° using the WATFLOOD hydrological model storage-routing algorithm to develop streamflow scenarios. Reservoir operation is naturalized and a consistent temperature scenario is used to determine ice-on and ice-off conditions. These streamflow simulations are compared with the historic record to remove bias using quantile mapping of empirical distribution functions. GCM runoff data from pre-industrial and future projection experiments are also bias corrected to obtain extended streamflow simulations. GCM streamflow simulations of more than 650 years include a stationary (pre-industrial) period and future periods forced by radiative forcing scenarios. 
Quantile mapping adjusts for magnitude only while maintaining the GCM's sequencing of events, allowing for the examination of differences in historic and future hydroclimatic extremes. These bias corrected streamflow scenarios provide an alternative to stochastic simulations for hydrologic data analysis and can aid future resource planning and environmental studies.
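The quantile-mapping step can be sketched with empirical distribution functions as below. This is a minimal illustration of the magnitude-only bias correction described above, not the study's exact implementation; array names are illustrative.

```python
import numpy as np

def quantile_map(sim, sim_hist, obs_hist):
    """Empirical quantile mapping: map each simulated value to the observed
    value occupying the same non-exceedance probability in the common
    historic period. Sequencing of events is untouched; only magnitudes
    are adjusted."""
    sim_hist = np.sort(np.asarray(sim_hist, dtype=float))
    obs_hist = np.sort(np.asarray(obs_hist, dtype=float))
    # probability of each simulated value under the simulated historic CDF
    p = np.searchsorted(sim_hist, sim, side="right") / len(sim_hist)
    p = np.clip(p, 0.0, 1.0)
    # invert the observed historic CDF at those probabilities
    return np.quantile(obs_hist, p)
```

Applied to the pre-industrial and future GCM runoff experiments, the same mapping fitted on the 1950-2005 overlap removes the systematic magnitude bias while preserving the model's own sequences of wet and dry years.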
Cheong, Mei-Leng; Wang, Liang-Jie; Chuang, Pei-Yun; Chang, Ching-Wen; Lee, Yun-Shien; Lo, Hsiao-Fan; Tsai, Ming-Song
2015-01-01
Human chorionic gonadotropin (hCG) is composed of a common α subunit and a placenta-specific β subunit. Importantly, hCG is highly expressed in the differentiated and multinucleated syncytiotrophoblast, which is formed via trophoblast cell fusion and stimulated by cyclic AMP (cAMP). Although the ubiquitous activating protein 2 (AP2) transcription factors TFAP2A and TFAP2C may regulate hCGβ expression, it remains unclear how cAMP stimulates placenta-specific hCGβ gene expression and trophoblastic differentiation. Here we demonstrated that the placental transcription factor glial cells missing 1 (GCM1) binds to a highly conserved promoter region in all six hCGβ paralogues by chromatin immunoprecipitation-on-chip (ChIP-chip) analyses. We further showed that cAMP stimulates GCM1 and the CBP coactivator to activate the hCGβ promoter through a GCM1-binding site (GBS1), which also constitutes a previously identified AP2 site. Given that TFAP2C may compete with GCM1 for GBS1, cAMP enhances the association between the hCGβ promoter and GCM1 but not TFAP2C. Indeed, the hCG-cAMP-protein kinase A (PKA) signaling pathway also stimulates Ser269 and Ser275 phosphorylation of GCM1, which recruits CBP to mediate GCM1 acetylation and stabilization. Consequently, hCG stimulates the expression of GCM1 target genes, including the fusogenic protein syncytin-1, to promote placental cell fusion. Our study reveals a positive feedback loop between GCM1 and hCG regulating placental hCGβ expression and cell differentiation. PMID:26503785
Evaluation of Transport in the Lower Tropical Stratosphere in a Global Chemistry and Transport Model
NASA Technical Reports Server (NTRS)
Douglass, Anne R.; Schoeberl, Mark R.; Rood, Richard B.; Pawson, Steven; Bhartia, P. K. (Technical Monitor)
2002-01-01
Off-line models of the evolution of stratospheric constituents use meteorological information from a general circulation model (GCM) or from a data assimilation system (DAS). Here we focus on transport in the tropics and between the tropics and middle latitudes. Constituent fields from two simulations are compared with each other and with observations. One simulation uses winds from a GCM and the second uses winds from a DAS that has the same GCM at its core. Comparisons of results from the two simulations with observations from satellite, aircraft, and sondes are used to judge the realism of the tropical transport. Favorable comparisons between simulated fields and observations for O3, CH4, and the age-of-air are found for the simulation using the GCM fields. The same comparisons for the simulation using DAS fields show rapid upward tropical transport and excessive mixing between the tropics and middle latitudes. The unrealistic transport found in the DAS fields may be due to the failure of the GCM used in the assimilation system to represent the quasi-biennial oscillation. The assimilation system accounts for differences between the observations and the GCM by requiring implicit forcing to produce consistency between the GCM and observations. These comparisons suggest that the physical consistency of the GCM fields is more important to transport characteristics in the lower tropical stratosphere than the elimination of bias with respect to meteorological observations that is accomplished by the DAS. The comparisons presented here show that GCM fields are more appropriate for long-term calculations to assess the impact of changes in stratospheric composition because the balance between photochemical and transport terms is likely to be represented correctly.
Radiation Transport Tools for Space Applications: A Review
NASA Technical Reports Server (NTRS)
Jun, Insoo; Evans, Robin; Cherng, Michael; Kang, Shawn
2008-01-01
This slide presentation contains a brief discussion of nuclear transport codes widely used in the space radiation community for shielding and scientific analyses. Seven radiation transport codes are addressed, and the two general methods (the Monte Carlo method and the deterministic method) are briefly reviewed.
Transport calculations and accelerator experiments needed for radiation risk assessment in space.
Sihver, Lembit
2008-01-01
The major uncertainties in space radiation risk estimates for humans are associated with the poor knowledge of the biological effects of low- and high-LET radiation, with a smaller contribution coming from the characterization of the space radiation field and its primary interactions with the shielding and the human body. However, to decrease the uncertainties on the biological effects and increase the accuracy of the risk coefficients for charged-particle radiation, the initial charged-particle spectra from the Galactic Cosmic Rays (GCRs) and the Solar Particle Events (SPEs), and the radiation transport through the shielding material of the space vehicle and the human body, must be better estimated. Since it is practically impossible to measure all primary and secondary particles from all possible position-projectile-target-energy combinations needed for a correct risk assessment in space, accurate particle and heavy ion transport codes must be used. These codes are also needed when estimating the risk of radiation-induced failures in advanced microelectronics, such as single-event effects, and the efficiency of different shielding materials. It is therefore important that the models and transport codes be carefully benchmarked and validated to make sure they fulfill preset accuracy criteria, e.g. the ability to predict particle fluence, dose and energy distributions within a certain accuracy. When validating the accuracy of the transport codes, both space- and ground-based accelerator experiments are needed. The efficiency of passive shielding and protection of electronic devices should also be tested in accelerator experiments and compared to simulations using different transport codes. In this paper, different multipurpose particle and heavy ion transport codes will be presented, different concepts of shielding and protection discussed, and future accelerator experiments needed for testing and validating codes and shielding materials outlined.
Chen, Chien P; Braunstein, Steve; Mourad, Michelle; Hsu, I-Chow J; Haas-Kogan, Daphne; Roach, Mack; Fogh, Shannon E
2015-01-01
Accurate International Classification of Diseases (ICD) diagnosis coding is critical for patient care, billing purposes, and research endeavors. In this single-institution study, we evaluated our baseline ICD-9 (9th revision) diagnosis coding accuracy, identified the most common errors contributing to inaccurate coding, and implemented a multimodality strategy to improve radiation oncology coding. We prospectively studied ICD-9 coding accuracy in our radiation therapy-specific electronic medical record system. Baseline ICD-9 coding accuracy was obtained from chart review targeting ICD-9 coding accuracy of all patients treated at our institution between March and June of 2010. To improve performance an educational session highlighted common coding errors, and a user-friendly software tool, RadOnc ICD Search, version 1.0, for coding radiation oncology specific diagnoses was implemented. We then prospectively analyzed ICD-9 coding accuracy for all patients treated from July 2010 to June 2011, with the goal of maintaining 80% or higher coding accuracy. Data on coding accuracy were analyzed and fed back monthly to individual providers. Baseline coding accuracy for physicians was 463 of 661 (70%) cases. Only 46% of physicians had coding accuracy above 80%. The most common errors involved metastatic cases, whereby primary or secondary site ICD-9 codes were either incorrect or missing, and special procedures such as stereotactic radiosurgery cases. After implementing our project, overall coding accuracy rose to 92% (range, 86%-96%). The median accuracy for all physicians was 93% (range, 77%-100%) with only 1 attending having accuracy below 80%. Incorrect primary and secondary ICD-9 codes in metastatic cases showed the most significant improvement (10% vs 2% after intervention). Identifying common coding errors and implementing both education and systems changes led to significantly improved coding accuracy.
This quality assurance project highlights the potential problem of ICD-9 coding accuracy by physicians and offers an approach to effectively address this shortcoming. Copyright © 2015. Published by Elsevier Inc.
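The accuracy figures quoted above follow from simple ratios; a minimal sketch, using the counts reported in the abstract (the helper function itself is illustrative, not from the paper):

```python
# Sketch: reproducing the accuracy figures quoted in the study.
# The counts (463 correct of 661 cases) come from the abstract.

def coding_accuracy(correct: int, total: int) -> float:
    """Return coding accuracy as a percentage."""
    return 100.0 * correct / total

baseline = coding_accuracy(463, 661)
print(round(baseline))  # 70, matching the reported 70% baseline
```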
Irrigation as an Historical Climate Forcing
NASA Technical Reports Server (NTRS)
Cook, Benjamin I.; Shukla, Sonali P.; Puma, Michael J.; Nazarenko, Larissa S.
2014-01-01
Irrigation is the single largest anthropogenic water use, a modification of the land surface that significantly affects surface energy budgets, the water cycle, and climate. Irrigation, however, is typically not included in standard historical general circulation model (GCM) simulations along with other anthropogenic and natural forcings. To investigate the importance of irrigation as an anthropogenic climate forcing, we conduct two 5-member ensemble GCM experiments. Both are set up identically to the historical forced (anthropogenic plus natural) scenario used in version 5 of the Coupled Model Intercomparison Project, but in one experiment we also add water to the land surface using a dataset of historically estimated irrigation rates. Irrigation has a negligible effect on the global average radiative balance at the top of the atmosphere, but causes significant cooling of global average surface air temperatures over land and dampens regional warming trends. This cooling is regionally focused and is especially strong in Western North America, the Mediterranean, the Middle East, and Asia. Irrigation enhances cloud cover and precipitation in these same regions, except for summer in parts of Monsoon Asia, where irrigation causes a reduction in monsoon season precipitation. Irrigation cools the surface, reducing upward fluxes of longwave radiation (increasing net longwave), and increases cloud cover, enhancing shortwave reflection (reducing net shortwave). The relative magnitude of these two processes causes regional increases (northern India) or decreases (Central Asia, China) in energy availability at the surface and top of the atmosphere. Despite these changes in net radiation, however, climate responses are due primarily to larger magnitude shifts in the Bowen ratio from sensible to latent heating.
Irrigation impacts on temperature, precipitation, and other climate variables are regionally significant, even while other anthropogenic forcings (anthropogenic aerosols, greenhouse gases, etc.) dominate the long term climate evolution in the simulations. To better constrain the magnitude and uncertainties of irrigation-forced climate anomalies, irrigation should therefore be considered as another important anthropogenic climate forcing in the next generation of historical climate simulations and multimodel assessments.
Solar cycle variations of MIR radiation environment as observed by the LIULIN dosimeter.
Dachev, Ts. P.; Tomov, B. T.; Matviichuk, Yu. N.; Koleva, R. T.; Semkova, J. V.; Petrov, V. M.; Benghin, V. V.; Ivanov, Yu. V.; Shurshakov, V. A.; Lemaire, J. F.
1999-06-01
Measurements on board the MIR space station by the Bulgarian-Russian dosimeter LIULIN have been used to study the solar cycle variations of the radiation environment. The fixed locations of the instrument in the MIR manned compartment, behind 6-15 g/cm2 of shielding, allowed homogeneous series of particle flux and dose measurements to be collected during the declining phase of the 22nd solar cycle, between September 1989 and April 1994. During this declining phase the GCR (Galactic Cosmic Rays) flux observed at L>4 (where L is the McIlwain parameter) increased from 0.6-0.7 cm-2 s-1 up to 1.4-1.6 cm-2 s-1. The long-term observations of the trapped radiation can be summarized as follows: the main maximum of the flux and dose rate is located at the southeast side of the geomagnetic field minimum of the South Atlantic Anomaly (SAA) at L=1.3-1.4. This region is predominantly populated by protons depositing a few (nGy cm2)/particle in the detector. At practically the same spatial location and for similar conditions, the dose rate rose from 480 to 1470 microGy/h (dose in silicon) over the 1990-1994 interval, during the declining phase of the solar cycle, while the flux rose from 35 up to 115 cm-2 s-1 over the same period. A power law dependence was extracted which predicts that when the total neutral density at the altitude of the station decreases from 8x10(-15) to 6x10(-16) g/cm3, the dose increases from about 200 microGy/h up to 1200 microGy/h, and the flux increases from about 30 cm-2 s-1 up to 120 cm-2 s-1. The AP8 model predictions give only a 5.8% increase of the flux for the same conditions.
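The two-point power law implied by the endpoint values above can be back-calculated; a sketch (the coefficients are illustrative reconstructions from the abstract's numbers, not values from the paper):

```python
import math

# Sketch: a two-point power-law fit dose = a * rho**b, using the endpoint
# values quoted in the abstract (density in g/cm3, dose in microGy/h).
rho1, d1 = 8e-15, 200.0
rho2, d2 = 6e-16, 1200.0

b = math.log(d2 / d1) / math.log(rho2 / rho1)   # negative: dose rises as density falls
a = d1 / rho1**b

print(round(b, 3))               # about -0.692
print(a * rho1**b, a * rho2**b)  # recovers the 200 and 1200 endpoints
```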
Overview of Recent Radiation Transport Code Comparisons for Space Applications
NASA Astrophysics Data System (ADS)
Townsend, Lawrence
Recent advances in radiation transport code development for space applications have resulted in various comparisons of code predictions for a variety of scenarios and codes. Comparisons among both Monte Carlo and deterministic codes have been made and published by various groups and collaborations, including comparisons involving, but not limited to, HZETRN, HETC-HEDS, FLUKA, GEANT, PHITS, and MCNPX. In this work, an overview of recent code prediction inter-comparisons, including comparisons to available experimental data, is presented and discussed, with emphasis on areas of agreement and disagreement among the various code predictions and published data.
NASA Technical Reports Server (NTRS)
Tselioudis, George; Douvis, Costas; Zerefos, Christos
2012-01-01
Current climate and future climate-warming runs with the RegCM Regional Climate Model (RCM) at 50- and 11-km resolutions, forced by the ECHAM GCM, are used to examine whether the increased resolution of the RCM introduces novel information in the precipitation field when the models are run for the mountainous region of the Hellenic peninsula. The model results are inter-compared with the resolution of the RCM output degraded to match that of the GCM, and it is found that in both the present and future climate runs the regional models produce more precipitation than the forcing GCM. At the same time, the RCM runs produce increases in precipitation with climate warming even though they are forced with a GCM that shows no precipitation change in the region. The additional precipitation is mostly concentrated over the mountain ranges, where orographic precipitation formation is expected to be a dominant mechanism. It is found that, when examined at the same resolution, the elevation heights of the GCM are lower than those of the averaged RCM in the areas of the main mountain ranges. It is also found that the majority of the difference in precipitation between the RCM and the GCM can be explained by their difference in topographic height. The study results indicate that, in complex topography regions, GCM predictions of precipitation change with climate warming may be dry biased due to the GCM smoothing of the regional topography.
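Degrading RCM output to the driving GCM's grid, as described above, amounts to block averaging; a minimal sketch on an illustrative toy field (grid sizes and values assumed):

```python
import numpy as np

# Sketch: degrade a fine-grid RCM field to a coarser GCM-like grid by
# averaging non-overlapping blocks, as implied by "RCM output degraded
# to match that of the GCM".
def block_average(field: np.ndarray, factor: int) -> np.ndarray:
    """Average non-overlapping factor x factor blocks of a 2-D field."""
    ny, nx = field.shape
    assert ny % factor == 0 and nx % factor == 0
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

rcm = np.arange(16.0).reshape(4, 4)   # toy 4x4 "RCM" precipitation field
gcm_like = block_average(rcm, 2)      # degraded to 2x2
print(gcm_like)
```

Block averaging conserves the area mean, so the comparison isolates where, not how much, precipitation falls.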
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-30
... radiation therapy (IORT), brachytherapy composite APC, multiple imaging composite APCs, cardiac... Outpatient Code Editor IOL Intraocular lens IOM Institute of Medicine IORT Intraoperative radiation treatment... Radiation Therapy (IORT) (APC 0412) a. Background b. CY 2013 Proposals for CPT Codes 77424, 77425, and 77469...
Cheng, Mao-wei; Liu, Jia-fa; Yi, Guo-qin; Zhang, Xi-ping; Zhu, Qing-hua; Liu, Lie-gang
2006-09-01
To compare the effects of soy isoflavone with supplemental calcium versus soy isoflavone or Ca alone on preservation of bone mineral density (BMD) and the expression of insulin-like growth factor (IGF)-I. Sprague-Dawley (SD) female rats, 6 months old, were ovariectomized and randomized into five groups: a sham-operated group (n = 10) or ovariectomized (OVX) groups (n = 40). Shams were fed a 3.272 g/kg Ca diet. OVX rats were randomized to a 3.272 g/kg Ca diet alone (OVX), or with soy isoflavone (SI) extract (37.95 mg/kg bw), or to a supplemental Ca diet (Ca, 4.676 g/kg) alone, or a supplemental Ca diet with the isoflavone extract (SI + Ca), for 12 weeks. Femur BMD was measured with a bone mineral density scanner. The level of IGF-1 mRNA expression was measured by reverse transcriptase-polymerase chain reaction (RT-PCR). There was no significant difference in distal femur BMD between group Sham (0.267 +/- 0.008 g/cm2) and group SI + Ca (0.263 +/- 0.007 g/cm2) (P > 0.05). Distal femur BMD in groups Sham and SI + Ca was greater (P < 0.05) than in group OVX (0.245 +/- 0.005 g/cm2), SI (0.258 +/- 0.011 g/cm2), or Ca (0.255 +/- 0.004 g/cm2). The liver tissue IGF-1 mRNA contents (IGF-1 cDNA/beta-actin cDNA) were significantly decreased in groups Sham (0.200 +/- 0.023), SI (0.278 +/- 0.019), Ca (0.302 +/- 0.026), and SI + Ca (0.231 +/- 0.025) as compared to group OVX (0.362 +/- 0.031), P < 0.05; they were also significantly decreased in group SI + Ca (0.231 +/- 0.025) compared to groups SI (0.278 +/- 0.019) and Ca (0.302 +/- 0.026), P < 0.05. Soy isoflavones combined with supplemental Ca are more protective against the loss of femur BMD than soy isoflavones or a supplemental Ca diet alone. The dose of SI (37.95 mg/kg bw) significantly restrained the rise in liver tissue IGF-1 mRNA content caused by ovariectomy.
Jansma, P.E.; Snyder, D.B.; Ponce, David A.
1983-01-01
Three gravity profiles and principal facts of 2,604 gravity stations in the southwest quadrant of the Nevada Test Site are documented in this data report. The residual gravity profiles show the gravity measurements and the smoothed curves derived from these points that were used in geophysical interpretations. The principal facts include station label, latitude, longitude, elevation, observed gravity value, and terrain correction for each station as well as the derived complete Bouguer and isostatic anomalies, reduced at 2.67 g/cm3. Accuracy codes, where available, further document the data.
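The 2.67 g/cm3 reduction density enters the standard simple Bouguer slab correction 2*pi*G*rho*h; a sketch (the formula is standard, the station height is an illustrative assumption):

```python
import math

# Sketch: the simple Bouguer slab correction 2*pi*G*rho*h used when
# reducing gravity data at a standard density of 2.67 g/cm3.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
rho = 2670.0             # 2.67 g/cm3 expressed in kg/m^3
h = 100.0                # station elevation above datum, m (assumed)

correction_mgal = 2 * math.pi * G * rho * h * 1e5   # 1 mGal = 1e-5 m/s^2
print(round(correction_mgal, 1))   # about 11.2 mGal for a 100 m slab
```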
Synthesis and crystal structure of bis(di- n-butyldithiocarbamato)(1,10-phenanthroline)cadmium(II)
NASA Astrophysics Data System (ADS)
Ivanchenko, A. V.; Gromilov, S. A.; Zemskova, S. M.; Baidina, I. A.; Glinskaya, L. A.
2002-02-01
A new mixed-ligand complex, Cd(S2CN(C4H9)2)2Phen, is synthesized and investigated by thermal, elemental, and IR analyses and by diffractometry of polycrystals (DRON-3M, CuKα radiation, Ni filter). The crystal structure was determined on a CAD-4 Enraf-Nonius automatic diffractometer (MoKα radiation, θ from 1.5 to 25°, 2325 nonzero independent reflections, 190 refined parameters, R = 0.036 for I > 2σ(I)). Crystal data for C30H44CdN4S4: a = 15.592(3), b = 22.724(5), c = 9.922(2) Å, space group Pbcn, V = 3515.5(12) Å3, Z = 4, M = 701.33, dcalc = 1.325 g/cm3. The structure involves monomeric molecules in which the cadmium atom has a distorted octahedral environment.
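The reported calculated density can be checked directly from the cell data via the standard relation d_calc = Z*M/(N_A*V):

```python
# Sketch: verifying the reported calculated density from the crystal data
# given in the abstract, d_calc = Z*M / (N_A * V).
N_A = 6.02214e23          # Avogadro constant, mol^-1
M = 701.33                # molar mass, g/mol
Z = 4                     # formula units per cell
V = 3515.5e-24            # cell volume: 3515.5 A^3 in cm^3

d_calc = Z * M / (N_A * V)
print(round(d_calc, 3))   # 1.325 g/cm3, as reported
```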
Terrestrial solar spectral modeling. [SOLTRAN, BRITE, and FLASH codes]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bird, R.E.
The utility of accurate computer codes for calculating the solar spectral irradiance under various atmospheric conditions was recognized. New absorption and extraterrestrial spectral data are introduced. Progress is made in radiative transfer modeling outside of the solar community, especially for space and military applications. Three rigorous radiative transfer codes, SOLTRAN, BRITE, and FLASH, are employed. The SOLTRAN and BRITE codes are described and results from their use are presented.
Tidal Signals In GOCE Measurements And Time-GCM
NASA Astrophysics Data System (ADS)
Hausler, K.; Hagan, M. E.; Lu, G.; Doornbos, E.; Bruinsma, S.; Forbes, J. M.
2013-12-01
In this paper we investigate tidal signatures in GOCE measurements during 15-24 November 2009 and complementary simulations with the Thermosphere-Ionosphere-Mesosphere-Electrodynamics General Circulation Model (TIME-GCM). The TIME-GCM simulations are driven by inputs that represent the prevailing solar and geomagnetic conditions along with tidal and planetary waves applied at the lower boundary (ca. 30 km). For this pilot study, the resultant TIME-GCM densities are analyzed in two ways: 1) we use results along the GOCE orbital track to calculate ascending/descending orbit longitude-latitude density difference and sum maps for direct comparison with the GOCE diagnostics, and 2) we conduct a complete analysis of TIME-GCM results to unambiguously characterize the simulated atmospheric tides and to attribute the observed longitude variations to specific tidal components. TIME-GCM captures some but not all of the observed longitudinal variability. The good data-model agreement for wave-2, wave-3, and wave-4 suggests that thermospheric impacts can be attributed to the DE1, DE2, DE3, S0, SE1, and SE2 tides. Discrepancies between TIME-GCM and GOCE results are most prominent in the wave-1 variations, and suggest that further refinement of the lower boundary forcing is necessary before we extend our analysis and interpretation to densities associated with the remainder of the GOCE mission.
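Wave-m longitudinal amplitudes of the kind diagnosed above can be extracted with a Fourier decomposition around a latitude circle; a sketch on a synthetic field (the signal is illustrative, not GOCE data):

```python
import numpy as np

# Sketch: extracting longitudinal wave-m amplitudes from a density field
# sampled around a latitude circle, the kind of wave-1..wave-4 diagnostic
# discussed for GOCE/TIME-GCM comparisons.
nlon = 72
lon = np.linspace(0.0, 2 * np.pi, nlon, endpoint=False)
field = 1.0 + 0.3 * np.cos(3 * lon - 0.5)   # a pure wave-3 signal

spec = np.fft.rfft(field) / nlon
amps = 2.0 * np.abs(spec[1:])               # amplitude of wavenumber m = 1, 2, ...
print(round(amps[2], 3))                    # wave-3 amplitude: 0.3
```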
Validation of the BOD POD with hydrostatic weighing: influence of body clothing.
Fields, D A; Hunter, G R; Goran, M I
2000-02-01
Whole body air-displacement plethysmography (BOD POD), a new body composition technique, was validated against hydrodensitometry (UWW) in 67 women wearing a one-piece swimsuit (OP) who represent a wide range of body fatness and age. Additionally, the effect of trapped isothermic air in clothing while in the BOD POD was examined by comparing different clothing schemes (a one-piece swimsuit (OP), a two-piece swimsuit (TP), a hospital gown (HG), and a hospital gown previously included in a volume calibration (GC)) in a subset of 25 women. Cross-sectional data analysis. 67 healthy Caucasian females. Body density (Db, g/cm3) by BOD POD and UWW. In 67 females UWW Db (1.030+/-0.020 g/cm3) was higher (P<0.01) than BOD POD Db (1.028+/-0.020 g/cm3). This is a difference of 1.0% fat. The R2 was 0.94, the SEE was 0.005 g/cm3, and the regression between Db by UWW and BOD POD did not significantly deviate from the line of identity. In the subset group of 25 subjects, OP Db (1.040+/-0.014 g/cm3) and TP Db (1.040+/-0.014 g/cm3) were significantly lower (P<0.01) than UWW Db (1.044+/-0.014 g/cm3), a difference of 1.9% fat. The R2 was 0.86, the SEE was 0.005 g/cm3, and the regression between Db by UWW and both OP and TP did not significantly deviate from the line of identity. HG Db (1.056+/-0.016 g/cm3) and GC Db (1.037+/-0.016 g/cm3) were significantly different (P<0.01) from UWW Db (1.044+/-0.014 g/cm3). This difference in density translates to a difference of 5.5% and 3.2% fat, respectively. The regression between Db by UWW and both HG and GC significantly deviated from the line of identity. This study supports the use of the BOD POD as a substitute for UWW. However, caution should be exercised when using the BOD POD if subjects are clothed in anything other than a tight-fitting swimsuit.
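Density differences translate to percent fat via a standard conversion such as the Siri equation, %fat = 495/Db - 450 (an assumption; the abstract does not name the conversion it used):

```python
# Sketch: converting body density to percent fat with the Siri equation
# to approximate the ~1% fat difference quoted for the UWW vs BOD POD
# densities. The choice of the Siri equation is an assumption.
def siri_percent_fat(db: float) -> float:
    """Percent body fat from body density (g/cm3), Siri equation."""
    return 495.0 / db - 450.0

uww = siri_percent_fat(1.030)
bodpod = siri_percent_fat(1.028)
print(round(bodpod - uww, 2))   # ~0.93, close to the reported ~1% fat difference
```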
NASA Astrophysics Data System (ADS)
Boss, Alan P.
2009-03-01
The disk instability mechanism for giant planet formation is based on the formation of clumps in a marginally gravitationally unstable protoplanetary disk, which must lose thermal energy through a combination of convection and radiative cooling if they are to survive and contract to become giant protoplanets. While there is good observational support for forming at least some giant planets by disk instability, the mechanism has become theoretically contentious, with different three-dimensional radiative hydrodynamics codes often yielding different results. Rigorous code testing is required to make further progress. Here we present two new analytical solutions for radiative transfer in spherical coordinates, suitable for testing the code employed in all of the Boss disk instability calculations. The testing shows that the Boss code radiative transfer routines do an excellent job of relaxing to and maintaining the analytical results for the radial temperature and radiative flux profiles for a spherical cloud with high or moderate optical depths, including the transition from optically thick to optically thin regions. These radial test results are independent of whether the Eddington approximation, diffusion approximation, or flux-limited diffusion approximation routines are employed. The Boss code does an equally excellent job of relaxing to and maintaining the analytical results for the vertical (θ) temperature and radiative flux profiles for a disk with a height proportional to the radial distance. These tests strongly support the disk instability mechanism for forming giant planets.
NASA Astrophysics Data System (ADS)
Zhang, Lei; Dong, Xiquan; Kennedy, Aaron; Xi, Baike; Li, Zhanqing
2017-03-01
The planetary boundary layer turbulence and moist convection parameterizations have been modified recently in the NASA Goddard Institute for Space Studies (GISS) Model E2 atmospheric general circulation model (GCM; post-CMIP5, hereafter P5). In this study, single column model (SCM P5) simulated cloud fractions (CFs), cloud liquid water paths (LWPs), and precipitation were compared with Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) ground-based observations made during the period 2002-08. CMIP5 SCM simulations and GCM outputs over the ARM SGP region were also used in the comparison to identify whether the causes of cloud and precipitation biases resulted from either the physical parameterization or the dynamic scheme. The comparison showed that the CMIP5 SCM has difficulties in simulating the vertical structure and seasonal variation of low-level clouds. The new scheme implemented in the turbulence parameterization led to significantly improved cloud simulations in P5. It was found that the SCM is sensitive to the relaxation time scale. When the relaxation time increased from 3 to 24 h, SCM P5-simulated CFs and LWPs showed a moderate increase (10%-20%) but precipitation increased significantly (56%), which agreed better with observations despite the less accurate atmospheric state. Annual averages among the GCM and SCM simulations were almost the same, but their respective seasonal variations were out of phase. This suggests that the same physical cloud parameterization can generate similar statistical results over a long time period, but different dynamics drive the differences in seasonal variations. This study can potentially provide guidance for the further development of the GISS model.
Interactions between moist heating and dynamics in atmospheric predictability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Straus, D.M.; Huntley, M.A.
1994-02-01
The predictability properties of a fixed heating version of a GCM in which the moist heating is specified beforehand are studied in a series of identical twin experiments. Comparison is made to an identical set of experiments using the control GCM, a five-level R30 version of the COLA GCM. The experiments each contain six ensembles, with a single ensemble consisting of six 30-day integrations starting from slightly perturbed Northern Hemisphere wintertime initial conditions. The moist heating from each integration within a single control ensemble was averaged over the ensemble. This averaged heating (a function of three spatial dimensions and time) was used as the prespecified heating in each member of the corresponding fixed heating ensemble. The errors grow less rapidly in the fixed heating case. The most rapidly growing scales at small times (global wavenumber 6) have doubling times of 3.2 days compared to 2.4 days for the control experiments. The predictability times for the most energetic scales (global wavenumbers 9-12) are about two weeks for the fixed heating experiments, compared to 9 days for the control. The ratio of error energy in the fixed heating to the control case falls below 0.5 by day 8, and then gradually increases as the error growth slows in the control case. The growth of errors is described in terms of budgets of error kinetic energy (EKE) and error available potential energy (EAPE) developed in terms of global wavenumber n. The diabatic generation of EAPE (G_APE) is positive in the control case and is dominated by midlatitude heating errors after day 2. The fixed heating G_APE is negative at all times due to longwave radiative cooling. 36 refs., 9 figs., 1 tab.
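The quoted doubling times imply an error-energy ratio under exponential growth E(t) = E0 * 2^(t/Td); a sketch (extrapolating the early-time growth rates to day 8 is an illustration, not the paper's budget analysis):

```python
# Sketch: error-energy ratio implied by the doubling times quoted in the
# abstract (3.2 days fixed-heating vs 2.4 days control), assuming both
# experiments start from the same small error and grow exponentially.
def error_ratio(t: float, td_fixed: float = 3.2, td_control: float = 2.4) -> float:
    """Ratio of fixed-heating to control error energy at time t (days)."""
    return 2.0 ** (t / td_fixed) / 2.0 ** (t / td_control)

print(round(error_ratio(8.0), 2))   # about 0.56, near the ~0.5 quoted at day 8
```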
NASA Technical Reports Server (NTRS)
Salby, Murry
1998-01-01
A 3-dimensional model was developed to support mechanistic studies. The model solves the global primitive equations in isentropic coordinates, which directly characterize diabatic processes forcing the Brewer-Dobson circulation of the middle atmosphere. Its numerical formulation is based on Hough harmonics, which partition horizontal motion into its rotational and divergent components. These computational features, along with others, enable 3D integrations to be performed practically on RISC computer architecture, on which they can be iterated to support mechanistic studies. The model conserves potential vorticity quite accurately under adiabatic conditions. Forced by observed tropospheric structure, in which integrations are anchored, the model generates a diabatic circulation that is consistent with satellite observations of tracer behavior and diabatic cooling rates. The model includes a basic but fairly complete treatment of gas-phase photochemistry that represents some 20 chemical species and 50 governing reactions with diurnally-varying shortwave absorption. The model thus provides a reliable framework to study transport and underlying diabatic processes, which can then be compared against observed chemical and dynamical structure and against GCM integrations. Integrations with the Langley GCM were performed to diagnose feedback between simulated convection and the tropical circulation. These were studied in relation to tropospheric properties controlling moisture convergence and environmental conditions supporting deep convection, for comparison against mechanistic integrations of wave CISK that successfully reproduce the Madden-Julian Oscillation (MJO) of the tropical circulation.
These comparisons were aimed at identifying and ultimately improving aspects of the convective simulation, with the objective of recovering a successful simulation of the MJO in the Langley GCM, behavior that should be important to budgets of upper-tropospheric water vapor and chemical species.
NASA Astrophysics Data System (ADS)
Hosseinzadehtalaei, Parisa; Tabari, Hossein; Willems, Patrick
2018-02-01
An ensemble of 88 regional climate model (RCM) simulations at 0.11° and 0.44° spatial resolutions from the EURO-CORDEX project is analyzed for central Belgium to investigate the projected impact of climate change on precipitation intensity-duration-frequency (IDF) relationships and extreme precipitation quantiles typically used in water engineering designs. The uncertainty arising from the choice of RCM, driving GCM, and representative concentration pathway (RCP4.5 & RCP8.5) is quantified using a variance decomposition technique after reconstruction of missing data in GCM × RCM combinations. A comparative analysis between the historical simulations of the EURO-CORDEX 0.11° and 0.44° RCMs shows higher precipitation intensities by the finer resolution runs, leading to a larger overestimation of the observations-based IDFs by the 0.11° runs. The results reveal that making a temporal stationarity assumption for the climate system may lead to underestimation of precipitation quantiles by up to 70% by the end of this century. This projected increase is generally larger for the 0.11° RCMs compared with the 0.44° RCMs. The relative changes in extreme precipitation depend on return period and duration, indicating an amplification for larger return periods and for smaller durations. The variance decomposition approach generally identifies the RCM as the most dominant component of uncertainty in changes of more extreme precipitation (return period of 10 years) for both 0.11° and 0.44° resolutions, followed by the GCM and RCP scenario. The uncertainties associated with cross-contributions of RCMs, GCMs, and RCPs play a non-negligible role in the associated uncertainties of the changes.
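A variance decomposition across a GCM x RCM x RCP ensemble can be sketched with an additive (ANOVA-style) split of main effects; the change values below are synthetic and the method is a simplified stand-in for the paper's technique:

```python
import numpy as np

# Sketch: additive variance decomposition of projected precipitation
# changes across a GCM x RCM x RCP ensemble. Data are synthetic.
rng = np.random.default_rng(0)
changes = rng.normal(20.0, 5.0, size=(4, 5, 2))   # (GCM, RCM, RCP) % changes

grand = changes.mean()
var_gcm = np.var(changes.mean(axis=(1, 2)))       # variance of GCM main effects
var_rcm = np.var(changes.mean(axis=(0, 2)))       # variance of RCM main effects
var_rcp = np.var(changes.mean(axis=(0, 1)))       # variance of RCP main effects
var_resid = np.var(changes) - (var_gcm + var_rcm + var_rcp)  # interactions + noise

total = var_gcm + var_rcm + var_rcp + var_resid
for name, v in [("GCM", var_gcm), ("RCM", var_rcm),
                ("RCP", var_rcp), ("interaction", var_resid)]:
    print(f"{name}: {100 * v / total:.1f}%")
```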
A Goddard Multi-Scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo
2010-01-01
A multi-scale modeling system with unified physics has been developed at NASA Goddard Space Flight Center (GSFC). The system consists of an MMF, the coupled NASA Goddard finite-volume GCM (fvGCM) and Goddard Cumulus Ensemble model (GCE, a CRM); the state-of-the-art Weather Research and Forecasting model (WRF); and the stand-alone GCE. These models can share the same microphysical schemes, radiation (including explicitly calculated cloud optical properties), and surface models that have been developed, improved, and tested for different environments. In this talk, I will present: (1) a brief review of the GCE model and its applications to the impact of aerosols on deep precipitation processes, (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), and (3) a discussion of the Goddard WRF version (its developments and applications). We are also performing inline tracer calculations to understand the physical processes (i.e., the boundary layer and each quadrant in the boundary layer) related to the development and structure of hurricanes and mesoscale convective systems. In addition, high-resolution (2 km spatial, 1 minute temporal) visualization of the model results will be presented.
NASA Technical Reports Server (NTRS)
Wood, S. E.; Paige, D. A.
1993-01-01
Using a Leighton-Murray type diurnal and seasonal Mars thermal model, we found that it is possible to reproduce the seasonal variation in daily-averaged pressures (approximately 680-890 Pa) measured by Viking Lander 1 (VL1), during years without global dust storms, with a standard deviation of less than 5 Pa. In this simple model, surface CO2 frost condensation and sublimation rates at each latitude are determined by the net effects of radiation, latent heat, and heat conduction in subsurface soil layers. An inherent assumption of our model is that the seasonal pressure variation is due entirely to the exchange of mass between the atmosphere and polar caps. However, the results of recent Mars GCM modeling have made it clear that there is a significant dynamical contribution to the seasonal pressure variation. This 'weather' component is primarily due to large-scale changes in atmospheric circulation, and its magnitude depends somewhat on the dust content of the atmosphere. The overall form of the theoretical weather component at the location of VL1, as calculated by the Ames GCM, remains the same over the typical range of Mars dust opacities.
NASA Technical Reports Server (NTRS)
Sotiropoulou, Rafaella-Eleni P.; Nenes, Athanasios; Adams, Peter J.; Seinfeld, John H.
2007-01-01
In situ observations of aerosol and cloud condensation nuclei (CCN) and the GISS GCM Model II' with an online aerosol simulation and explicit aerosol-cloud interactions are used to quantify the uncertainty in radiative forcing and autoconversion rate from application of Köhler theory. Simulations suggest that application of Köhler theory introduces a 10-20% uncertainty in global average indirect forcing and 2-11% uncertainty in autoconversion. Regionally, the uncertainty in indirect forcing ranges between 10-20%, and 5-50% for autoconversion. These results are insensitive to the range of updraft velocity and water vapor uptake coefficient considered. This study suggests that Köhler theory (as implemented in climate models) is not a significant source of uncertainty for aerosol indirect forcing but can be substantial for assessments of aerosol effects on the hydrological cycle in climatically sensitive regions of the globe. This implies that improvements in the representation of GCM subgrid processes and aerosol size distribution will mostly benefit indirect forcing assessments. Predictions of autoconversion, by nature, will be subject to considerable uncertainty; its reduction may require explicit representation of size-resolved aerosol composition and mixing state.
On the Development of a Deterministic Three-Dimensional Radiation Transport Code
NASA Technical Reports Server (NTRS)
Rockell, Candice; Tweed, John
2011-01-01
Since astronauts on future deep space missions will be exposed to dangerous radiation, there is a need to accurately model the transport of radiation through shielding materials and to estimate the received radiation dose. In response to this need, a three-dimensional deterministic code for space radiation transport is now under development. The new code GRNTRN is based on a Green's function solution of the Boltzmann transport equation that is constructed in the form of a Neumann series. Analytical approximations will be obtained for the first three terms of the Neumann series and the remainder will be estimated by a non-perturbative technique. This work discusses progress made to date and exhibits some computations based on the first two Neumann series terms.
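The Neumann-series construction behind GRNTRN can be illustrated on a toy discretized problem f = g + Kf, summing terms K^k g (the matrix and source are illustrative; convergence requires the spectral radius of K to be below 1):

```python
import numpy as np

# Sketch: the Neumann-series idea on a toy discretized problem f = g + K f,
# solved by accumulating f = sum_k K^k g. Not the Boltzmann equation itself,
# just the series structure GRNTRN's solution is built on.
K = np.array([[0.0, 0.2],
              [0.1, 0.3]])   # illustrative "collision" operator, norm < 1
g = np.array([1.0, 2.0])     # illustrative source term

f = np.zeros_like(g)
term = g.copy()
for _ in range(50):          # accumulate successive terms of the series
    f += term
    term = K @ term

exact = np.linalg.solve(np.eye(2) - K, g)
print(np.allclose(f, exact))   # True: the series converges to (I - K)^-1 g
```

The first few accumulated terms correspond to the "first two Neumann series terms" the abstract mentions computing analytically.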
Turbulent Radiation Effects in HSCT Combustor Rich Zone
NASA Technical Reports Server (NTRS)
Hall, Robert J.; Vranos, Alexander; Yu, Weiduo
1998-01-01
A joint UTRC-University of Connecticut theoretical program was based on describing coupled soot formation and radiation in turbulent flows using stretched flamelet theory. This effort involved using the model jet fuel kinetics mechanism to predict soot growth in flamelets at elevated pressure, incorporating an efficient model for turbulent thermal radiation into a discrete transfer radiation code, and coupling the soot growth, flowfield, and radiation algorithms. The soot calculations used a recently developed opposed jet code which couples the dynamical equations of size-class dependent particle growth with complex chemistry. Several of the tasks represent technical firsts; among these are the prediction of soot from a detailed jet fuel kinetics mechanism, the inclusion of pressure effects in the soot particle growth equations, and the inclusion of the efficient turbulent radiation algorithm in a combustor code.
Is QSO 1146 + 111B,C due to lensing by a cosmic string?
NASA Technical Reports Server (NTRS)
Gott, J. R., III
1986-01-01
A newly discovered lens candidate, QSO 1146 + 111B,C, is discussed which appears to consist of two images of equal brightness of a quasar at redshift 1.01 separated by 2.6 arcmin. If this is produced by a cosmic string, its mass per unit length is about 4.0 x 10^23 g/cm or more. This value is large enough to be interesting for string-assisted galaxy formation and near the upper limits implied by the isotropy of the cosmic microwave background and constraints on gravitational radiation.
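The quoted mass per unit length is consistent with the string-lensing image separation delta_theta ~ 8*pi*G*mu/c^2 (an order-of-magnitude relation; the exact separation also depends on the lensing geometry):

```python
import math

# Sketch: checking the quoted mass per unit length against the cosmic-string
# lensing separation delta_theta ~ 8*pi*G*mu/c^2, in CGS units.
G = 6.674e-8            # gravitational constant, cm^3 g^-1 s^-2
c = 2.998e10            # speed of light, cm/s
mu = 4.0e23             # mass per unit length, g/cm, as quoted

delta_theta_rad = 8 * math.pi * G * mu / c**2
arcmin = math.degrees(delta_theta_rad) * 60
print(round(arcmin, 1))   # about 2.6 arcmin, matching the observed separation
```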
Path Toward a Unified Geometry for Radiation Transport
NASA Astrophysics Data System (ADS)
Lee, Kerry
The Direct Accelerated Geometry for Radiation Analysis and Design (DAGRAD) element of the RadWorks Project under Advanced Exploration Systems (AES) within the Space Technology Mission Directorate (STMD) of NASA will enable new designs and concepts of operation for radiation risk assessment, mitigation and protection. This element is designed to produce a solution that will allow NASA to calculate the transport of space radiation through complex CAD models using the state-of-the-art analytic and Monte Carlo radiation transport codes. Due to the inherent hazard of astronaut and spacecraft exposure to ionizing radiation in low-Earth orbit (LEO) or in deep space, risk analyses must be performed for all crew vehicles and habitats. Incorporating these analyses into the design process can minimize the mass needed solely for radiation protection. Transport of the radiation fields as they pass through shielding and body materials can be simulated using Monte Carlo techniques or described by the Boltzmann equation, which is obtained by balancing changes in particle fluxes as they traverse a small volume of material with the gains and losses caused by atomic and nuclear collisions. Deterministic codes that solve the Boltzmann transport equation, such as HZETRN (high charge and energy transport code developed by NASA LaRC), are generally computationally faster than Monte Carlo codes such as FLUKA, GEANT4, MCNP(X) or PHITS; however, they are currently limited to transport in one dimension, which poorly represents the secondary light ion and neutron radiation fields. NASA currently uses HZETRN space radiation transport software, both because it is computationally efficient and because proven methods have been developed for using this software to analyze complex geometries. 
Although Monte Carlo codes describe the relevant physics in a fully three-dimensional manner, their computational costs have thus far prevented their widespread use for analysis of complex CAD models, leading to the creation and maintenance of toolkit specific simplistic geometry models. The work presented here builds on the Direct Accelerated Geometry Monte Carlo (DAGMC) toolkit developed for use with the Monte Carlo N-Particle (MCNP) transport code. The work-flow for doing radiation transport on CAD models using MCNP and FLUKA has been demonstrated and the results of analyses on realistic spacecraft/habitats will be presented. Future work is planned that will further automate this process and enable the use of multiple radiation transport codes on identical geometry models imported from CAD. This effort will enhance the modeling tools used by NASA to accurately evaluate the astronaut space radiation risk and accurately determine the protection provided by as-designed exploration mission vehicles and habitats.
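The one-dimensional (straight-ahead) transport picture that deterministic codes like HZETRN build on can be illustrated with a toy sketch: primary fluence falls off exponentially with traversed areal density. The function name and the mean-free-path value below are hypothetical, and a real transport code tracks secondary production as well, which this sketch deliberately ignores.

```python
import math

def primary_transmission(fluence_in, areal_density, mfp):
    """Surviving primary fluence behind a slab, in the straight-ahead
    approximation (secondary particles ignored).

    areal_density and mfp are both in g/cm^2 (illustrative values only)."""
    return fluence_in * math.exp(-areal_density / mfp)

# Stacked layers simply add their areal densities in this approximation,
# e.g. 2 g/cm^2 of one material behind 20 g/cm^2 of another:
shielded = primary_transmission(1.0, 2.0 + 20.0, 40.0)
```

Because secondaries are neglected, this underestimates the dose behind thick shields, which is precisely why full Boltzmann or Monte Carlo treatments are needed.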
A conditional Granger causality model approach for group analysis in functional MRI
Zhou, Zhenyu; Wang, Xunheng; Klahr, Nelson J.; Liu, Wei; Arias, Diana; Liu, Hongzhi; von Deneen, Karen M.; Wen, Ying; Lu, Zuhong; Xu, Dongrong; Liu, Yijun
2011-01-01
The Granger causality model (GCM), derived from multivariate vector autoregressive models of data, has been employed for identifying effective connectivity in the human brain with functional MR imaging (fMRI) and for revealing the complex temporal and spatial dynamics underlying a variety of cognitive processes. In most recent fMRI effective connectivity studies, pairwise GCM has been applied at the group level based on single-voxel values or on values averaged over specific brain areas. Although a few novel conditional GCM methods have been proposed to quantify the connections between brain areas, our study is the first to propose a viable standardized approach for group analysis of fMRI data with GCM. To compare the effectiveness of our approach with traditional pairwise GCM models, we applied a well-established conditional GCM in the temporal domain to pre-selected time series of brain regions obtained from a general linear model (GLM) analysis and from group spatial kernel independent component analysis (ICA) of an fMRI dataset. Datasets consisting of one task-related and one resting-state fMRI study were used to investigate connections among brain areas with the conditional GCM method. Using the brain activation regions detected by the GLM in the emotion-related cortex during the block-design paradigm, the conditional GCM method was applied to study the causality of habituation between the left amygdala and the pregenual cingulate cortex during emotion processing. For the resting-state dataset, it is possible to calculate not only the effective connectivity between networks but also the heterogeneity within a single network. Our results further show a particular interaction pattern of the default mode network (DMN) that can be characterized as both afferent and efferent influences on the medial prefrontal cortex (mPFC) and posterior cingulate cortex (PCC). 
These results suggest that the conditional GCM approach based on a linear multivariate vector autoregressive (MVAR) model can achieve greater accuracy in detecting network connectivity than the widely used pairwise GCM, and this group analysis methodology can be quite useful to extend the information obtainable in fMRI. PMID:21232892
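The core idea behind a conditional Granger causality statistic can be sketched in a few lines, assuming nothing about the authors' actual implementation: the influence of x on y given z is measured by how much adding x's past shrinks the residual variance of a linear autoregressive fit of y. This is a minimal lag-1 version; real MVAR analyses use higher model orders and significance testing.

```python
import numpy as np

def _resid_var(y, regressors):
    """Residual variance of an OLS fit of y[t] on lag-1 regressor values."""
    Y = y[1:]
    X = np.column_stack([np.ones(len(Y))] + [r[:-1] for r in regressors])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return np.var(Y - X @ beta)

def conditional_gc(x, y, z):
    """Granger influence x -> y conditioned on z (log variance ratio)."""
    restricted = _resid_var(y, [y, z])
    full = _resid_var(y, [y, z, x])
    return np.log(restricted / full)

# Synthetic check: x drives y, z is an independent nuisance series.
rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal(n)
z = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
```

On the synthetic data, `conditional_gc(x, y, z)` is large while `conditional_gc(z, y, x)` is near zero, mirroring the asymmetry the method is designed to detect.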
Synchrotron Radiation Workshop (SRW)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chubar, O.; Elleaume, P.
2013-03-01
"Synchrotron Radiation Workshop" (SRW) is a physical optics computer code for calculation of detailed characteristics of Synchrotron Radiation (SR) generated by relativistic electrons in magnetic fields of arbitrary configuration and for simulation of the radiation wavefront propagation through optical systems of beamlines. Frequency-domain near-field methods are used for the SR calculation, and the Fourier-optics based approach is generally used for the wavefront propagation simulation. The code enables both fully- and partially-coherent radiation propagation simulations in steady-state and in frequency-/time-dependent regimes. With these features, the code has already proven its utility for a large number of applications in the infrared, UV, soft and hard X-ray spectral ranges, in such important areas as analysis of spectral performances of new synchrotron radiation sources, optimization of user beamlines, development of new optical elements, source and beamline diagnostics, and even complete simulation of SR based experiments. Besides the SR applications, the code can be efficiently used for various simulations involving conventional lasers and other sources. SRW versions interfaced to Python and to IGOR Pro (WaveMetrics), as well as a cross-platform library with a C API, are available.
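The Fourier-optics wavefront propagation that SRW performs can be illustrated, in drastically simplified form, by a single angular-spectrum step: FFT the complex field, multiply by a Fresnel transfer function, and inverse-FFT. This is a generic textbook sketch, not SRW's implementation; grid size, wavelength, and propagation distance below are arbitrary.

```python
import numpy as np

def angular_spectrum_step(field, wavelength, dx, distance):
    """Fresnel (paraxial) propagation of a sampled 2-D complex field."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    # Unit-modulus transfer function: the step is unitary, so power is conserved.
    H = np.exp(-1j * np.pi * wavelength * distance * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Propagate a Gaussian beam; total power is conserved by the unitary step.
n, dx = 128, 1e-5                      # 10 um sampling (arbitrary)
xs = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(xs, xs, indexing="ij")
beam = np.exp(-(X**2 + Y**2) / (2 * (2e-4) ** 2)).astype(complex)
out = angular_spectrum_step(beam, wavelength=1e-10, dx=dx, distance=10.0)
```

Partially coherent simulations of the kind SRW supports repeat such steps over many wavefronts sampled from the electron beam's phase space.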
NASA Astrophysics Data System (ADS)
Garate-Lopez, Itziar; Lebonnois, Sébastien
2017-04-01
A new simulation of Venus atmospheric circulation obtained with the LMD Venus GCM is described, and the impact of the clouds' latitudinal structure on the general circulation is analyzed. The model used here is based on that presented in Lebonnois et al. (2016). However, in the present simulation we consider the latitudinal variation of the cloud structure (Haus et al., 2014) both for the solar heating and to compute the infrared net-exchange rate matrix used in the radiative transfer module. The new cloud treatment mainly affects the angular momentum balance and the zonal wind distribution. Consequently, the agreement between the vertical profile of the modeled mean zonal wind and the profiles measured by different probes is clearly improved over previous simulations, in which zonal winds below the clouds were weak (roughly half the observed values). Moreover, the equatorial jet obtained at the base of the cloud deck is now more consistent with the observations. In Lebonnois et al. (2016) it was too strong compared to mid-latitudes, but in the present simulation the equatorial jet is less intense than the mid-latitude jets, in agreement with cloud-tracking measurements (Hueso et al., 2015). Since atmospheric waves play a crucial role in the angular momentum budget of Venus's atmospheric circulation, we analyze the wave activity by means of the Fast Fourier Transform technique, studying the frequency spectra of the temperature, zonal wind, and meridional wind fields. Modifications in the activity of the different types of waves present in the Venusian atmosphere, compared to Lebonnois et al. (2016), are discussed in terms of horizontal and vertical transport of angular momentum by diurnal and semi-diurnal tides, barotropic and baroclinic waves, and Rossby and Kelvin type waves. Haus R., Kappel D. and Arnold G., 2014. Atmospheric thermal structure and cloud features in the southern hemisphere of Venus as retrieved from VIRTIS/VEX radiation measurements. 
Icarus 232, 232-248. Hueso R., Peralta J., Garate-Lopez I., et al., 2015. Six years of Venus winds at the upper cloud level from UV, visible and near infrared observations from VIRTIS on Venus Express. Planet. Space Sci. 113-114, 78-99. Lebonnois S., Sugimoto N., and Gilli G., 2016. Wave analysis in the atmosphere of Venus below 100 km altitude, simulated by the LMD Venus GCM. Icarus 278, 38-51.
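The frequency-spectrum wave analysis described above can be sketched generically: remove the mean from a model time series, take its FFT, and read off the dominant periods (e.g. diurnal versus semi-diurnal tides). The synthetic "wind" series below is purely illustrative and has no connection to the actual GCM output.

```python
import numpy as np

def dominant_period(signal, dt):
    """Period of the strongest nonzero-frequency component of `signal`."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    k = 1 + int(np.argmax(spec[1:]))    # skip the zero-frequency (mean) bin
    return 1.0 / freqs[k]

# Synthetic series: a strong 5.0-day wave plus a weaker 2.5-day harmonic.
t = np.arange(0.0, 100.0, 0.1)          # time in days, 0.1-day sampling
series = 3.0 * np.sin(2 * np.pi * t / 5.0) + 1.0 * np.sin(2 * np.pi * t / 2.5)
```

Applying the same transform at every grid point and level yields the frequency-resolved wave activity maps used to attribute angular momentum transport to specific wave types.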
NASA Astrophysics Data System (ADS)
Silvers, L. G.; Stevens, B. B.; Mauritsen, T.; Marco, G. A.
2015-12-01
The characteristics of clouds in General Circulation Models (GCMs) need to be constrained in a manner consistent with theory, observations, and high resolution models (HRMs). One way forward is to base improvements of parameterizations on high resolution studies, which resolve more of the important dynamical motions and require fewer parameterizations. This is difficult because of the numerous differences between GCMs and HRMs, both technical and theoretical. Century long simulations at resolutions of 20-250 km on a global domain are typical of GCMs, while HRMs often simulate hours at resolutions of 0.1-5 km on domains the size of a single GCM grid cell. The recently developed model ICON provides a flexible framework which allows many of these difficulties to be overcome. This study uses the ICON model to compute SST perturbation simulations on multiple domains in a state of Radiative Convective Equilibrium (RCE) with parameterized convection. The domains used range from roughly the size of Texas to nearly half of Earth's surface area. All simulations use a doubly periodic domain with an effective distance between cell centers of 13 km and are integrated to a state of statistical stationarity. The primary analysis examines the mean characteristics of the cloud related fields and the feedback parameter of the simulations. It is shown that the simulated atmosphere of a GCM in RCE is sufficiently similar across a range of domain sizes to justify the use of RCE to study both a GCM and a HRM on the same domain, with the goal of improved constraints on the parameterized clouds. The simulated atmospheres are comparable to what could be expected at midday in a typical region of Earth's tropics under calm conditions. In particular, the differences between the domains are smaller than the differences which result from choosing different physics schemes. Significant convective organization is present on all domain sizes, with a relatively high subsidence fraction. 
Notwithstanding the overall qualitative similarities of the simulations, quantitative differences lead to a surprisingly large sensitivity of the feedback parameter: it varies by more than a factor of two across domains, a range similar to that of the feedbacks obtained by the CMIP5 models.
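The feedback parameter discussed here is, in essence, the slope relating the change in top-of-atmosphere net radiation to the imposed surface warming. A hedged sketch of one common way to estimate it (a Gregory-style linear regression on synthetic anomalies; the names, forcing, and feedback values are illustrative, not taken from the study):

```python
import numpy as np

def feedback_slope(delta_T, delta_N):
    """Regress TOA net-flux anomalies (W m^-2) on surface-temperature
    anomalies (K); the slope is the feedback parameter and the intercept
    estimates the radiative forcing."""
    slope, intercept = np.polyfit(delta_T, delta_N, 1)
    return slope, intercept

# Synthetic run: forcing F = 4 W m^-2, feedback lambda = -1.2 W m^-2 K^-1.
rng = np.random.default_rng(1)
dT = np.linspace(0.0, 3.0, 60)
dN = 4.0 - 1.2 * dT + 0.05 * rng.standard_normal(dT.size)
```

A factor-of-two spread in this slope across otherwise similar simulations, as reported above, translates directly into a factor-of-two spread in implied equilibrium warming.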
Prompt Radiation Protection Factors
2018-02-01
...radiation was performed using the three-dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle) and the evaluation of the protection factors (ratio of dose in the open to...) ...by detonation of a nuclear device have placed renewed emphasis on evaluation of the consequences in case of such an event. The Defense Threat
CMacIonize: Monte Carlo photoionisation and moving-mesh radiation hydrodynamics
NASA Astrophysics Data System (ADS)
Vandenbroucke, Bert; Wood, Kenneth
2018-02-01
CMacIonize simulates the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm that uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given time, with a standard first order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure that is used to discretize the system, allowing it to be run both as a standard fixed grid code and also as a moving-mesh code.
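A Monte Carlo photoionization step of the general kind CMacIonize performs can be sketched minimally: sample a target optical depth from the exponential attenuation law and march a photon packet through a density grid until that depth is reached. This is a 1-D toy with hypothetical names and arbitrary units, not the code's actual (grid-independent, multi-species) algorithm.

```python
import numpy as np

def absorption_cell(neutral_density, cell_width, sigma, rng):
    """March one photon packet through a 1-D neutral-density grid.

    Samples a target optical depth tau from exp(-tau) and returns the
    index of the cell where the packet is absorbed, or None if it
    escapes the grid. sigma is a photoionization cross-section."""
    tau_target = -np.log(rng.random())
    tau = 0.0
    for i, n_H in enumerate(neutral_density):
        tau += n_H * sigma * cell_width
        if tau >= tau_target:
            return i
    return None

rng = np.random.default_rng(7)
dense = np.full(10, 1e6)    # optically thick grid (illustrative numbers)
empty = np.zeros(10)
```

Accumulating absorption counts per cell over many packets gives the local ionization rate, which is then balanced against recombinations to update the neutral fractions iteratively.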
Generator Coordinate Method Analysis of Xe and Ba Isotopes
NASA Astrophysics Data System (ADS)
Higashiyama, Koji; Yoshinaga, Naotaka; Teruya, Eri
Nuclear structure of Xe and Ba isotopes is studied in terms of the quantum-number projected generator coordinate method (GCM). The GCM reproduces well the energy levels of high-spin states as well as low-lying states. The structure of the low-lying states is analyzed through the GCM wave functions.
Mammalian Gcm genes induce Hes5 expression by active DNA demethylation and induce neural stem cells.
Hitoshi, Seiji; Ishino, Yugo; Kumar, Akhilesh; Jasmine, Salma; Tanaka, Kenji F; Kondo, Takeshi; Kato, Shigeaki; Hosoya, Toshihiko; Hotta, Yoshiki; Ikenaka, Kazuhiro
2011-07-17
Signaling mediated by Notch receptors is crucial for the development of many organs and the maintenance of various stem cell populations. The activation of Notch signaling is first detectable by the expression of an effector gene, Hes5, in the neuroepithelium of mouse embryos at embryonic day (E) 8.0-8.5, and this activation is indispensable for the generation of neural stem cells. However, the molecular mechanism by which Hes5 expression is initiated in stem-producing cells remains unknown. We found that mammalian Gcm1 and Gcm2 (glial cells missing 1 and 2) are involved in the epigenetic regulation of Hes5 transcription by DNA demethylation independently of DNA replication. Loss of both Gcm genes and subsequent lack of Hes5 upregulation in the neuroepithelium of E7.5-8.5 Gcm1(-/-); Gcm2(-/-) mice resulted in the impaired induction of neural stem cells. Our data suggest that Hes5 expression is serially activated first by Gcms and later by the canonical Notch pathway.
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacFarlane, Joseph J.; Golovkin, I. E.; Woodruff, P. R.
2009-08-07
This Final Report summarizes work performed under DOE STTR Phase II Grant No. DE-FG02-05ER86258 during the project period from August 2006 to August 2009. The project, “Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments,” was led by Prism Computational Sciences (Madison, WI), and involved collaboration with subcontractors University of Nevada-Reno and Voss Scientific (Albuquerque, NM). In this project, we have: Developed and implemented a multi-dimensional, multi-frequency radiation transport model in the LSP hybrid fluid-PIC (particle-in-cell) code [1,2]. Updated the LSP code to support the use of accurate equation-of-state (EOS) tables generated by Prism’s PROPACEOS [3] code to compute more accurate temperatures in high energy density physics (HEDP) plasmas. Updated LSP to support the use of Prism’s multi-frequency opacity tables. Generated equation of state and opacity data for LSP simulations for several materials being used in plasma jet experimental studies. Developed and implemented parallel processing techniques for the radiation physics algorithms in LSP. Benchmarked the new radiation transport and radiation physics algorithms in LSP and compared simulation results with analytic solutions and results from numerical radiation-hydrodynamics calculations. Performed simulations using Prism radiation physics codes to address issues related to radiative cooling and ionization dynamics in plasma jet experiments. Performed simulations to study the effects of radiation transport and radiation losses due to electrode contaminants in plasma jet experiments. Updated the LSP code to generate output using NetCDF to provide a better, more flexible interface to SPECT3D [4] in order to post-process LSP output. Updated the SPECT3D code to better support the post-processing of large-scale 2-D and 3-D datasets generated by simulation codes such as LSP. 
Updated atomic physics modeling to provide for more comprehensive and accurate atomic databases that feed into the radiation physics modeling (spectral simulations and opacity tables). Developed polarization spectroscopy modeling techniques suitable for diagnosing energetic particle characteristics in HEDP experiments. A description of these items is provided in this report. The above efforts lay the groundwork for utilizing the LSP and SPECT3D codes in providing simulation support for DOE-sponsored HEDP experiments, such as plasma jet and fast ignition physics experiments. We believe that taken together, the LSP and SPECT3D codes have unique capabilities for advancing our understanding of the physics of these HEDP plasmas. Based on conversations early in this project with our DOE program manager, Dr. Francis Thio, our efforts emphasized developing radiation physics and atomic modeling capabilities that can be utilized in the LSP PIC code, and performing radiation physics studies for plasma jets. A relatively minor component focused on the development of methods to diagnose energetic particle characteristics in short-pulse laser experiments related to fast ignition physics. The period of performance for the grant was extended by one year to August 2009 with a one-year no-cost extension, at the request of subcontractor University of Nevada-Reno.
Radiation transport calculations for cosmic radiation.
Endo, A; Sato, T
2012-01-01
The radiation environment inside and near spacecraft consists of various components of primary radiation in space and secondary radiation produced by the interaction of the primary radiation with the walls and equipment of the spacecraft. Radiation fields inside astronauts are different from those outside them, because of the body's self-shielding as well as the nuclear fragmentation reactions occurring in the human body. Several computer codes have been developed to simulate the physical processes of the coupled transport of protons, high-charge and high-energy nuclei, and the secondary radiation produced in atomic and nuclear collision processes in matter. These computer codes have been used in various space radiation protection applications: shielding design for spacecraft and planetary habitats, simulation of instrument and detector responses, analysis of absorbed doses and quality factors in organs and tissues, and study of biological effects. This paper focuses on the methods and computer codes used for radiation transport calculations on cosmic radiation, and their application to the analysis of radiation fields inside spacecraft, evaluation of organ doses in the human body, and calculation of dose conversion coefficients using the reference phantoms defined in ICRP Publication 110. Copyright © 2012. Published by Elsevier Ltd.
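Operationally, applying the dose conversion coefficients mentioned above is a fold of the energy-binned particle fluence with per-bin coefficients. A trivial sketch (function name, binning, and units are hypothetical; real coefficients are tabulated per particle type and geometry in references such as ICRP publications):

```python
def effective_dose(fluence, conversion_coeff):
    """Fold an energy-binned fluence (e.g. cm^-2 per bin) with
    fluence-to-effective-dose conversion coefficients (e.g. pSv cm^2)
    to obtain the effective dose for one particle type."""
    if len(fluence) != len(conversion_coeff):
        raise ValueError("energy binning mismatch")
    return sum(f * c for f, c in zip(fluence, conversion_coeff))
```

Summing such folds over all primary and secondary particle types gives the total organ or effective dose behind a given shield.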
Practical global oceanic state estimation
NASA Astrophysics Data System (ADS)
Wunsch, Carl; Heimbach, Patrick
2007-06-01
The problem of oceanographic state estimation, by means of an ocean general circulation model (GCM) and a multitude of observations, is described and contrasted with the meteorological process of data assimilation. In practice, all such methods reduce, on the computer, to forms of least-squares. The global oceanographic problem is at the present time focused primarily on smoothing, rather than forecasting, and the data types are unlike meteorological ones. As formulated in the consortium Estimating the Circulation and Climate of the Ocean (ECCO), an automatic differentiation tool is used to calculate the so-called adjoint code of the GCM, and the method of Lagrange multipliers is used to render the problem one of unconstrained least-squares minimization. Major problems today lie less with the numerical algorithms (least-squares problems can be solved by many means) than with the issues of data and model error. Results of ongoing calculations covering the period of the World Ocean Circulation Experiment, and including among other data satellite altimetry from TOPEX/POSEIDON, Jason-1, ERS-1/2, ENVISAT, and GFO, a global array of profiling floats from the Argo program, and satellite gravity data from the GRACE mission, suggest that the solutions are now useful for scientific purposes. Both methodology and applications are developing in a number of different directions.
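The reduction to least-squares can be made concrete in toy form: with a linear observation operator H, the cost J(x) = ||Hx - y||^2 has gradient 2 H^T (Hx - y), and applying H^T to the misfit is exactly the role the adjoint model plays. The sketch below uses a small matrix and plain gradient descent; actual state estimation minimizes over a full GCM trajectory with an automatically differentiated adjoint, not a matrix.

```python
import numpy as np

def estimate_state(H, y, iters=500, step=0.1):
    """Minimize ||H x - y||^2 by gradient descent from x = 0.

    The product H.T @ misfit stands in for the adjoint-model sweep."""
    x = np.zeros(H.shape[1])
    for _ in range(iters):
        misfit = H @ x - y           # "forward model" run
        x = x - step * (H.T @ misfit)  # "adjoint" gradient step
    return x

# Tiny illustrative system with known solution x = (1, 1).
H = np.array([[1.0, 0.0], [0.0, 2.0]])
y = np.array([1.0, 2.0])
```

The practical difficulty noted in the abstract, data and model error, enters through weighting matrices in the cost function rather than through the minimization machinery itself.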
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vu, Charles C.; Lanni, Thomas B.; Robertson, John M., E-mail: JRobertson@beaumont.edu
Purpose: The purposes of this study were to summarize recently published data on Medicare reimbursement to individual radiation oncologists and to identify the causes of variation in Medicare reimbursement in radiation oncology. Methods and Materials: The Medicare Provider Utilization and Payment Data: Physician and Other Supplier Public Use File (POSPUF), which details nearly all services provided by radiation oncologists in 2012, was used for this study. The data were filtered and analyzed by physician and by billing code. Statistical analysis was performed to identify differences in reimbursements based on sex, rurality, billing of technical services, or location in a certificate of need (CON) state. Results: There were 4135 radiation oncologists who received a total of $1,499,625,803 in payments from Medicare in 2012. Seventy-five percent of radiation oncologists were male. The median reimbursement was $146,453. The code with the highest total reimbursement was 77418 (radiation treatment delivery intensity modulated radiation therapy [IMRT]). The most commonly billed evaluation and management (E/M) code for new visits was 99205 (49%). The most commonly billed E/M code for established visits was 99213 (54%). Forty percent of providers billed none of their new office visits using 99205 (the highest E/M billing code), whereas 34% of providers billed all of their new office visits using 99205. For the 1510 radiation oncologists (37%) who billed technical services, median Medicare reimbursement was $606,008, compared with $93,921 for all other radiation oncologists (P<.001). On multivariate analysis, technical services billing (P<.001), male sex (P<.001), and rural location (P=.007) were predictive of higher Medicare reimbursement. 
Conclusions: The billing of technical services, with their high capital and labor overhead requirements, limits any comparison in reimbursement between individual radiation oncologists or between radiation oncologists and other specialists. Male sex and rural practice location are independent predictors of higher total Medicare reimbursements.
Vu, Charles C; Lanni, Thomas B; Robertson, John M
2016-04-01
The purposes of this study were to summarize recently published data on Medicare reimbursement to individual radiation oncologists and to identify the causes of variation in Medicare reimbursement in radiation oncology. The Medicare Provider Utilization and Payment Data: Physician and Other Supplier Public Use File (POSPUF), which details nearly all services provided by radiation oncologists in 2012, was used for this study. The data were filtered and analyzed by physician and by billing code. Statistical analysis was performed to identify differences in reimbursements based on sex, rurality, billing of technical services, or location in a certificate of need (CON) state. There were 4135 radiation oncologists who received a total of $1,499,625,803 in payments from Medicare in 2012. Seventy-five percent of radiation oncologists were male. The median reimbursement was $146,453. The code with the highest total reimbursement was 77418 (radiation treatment delivery intensity modulated radiation therapy [IMRT]). The most commonly billed evaluation and management (E/M) code for new visits was 99205 (49%). The most commonly billed E/M code for established visits was 99213 (54%). Forty percent of providers billed none of their new office visits using 99205 (the highest E/M billing code), whereas 34% of providers billed all of their new office visits using 99205. For the 1510 radiation oncologists (37%) who billed technical services, median Medicare reimbursement was $606,008, compared with $93,921 for all other radiation oncologists (P<.001). On multivariate analysis, technical services billing (P<.001), male sex (P<.001), and rural location (P=.007) were predictive of higher Medicare reimbursement. The billing of technical services, with their high capital and labor overhead requirements, limits any comparison in reimbursement between individual radiation oncologists or between radiation oncologists and other specialists. 
Male sex and rural practice location are independent predictors of higher total Medicare reimbursements. Copyright © 2016 Elsevier Inc. All rights reserved.
Radiative transfer code SHARM for atmospheric and terrestrial applications
NASA Astrophysics Data System (ADS)
Lyapustin, A. I.
2005-12-01
An overview of the publicly available radiative transfer Spherical Harmonics code (SHARM) is presented. SHARM is a rigorous code, as accurate as the Discrete Ordinate Radiative Transfer (DISORT) code, yet faster. It performs simultaneous calculations for different solar zenith angles, view zenith angles, and view azimuths and allows the user to make multiwavelength calculations in one run. The Δ-M method is implemented for calculations with highly anisotropic phase functions. Rayleigh scattering is automatically included as a function of wavelength, surface elevation, and the selected vertical profile of one of the standard atmospheric models. The current version of the SHARM code does not explicitly include atmospheric gaseous absorption, which should be provided by the user. The SHARM code has several built-in models of the bidirectional reflectance of land and wind-ruffled water surfaces that are most widely used in research and satellite data processing. A modification of the SHARM code with the built-in Mie algorithm designed for calculations with spherical aerosols is also described.
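Rayleigh scattering's wavelength dependence, which SHARM includes automatically, follows the familiar ~λ^-4 law. A hedged scaling sketch (the reference optical depth is a made-up number, not SHARM's internal value, and the slowly varying refractive-index correction is ignored):

```python
def rayleigh_tau(wavelength_um, tau_ref=0.097, lambda_ref_um=0.55):
    """Scale a reference Rayleigh optical depth (here an illustrative
    value at 0.55 um) to another wavelength with the ~lambda^-4 law."""
    return tau_ref * (lambda_ref_um / wavelength_um) ** 4
```

Doubling the wavelength divides the Rayleigh optical depth by sixteen, which is why molecular scattering dominates in the blue and is nearly negligible in the shortwave infrared.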
Radiative transfer code SHARM for atmospheric and terrestrial applications.
Lyapustin, A I
2005-12-20
An overview of the publicly available radiative transfer Spherical Harmonics code (SHARM) is presented. SHARM is a rigorous code, as accurate as the Discrete Ordinate Radiative Transfer (DISORT) code, yet faster. It performs simultaneous calculations for different solar zenith angles, view zenith angles, and view azimuths and allows the user to make multiwavelength calculations in one run. The Delta-M method is implemented for calculations with highly anisotropic phase functions. Rayleigh scattering is automatically included as a function of wavelength, surface elevation, and the selected vertical profile of one of the standard atmospheric models. The current version of the SHARM code does not explicitly include atmospheric gaseous absorption, which should be provided by the user. The SHARM code has several built-in models of the bidirectional reflectance of land and wind-ruffled water surfaces that are most widely used in research and satellite data processing. A modification of the SHARM code with the built-in Mie algorithm designed for calculations with spherical aerosols is also described.
A comparison of skyshine computational methods.
Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J
2005-01-01
A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.
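For reference, the basic building block of the point-kernel methods mentioned above is the attenuated inverse-square kernel with a buildup factor. This is a generic textbook sketch, not the GGG-GP or QADMOD-GP formulation:

```python
import math

def point_kernel(source, mu, r, buildup=1.0):
    """Dose-rate kernel at distance r from an isotropic point source:
    buildup factor times uncollided exp(-mu*r) attenuation spread over
    the 4*pi*r^2 sphere (mu is the linear attenuation coefficient)."""
    return source * buildup * math.exp(-mu * r) / (4.0 * math.pi * r * r)

# A detector response is obtained by summing this kernel over the
# discretized source volume (e.g. each canister in the waste trench).
```

Skyshine adds the complication that the dominant path is air-scattered radiation from the open sky, which is why single-scatter and hybrid methods are layered on top of this simple kernel.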
Bringing a Realistic Global Climate Modeling Experience to a Broader Audience
NASA Astrophysics Data System (ADS)
Sohl, L. E.; Chandler, M. A.; Zhou, J.
2010-12-01
EdGCM, the Educational Global Climate Model, was developed with the goal of helping students learn about climate change and climate modeling by giving them the ability to run a genuine NASA global climate model (GCM) on a desktop computer. Since EdGCM was first publicly released in January 2005, tens of thousands of users on seven continents have downloaded the software. EdGCM has been utilized by climate science educators from middle school through graduate school levels, and on occasion even by researchers who otherwise do not have ready access to climate models at national labs in the U.S. and elsewhere. The EdGCM software is designed to walk users through the same process a climate scientist would use in designing and running simulations, and in analyzing and visualizing GCM output. Although the current interface design gives users a clear view of some of the complexities involved in using a climate model, it can be daunting for users whose main focus is on climate science rather than modeling per se. As part of the work funded by NASA’s Global Climate Change Education (GCCE) program, we will begin modifications to the user interface that will improve the accessibility of EdGCM to a wider array of users, especially at the middle school and high school levels, by: 1) developing an automated approach (a “wizard”) to simplify the user experience in setting up new climate simulations; 2) producing a catalog of “rediscovery experiments” that allow users to reproduce published climate model results, and in some cases compare model projections to real world data; and 3) enhancing distance learning and online learning opportunities through the development of a web-based interface. The prototypes for these modifications will then be presented to educators belonging to an EdGCM Users Group for feedback, so that we can further refine the EdGCM software, and thus deliver the tools and materials educators want and need across a wider range of learning environments.
Creation and utilization of a World Wide Web based space radiation effects code: SIREST
NASA Technical Reports Server (NTRS)
Singleterry, R. C. Jr; Wilson, J. W.; Shinn, J. L.; Tripathi, R. K.; Thibeault, S. A.; Noor, A. K.; Cucinotta, F. A.; Badavi, F. F.; Chang, C. K.; Qualls, G. D.;
2001-01-01
In order for humans and electronics to fully and safely operate in the space environment, codes like HZETRN (High Charge and Energy Transport) must be included in any designer's toolbox for design evaluation with respect to radiation damage. Currently, spacecraft designers do not have easy access to accurate radiation codes like HZETRN to evaluate their designs for radiation effects on humans and electronics. Today, the World Wide Web is sophisticated enough to support the entire HZETRN code and all of the associated pre- and post-processing tools. This package is called SIREST (Space Ionizing Radiation Effects and Shielding Tools). There are many advantages to SIREST. The most important is the instant update capability of the web. Another major advantage is the modularity that the web imposes on the code. At present, the major disadvantage of SIREST is its modularity inside the designer's system, which stems largely from the fact that a consistent interface between the designer and the computer system used to evaluate the design is incomplete. This, however, is to be solved in the Intelligent Synthesis Environment (ISE) program currently being funded by NASA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eder, D C; Anderson, R W; Bailey, D S
2009-10-05
The generation of neutron/gamma radiation, electromagnetic pulses (EMP), debris and shrapnel at mega-Joule class laser facilities (NIF and LMJ) impacts experiments conducted at these facilities. The complex 3D numerical codes used to assess these impacts range from an established code that required minor modifications (MCNP - calculates neutron and gamma radiation levels in complex geometries), through a code that required significant modifications to treat new phenomena (EMSolve - calculates EMP from electrons escaping from laser targets), to a new code, ALE-AMR, that is being developed through a joint collaboration between LLNL, CEA, and UC (UCSD, UCLA, and LBL) for debris and shrapnel modelling.
Hassmiller Lich, Kristen; Urban, Jennifer Brown; Frerichs, Leah; Dave, Gaurav
2017-02-01
Group concept mapping (GCM) has been successfully employed in program planning and evaluation for over 25 years. The broader set of systems thinking methodologies (of which GCM is one) has only recently found its way into the field. We present an overview of systems thinking emerging from a system dynamics (SD) perspective, and illustrate the potential synergy between GCM and SD. As with GCM, participatory processes are frequently employed when building SD models; however, it can be challenging to engage a large and diverse group of stakeholders in the iterative cycles of divergent thinking and consensus building required, while maintaining a broad perspective on the issue being studied. GCM provides a compelling resource for overcoming this challenge by richly engaging a diverse set of stakeholders in broad exploration, structuring, and prioritization. SD provides an opportunity to extend GCM findings by embedding constructs in a testable hypothesis (an SD model) describing how system structure and changes in constructs affect outcomes over time. SD can be used to simulate the hypothesized dynamics inherent in GCM concept maps. We illustrate the potential of the marriage of these methodologies in a case study of BECOMING, a federally funded program aimed at strengthening the cross-sector system of care for youth with severe emotional disturbances. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Monte Carlo Code for Relativistic Radiation Transport Around Kerr Black Holes
NASA Technical Reports Server (NTRS)
Schnittman, Jeremy David; Krolik, Julian H.
2013-01-01
We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostin, Mikhail; Mokhov, Nikolai; Niita, Koji
A parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. It is intended to be used with older radiation transport codes implemented in Fortran 77, Fortran 90, or C. The module is largely independent of the radiation transport code it is used with, and is connected to the codes by means of a number of interface functions. The framework was developed and tested in conjunction with the MARS15 code. It can be used with other codes such as PHITS, FLUKA, and MCNP after certain adjustments. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows calculations to be restarted from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. The framework corrects some of the known scheduling and load-balancing problems found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and on networks of workstations, where interference from other users is possible.
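The checkpoint facility described above can be illustrated with a minimal, hypothetical sketch (plain Python rather than the framework's actual C++/MPI implementation; the function name and file layout are illustrative): the history counter, RNG state, and tally are periodically serialized so an interrupted run resumes from the last checkpoint instead of starting over.

```python
import os
import pickle
import random

def run_histories(n_total, checkpoint="mc.ckpt"):
    """Toy Monte Carlo history loop with checkpoint/restart (illustrative only).

    The history counter, RNG state, and running tally are pickled every
    1000 histories, so a killed run resumes from the last checkpoint.
    """
    if os.path.exists(checkpoint):
        with open(checkpoint, "rb") as f:
            i, rng_state, tally = pickle.load(f)
        random.setstate(rng_state)          # resume the random sequence exactly
    else:
        i, tally = 0, 0.0
        random.seed(42)
    while i < n_total:
        tally += random.random()            # stand-in for transporting one particle
        i += 1
        if i % 1000 == 0:                   # periodic checkpoint
            with open(checkpoint, "wb") as f:
                pickle.dump((i, random.getstate(), tally), f)
    return tally / n_total
```

Restoring the RNG state, not just the counter, is what makes the resumed run statistically identical to an uninterrupted one.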
Explicit prediction of ice clouds in general circulation models
NASA Astrophysics Data System (ADS)
Kohler, Martin
1999-11-01
Although clouds play extremely important roles in the radiation budget and hydrological cycle of the Earth, there are large quantitative uncertainties in our understanding of their generation, maintenance and decay mechanisms, representing major obstacles in the development of reliable prognostic cloud water schemes for General Circulation Models (GCMs). Recognizing their relative neglect in the past, both observationally and theoretically, this work places special focus on ice clouds. A recent version of the UCLA - University of Utah Cloud Resolving Model (CRM) that includes interactive radiation is used to perform idealized experiments to study ice cloud maintenance and decay mechanisms under various conditions in terms of: (1) background static stability, (2) background relative humidity, (3) rate of cloud ice addition over a fixed initial time period and (4) radiation: daytime, nighttime and no-radiation. Radiation is found to have major effects on the lifetime of layer clouds. Optically thick ice clouds decay significantly more slowly than expected from pure microphysical crystal fall-out (tau_cld = 0.9--1.4 h as opposed to the no-motion tau_micro = 0.5--0.7 h). This is explained by the upward turbulent fluxes of water induced by IR destabilization, which partially balance the downward transport of water by snowfall. Solar radiation further slows the ice-water decay by destroying the inversion above cloud top, with the resulting upward transport of water. Optically thin ice clouds, on the other hand, may exhibit even longer lifetimes (>1 day) in the presence of radiative cooling: the resulting reduction in saturation mixing ratio provides a constant cloud ice source. These CRM results are used to develop a prognostic cloud water scheme for the UCLA-GCM. The framework is based on the bulk water phase model of Ose (1993).
The model predicts cloud liquid water and cloud ice separately, and is extended to split the ice phase into suspended cloud ice (predicted) and falling snow (diagnosed) components. An empirical parameterization of the effect of upward turbulent water fluxes in cloud layers is obtained from the CRM simulations by (1) identifying the time scale of conversion of cloud ice to snow as the key parameter, and (2) regressing it onto cloud differential IR heating and environmental static stability. The updated UCLA-GCM achieves close agreement with observations in global mean top-of-atmosphere fluxes (within 1--4 W/m2). Artificially suppressing the impact of cloud turbulent fluxes reduces the global mean ice water path by a factor of 3 and produces errors in each of the solar and IR fluxes at the top of the atmosphere of about 5--6 W/m2.
Bias correction method for climate change impact assessment at a basin scale
NASA Astrophysics Data System (ADS)
Nyunt, C.; Jaranilla-sanchez, P. A.; Yamamoto, A.; Nemoto, T.; Kitsuregawa, M.; Koike, T.
2012-12-01
Climate change impact studies are mainly based on general circulation models (GCMs), and such studies play an important role in defining suitable adaptation strategies for a resilient environment in basin-scale management. For this purpose, this study summarizes how to select appropriate GCMs so as to reduce the uncertainty in the analysis. The approach was applied to the Pampanga, Angat and Kaliwa rivers on Luzon Island, the main island of the Philippines; these three river basins play important roles in irrigation water supply and as municipal water sources for Metro Manila. Based on GCM scores for both the seasonal evolution of the Asian summer monsoon and the spatial correlation and root-mean-squared error of atmospheric variables over the region, six GCMs were finally chosen. Next, we develop a complete, efficient and comprehensive statistical bias correction scheme covering extreme events, normal rainfall and the frequency of dry periods. Owing to the coarse resolution and parameterization schemes of GCMs, their known biases include underestimation of extreme rainfall, too many rain days with low intensity, and poor representation of local seasonality. Extreme rainfall has unusual characteristics and should be treated specifically: estimated maximum extreme rainfall is crucial for the planning and design of infrastructure in a river basin, and developing countries, which have limited technical, financial and management resources for implementing adaptation measures, need detailed information on drought and flood for the near future. Traditionally, extremes have been analyzed using an annual maximum series (AMS) fitted to a Gumbel or lognormal distribution; the drawback is the loss of the second-, third-, etc., largest rainfalls. Another approach is a partial duration series (PDS), constructed from the values above a selected threshold, which permits more than one event per year. The generalized Pareto distribution (GPD) has been used to model the PDS, the series of excesses over a threshold.
In this study, the lowest value of the observed AMS is selected as the threshold, and the same exceedance frequency defines the extremes in the corresponding GCM gridded series. After fitting to the GP distribution, the bias-corrected GCM extreme is found by using the inverse function of the observed extremes. The results show that this removes the bias effectively. For the projected climate, the same transfer function between historical observations and the GCM is applied. Moreover, frequency analysis of the estimated maximum extreme intensity was carried out for validation, and then extended to the near future using the same function as for the past. To fix the error in the number of no-rain days in the GCM, rank-order statistics are used to impose on the GCM the same frequency of wet days as at the observed station; GCM output below this rank is set to zero, and the same threshold is applied to the future projection. Normal rainfall is classified as falling between the extreme threshold and the no-rain-day threshold. We assume monthly normal rainfall follows a gamma distribution; we then map the CDF of GCM normal rainfall onto the station's CDF in each month to obtain bias-corrected rainfall. In summary, the biases of the GCMs have been addressed efficiently and validated at the point scale by seasonal climatology, and at all stations by evaluating the downscaled rainfall performance. The results show that the bias correction and downscaling scheme is good enough for climate impact studies.
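The transfer-function step for extremes — pass each GCM value through the GCM-fitted GPD CDF, then invert through the observation-fitted GPD — can be sketched as follows (a minimal illustration assuming SciPy; the function name and threshold choice follow the abstract's description, not the authors' actual code):

```python
import numpy as np
from scipy import stats

def bias_correct_extremes(obs_ext, gcm_ext, gcm_series):
    """Quantile-map GCM extremes onto the observed distribution via fitted GPDs.

    obs_ext / gcm_ext: historical extremes (values above each threshold);
    gcm_series: GCM extremes to correct (historical or projected).
    """
    # Thresholds: lowest observed AMS value, and its frequency match in the GCM
    obs_thr, gcm_thr = obs_ext.min(), gcm_ext.min()
    # Fit generalized Pareto distributions to the exceedances
    c_o, _, s_o = stats.genpareto.fit(obs_ext - obs_thr, floc=0.0)
    c_g, _, s_g = stats.genpareto.fit(gcm_ext - gcm_thr, floc=0.0)
    # Transfer function: GCM CDF followed by the observed inverse CDF
    p = stats.genpareto.cdf(gcm_series - gcm_thr, c_g, scale=s_g)
    p = np.clip(p, 1e-6, 1.0 - 1e-6)
    return obs_thr + stats.genpareto.ppf(p, c_o, scale=s_o)
```

For the projection step, the same fitted transfer function is simply applied to the future GCM series, as the abstract describes. The gamma-distribution CDF mapping for normal rainfall has the identical structure with `stats.gamma` in place of `stats.genpareto`.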
Cattenoz, Pierre B.; Popkova, Anna; Southall, Tony D.; Aiello, Giuseppe; Brand, Andrea H.; Giangrande, Angela
2016-01-01
High-throughput screens allow us to understand how transcription factors trigger developmental processes, including cell specification. A major challenge is identification of their binding sites because feedback loops and homeostatic interactions may mask the direct impact of those factors in transcriptome analyses. Moreover, this approach dissects the downstream signaling cascades and facilitates identification of conserved transcriptional programs. Here we show the results and the validation of a DNA adenine methyltransferase identification (DamID) genome-wide screen that identifies the direct targets of Glide/Gcm, a potent transcription factor that controls glia, hemocyte, and tendon cell differentiation in Drosophila. The screen identifies many genes that had not been previously associated with Glide/Gcm and highlights three major signaling pathways interacting with Glide/Gcm: Notch, Hedgehog, and JAK/STAT, which all involve feedback loops. Furthermore, the screen identifies effector molecules that are necessary for cell-cell interactions during late developmental processes and/or in ontogeny. Typically, immunoglobulin (Ig) domain-containing proteins control cell adhesion and axonal navigation. This shows that early and transiently expressed fate determinants not only control other transcription factors that, in turn, implement a specific developmental program but also directly affect late developmental events and cell function. Finally, while the mammalian genome contains two orthologous Gcm genes, their function has been demonstrated in vertebrate-specific tissues, placenta, and parathyroid glands, raising questions about the evolutionary conservation of the Gcm cascade in higher organisms. Here we provide the first evidence for the conservation of Gcm direct targets in humans. In sum, this work uncovers novel aspects of cell specification and sets the basis for further understanding of the role of conserved Gcm gene regulatory cascades. PMID:26567182
A Radiation Shielding Code for Spacecraft and Its Validation
NASA Technical Reports Server (NTRS)
Shinn, J. L.; Cucinotta, F. A.; Singleterry, R. C.; Wilson, J. W.; Badavi, F. F.; Badhwar, G. D.; Miller, J.; Zeitlin, C.; Heilbronn, L.; Tripathi, R. K.
2000-01-01
The HZETRN code, which uses a deterministic approach pioneered at NASA Langley Research Center, has been developed over the past decade to evaluate the local radiation fields within sensitive materials (electronic devices and human tissue) on spacecraft in the space environment. The code describes the interactions of shield materials with the incident galactic cosmic rays, trapped protons, or energetic protons from solar particle events in free space and low Earth orbit. The content of incident radiations is modified by atomic and nuclear reactions with the spacecraft and radiation shield materials. High-energy heavy ions are fragmented into less massive reaction products, and reaction products are produced by direct knockout of shield constituents or from de-excitation products. An overview of the computational procedures and database which describe these interactions is given. Validation of the code with recent Monte Carlo benchmarks and with laboratory and flight measurements is also included.
Numerical Investigation of Radiative Heat Transfer in Laser Induced Air Plasmas
NASA Technical Reports Server (NTRS)
Liu, J.; Chen, Y. S.; Wang, T. S.; Turner, James E. (Technical Monitor)
2001-01-01
Radiative heat transfer is one of the most important phenomena in laser-induced plasmas. This study is intended to develop accurate and efficient methods for predicting laser radiation absorption and plasma radiative heat transfer, and to investigate plasma radiation effects in laser-propelled vehicles. To model laser radiation absorption, a ray tracing method along with Beer's law is adopted. To solve the radiative transfer equation in the air plasmas, the discrete transfer method (DTM) is selected and explained. The air plasma radiative properties are predicted by the LORAN code. To validate the present nonequilibrium radiation model, several benchmark problems are examined and the present results are found to match the available solutions. To investigate the effects of plasma radiation in laser-propelled vehicles, the present radiation code is coupled into a plasma aerodynamics code and a selected problem is considered. Comparisons of results for different cases show that plasma radiation cools the plasma, lowering its temperature by about 10%. This change in temperature also results in a reduction of the coupling coefficient by about 10-20%. The present study indicates that plasma radiation modeling is very important for accurate modeling of the aerodynamics of a laser-propelled vehicle.
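The Beer's-law absorption step can be sketched in a few lines (an illustrative discretization, not the LORAN/DTM implementation): along a ray broken into segments, the intensity decays exponentially with the accumulated optical depth.

```python
import numpy as np

def beer_lambert_along_ray(i0, kappa, ds):
    """Intensity after each segment of a discretized ray: I(s) = I0 * exp(-tau).

    kappa: absorption coefficient per segment (1/m)
    ds:    segment path lengths (m)
    tau is the optical depth accumulated along the ray.
    """
    tau = np.cumsum(np.asarray(kappa) * np.asarray(ds))
    return i0 * np.exp(-tau)
```

In a ray tracing scheme, `kappa` would be sampled from tabulated plasma absorption properties at the temperature and density of each cell the ray crosses.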
NASA Astrophysics Data System (ADS)
Häusler, K.; Hagan, M. E.; Baumgaertner, A. J. G.; Maute, A.; Lu, G.; Doornbos, E.; Bruinsma, S.; Forbes, J. M.; Gasperini, F.
2014-08-01
We report on a new source of tidal variability in the National Center for Atmospheric Research thermosphere-ionosphere-mesosphere-electrodynamics general circulation model (TIME-GCM). Lower boundary forcing of the TIME-GCM for a simulation of November-December 2009 based on 3-hourly Modern-Era Retrospective Analysis for Research and Application (MERRA) reanalysis data includes day-to-day variations in both diurnal and semidiurnal tides of tropospheric origin. Comparison with TIME-GCM results from a heretofore standard simulation that includes climatological tropospheric tides from the global-scale wave model reveal evidence of the impacts of MERRA forcing throughout the model domain, including measurable tidal variability in the TIME-GCM upper thermosphere. Additional comparisons with measurements made by the Gravity field and steady-state Ocean Circulation Explorer satellite show improved TIME-GCM capability to capture day-to-day variations in thermospheric density for the November-December 2009 period with the new MERRA lower boundary forcing.
Tobin, Jr., Kenneth W.; Bingham, Philip R.; Hawari, Ayman I.
2012-11-06
An imaging system employing a coded aperture mask having multiple pinholes is provided. The coded aperture mask is placed at a radiation source to pass the radiation through. The radiation impinges on, and passes through an object, which alters the radiation by absorption and/or scattering. Upon passing through the object, the radiation is detected at a detector plane to form an encoded image, which includes information on the absorption and/or scattering caused by the material and structural attributes of the object. The encoded image is decoded to provide a reconstructed image of the object. Because the coded aperture mask includes multiple pinholes, the radiation intensity is greater than a comparable system employing a single pinhole, thereby enabling a higher resolution. Further, the decoding of the encoded image can be performed to generate multiple images of the object at different distances from the detector plane. Methods and programs for operating the imaging system are also disclosed.
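The multiple-pinhole encoding/decoding principle can be illustrated with a hypothetical one-dimensional toy model (correlation decoding; the actual system covers 2-D masks and reconstruction at multiple distances): each pinhole casts a shifted copy of the object onto the detector, and correlating the shadowgram with the mask pattern concentrates the signal back into a peak at the object position.

```python
import numpy as np

# Mask with 4 "pinholes" whose pairwise spacings are all distinct, so the
# mask autocorrelation has one strong central peak and low sidelobes.
mask = np.zeros(31)
mask[[3, 11, 18, 27]] = 1.0

scene = np.zeros(64)
scene[20] = 1.0                              # a point object

encoded = np.convolve(scene, mask)           # shadowgram: 4 shifted copies
decoded = np.convolve(encoded, mask[::-1])   # cross-correlate with the mask

# The decoded peak sits at object position + (len(mask) - 1)
position = decoded.argmax() - (len(mask) - 1)
```

The factor-of-4 peak height over a single pinhole is the intensity advantage the patent describes: each pinhole contributes its full throughput, while the correlation suppresses the cross terms.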
Simulating the Current Water Cycle with the NASA Ames Mars Global Climate Model
NASA Astrophysics Data System (ADS)
Kahre, M. A.; Haberle, R. M.; Hollingsworth, J. L.; Brecht, A. S.; Urata, R. A.; Montmessin, F.
2017-12-01
The water cycle is a critical component of the current Mars climate system, and it is now widely recognized that water ice clouds significantly affect the nature of the simulated water cycle. Two processes are key to implementing clouds in a Mars global climate model (GCM): the microphysical processes of formation and dissipation, and their radiative effects on atmospheric heating/cooling rates. Together, these processes alter the thermal structure, change the atmospheric dynamics, and regulate inter-hemispheric transport. We have made considerable progress using the NASA Ames Mars GCM to simulate the current-day water cycle with radiatively active clouds. Cloud fields from our baseline simulation are in generally good agreement with observations. The predicted seasonal extent and peak IR optical depths are consistent with MGS/TES observations. Additionally, the thermal response to the clouds in the aphelion cloud belt (ACB) is generally consistent with observations and other climate model predictions. Notably, there is a distinct gap in the predicted clouds over the North Residual Cap (NRC) during local summer, but the clouds reappear in this simulation over the NRC earlier than the observations indicate. Polar clouds are predicted near the seasonal CO2 ice caps, but the column thicknesses of these clouds are generally too thick compared to observations. Our baseline simulation is dry compared to MGS/TES-observed water vapor abundances, particularly in the tropics and subtropics. These areas of disagreement appear to be consistent with other current water cycle GCMs. Future avenues of investigation will target improving our understanding of what controls the vertical extent of clouds and the apparent seasonal evolution of cloud particle sizes within the ACB.
NASA Technical Reports Server (NTRS)
Menon, Surabi; DelGenio, Anthony D.; Koch, Dorothy; Tselioudis, George; Hansen, James E. (Technical Monitor)
2001-01-01
We describe the coupling of the Goddard Institute for Space Studies (GISS) general circulation model (GCM) to an online sulfur chemistry model and source models for organic matter and sea-salt that is used to estimate the aerosol indirect effect. The cloud droplet number concentration is diagnosed empirically from field experiment datasets over land and ocean that observe droplet number and all three aerosol types simultaneously; corrections are made for implied variations in cloud turbulence levels. The resulting cloud droplet number is used to calculate variations in droplet effective radius, which in turn allows us to predict aerosol effects on cloud optical thickness and microphysical process rates. We calculate the aerosol indirect effect by differencing the top-of-the-atmosphere net cloud radiative forcing for simulations with present-day vs. pre-industrial emissions. Both the first (radiative) and second (microphysical) indirect effects are explored. We test the sensitivity of our results to cloud parameterization assumptions that control the vertical distribution of cloud occurrence, the autoconversion rate, and the aerosol scavenging rate, each of which feeds back significantly on the model aerosol burden. The global mean aerosol indirect effect for all three aerosol types ranges from -1.55 to -4.36 W m(exp -2) in our simulations. The results are quite sensitive to the pre-industrial background aerosol burden, with low pre-industrial burdens giving strong indirect effects, and to a lesser extent to the anthropogenic aerosol burden, with large burdens giving somewhat larger indirect effects. Because of this dependence on the background aerosol, model diagnostics such as albedo-particle size correlations and column cloud susceptibility, for which satellite validation products are available, are not good predictors of the resulting indirect effect.
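The step from diagnosed droplet number to effective radius follows the standard volume-mean-radius relation (sketched below with illustrative constants; the abstract does not give the GISS scheme's exact coefficients, so the function name and the value of k are assumptions):

```python
from math import pi

def effective_radius_m(lwc_kg_m3, n_drop_m3, k=0.8, rho_w=1000.0):
    """r_eff = (3*LWC / (4*pi*rho_w*k*N))**(1/3).

    Larger droplet number N at fixed liquid water content LWC means smaller
    droplets (the first indirect effect); k (dimensionless) relates the
    volume-mean radius to the effective radius.
    """
    return (3.0 * lwc_kg_m3 / (4.0 * pi * rho_w * k * n_drop_m3)) ** (1.0 / 3.0)
```

For an LWC of 0.3 g m^-3 and 100 droplets cm^-3 this gives an r_eff of roughly 10 micrometers; doubling N shrinks r_eff by a factor of 2^(1/3), which is what brightens the cloud at fixed water content.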
NASA Astrophysics Data System (ADS)
Lefèvre, Maxence; Spiga, Aymeric; Lebonnois, Sébastien
2017-04-01
The impact of the cloud convective layer of the atmosphere of Venus on the global circulation remains unclear. The recent observations of gravity waves at the top of the clouds by the Venus Express mission provided some answers. These waves are not resolved at the scale of global circulation models (GCMs), therefore we developed an unprecedented 3D turbulence-resolving Large-Eddy Simulation (LES) Venusian model (Lefèvre et al., 2016, JGR Planets) using the Weather Research and Forecast terrestrial model. The forcing consists of three different heating rates: two radiative ones, for solar and infrared, and one associated with the adiabatic cooling/warming of the global circulation. The rates are extracted from the Laboratoire de Météorologie Dynamique (LMD) Venus GCM using two different cloud models. Thus we are able to characterize the convection and associated gravity waves as a function of latitude and local time. To assess the impact of the global circulation on the convective layer, we used rates from a 1D radiative-convective model. The resolved convective layer, located between 1.0 x 10^5 and 3.8 x 10^4 Pa (48-53 km), is organized as polygonal closed cells about 10 km wide with vertical winds of several meters per second. The convection emits gravity waves both above and below the convective layer, leading to temperature perturbations of several tenths of a Kelvin with vertical wavelengths between 1 and 3 km and horizontal wavelengths from 1 to 10 km. The thickness of the convective layer and the amplitudes of the waves are consistent with observations, though slightly underestimated. The heating from the global dynamics greatly modifies the convective layer.
Evaluation of a Cloud Resolving Model Using TRMM Observations for Multiscale Modeling Applications
NASA Technical Reports Server (NTRS)
Posselt, Derek J.; L'Ecuyer, Tristan; Tao, Wei-Kuo; Hou, Arthur Y.; Stephens, Graeme L.
2007-01-01
The climate change simulation community is moving toward use of global cloud resolving models (CRMs); however, current computational resources are not sufficient to run global CRMs over the hundreds of years necessary to produce climate change estimates. As an intermediate step between conventional general circulation models (GCMs) and global CRMs, many climate analysis centers are embedding a CRM in each grid cell of a conventional GCM. These Multiscale Modeling Frameworks (MMFs) represent a theoretical advance over the use of conventional GCM cloud and convection parameterizations, but have been shown to exhibit an overproduction of precipitation in the tropics during the northern hemisphere summer. In this study, simulations of clouds, precipitation, and radiation over the South China Sea using the CRM component of the NASA Goddard MMF are evaluated using retrievals derived from the instruments aboard the Tropical Rainfall Measuring Mission (TRMM) satellite platform for a 46-day time period that spans 5 May - 20 June 1998. The NASA Goddard Cumulus Ensemble (GCE) model is forced with observed large-scale forcing derived from soundings taken during the intensive observing period of the South China Sea Monsoon Experiment. It is found that the GCE configuration used in the NASA Goddard MMF responds too vigorously to the imposed large-scale forcing, accumulating too much moisture and producing too much cloud cover during convective phases, and overdrying the atmosphere and suppressing clouds during monsoon break periods. Sensitivity experiments reveal that changes to ice cloud microphysical parameters have a relatively large effect on simulated clouds, precipitation, and radiation, while changes to grid spacing and domain length have little effect on simulation results. The results motivate a more detailed and quantitative exploration of the sources and magnitude of the uncertainty associated with specified cloud microphysical parameters in the CRM components of MMFs.
Biogeophysical consequences of a tropical deforestation scenario: A GCM simulation study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sud, Y.C.; Lau, W.K.M.; Walker, G.K.
1996-12-01
Two 3-year (1979-1982) integrations were carried out with a version of the GLA GCM that contains the Simple Biosphere Model (SiB) for simulating land-atmosphere interactions. The control case used the usual SiB vegetation cover (comprising 12 vegetation types), while its twin, the deforestation case, imposed a scenario in which all tropical rainforests were entirely replaced by grassland. Except for this difference, all other initial and prescribed boundary conditions were kept identical in both integrations. An intercomparison of the integrations shows that tropical deforestation: (1) decreases evapotranspiration and increases land-surface outgoing longwave radiation and sensible heat flux, thereby warming and drying the planetary boundary layer, despite the reduced absorption of solar radiation due to the higher surface albedo of the deforested land; (2) produces significant and robust local as well as global climate changes, where the local effect includes significant changes (mostly reductions) in precipitation and diabatic heating, while the large-scale effect is to weaken the Hadley circulation but invigorate the southern Ferrel cell, drawing a larger air mass from the indirect polar cells; and (3) decreases the surface stress (drag force) owing to the reduced surface roughness of deforested land, which in turn intensifies winds in the planetary boundary layer, thereby affecting the dynamic structure of moisture convergence. The simulated surface winds are about 70% stronger and are accompanied by significant changes in the power spectrum of the annual cycle of surface and PBL winds and precipitation. Our results broadly confirm several findings of recent tropical deforestation simulation experiments. In addition, some global-scale climatic influences of deforestation not identified in earlier studies are delineated. 57 refs., 10 figs., 3 tabs.
Controlling Energy Radiations of Electromagnetic Waves via Frequency Coding Metamaterials.
Wu, Haotian; Liu, Shuo; Wan, Xiang; Zhang, Lei; Wang, Dan; Li, Lianlin; Cui, Tie Jun
2017-09-01
Metamaterials are artificial structures composed of subwavelength unit cells to control electromagnetic (EM) waves. The spatial coding representation of a metamaterial has the ability to describe the material in a digital way. Spatial coding metamaterials are typically constructed from unit cells that have similar shapes with fixed functionality. Here, the concept of the frequency coding metamaterial is proposed, which achieves different control of EM energy radiation with a fixed spatial coding pattern as the frequency changes. In this case, not only are different phase responses of the unit cells considered, but different phase sensitivities are also required. Owing to the different frequency sensitivities of unit cells, two units with the same phase response at the initial frequency may have different phase responses at a higher frequency. To describe the frequency coding property of a unit cell, a digitalized frequency sensitivity is proposed, in which the units are encoded with the digits "0" and "1" to represent low and high phase sensitivities, respectively. By this merit, two degrees of freedom, spatial coding and frequency coding, are obtained to control EM energy radiation by a new class of frequency-spatial coding metamaterials. The above concepts and physical phenomena are confirmed by numerical simulations and experiments.
Al 1s-2p Absorption Spectroscopy of Shock-Wave Heating and Compression in Laser-Driven Planar Foil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sawada, H.; Regan, S.P.; Radha, P.B.
Time-resolved Al 1s-2p absorption spectroscopy is used to diagnose direct-drive, shock-wave heating and compression of planar targets having nearly Fermi-degenerate plasma conditions (Te ~ 10–40 eV, rho ~ 3–11 g/cm^3) on the OMEGA Laser System [T. R. Boehly et al., Opt. Commun. 133, 495 (1997)]. A planar plastic foil with a buried Al tracer layer was irradiated with peak intensities of 10^14–10^15 W/cm^2 and probed with the pseudocontinuum M-band emission from a point-source Sm backlighter in the range of 1.4–1.7 keV. The laser ablation process launches 10–70 Mbar shock waves into the CH/Al/CH target. The Al 1s-2p absorption spectra were analyzed using the atomic physics code PRISMSPECT to infer Te and rho in the Al layer, assuming uniform plasma conditions during shock-wave heating, and to determine when the heat front penetrated the Al layer. The drive foils were simulated with the one-dimensional hydrodynamics code LILAC using a flux-limited (f = 0.06 and f = 0.1) and nonlocal thermal-transport model [V. N. Goncharov et al., Phys. Plasmas 13, 012702 (2006)]. The predictions of simulated shock-wave heating and the timing of heat-front penetration are compared to the observations. The experimental results for a wide variety of laser-drive conditions and buried depths have shown that the LILAC predictions using f = 0.06 and the nonlocal model accurately reproduce the shock-wave heating and the timing of heat-front penetration while the shock is transiting the target. The observed discrepancy between the measured and simulated shock-wave heating at late times of the drive can be explained by reduced radiative heating due to lateral heat flow in the corona.
The Plane-parallel Albedo Bias of Liquid Clouds from MODIS Observations
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros; Cahalan, Robert F.; Platnick, Steven
2007-01-01
In our most advanced modeling tools for climate change prediction, namely General Circulation Models (GCMs), the schemes used to calculate the budget of solar and thermal radiation commonly assume that clouds are horizontally homogeneous at scales as large as a few hundred kilometers. However, this assumption, used for convenience, computational speed, and lack of knowledge on cloud small scale variability, leads to erroneous estimates of the radiation budget. This paper provides a global picture of the solar radiation errors at scales of approximately 100 km due to warm (liquid phase) clouds only. To achieve this, we use cloud retrievals from the instrument MODIS on the Terra and Aqua satellites, along with atmospheric and surface information, as input into a GCM-style radiative transfer algorithm. Since the MODIS product contains information on cloud variability below 100 km we can run the radiation algorithm both for the variable and the (assumed) homogeneous clouds. The difference between these calculations for reflected or transmitted solar radiation constitutes the bias that GCMs would commit if they were able to perfectly predict the properties of warm clouds, but then assumed they were homogeneous for radiation calculations. We find that the global average of this bias is approx.2-3 times larger in terms of energy than the additional amount of thermal energy that would be trapped if we were to double carbon dioxide from current concentrations. We should therefore make a greater effort to predict horizontal cloud variability in GCMs and account for its effects in radiation calculations.
Piltingsrud, H V
1979-12-01
Bismuth germanate is a scintillation material with very high Z and high density (7.13 g/cm^3). It is a rugged, nonhygroscopic, crystalline material whose room-temperature scintillation properties have been described by previous investigators: a light yield approximately 8% of that of NaI(Tl), an emission peak at approximately 480 nm, a decay constant of 0.3 microsec, and an energy resolution of approximately 15% (FWHM) for Cs-137 gamma radiation. These properties make it an excellent candidate for applications involving the detection of high-energy gamma photons and positron annihilation radiation, particularly when good spatial resolution is desired. At room temperature, however, the application of this material is somewhat limited by low light output and poor energy resolution. This paper presents new data on the scintillation properties of bismuth germanate as a function of temperature from -196 degrees C to j0 degrees C. Low-temperature use of the material is shown to greatly improve its light yield and energy resolution. The implications of this work for the design of imaging devices for high-energy radiation in health physics and nuclear medicine are discussed.
Ling, Tung-Chai; Poon, Chi-Sun; Lam, Wai-Shung; Chan, Tai-Po; Fung, Karl Ka-Lok
2012-01-15
Recycled glass derived from cathode ray tube (CRT) glass, with a specific gravity of approximately 3.0 g/cm^3, is potentially suitable for use as fine aggregate in cement mortars for X-ray radiation-shielding applications. In this work, the effects of using crushed glass derived from CRT funnel glass (both acid-washed and unwashed) and crushed ordinary beverage container glass at different replacement levels (0%, 25%, 50%, 75% and 100% by volume) of sand on the mechanical properties (strength and density) and radiation-shielding performance of cement-sand mortars were studied. The results show that all the prepared mortars had compressive strength values greater than 30 MPa, which is suitable for most building applications based on ASTM C 270. The density and shielding performance of the mortar prepared with ordinary crushed (lead-free) glass were similar to those of the control mortar. However, a significant enhancement of radiation shielding was achieved when the CRT glasses were used, due to the presence of lead in the glass. In addition, the radiation-shielding contribution of the CRT glasses was more pronounced when the mortar was subjected to a higher level of X-ray energy.
Primary proton and helium spectra around the knee observed by the Tibet air-shower experiment
NASA Astrophysics Data System (ADS)
Jing, Huang; Tibet ASγ Collaboration
A hybrid experiment was carried out to study the cosmic-ray primary composition in the 'knee' energy region. The experimental setup consists of the Tibet-II air shower array (AS), the emulsion chamber (EC), and the burst detector (BD), which are operated simultaneously and provide information on the primary species. The experiment was carried out at Yangbajing (4,300 m a.s.l., 606 g/cm2) in Tibet during the period from 1996 through 1999. We have already reported the primary proton flux around the knee region based on the simulation code COSMOS. In this paper, we present the primary proton and helium spectra around the knee region. We also extensively examine the simulation codes COSMOS (ad hoc) and CORSIKA with the interaction models QGSJET01, DPMJET 2.55, SIBYLL 2.1, VENUS 4.125, HDPM, and NEXUS 2. Based on these calculations, we briefly discuss the systematic errors in our experimental results due to the Monte Carlo simulation.
The Origin of Systematic Errors in the GCM Simulation of ITCZ Precipitation
NASA Technical Reports Server (NTRS)
Chao, Winston C.; Suarez, M. J.; Bacmeister, J. T.; Chen, B.; Takacs, L. L.
2006-01-01
Previous GCM studies have found that the systematic errors in the GCM simulation of the seasonal mean ITCZ intensity and location could be substantially corrected by adding a suitable amount of rain re-evaporation or cumulus momentum transport. However, the reasons for these systematic errors, and why these remedies work, have remained a puzzle. In this work, knowledge gained from previous studies of the ITCZ in an aqua-planet model with zonally uniform SST is applied to solve this puzzle. The solution is supported by further aqua-planet and full-model experiments using the latest version of the Goddard Earth Observing System GCM.
Striatal Infusion of Glial Conditioned Medium Diminishes Huntingtin Pathology in R6/1 Mice
Perucho, Juan; Casarejos, Maria José; Gómez, Ana; Ruíz, Carolina; Fernández-Estevez, Maria Ángeles; Muñoz, Maria Paz; de Yébenes, Justo García; Mena, Maria Ángeles
2013-01-01
Huntington's disease is a neurodegenerative disorder caused by an expansion of CAG repeats in the huntingtin gene, which produces widespread neuronal and glial pathology. We investigated the possible therapeutic role of glia or glial products in Huntington's disease using striatal glial conditioned medium (GCM) from fetal mice (E16), continuously infused for 15 and 30 days with osmotic minipumps into the left striatum of R6/1 mice. Animals infused with GCM had significantly fewer huntingtin inclusions in the ipsilateral cerebral cortex and in the ipsilateral and contralateral striata than mice infused with cerebrospinal fluid. The numbers of DARPP-32 and TH positive neurons were also greater in the ipsilateral but not contralateral striata and substantia nigra, respectively, suggesting a neuroprotective effect of GCM on efferent striatal and nigro-striatal dopamine neurons. GCM increases activity of the autophagic pathway, as shown by the reduction of the autophagic substrate p62 and the augmentation of LC3 II, Beclin-1, and LAMP-2 protein levels, direct markers of autophagy, in GCM-infused mice. GCM also increases BDNF levels. These results suggest that GCM should be further explored as a putative neuroprotective agent in Huntington's disease. PMID:24069174
Meeting the Next Generation Science Standards Through "Rediscovered" Climate Model Experiments
NASA Astrophysics Data System (ADS)
Sohl, L. E.; Chandler, M. A.; Zhou, J.
2013-12-01
Since the Educational Global Climate Model (EdGCM) Project made its debut in January 2005, over 150 institutions have employed EdGCM software for a variety of uses, ranging from short lab exercises to semester-long and year-long thesis projects. The vast majority of these EdGCM adoptees have been at the undergraduate and graduate levels, with few users at the K-12 level. The K-12 instructors who have worked with EdGCM in professional development settings have commented that, although EdGCM can be used to illustrate a number of the Disciplinary Core Ideas and connects to many of the Common Core State Standards across subjects and grade levels, significant hurdles preclude easy integration of EdGCM into their curricula. Time constraints, a scarcity of curriculum materials, and classroom technology are often mentioned as obstacles to providing younger students with realistic climate modeling experiences. Given that the NGSS incorporates student performance expectations relating to Earth system science, and to climate science and the human dimension in particular, we feel that a streamlined version of EdGCM -- one that eliminates the need to run the climate model on limited computing resources and provides a more guided climate modeling experience -- would be highly beneficial for the K-12 community. This new tool currently under development, called EzGCM, functions through a browser interface and presents "rediscovery experiments" that allow students to do their own exploration of model output from published climate experiments, or from sensitivity experiments designed to illustrate how climate models, as well as the climate system, work. The experiments include background information and sample questions, with more extensive notes for instructors so that instructors can design their own reflection questions or follow-on activities relating to physical or human impacts, as they choose.
An added benefit of the EzGCM tool is that, like EdGCM, it helps illustrate the process of doing research on a complex topic, in a way that builds upon earlier experiences in inquiry-based learning. By having students work through a multi-stage process that requires them to plan several steps ahead, through data processing, analysis and interpretation, they learn how to do research in addition to improving their understanding of climate change.
Solar Cycle Variations of SABER CO2 and MLS H2O in the Mesosphere and Lower Thermosphere Region
NASA Astrophysics Data System (ADS)
Salinas, C. C. J.; Chang, L. C.; Liang, M. C.; Qian, L.; Yue, J.; Russell, J. M., III; Mlynczak, M. G.
2017-12-01
This work presents the solar cycle variations of SABER CO2 and MLS H2O in the mesosphere and lower thermosphere region. These observations are compared to SD-WACCM outputs of CO2 and H2O in order to understand their physical mechanisms. We then attempt to model their solar cycle variations using the default TIME-GCM and the TIME-GCM with MERRA reanalysis as lower-boundary conditions. Comparing the outputs of the default TIME-GCM and the TIME-GCM with MERRA gives insight into the relative importance of solar forcing and lower atmospheric forcing for the solar cycle variations of CO2 and H2O. The solar cycle influence in each parameter is calculated by multiple linear regression against the F10.7 index. The solar cycle of SABER CO2 is reliable above 1e-2 mb and below 1e-3 mb. Preliminary results from the observations show that SABER CO2 has a stronger negative anomaly due to the solar cycle over the winter hemisphere. MLS H2O is reliable down to 1e-2 mb. Preliminary results from the observations show that MLS H2O also has a stronger negative anomaly due to the solar cycle over the winter hemisphere. Both SD-WACCM and the default TIME-GCM reproduce these stronger anomalies over the winter hemisphere. An analysis of the tendency equations in SD-WACCM and the default TIME-GCM then reveals that, for CO2, the stronger winter anomaly may be attributed to stronger downward transport over the winter hemisphere. For H2O, an analysis of the tendency equations in SD-WACCM reveals that the stronger winter anomaly may be attributed to both stronger downward transport and stronger photochemical loss. In the default TIME-GCM, on the other hand, the stronger winter anomaly in H2O may be attributed only to stronger downward transport. For both models, the stronger downward transport is attributed to an enhanced stratospheric polar winter jet during solar maximum.
Future work will determine whether setting the lower boundary conditions of TIME-GCM with MERRA will improve the match between TIME-GCM and SD-WACCM. Also, with the TIME-GCM outputs, the influence of these MLT circulation changes on the ionospheric winter anomaly will be determined.
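The regression step described above, extracting a solar-cycle coefficient by multiple linear regression against the F10.7 index, can be sketched with ordinary least squares on a constant, a linear trend, and F10.7. The data and coefficients below are synthetic, made up purely to illustrate the fit:

```python
import math

# synthetic monthly time series (hypothetical numbers): an anomaly driven by
# a linear trend plus a solar-cycle term proportional to F10.7
months = list(range(120))
f107 = [120 + 60 * math.sin(2 * math.pi * t / 132) for t in months]  # ~11-yr cycle
y = [5.0 + 0.01 * t - 0.02 * (f - 120) for t, f in zip(months, f107)]

# design matrix columns: constant, linear trend, F10.7
X = [[1.0, t, f] for t, f in zip(months, f107)]

def ols(X, y):
    """Ordinary least squares via the normal equations (Gaussian elimination)."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yy for r, yy in zip(X, y)) for i in range(n)]
    for i in range(n):                        # forward elimination
        for j in range(i + 1, n):
            m = A[j][i] / A[i][i]
            A[j] = [a - m * ai for a, ai in zip(A[j], A[i])]
            b[j] -= m * b[i]
    beta = [0.0] * n                          # back substitution
    for i in reversed(range(n)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, n))) / A[i][i]
    return beta

const, trend, solar = ols(X, y)
print(round(solar, 4))   # -0.02: the recovered F10.7 regression coefficient
```

The coefficient on the F10.7 column is the "solar cycle influence" in each parameter; its sign and magnitude per unit F10.7 are what the anomaly statements above refer to.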
Path Toward a Unified Geometry for Radiation Transport
NASA Technical Reports Server (NTRS)
Lee, Kerry; Barzilla, Janet; Davis, Andrew; Zachmann
2014-01-01
The Direct Accelerated Geometry for Radiation Analysis and Design (DAGRAD) element of the RadWorks Project under Advanced Exploration Systems (AES) within the Space Technology Mission Directorate (STMD) of NASA will enable new designs and concepts of operation for radiation risk assessment, mitigation and protection. This element is designed to produce a solution that will allow NASA to calculate the transport of space radiation through complex computer-aided design (CAD) models using the state-of-the-art analytic and Monte Carlo radiation transport codes. Due to the inherent hazard of astronaut and spacecraft exposure to ionizing radiation in low-Earth orbit (LEO) or in deep space, risk analyses must be performed for all crew vehicles and habitats. Incorporating these analyses into the design process can minimize the mass needed solely for radiation protection. Transport of the radiation fields as they pass through shielding and body materials can be simulated using Monte Carlo techniques or described by the Boltzmann equation, which is obtained by balancing changes in particle fluxes as they traverse a small volume of material with the gains and losses caused by atomic and nuclear collisions. Deterministic codes that solve the Boltzmann transport equation, such as HZETRN [high charge and energy transport code developed by NASA Langley Research Center (LaRC)], are generally computationally faster than Monte Carlo codes such as FLUKA, GEANT4, MCNP(X) or PHITS; however, they are currently limited to transport in one dimension, which poorly represents the secondary light ion and neutron radiation fields. NASA currently uses HZETRN space radiation transport software, both because it is computationally efficient and because proven methods have been developed for using this software to analyze complex geometries. 
Although Monte Carlo codes describe the relevant physics in a fully three-dimensional manner, their computational costs have thus far prevented their widespread use for analysis of complex CAD models, leading to the creation and maintenance of toolkit-specific simplistic geometry models. The work presented here builds on the Direct Accelerated Geometry Monte Carlo (DAGMC) toolkit developed for use with the Monte Carlo N-Particle (MCNP) transport code. The workflow for achieving radiation transport on CAD models using MCNP and FLUKA has been demonstrated, and the results of analyses on realistic spacecraft/habitats will be presented. Future work is planned that will further automate this process and enable the use of multiple radiation transport codes on identical geometry models imported from CAD. This effort will enhance the modeling tools used by NASA to accurately evaluate the astronaut space radiation risk and accurately determine the protection provided by as-designed exploration mission vehicles and habitats.
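The contrast between the deterministic and Monte Carlo approaches named above can be made concrete with a minimal toy problem (this is an illustration, not any of the codes mentioned): for the uncollided component of a monoenergetic beam through a homogeneous slab, the Boltzmann balance reduces to exponential attenuation, which a Monte Carlo code reproduces by sampling free path lengths. All material values here are made up:

```python
import math, random

random.seed(42)

sigma_t = 0.5   # macroscopic total cross section, 1/cm (hypothetical)
L = 4.0         # slab thickness, cm (hypothetical)

# deterministic answer: uncollided transmission exp(-sigma_t * L)
analytic = math.exp(-sigma_t * L)

# Monte Carlo answer: sample exponential free paths and count particles
# whose first collision lies beyond the slab
n = 200_000
transmitted = sum(1 for _ in range(n)
                  if -math.log(1.0 - random.random()) / sigma_t > L)
monte_carlo = transmitted / n

print(analytic, monte_carlo)   # both near 0.135
```

The Monte Carlo estimate converges to the deterministic value as the sample count grows, at a statistical cost that full 3-D geometries multiply enormously, which is the trade-off the paragraph above describes.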
General relativistic radiative transfer code in rotating black hole space-time: ARTIST
NASA Astrophysics Data System (ADS)
Takahashi, Rohta; Umemura, Masayuki
2017-02-01
We present a general relativistic radiative transfer code, ARTIST (Authentic Radiative Transfer In Space-Time), a perfectly causal scheme for pursuing the propagation of radiation with absorption and scattering around a Kerr black hole. The code explicitly solves the invariant radiation intensity along null geodesics in the Kerr-Schild coordinates, and therefore properly includes light bending, Doppler boosting, frame dragging, and gravitational redshifts. A notable aspect of ARTIST is that it conserves the radiative energy with high accuracy and is not subject to numerical diffusion, since the transfer is solved on long characteristics along null geodesics. We first solve the wavefront propagation around a Kerr black hole that was originally explored by Hanni. This demonstrates repeated wavefront collisions, light bending, and causal propagation of radiation at the speed of light. We show that the decay rate of the total energy of wavefronts near a black hole is determined solely by the black hole spin in late phases, in agreement with analytic expectations. ARTIST thus correctly solves the general relativistic radiation fields until late phases, as t ≈ 90M. We also explore the effects of absorption and scattering, and apply the code to a photon wall problem and an orbiting hotspot problem. All the simulations in this study are performed in the equatorial plane around a Kerr black hole. ARTIST is a first step toward general relativistic radiation hydrodynamics.
NASA Technical Reports Server (NTRS)
Plante, I; Wu, H
2014-01-01
The code RITRACKS (Relativistic Ion Tracks) has been developed over the last few years at the NASA Johnson Space Center to simulate the effects of ionizing radiation at the microscopic scale, in order to understand the effects of space radiation at the biological level. The fundamental part of this code is the stochastic simulation of the radiation track structure of heavy ions, an important component of space radiation. The code can calculate many relevant quantities such as the radial dose and voxel dose, and may also be used to calculate the dose in spherical and cylindrical targets of various sizes. Recently, we have incorporated DNA structure and damage simulations at the molecular scale in RITRACKS. The direct effect of radiation is simulated by introducing a slight modification of the existing particle transport algorithms, using the Binary-Encounter-Bethe model of ionization cross sections for each molecular orbital of DNA. The simulation of radiation chemistry is done by a step-by-step diffusion-reaction program based on the Green's functions of the diffusion equation. This approach is also used to simulate the indirect effect of ionizing radiation on DNA. The software can be installed independently on PCs and tablets running the Windows operating system and does not require any coding by the user. It includes a Graphic User Interface (GUI) and a 3D OpenGL visualization interface. The calculations are executed simultaneously (in parallel) on multiple CPUs. The main features of the software will be presented.
User's manual for the ALS base heating prediction code, volume 2
NASA Technical Reports Server (NTRS)
Reardon, John E.; Fulton, Michael S.
1992-01-01
The Advanced Launch System (ALS) Base Heating Prediction Code is based on a generalization of first principles in the prediction of plume-induced base convective heating and plume radiation. It should be considered an approximate method for evaluating trends as a function of configuration variables, because the processes being modeled are too complex to allow an accurate generalization. The convective methodology is based upon generalizing trends from four nozzle configurations, so an extension to use the code with strap-on boosters, multiple nozzle sizes, and variations in the propellants and chamber pressure histories cannot be precisely treated. The plume radiation is more amenable to precise computer prediction, but simplified assumptions are required to model the various aspects of the candidate configurations. Perhaps the most difficult area to characterize is the variation of radiation with altitude. The theory underlying the radiation predictions is described in more detail. This report is intended to familiarize a user with the interface operation and options, to summarize the limitations and restrictions of the code, and to provide information to assist in installing the code.
Intercomparison of three microwave/infrared high resolution line-by-line radiative transfer codes
NASA Astrophysics Data System (ADS)
Schreier, Franz; Milz, Mathias; Buehler, Stefan A.; von Clarmann, Thomas
2018-05-01
An intercomparison of three line-by-line (lbl) codes developed independently for atmospheric radiative transfer and remote sensing - ARTS, GARLIC, and KOPRA - has been performed for a thermal infrared nadir sounding application assuming a HIRS-like (High resolution Infrared Radiation Sounder) setup. Radiances for the 19 HIRS infrared channels and a set of 42 atmospheric profiles from the "Garand dataset" have been computed. The mutual differences of the equivalent brightness temperatures are presented and possible causes of disagreement are discussed. In particular, the impact of path integration schemes and atmospheric layer discretization is assessed. When the continuum absorption contribution is ignored because of the different implementations, residuals are generally in the sub-Kelvin range and smaller than 0.1 K for some window channels (for all atmospheric models and lbl codes). None of the three codes turned out to be perfect for all channels and atmospheres. Remaining discrepancies are attributed to different lbl optimization techniques. Lbl codes seem to have reached such maturity in the implementation of radiative transfer that the choice of the underlying physical models (line shape models, continua, etc.) becomes increasingly relevant.
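The "equivalent brightness temperature" compared above is the temperature obtained by inverting the Planck function at the channel wavenumber. A minimal monochromatic sketch (ignoring the channel spectral response function that a real HIRS comparison would convolve over):

```python
import math

# physical constants (SI)
h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(wn, T):
    """Planck radiance at wavenumber wn (m^-1) and temperature T (K)."""
    return 2.0 * h * c**2 * wn**3 / math.expm1(h * c * wn / (kB * T))

def brightness_temperature(radiance, wn):
    """Invert the Planck function: the equivalent brightness temperature."""
    return h * c * wn / (kB * math.log1p(2.0 * h * c**2 * wn**3 / radiance))

wn = 70000.0                 # 700 cm^-1 expressed in m^-1 (CO2 band region)
L = planck(wn, 287.0)        # forward: radiance of a 287 K blackbody
print(brightness_temperature(L, wn))   # round trip recovers 287.0 K
```

Using `expm1`/`log1p` keeps the round trip numerically exact; sub-Kelvin residuals between codes, as reported above, correspond to small fractional radiance differences at these wavenumbers.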
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yazhou; Cheng, Xiaonong; Yen, Clive H.
Graphene cellular monolith (GCM) can serve as an excellent support for nanoparticles in widespread applications. However, it remains a great challenge to deposit nanoparticles in GCM with small size, controllable structure and composition, and high dispersion using current methods. Here we demonstrate a green, efficient, and large-scale method to address this challenge using supercritical fluid (SCF). By this method, graphene hydrogel can be transformed into GCM while being deposited with ultrafine and highly dispersive nanoparticles. Specifically, the bimetallic PtFe/GCM and trimetallic PtFeCo/GCM catalysts are successfully synthesized, and their electrocatalytic performances toward the oxygen reduction reaction (ORR) are studied. The resultant PtFe/GCM shows a significant enhancement in ORR activity, including a factor of 8.47 enhancement in mass activity (0.72 A mgPt-1) and a factor of 7.67 enhancement in specific activity (0.92 mA cm-2), compared with the commercial Pt/C catalyst (0.085 A mgPt-1, 0.12 mA cm-2). Importantly, by introducing Co, the trimetallic PtFeCo/GCM exhibits further improved ORR activities (1.28 A mgPt-1, 1.80 mA cm-2). The high ORR activity is probably attributable to the alloyed structure, ultrafine size, high dispersion, and a well-defined interface with the 3D porous graphene support.
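The quoted enhancement factors follow directly from the measured activities; a quick arithmetic check (values taken from the abstract):

```python
# ORR activities from the abstract
pt_c_mass, pt_c_spec = 0.085, 0.12   # commercial Pt/C: A/mg_Pt, mA/cm^2
ptfe_mass, ptfe_spec = 0.72, 0.92    # PtFe/GCM:        A/mg_Pt, mA/cm^2

mass_enh = ptfe_mass / pt_c_mass     # mass-activity enhancement factor
spec_enh = ptfe_spec / pt_c_spec     # specific-activity enhancement factor
print(round(mass_enh, 2), round(spec_enh, 2))   # 8.47 7.67
```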
NASA Astrophysics Data System (ADS)
Zhou, Yazhou; Cheng, Xiaonong; Yen, Clive H.; Wai, Chien M.; Wang, Chongmin; Yang, Juan; Lin, Yuehe
2017-04-01
DOE Office of Scientific and Technical Information (OSTI.GOV)
Killough, G.G.; Rohwer, P.S.
1974-03-01
INDOS1, INDOS2, and INDOS3 (the INDOS codes) are conversational FORTRAN IV programs, implemented for use in time-sharing mode on the ORNL PDP-10 system. These codes use ICRP 10/10A models to estimate the radiation dose to an organ of the body of Reference Man resulting from the ingestion or inhalation of any one of various radionuclides. Two patterns of intake are simulated: intakes at discrete times and continuous intake at a constant rate. The INDOS codes provide tabular output of dose rate and dose vs. time, graphical output of dose vs. time, and punched-card output of organ burden and dose vs. time. The models of internal dose calculation are discussed, and instructions for the use of the INDOS codes are provided. The INDOS codes are available from the Radiation Shielding Information Center, Oak Ridge National Laboratory, P. O. Box X, Oak Ridge, Tennessee 37830.
A Radiation Chemistry Code Based on the Green's Function of the Diffusion Equation
NASA Technical Reports Server (NTRS)
Plante, Ianik; Wu, Honglu
2014-01-01
Stochastic radiation track structure codes are of great interest for space radiation studies and for hadron therapy in medicine. These codes are used for many purposes, notably for microdosimetry and DNA damage studies. In the last two decades, they have also been used with the Independent Reaction Times (IRT) method in the simulation of chemical reactions, to calculate the yields of various radiolytic species produced during the radiolysis of water and in chemical dosimeters. Recently, we developed a Green's function based code to simulate reversible chemical reactions with an intermediate state, which yielded results in excellent agreement with those obtained by using the IRT method. This code was also used to simulate the interaction of particles with membrane receptors. We are in the process of including this program in the Monte Carlo track structure code Relativistic Ion Tracks (RITRACKS). This addition should greatly expand the capabilities of RITRACKS, notably for simulating DNA damage by both the direct and indirect effects.
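The basic object in such codes is the Green's function of the diffusion equation: for free 3-D diffusion it is the Gaussian probability density for a particle's displacement after a time step, which the step-by-step scheme samples. A minimal sketch checking its two defining properties, unit normalization and mean-square displacement 6Dt (the diffusion coefficient below is an assumed, water-radiolysis-scale value, not from the paper):

```python
import math

def green(r, D, t):
    """Free-space Green's function of the 3D diffusion equation (1/m^3)."""
    return (4 * math.pi * D * t) ** -1.5 * math.exp(-r * r / (4 * D * t))

D = 2.8e-9   # diffusion coefficient, m^2/s (assumed, order of aqueous species)
t = 1e-9     # 1 ns time step

# radial quadrature: total probability should be 1 and <r^2> should be 6*D*t
dr = 1e-11
norm = msd = 0.0
for i in range(1, 20000):
    r = i * dr
    w = green(r, D, t) * 4 * math.pi * r * r * dr
    norm += w
    msd += w * r * r

print(round(norm, 4))               # 1.0
print(round(msd / (6 * D * t), 4))  # 1.0
```

Sampling displacements from this density, then testing for reactions between nearby species, is the step-by-step diffusion-reaction idea the abstract describes.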
Common radiation analysis model for 75,000 pound thrust NERVA engine (1137400E)
NASA Technical Reports Server (NTRS)
Warman, E. A.; Lindsey, B. A.
1972-01-01
The mathematical model and sources of radiation used for the radiation analysis and shielding activities in support of the design of the 1137400E version of the 75,000 lb thrust NERVA engine are presented. The nuclear subsystem (NSS) and non-nuclear components are discussed. The geometrical model for the NSS is two-dimensional, as required for the DOT discrete ordinates computer code or for an azimuthally symmetric three-dimensional point kernel or Monte Carlo code. The geometrical model for the non-nuclear components is three-dimensional, in the FASTER geometry format. This geometry routine is inherent in the ANSC versions of the QAD and GGG point kernel programs and the COHORT Monte Carlo program. Data are included pertaining to a pressure-vessel surface radiation source data tape, which has been used as the basis for starting ANSC analyses with the DASH code to bridge into the COHORT Monte Carlo code using the WANL-supplied DOT angular flux leakage data. In addition to the model descriptions and sources of radiation, the methods of analysis are briefly described.
Nonequilibrium radiation and chemistry models for aerocapture vehicle flowfields
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1990-01-01
The continued development and improvement of the viscous shock layer (VSL) nonequilibrium chemistry blunt body engineering code, the incorporation in a coupled manner of radiation models into the VSL code, and the initial development of appropriate precursor models are presented.
Vector radiative transfer code SORD: Performance analysis and quick start guide
NASA Astrophysics Data System (ADS)
Korkin, Sergey; Lyapustin, Alexei; Sinyuk, Alexander; Holben, Brent; Kokhanovsky, Alexander
2017-10-01
We present a new open source polarized radiative transfer code SORD written in Fortran 90/95. SORD numerically simulates propagation of monochromatic solar radiation in a plane-parallel atmosphere over a reflecting surface using the method of successive orders of scattering (hence the name). Thermal emission is ignored. We did not improve the method in any way, but report the accuracy and runtime in 52 benchmark scenarios. This paper also serves as a quick start user's guide for the code available from ftp://maiac.gsfc.nasa.gov/pub/skorkin, from the JQSRT website, or from the corresponding (first) author.
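The method of successive orders of scattering can be illustrated in miniature: radiation scattered n times is a scattering operator K applied n times to the direct beam, and the total field is the Neumann series I = I0 + K I0 + K² I0 + ... The matrix and beam below are invented toy values (the real operator couples angles and layers), checked against the direct solution of (E - K) I = I0:

```python
def matvec(K, v):
    """Apply matrix K to vector v."""
    return [sum(k * x for k, x in zip(row, v)) for row in K]

K = [[0.20, 0.05, 0.01],
     [0.05, 0.20, 0.05],
     [0.01, 0.05, 0.20]]   # hypothetical discrete scattering operator, norm < 1
I0 = [1.0, 0.5, 0.25]      # direct (unscattered) beam

# sum scattering orders: each pass through K is one more scattering event
total, order = list(I0), list(I0)
for _ in range(100):
    order = matvec(K, order)
    total = [t + o for t, o in zip(total, order)]

# consistency check: the summed field must satisfy total = I0 + K @ total
Kt = matvec(K, total)
residual = max(abs(t - i0 - kt) for t, i0, kt in zip(total, I0, Kt))
print(residual < 1e-12)   # True
```

Convergence is geometric because each scattering order loses energy (the operator norm is below one), which is why the successive-orders approach is practical for non-conservative atmospheres.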
Shortwave and longwave radiative contributions to global warming under increasing CO2.
Donohoe, Aaron; Armour, Kyle C; Pendergrass, Angeline G; Battisti, David S
2014-11-25
In response to increasing concentrations of atmospheric CO2, high-end general circulation models (GCMs) simulate an accumulation of energy at the top of the atmosphere not through a reduction in outgoing longwave radiation (OLR)—as one might expect from greenhouse gas forcing—but through an enhancement of net absorbed solar radiation (ASR). A simple linear radiative feedback framework is used to explain this counterintuitive behavior. It is found that the timescale over which OLR returns to its initial value after a CO2 perturbation depends sensitively on the magnitude of shortwave (SW) feedbacks. If SW feedbacks are sufficiently positive, OLR recovers within merely several decades, and any subsequent global energy accumulation is because of enhanced ASR only. In the GCM mean, this OLR recovery timescale is only 20 y because of robust SW water vapor and surface albedo feedbacks. However, a large spread in the net SW feedback across models (because of clouds) produces a range of OLR responses; in those few models with a weak SW feedback, OLR takes centuries to recover, and energy accumulation is dominated by reduced OLR. Observational constraints of radiative feedbacks—from satellite radiation and surface temperature data—suggest an OLR recovery timescale of decades or less, consistent with the majority of GCMs. Altogether, these results suggest that, although greenhouse gas forcing predominantly acts to reduce OLR, the resulting global warming is likely caused by enhanced ASR.
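The linear feedback framework invoked above can be reproduced with a one-box energy balance model. All parameter values below are illustrative choices, not the paper's numbers: forcing F instantly reduces OLR, warming restores OLR at rate B and enhances ASR at rate S, and OLR returns to its initial value after a time that depends on the ratio B/S:

```python
import math

F = 3.7     # instantaneous CO2 forcing, W/m^2 (reduces OLR by F at t=0)
B = 2.2     # longwave restoring rate, W/m^2/K (warming raises OLR by B*T)
S = 1.4     # net shortwave feedback, W/m^2/K (warming raises ASR by S*T)
C = 13.3    # mixed-layer heat capacity, W yr m^-2 K^-1 (~100 m of ocean)

T, dt, t_recover = 0.0, 0.01, None
for step in range(1, 2_000_000):
    T += dt * (F - (B - S) * T) / C   # C dT/dt = F - (B - S) T
    if -F + B * T >= 0.0:             # OLR anomaly back to zero?
        t_recover = step * dt         # years until OLR recovers
        break

# analytic recovery time for this linear model: tau * ln(B/S)
tau = C / (B - S)
print(round(t_recover, 2), round(tau * math.log(B / S), 2))
```

With a positive S the model warms past the point where OLR has fully recovered, so all further energy accumulation comes through enhanced ASR; as S shrinks toward zero the recovery time tau·ln(B/S) diverges and reduced OLR dominates, the spread the abstract describes across GCMs.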
NASA Astrophysics Data System (ADS)
Kuo, C. P.; Yang, P.; Huang, X.; Feldman, D.; Flanner, M.; Kuo, C.; Mlawer, E. J.
2017-12-01
Clouds, which cover approximately 67% of the globe, serve as one of the major modulators of radiative energy on the Earth. Since rigorous radiative transfer computations including multiple scattering are costly, only absorption is considered in the longwave spectral bands in the radiation sub-models of general circulation models (GCMs). The effect of ignoring longwave scattering on flux and heating-rate simulations is quantified by using the GCM version of the Longwave Rapid Radiative Transfer Model (RRTMG_LW) with an implementation of the 16-stream Discrete Ordinates Radiative Transfer (DISORT) Program for a Multi-Layered Plane-Parallel Medium, in conjunction with the 2010 CCCM products that merge satellite observations from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO), CloudSat, the Clouds and the Earth's Radiant Energy System (CERES), and the Moderate Resolution Imaging Spectroradiometer (MODIS). One-year global simulations show that neglecting longwave scattering overestimates upward flux at the top of the atmosphere (TOA) and underestimates downward flux at the surface by approximately 2.63 and 1.15 W/m2, respectively. Furthermore, when longwave scattering is included in the simulations, the tropopause is cooled by approximately 0.018 K/day and the surface is heated by approximately 0.028 K/day. As a result, the radiative effects of ignoring longwave scattering and of doubling CO2 are comparable in magnitude.
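The heating rates quoted above come from the net-flux divergence across each model layer. A minimal sketch of that conversion in pressure coordinates, H = (g/c_p)·ΔF/Δp, with made-up layer numbers:

```python
# converting a net-flux convergence across a pressure layer into a heating
# rate (all numbers below are illustrative, not from the paper)
g, cp = 9.81, 1004.0   # gravity (m/s^2), dry-air specific heat (J/kg/K)
dF = 0.5               # net flux convergence across the layer, W/m^2
dp = 5000.0            # layer thickness in pressure, Pa

heating_K_per_s = g / cp * dF / dp
heating_K_per_day = heating_K_per_s * 86400.0
print(round(heating_K_per_day, 4))   # 0.0844 K/day
```

Flux perturbations of a few W/m2 spread over deep layers thus translate into the few-hundredths-of-a-K/day heating-rate changes reported above.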
Kinetic neoclassical calculations of impurity radiation profiles
Stotler, D. P.; Battaglia, D. J.; Hager, R.; ...
2016-12-30
Modifications of the drift-kinetic transport code XGC0 to include the transport, ionization, and recombination of individual charge states, as well as the associated radiation, are described. The code is first applied to a simulation of an NSTX H-mode discharge with carbon impurity to demonstrate the approach to coronal equilibrium. The effects of neoclassical phenomena on the radiated power profile are examined sequentially through the activation of individual physics modules in the code. Orbit squeezing and the neoclassical inward pinch result in increased radiation for temperatures above a few hundred eV and changes to the ratios of charge state emissions at a given electron temperature. Analogous simulations with a neon impurity yield qualitatively similar results.
Comparison of Stopping Power and Range Databases for Radiation Transport Study
NASA Technical Reports Server (NTRS)
Tai, H.; Bichsel, Hans; Wilson, John W.; Shinn, Judy L.; Cucinotta, Francis A.; Badavi, Francis F.
1997-01-01
The codes used to calculate stopping power and range for the space radiation shielding program at the Langley Research Center are based on the work of Ziegler but with modifications. As more experience is gained from experiments at heavy ion accelerators, prudence dictates a reevaluation of the current databases. Numerical values of stopping power and range calculated from four different codes currently in use are presented for selected ions and materials in the energy domain suitable for space radiation transport. This study has found that, for most collision systems and for intermediate particle energies, the codes generally agree to within 1 percent. However, greater discrepancies are seen for heavy systems, especially at low particle energies.
Group Combustion Module (GCM) Installation
2016-09-27
ISS049e011638 (09/27/2016) --- Expedition 49 crewmember Takuya Onishi of JAXA works on the setup of the Group Combustion Module (GCM) inside the Japanese Experiment Module. The GCM will be used to house the Group Combustion experiment from the Japan Aerospace Exploration Agency (JAXA) to test a theory that fuel sprays change from partial to group combustion as flames spread across a cloud of droplets.
Projecting the spatiotemporal carbon dynamics of the Greater Yellowstone Ecosystem from 2006 to 2050
Huang, Shengli; Liu, Shuguang; Liu, Jinxun; Dahal, Devendra; Young, Claudia; Davis, Brian; Sohl, Terry L.; Hawbaker, Todd J.; Sleeter, Benjamin M.; Zhu, Zhiliang
2015-01-01
Background: Climate change and the concurrent change in wildfire events and land use comprehensively affect carbon dynamics in both spatial and temporal dimensions. The purpose of this study was to project the spatial and temporal aspects of carbon storage in the Greater Yellowstone Ecosystem (GYE) under these changes from 2006 to 2050. We selected three emission scenarios and produced simulations with the CENTURY model using three General Circulation Models (GCMs) for each scenario. We also incorporated projected land use change and fire occurrence into the carbon accounting. Results: The three GCMs showed increases in maximum and minimum temperature, but precipitation projections varied among GCMs. Total ecosystem carbon increased steadily from 7,942 gC/m2 in 2006 to 10,234 gC/m2 in 2050 with an annual rate increase of 53 gC/m2/year. About 56.6% and 27% of the increasing rate was attributed to total live carbon and total soil carbon, respectively. Net Primary Production (NPP) increased slightly from 260 gC/m2/year in 2006 to 310 gC/m2/year in 2050 with an annual rate increase of 1.22 gC/m2/year. Forest clear-cutting and fires resulted in direct carbon removal; however, the rate was low at 2.44 gC/m2/year during 2006-2050. The area of clear-cutting and wildfires in the GYE would account for 10.87% of total forested area during 2006-2050, but the predictive simulations demonstrated different spatial distributions in national forests and national parks. Conclusions: The GYE is a carbon sink during 2006-2050. The capability of vegetation is almost double that of soil in terms of sequestering extra carbon. Clear-cutting and wildfires in GYE will affect 10.87% of total forested area, but direct carbon removal from clear-cutting and fires is 109.6 gC/m2, which accounts for only 1.2% of the mean ecosystem carbon level of 9,056 gC/m2, and thus is not significant.
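The accumulation rates above can be checked directly from the endpoint stocks reported in the abstract; a quick back-of-the-envelope sketch:

```python
def mean_annual_rate(c_start, c_end, year_start, year_end):
    """Mean annual change in carbon stock, in gC/m2/year."""
    return (c_end - c_start) / (year_end - year_start)

# Total ecosystem carbon: 7,942 gC/m2 in 2006 to 10,234 gC/m2 in 2050
rate = mean_annual_rate(7942, 10234, 2006, 2050)  # ~52, close to the reported 53

# Direct removal by clear-cutting and fire (109.6 gC/m2) as a share of
# the mean ecosystem carbon level (9,056 gC/m2)
removal_share = 109.6 / 9056  # ~1.2%, matching the abstract
```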
1984-12-01
radiation lengths. The off-axis dose in silicon was calculated using the electron/photon transport code CYLTRAN and measured using thermoluminescent dosimeters (TLDs) for various path lengths out to 2 radiation lengths. Calculations were performed on a CDC-7600 computer at Los Alamos National Laboratory and measurements
NASA Technical Reports Server (NTRS)
Howe, John T.; Yang, Lily
1991-01-01
A heat-shield-material response code is developed that predicts the transient performance of a material subjected to the combined convective and radiative heating associated with hypervelocity flight. The code is dynamically interactive with the heating from a transient flow field, including the effects of material ablation on flow-field behavior. It accommodates finite, time-variable material thickness, internal material phase change, wavelength-dependent radiative properties, and temperature-dependent thermal, physical, and radiative properties. The equations of radiative transfer are solved within the material and are coupled to the energy equation, which contains the radiative flux divergence in addition to the usual energy terms.
NASA Astrophysics Data System (ADS)
Garratt, J. R.; Prata, A. J.
1996-03-01
Previous work suggests that general circulation (global climate) models have excess net radiation at land surfaces, apparently due to overestimates in downwelling shortwave flux and underestimates in upwelling longwave flux. Part of this excess, however, may be compensated for by an underestimate in downwelling longwave flux. Long-term observations of the downwelling longwave component at several land stations in Europe, the United States, Australia, and Antarctica suggest that climate models (four are used, as in previous studies) underestimate this flux component on an annual basis by up to 10 W m-2, though with low statistical significance. It is probable that the known underestimate in boundary-layer air temperature contributes to this, as would low model cloudiness and neglect of minor gases such as methane, nitrous oxide, and the freons. The bias in downwelling longwave flux, together with those found earlier for the downwelling shortwave and upwelling longwave fluxes, is consistent with the model bias found previously for net radiation. All annually averaged fluxes and biases are deduced for global land as a whole.
Impact of cloud timing on surface temperature and related hydroclimatic dynamics
NASA Astrophysics Data System (ADS)
Porporato, A. M.; Yin, J.
2015-12-01
Cloud feedbacks have long been identified as one of the largest sources of uncertainty in climate change predictions. Differences in the spatial distribution of clouds, and their impacts on surface temperature and climate dynamics, have recently been emphasized in quasi-equilibrium General Circulation Models (GCMs). However, much less attention has been paid to the temporal variation of cloud presence and thickness. Clouds shade the surface from solar radiation during the daytime, but they also act as a greenhouse agent, reducing the emission of longwave radiation to space at any time of day. It is therefore reasonable to expect that even small differences in the timing and thickness of clouds could result in very different predictions in GCMs. In this study, these two effects of cloud dynamics are analyzed by tracking the cloud impacts on longwave and shortwave radiation in a minimalist transient thermal balance model of the land surface. The marked changes in surface temperature due to alterations in the timing of cloud onset demonstrate that capturing the temporal variation of clouds at the sub-daily scale should be a priority in cloud parameterization schemes in GCMs.
Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson; ...
2018-06-14
Historically, radiation transport codes have treated fission emissions as uncorrelated. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of four Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also clear that, because transport is handled by MCNP®6.2 in three of the four codes, with the fourth (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the codes, as opposed to the radiation transport.
Satellite Remote Sensing of Tropical Precipitation and Ice Clouds for GCM Verification
NASA Technical Reports Server (NTRS)
Evans, K. Franklin
2001-01-01
This project, supported by the NASA New Investigator Program, has primarily been funding a graduate student, Darren McKague. Since August 1999 Darren has been working part time at Raytheon while continuing his PhD research. Darren is planning to finish his thesis work in May 2001, so some of the work described here is ongoing. The proposed research was to use GOES visible and infrared imager data and SSM/I microwave data to obtain joint distributions of cirrus cloud ice mass and precipitation for a study region in the Eastern Tropical Pacific. These joint distributions of cirrus cloud and rainfall were to be compared to those from the CSU general circulation model to evaluate the cloud microphysical and cumulus parameterizations in the GCM. Existing algorithms were to be used for the retrieval of cloud ice water path from GOES (Minnis) and rainfall from SSM/I (Wilheit). A theoretical study using radiative transfer models and realistic variations in cloud and precipitation profiles was to be used to estimate the retrieval errors. Due to the unavailability of the GOES satellite cloud retrieval algorithm from Dr. Minnis (a co-PI), there was a change in the approach and emphasis of the project. The new approach was to develop a completely new type of remote sensing algorithm: one that directly retrieves joint probability density functions (pdf's) of cloud properties from multi-dimensional histograms of satellite radiances. The usual approach is to retrieve variables for individual pixels (e.g., cloud optical depth) and then aggregate the information. Only statistical information is actually needed, however, so a more direct method is desirable. We developed forward radiative transfer models for the SSM/I and GOES channels, originally for testing the retrieval algorithms.
The visible and near-infrared ice scattering information is obtained from geometric ray tracing of fractal ice crystals (Andreas Macke), while the mid-infrared and microwave scattering is computed with Mie scattering. The radiative transfer is performed with the Spherical Harmonic Discrete Ordinate Method (developed by the PI), and infrared molecular absorption is included with the correlated k-distribution method. The SHDOM radiances have been validated by comparison to version 2 of DISORT (the community "standard" discrete-ordinates radiative transfer model); we use SHDOM because it is computationally more efficient.
Power Balance and Impurity Studies in TCS
NASA Astrophysics Data System (ADS)
Grossnickle, J. A.; Pietrzyk, Z. A.; Vlases, G. C.
2003-10-01
A "zero-dimension" power balance model was developed based on measurements of absorbed power, radiated power, absolute D_α, temperature, and density for the TCS device. Radiation was determined to be the dominant source of power loss for medium to high density plasmas. The total radiated power was strongly correlated with the Oxygen line radiation. This suggests Oxygen is the dominant radiating species, which was confirmed by doping studies. These also extrapolate to a Carbon content below 1.5%. Determining the source of the impurities is an important question that must be answered for the TCS upgrade. Preliminary indications are that the primary sources of Oxygen are the stainless steel end cones. A Ti gettering system is being installed to reduce this Oxygen source. A field line code has been developed for use in tracking where open field lines terminate on the walls. Output from this code is also used to generate grids for an impurity tracking code.
NASA Technical Reports Server (NTRS)
Jouzel, Jean; Koster, R. D.; Suozzo, R. J.; Russell, G. L.; White, J. W. C.
1991-01-01
Incorporating the full geochemical cycles of stable water isotopes (HDO and H2O-18) into an atmospheric general circulation model (GCM) allows an improved understanding of global delta-D and delta-O-18 distributions and might even allow an analysis of the GCM's hydrological cycle. A detailed sensitivity analysis using the NASA/Goddard Institute for Space Studies (GISS) model II GCM is presented that examines the nature of isotope modeling. The tests indicate that delta-D and delta-O-18 values in nonpolar regions are not strongly sensitive to details in the model precipitation parameterizations. This result, while implying that isotope modeling has limited potential use in the calibration of GCM convection schemes, also suggests that certain necessarily arbitrary aspects of these schemes are adequate for many isotope studies. Deuterium excess, a second-order variable, does show some sensitivity to precipitation parameterization and thus may be more useful for GCM calibration.
Estimates of runoff using water-balance and atmospheric general circulation models
Wolock, D.M.; McCabe, G.J.
1999-01-01
The effects of potential climate change on mean annual runoff in the conterminous United States (U.S.) are examined using a simple water-balance model and output from two atmospheric general circulation models (GCMs). The two GCMs are from the Canadian Centre for Climate Prediction and Analysis (CCC) and the Hadley Centre for Climate Prediction and Research (HAD). In general, the CCC GCM climate results in decreases in runoff for the conterminous U.S., and the HAD GCM climate produces increases in runoff. These estimated changes in runoff primarily are the result of estimated changes in precipitation. The changes in mean annual runoff, however, mostly are smaller than the decade-to-decade variability in GCM-based mean annual runoff and errors in GCM-based runoff. The differences in simulated runoff between the two GCMs, together with decade-to-decade variability and errors in GCM-based runoff, cause the estimates of changes in runoff to be uncertain and unreliable.
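A "simple water-balance model" of the kind described here typically routes monthly precipitation through a soil-moisture store. The sketch below is a generic illustration with assumed parameter values, not the authors' model.

```python
def annual_runoff(precip_mm, pet_mm, capacity_mm=150.0):
    """Toy monthly bucket water balance: precipitation fills a soil
    store of fixed capacity, actual evapotranspiration is limited by
    potential ET and available water, and any surplus above capacity
    becomes runoff.  All quantities in mm."""
    storage = capacity_mm  # start the year with a full store
    runoff = 0.0
    for p, pet in zip(precip_mm, pet_mm):
        storage += p
        aet = min(pet, storage)    # ET limited by available water
        storage -= aet
        if storage > capacity_mm:  # surplus leaves as runoff
            runoff += storage - capacity_mm
            storage = capacity_mm
    return runoff

# The sign of the runoff change tracks the sign of the precipitation
# change, consistent with the drier (CCC) versus wetter (HAD) contrast:
base = annual_runoff([100.0] * 12, [50.0] * 12)
wetter = annual_runoff([110.0] * 12, [50.0] * 12)
```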
Controlling Energy Radiations of Electromagnetic Waves via Frequency Coding Metamaterials
Wu, Haotian; Liu, Shuo; Wan, Xiang; Zhang, Lei; Wang, Dan; Li, Lianlin
2017-01-01
Metamaterials are artificial structures composed of subwavelength unit cells to control electromagnetic (EM) waves. The spatial coding representation of a metamaterial has the ability to describe the material in a digital way. Spatial coding metamaterials are typically constructed from unit cells that have similar shapes with fixed functionality. Here, the concept of frequency coding metamaterial is proposed, which achieves different controls of EM energy radiations with a fixed spatial coding pattern when the frequency changes. In this case, not only the phase responses of the unit cells but also their phase sensitivities must be considered. Owing to the different frequency sensitivities of unit cells, two units with the same phase response at the initial frequency may have different phase responses at a higher frequency. To describe the frequency coding property of a unit cell, digitalized frequency sensitivity is proposed, in which the units are encoded with digits "0" and "1" to represent low and high phase sensitivities, respectively. By this means, two degrees of freedom, spatial coding and frequency coding, are obtained to control the EM energy radiations by a new class of frequency-spatial coding metamaterials. The above concepts and physical phenomena are confirmed by numerical simulations and experiments. PMID:28932671
Thermosphere-Ionosphere-Mesosphere Modeling Using the TIME-GCM
2014-09-30
respectively. The CCM3 is the NCAR Community Climate Model, Version 3.6, a GCM of the troposphere and stratosphere. All models include self-consistent...middle atmosphere version of the NCAR Community Climate Model, (2) the NCAR TIME-GCM, and (3) the Model for Ozone and Related Chemical Tracers (MOZART)...troposphere, but the impacts of such events extend well into the mesosphere. The coupled NCAR thermosphere-ionosphere-mesosphere-electrodynamics general
Projecting climate change impacts on hydrology: the potential role of daily GCM output
NASA Astrophysics Data System (ADS)
Maurer, E. P.; Hidalgo, H. G.; Das, T.; Dettinger, M. D.; Cayan, D.
2008-12-01
A primary challenge facing resource managers in accommodating climate change is determining the range and uncertainty in regional and local climate projections. This is especially important for assessing changes in extreme events, which will drive many of the more severe impacts of a changed climate. Since global climate models (GCMs) produce output at a spatial scale incompatible with local impact assessment, different techniques have evolved to downscale GCM output so locally important climate features are expressed in the projections. We compared skill and hydrologic projections using two statistical downscaling methods and a distributed hydrology model. The downscaling methods are the constructed analogues (CA) and the bias correction and spatial downscaling (BCSD). CA uses daily GCM output, and can thus capture GCM projections for changing extreme event occurrence, while BCSD uses monthly output and statistically generates historical daily sequences. We evaluate the hydrologic impacts projected using downscaled climate (from the NCEP/NCAR reanalysis as a surrogate GCM) for the late 20th century with both methods, comparing skill in projecting soil moisture, snow pack, and streamflow at key locations in the Western United States. We include an assessment of a new method for correcting for GCM biases in a hybrid method combining the most important characteristics of both methods.
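The bias-correction step of BCSD is commonly described as empirical quantile mapping between the GCM's historical climatology and observations. Below is a minimal sketch of that idea; it is not the authors' implementation, and real applications map values month by month for each grid cell.

```python
def quantile_map(value, model_hist, obs_hist):
    """Empirical quantile mapping: locate `value` in the sorted model
    climatology, then return the observed value at the same empirical
    quantile.  Both histories are equal-length samples."""
    m = sorted(model_hist)
    o = sorted(obs_hist)
    # empirical quantile of `value` within the model history
    rank = sum(1 for x in m if x <= value)
    q = min(max(rank - 1, 0), len(m) - 1) / (len(m) - 1)
    # observed value at that quantile
    return o[round(q * (len(o) - 1))]

# A model that runs uniformly 2 units too high is corrected back:
corrected = quantile_map(4.0, model_hist=[2, 3, 4, 5, 6],
                         obs_hist=[0, 1, 2, 3, 4])
```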
Tropical cloud feedbacks and natural variability of climate
NASA Technical Reports Server (NTRS)
Miller, R. L.; Del Genio, A. D.
1994-01-01
Simulations of natural variability by two general circulation models (GCMs) are examined. One GCM is a sector model, allowing relatively rapid integration without simplification of the model physics, which would potentially exclude mechanisms of variability. Two mechanisms are found in which tropical surface temperature and sea surface temperature (SST) vary on interannual and longer timescales. Both are related to changes in cloud cover that modulate SST through the surface radiative flux. Over the equatorial ocean, SST and surface temperature vary on an interannual timescale, which is determined by the magnitude of the associated cloud cover anomalies. Over the subtropical ocean, variations in low cloud cover drive SST variations. In the sector model, the variability has no preferred timescale, but instead is characterized by a 'red' spectrum with increasing power at longer periods. In the terrestrial GCM, SST variability associated with low cloud anomalies has a decadal timescale and is the dominant form of global temperature variability. Both GCMs are coupled to a mixed layer ocean model, where dynamical heat transports are prescribed, thus filtering out El Nino-Southern Oscillation (ENSO) and thermohaline circulation variability. The occurrence of variability in the absence of dynamical ocean feedbacks suggests that climatic variability on long timescales can arise from atmospheric processes alone.
NASA Astrophysics Data System (ADS)
Deidda, Roberto; Marrocu, Marino; Pusceddu, Gabriella; Langousis, Andreas; Mascaro, Giuseppe; Caroletti, Giulio
2013-04-01
Within the activities of the EU FP7 CLIMB project (www.climb-fp7.eu), we developed downscaling procedures to reliably assess climate forcing at hydrologically relevant scales, and applied them to six representative hydrological basins located in the Mediterranean region: Riu Mannu and Noce in Italy, Chiba in Tunisia, Kocaeli in Turkey, Thau in France, and Gaza in Palestine. As a first step towards this aim, we used daily precipitation and temperature data from the gridded E-OBS project (www.ecad.eu/dailydata), as reference fields, to rank 14 Regional Climate Model (RCM) outputs from the ENSEMBLES project (http://ensembles-eu.metoffice.com). The four best performing model outputs were selected, with the additional constraint of maintaining 2 outputs obtained from running different RCMs driven by the same GCM, and 2 runs from the same RCM driven by different GCMs. For these four RCM-GCM model combinations, a set of downscaling techniques were developed and applied, for the period 1951-2100, to variables used in hydrological modeling (i.e. precipitation; mean, maximum and minimum daily temperatures; direct solar radiation, relative humidity, magnitude and direction of surface winds). The quality of the final products is discussed, together with the results obtained after applying a bias reduction procedure to daily temperature and precipitation fields.
NASA Technical Reports Server (NTRS)
Joiner, J.; Vasilkov, A. P.; Gupta, Pawan; Bhartia, P. K.; Veefkind, Pepijn; Sneep, Maarten; deHaan, Johan; Polonsky, Igor; Spurr, Robert
2011-01-01
We have developed a relatively simple scheme for simulating retrieved cloud optical centroid pressures (OCP) from satellite solar backscatter observations. We have compared simulator results with those from more detailed retrieval simulators that more fully account for the complex radiative transfer in a cloudy atmosphere. We used this fast simulator to conduct a comprehensive evaluation of cloud OCPs from the two OMI algorithms using collocated data from CloudSat and Aqua MODIS, a unique situation afforded by the A-train formation of satellites. We find that both OMI algorithms perform reasonably well and that the two algorithms agree better with each other than either does with the collocated CloudSat data. This indicates that patchy snow/ice, cloud 3D, and aerosol effects not simulated with the CloudSat data are affecting both algorithms similarly. We note that the collocation with CloudSat occurs mainly on the East side of OMI's swath. Therefore, we are not able to address cross-track biases in OMI cloud OCP retrievals. Our fast simulator may also be used to simulate cloud OCP from output generated by general circulation models (GCM) with appropriate account of cloud overlap. We have implemented such a scheme and plan to compare OMI data with GCM output in the near future.
Effects of topography on simulated net primary productivity at landscape scale.
Chen, X F; Chen, J M; An, S Q; Ju, W M
2007-11-01
Local topography significantly affects spatial variations of climatic variables and soil water movement in complex terrain. Therefore, the distribution and productivity of ecosystems are closely linked to topography. Using a coupled terrestrial carbon and hydrological model (BEPS-TerrainLab model), the topographic effects on net primary productivity (NPP) are analyzed through four modelling experiments for a 5700 km^2 area in the Baohe River basin, Shaanxi Province, northwest China. The model was able to capture 81% of the variability in NPP estimated from tree rings, with a mean relative error of 3.1%. The average NPP in 2003 for the study area was 741 gC m^-2 yr^-1 from a model run including topographic effects on the distributions of climate variables and the lateral flow of ground water. Topography has a considerable effect on NPP, which peaks near 1350 m above sea level. An elevation increase of 100 m above this level reduces the average annual NPP by about 25 gC m^-2. The terrain aspect gives rise to an NPP change of 5% for forests located below 1900 m as a result of its influence on incident solar radiation. For the whole study area, a simulation entirely excluding topographic effects on the distributions of climatic variables and ground water movement overestimated the average NPP by 5%.
Carbon Dioxide Convection in the Martian Polar Night and Its Implications for Polar Processes
NASA Technical Reports Server (NTRS)
Colaprete, A.; Haberle, R. M.
2003-01-01
Each Martian year nearly 30% of the atmosphere is exchanged with the polar ice caps. This exchange occurs through a combination of direct surface condensation and atmospheric precipitation of carbon dioxide. It has long been thought that the amount of condensation within the polar night is maintained by a balance between diabatic processes such as radiative cooling and latent heating from condensing CO2. This assumption manifests itself in Mars General Circulation Models (GCMs) in such a way as to never allow the atmospheric temperature to dip below the saturation temperature of CO2. However, observations from Mars Global Surveyor (MGS) Radio Science (RS) and the Thermal Emission Spectrometer (TES) have demonstrated this assumption to be, at best, approximate. Both RS and TES observations within the polar nights of both poles indicate substantial regions supersaturated with respect to CO2. The observed temperature profiles suggest conditionally unstable regions containing planetarily significant amounts of potential convective energy. Presented here are estimates of the total planetary inventory of convective available potential energy (CAPE) and the potential convective energy flux (PCEF). The values for CAPE and PCEF are derived from RS temperature profiles and compared to Mars GCM results using a new convective CO2 cloud model that allows for the formation of CAPE.
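CAPE in such analyses is the vertical integral of parcel buoyancy over positively buoyant layers. The sketch below shows the standard layer-sum form; the Mars polar-night profiles are illustrative, not RS retrievals.

```python
G_MARS = 3.71  # Mars surface gravity, m s^-2

def cape(z_m, t_env, t_parcel):
    """Convective available potential energy (J/kg) as a layer sum:
        CAPE = g * sum over layers of max(Tp - Te, 0) / Te * dz
    Only positively buoyant layers contribute."""
    total = 0.0
    for i in range(len(z_m) - 1):
        buoyancy = max(t_parcel[i] - t_env[i], 0.0) / t_env[i]
        total += G_MARS * buoyancy * (z_m[i + 1] - z_m[i])
    return total

# Illustrative supersaturated layer: a CO2-condensing parcel kept 2 K
# warmer than the environment by latent heating over the lowest 2 km.
value = cape([0.0, 1000.0, 2000.0],
             [145.0, 143.0, 141.0],
             [147.0, 145.0, 141.0])
```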
GCM simulations of Titan's middle and lower atmosphere and comparison to observations
NASA Astrophysics Data System (ADS)
Lora, Juan M.; Lunine, Jonathan I.; Russell, Joellen L.
2015-04-01
Simulation results are presented from a new general circulation model (GCM) of Titan, the Titan Atmospheric Model (TAM), which couples the Flexible Modeling System (FMS) spectral dynamical core to a suite of external/sub-grid-scale physics. These include a new non-gray radiative transfer module that takes advantage of recent data from Cassini-Huygens, large-scale condensation and quasi-equilibrium moist convection schemes, a surface model with "bucket" hydrology, and boundary layer turbulent diffusion. The model produces a realistic temperature structure from the surface to the lower mesosphere, including a stratopause, as well as satisfactory superrotation. The latter is shown to depend on the dynamical core's ability to build up angular momentum from surface torques. Simulated latitudinal temperature contrasts are adequate, compared to observations, and polar temperature anomalies agree with observations. In the lower atmosphere, the insolation distribution is shown to strongly impact turbulent fluxes, and surface heating is maximum at mid-latitudes. Surface liquids are unstable at mid- and low-latitudes, and quickly migrate poleward. The simulated humidity profile and distribution of surface temperatures, compared to observations, corroborate the prevalence of dry conditions at low latitudes. Polar cloud activity is well represented, though the observed mid-latitude clouds remain somewhat puzzling, and some formation alternatives are suggested.
Regional sea level variability in a high-resolution global coupled climate model
NASA Astrophysics Data System (ADS)
Palko, D.; Kirtman, B. P.
2016-12-01
The prediction of trends at regional scales is essential in order to adapt to and prepare for the effects of climate change. However, GCMs are unable to make reliable predictions at regional scales, and the prediction of local sea level trends is particularly critical. The main goal of this research is to utilize high-resolution (HR) (0.1° resolution in the ocean) coupled model runs of CCSM4 to analyze regional sea surface height (SSH) trends. Unlike typical, lower-resolution (1.0°) GCM runs, these HR runs resolve features in the ocean, like the Gulf Stream, that may have a large effect on regional sea level. We characterize the variability of regional SSH along the Atlantic coast of the US using tide gauge observations along with fixed radiative forcing runs of CCSM4 and HR interactive ensemble runs. The interactive ensemble couples an ensemble-mean atmosphere with a single ocean realization. This coupling results in a 30% decrease in the strength of the Atlantic meridional overturning circulation; the HR interactive ensemble is therefore analogous to a HR hosing experiment. By characterizing the variability in these high-resolution GCM runs and observations, we seek to understand what processes influence coastal SSH along the East Coast of the United States and to better predict future sea level rise (SLR).
Evaluating the uncertainty of predicting future climate time series at the hourly time scale
NASA Astrophysics Data System (ADS)
Caporali, E.; Fatichi, S.; Ivanov, V. Y.
2011-12-01
A stochastic downscaling methodology is developed to generate hourly, point-scale time series for several meteorological variables, such as precipitation, cloud cover, shortwave radiation, air temperature, relative humidity, wind speed, and atmospheric pressure. The methodology uses multi-model General Circulation Model (GCM) realizations and an hourly weather generator, AWE-GEN. Probabilistic descriptions of factors of change (a measure of climate change with respect to historic conditions) are computed for several climate statistics and different aggregation times using a Bayesian approach that weights the individual GCM contributions. The Monte Carlo method is applied to sample the factors of change from their respective distributions, thereby permitting the generation of time series in an ensemble fashion that reflects the uncertainty of future climate projections as well as the uncertainty of the downscaling procedure. Applications of the methodology and probabilistic expressions of certainty in reproducing future climates for the periods 2000-2009, 2046-2065, and 2081-2100, using the 1962-1992 period as the baseline, are discussed for the location of Firenze (Italy). The climate predictions for the period 2000-2009 are tested against observations, permitting assessment of the reliability and uncertainties of the methodology in reproducing statistics of meteorological variables at different time scales.
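The factor-of-change ensemble step can be sketched as follows. This is a minimal illustration, not the AWE-GEN implementation: the distributions below are invented normals standing in for the Bayesian-weighted multi-GCM posteriors, and the statistic names are placeholders.

```python
import random

# Hedged sketch: sample factors of change from per-statistic distributions
# and apply them to baseline (historic) statistics to build an ensemble.
random.seed(0)

baseline = {"mean_precip_mm_h": 0.08, "mean_temp_C": 14.6}  # invented historic stats
factor_dist = {
    "mean_precip_mm_h": (0.95, 0.10),  # multiplicative factor: (mean, std)
    "mean_temp_C": (1.8, 0.6),         # additive factor in degrees C: (mean, std)
}

def sample_future(n):
    """Draw n ensemble members of future climate statistics."""
    members = []
    for _ in range(n):
        fp = random.gauss(*factor_dist["mean_precip_mm_h"])
        ft = random.gauss(*factor_dist["mean_temp_C"])
        members.append({"mean_precip_mm_h": baseline["mean_precip_mm_h"] * fp,
                        "mean_temp_C": baseline["mean_temp_C"] + ft})
    return members

ensemble = sample_future(1000)
```

Each ensemble member would then drive one stochastic weather-generator run, so the spread of the ensemble propagates the projection uncertainty into the hourly series.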
NASA Astrophysics Data System (ADS)
Montero-Martinez, M. J.; Colorado, G.; Diaz-Gutierrez, D. E.; Salinas-Prieto, J. A.
2017-12-01
It is well known that the North American Monsoon (NAM) region is already very dry and under considerable water-resource stress at multiple locations. It is notable, then, that even under these conditions the Mexican part of the NAM region is the most agriculturally productive in Mexico. It is therefore important to have realistic climate scenarios for variables such as temperature, precipitation, relative humidity, and radiation. This study tackles that problem by generating probabilistic climate scenarios using a weighted CMIP5 GCM ensemble approach based on the technique of Xu et al. (2010), itself an improvement on the better-known Reliability Ensemble Averaging algorithm of Giorgi and Mearns (2002). In addition, the individual performances of the 20-plus GCMs and of the weighted ensemble are compared against observed data (CRU TS2.1) using different metrics and Taylor diagrams. This study focuses on probabilistic results of reaching a given threshold, since such products could be of potential use for agricultural applications.
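The weighting idea can be illustrated with a toy reliability-style scheme. This is a hedged sketch in the spirit of REA-type methods, not the actual Xu et al. (2010) or Giorgi and Mearns (2002) formulas; the model values and observation are invented.

```python
# Hedged sketch: each GCM gets a weight that decays with its bias against
# observations, and the ensemble estimate is the weighted mean.
obs_mean = 25.0  # observed climatology for some grid cell (invented)
gcm_means = {"gcm_a": 24.0, "gcm_b": 27.5, "gcm_c": 21.0}  # invented model values

def reliability_weights(models, obs, eps=1e-6):
    """Inverse-bias weights, normalized to sum to one."""
    raw = {name: 1.0 / (abs(val - obs) + eps) for name, val in models.items()}
    total = sum(raw.values())
    return {name: v / total for name, v in raw.items()}

def weighted_ensemble(models, weights):
    return sum(models[m] * weights[m] for m in models)

w = reliability_weights(gcm_means, obs_mean)
est = weighted_ensemble(gcm_means, w)
```

The published schemes also factor in model convergence and apply the weights to future-change fields rather than raw climatologies; this sketch shows only the bias-weighting core.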
NASA Astrophysics Data System (ADS)
Mehrvand, Masoud; Baghanam, Aida Hosseini; Razzaghzadeh, Zahra; Nourani, Vahid
2017-04-01
Since statistical downscaling methods are the most widely used tools in hydrologic impact studies under climate change scenarios, nonlinear regression models known as Artificial Intelligence (AI)-based models, such as the Artificial Neural Network (ANN) and the Support Vector Machine (SVM), have been used to spatially downscale the precipitation outputs of Global Climate Models (GCMs). The study has been carried out using GCM and station data over GCM grid points located around the Peace-Tampa Bay watershed weather stations. Before downscaling with the AI-based model, correlation coefficients were computed between a few selected large-scale predictor variables and the local-scale predictands to select the most effective predictors. The selected predictors were then assessed considering the grid location of the site in question. To increase the accuracy of the AI-based downscaling model, pre-processing was applied to the precipitation time series. The precipitation data derived from the various GCMs were analyzed thoroughly to find the highest correlation coefficient between GCM-based historical data and station precipitation data. Both GCM and station precipitation time series were assessed by comparing means and variances over specific intervals. Results indicated a similar trend between GCM and station precipitation data; however, the station data form a non-stationary time series while the GCM data do not. Finally, the AI-based downscaling model was applied to several GCMs with the selected predictors, targeting the local precipitation time series as the predictand. The results of this step were used to produce multiple ensembles of downscaled AI-based models.
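The predictor-screening step described above amounts to ranking candidate large-scale variables by their correlation with the station predictand. A minimal sketch, with invented short series standing in for the GCM predictors and the station precipitation record:

```python
# Hedged sketch: Pearson correlation between candidate predictors and the
# station predictand; variable names and values are illustrative only.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

station_precip = [1.0, 3.0, 2.0, 5.0, 4.0]  # invented station series
predictors = {                               # invented large-scale series
    "specific_humidity": [0.9, 2.8, 2.2, 4.9, 4.1],
    "mslp": [5.0, 4.0, 4.5, 3.5, 4.2],
}
ranked = sorted(predictors,
                key=lambda p: abs(pearson(predictors[p], station_precip)),
                reverse=True)
```

In practice the ranking would be done per grid point and season before the ANN or SVM is trained on the retained predictors.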
SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations
NASA Astrophysics Data System (ADS)
Baes, M.; Camps, P.
2015-09-01
The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. In contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks into more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
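The decorator idea can be sketched in miniature. This is not SKIRT's actual C++ API; the class names, the toy clumpiness rule, and the crude bounding-box rejection sampler below are all invented for illustration of the pattern (a base model exposing a density and a random-position generator, wrapped by decorators that reuse both).

```python
import random

# Hedged sketch of a decorator-based density-model suite (illustrative only).
class PlummerSphere:
    """Basic building block: an analytical toy density model."""
    def __init__(self, scale):
        self.scale = scale
    def density(self, x, y, z):
        r2 = (x * x + y * y + z * z) / self.scale ** 2
        return (1.0 + r2) ** -2.5  # normalized so the peak value is 1
    def random_position(self, rng):
        # Crude rejection sampling in a bounding box; real codes use
        # specialized generators per geometry.
        while True:
            p = tuple(rng.uniform(-5 * self.scale, 5 * self.scale) for _ in range(3))
            if rng.random() < self.density(*p):
                return p

class ClumpyDecorator:
    """Decorator: wraps any model and rescales its smooth density (toy rule)."""
    def __init__(self, base, smooth_fraction):
        self.base, self.f = base, smooth_fraction
    def density(self, x, y, z):
        return self.f * self.base.density(x, y, z)
    def random_position(self, rng):
        return self.base.random_position(rng)  # delegate sampling to the base

rng = random.Random(42)
model = ClumpyDecorator(PlummerSphere(1.0), smooth_fraction=0.7)
p = model.random_position(rng)
```

Because decorators expose the same interface as the building blocks, they can be chained arbitrarily, which is the maintainability benefit the abstract describes.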
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee; Hu, Shaowen; Nounu, Hatem N.; Cucinotta, Francis A.
2010-01-01
The space radiation environment, particularly solar particle events (SPEs), poses the risk of acute radiation sickness (ARS) to humans, and organ doses from SPE exposure may reach critical levels during extra-vehicular activities (EVAs) or within lightly shielded spacecraft. NASA has developed an organ dose projection model using the BRYNTRN and SUMDOSE computer codes, and a probabilistic model of Acute Radiation Risk (ARR). The codes BRYNTRN and SUMDOSE, written in FORTRAN, are a baryon transport code and an output data processing code, respectively. The ARR code is written in C. The risk projection models of organ doses and ARR take the output from BRYNTRN as an input to their calculations. BRYNTRN code operation requires extensive input preparation. With a graphical user interface (GUI) to handle input and output for BRYNTRN, the response models can be connected easily and correctly to BRYNTRN in a user-friendly way. A GUI for the Acute Radiation Risk and BRYNTRN Organ Dose (ARRBOD) projection code provides seamless integration of the input and output manipulations required for operation of the ARRBOD modules: BRYNTRN, SUMDOSE, and the ARR probabilistic response model. The ARRBOD GUI is intended for mission planners, radiation shield designers, space operations in the Mission Operations Directorate (MOD), and space biophysics researchers. The ARRBOD GUI will serve as a proof-of-concept example for future integration of other human space applications risk projection models. The current version of the ARRBOD GUI is a new self-contained product and will have follow-on versions as options are added: 1) human geometries of MAX/FAX in addition to CAM/CAF; 2) shielding distributions for spacecraft, the Mars surface, and the Mars atmosphere; 3) various space environmental and biophysical models; and 4) other response models to be connected to BRYNTRN.
The major components of the overall system, the subsystem interconnections, and external interfaces are described in this report; and the ARRBOD GUI product is explained step by step in order to serve as a tutorial.
Development of a new version of the Vehicle Protection Factor Code (VPF3)
NASA Astrophysics Data System (ADS)
Jamieson, Terrance J.
1990-10-01
The Vehicle Protection Factor (VPF) Code is an engineering tool for estimating the radiation protection afforded by armoured vehicles and other structures exposed to neutron and gamma ray radiation from fission, thermonuclear, and fusion sources. A number of suggestions for modifications have been offered by users of early versions of the code. These include: implementing some of the more advanced features of the air transport rating code, ATR5, used to perform the air-over-ground radiation transport analyses; allowing the study of specific vehicle orientations within the free field; implementing an adjoint transport scheme to reduce the number of transport runs required; investigating the possibility of accelerating the transport scheme; and upgrading the computer automated design (CAD) package used by VPF. The generation of radiation free-field fluences for infinite air geometries, as required for aircraft analysis, can be accomplished by using ATR with the air-over-ground correction factors disabled. Analysis of the effects of fallout-bearing debris clouds on aircraft will require additional modelling in VPF.
Lyapustin, Alexei
2002-09-20
Results of an extensive validation study of the new radiative transfer code SHARM-3D are described. The code is designed for modeling of unpolarized monochromatic radiative transfer in the visible and near-IR spectra in the laterally uniform atmosphere over an arbitrarily inhomogeneous anisotropic surface. The surface boundary condition is periodic. The algorithm is based on an exact solution derived with the Green's function method. Several parameterizations were introduced into the algorithm to achieve superior performance. As a result, SHARM-3D is 2-3 orders of magnitude faster than the rigorous code SHDOM. It can model radiances over large surface scenes for a number of incidence-view geometries simultaneously. Extensive comparisons against SHDOM indicate that SHARM-3D has an average accuracy of better than 1%, which along with the high speed of calculations makes it a unique tool for remote-sensing applications in land surface and related atmospheric radiation studies.
Evaluation of Transport in the Lower Tropical Stratosphere in a Global Chemistry and Transport Model
NASA Technical Reports Server (NTRS)
Douglass, Anne R.; Schoeberl, Mark R.; Rood, Richard B.; Pawson, Steven
2002-01-01
A general circulation model (GCM) relies on various physical parameterizations and provides a solution to the atmospheric equations of motion. A data assimilation system (DAS) combines information from observations with a GCM forecast and produces analyzed meteorological fields that represent the observed atmospheric state. An off-line chemistry and transport model (CTM) can use winds and temperatures from either a GCM or a DAS. The latter application is in common usage for interpretation of observations from various platforms under the assumption that the DAS transport represents the actual atmospheric transport. Here we compare the transport produced by a DAS with that produced by the particular GCM that is combined with observations to produce the analyzed fields. We focus on transport in the tropics and middle latitudes by comparing the age-of-air inferred from observations of SF6 and CO2 with the age-of-air calculated using GCM fields and DAS fields. We also compare observations of ozone, total reactive nitrogen, and methane with results from the two simulations. These comparisons show that DAS fields produce rapid upward tropical transport and excessive mixing between the tropics and middle latitudes. The unrealistic transport produced by the DAS fields may be due to implicit forcing that is required by the assimilation process when there is bias between the GCM forecast and the observations that are combined to produce the analyzed fields. For example, the GCM does not produce a quasi-biennial oscillation (QBO). The QBO is present in the analyzed fields because it is present in the observations, and systematic implicit forcing is required by the DAS. Any systematic bias between observations and the GCM forecast used to produce the DAS analysis is likely to corrupt the transport produced by the analyzed fields.
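For a monotonically increasing tracer like SF6, the mean age-of-air can be estimated roughly as the time lag between the stratospheric mixing ratio and the tropospheric reference series. A minimal sketch of that lag calculation, with an invented linear SF6 record (real analyses use observed records and account for the width of the age spectrum):

```python
# Hedged sketch: lag-based age-of-air estimate from a tracer time series.
def age_of_air(t_ref, c_ref, c_strat):
    """Interpolate when the reference series equaled c_strat; age = now - then."""
    for i in range(len(t_ref) - 1):
        if c_ref[i] <= c_strat <= c_ref[i + 1]:
            frac = (c_strat - c_ref[i]) / (c_ref[i + 1] - c_ref[i])
            t_entry = t_ref[i] + frac * (t_ref[i + 1] - t_ref[i])
            return t_ref[-1] - t_entry
    return None  # value outside the observed record

years = [1990, 1991, 1992, 1993, 1994, 1995]      # invented record
sf6_troposphere = [2.4, 2.7, 3.0, 3.3, 3.6, 3.9]  # ppt, linear growth (toy)
age = age_of_air(years, sf6_troposphere, 2.85)    # hypothetical stratospheric value
```

Comparing such observation-derived ages with ages from idealized-tracer runs in the GCM and DAS is the diagnostic the abstract describes.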
Effects of badminton and ice hockey on bone mass in young males: a 12-year follow-up.
Tervo, Taru; Nordström, Peter; Nordström, Anna
2010-09-01
The purpose of the present study was to investigate the influence of different types of weight-bearing physical activity on bone mineral density (BMD, g/cm^2) and to evaluate any residual benefits after the active sports career. Beginning at 17 years of age, BMD was measured 5 times over 12 years in 19 badminton players, 48 ice hockey players, and 25 controls. During the active career, badminton players gained significantly more BMD compared to ice hockey players at all sites: the femoral neck (mean difference (Δ) 0.06 g/cm^2, p=0.04), humerus (Δ 0.06 g/cm^2, p=0.01), lumbar spine (Δ 0.08 g/cm^2, p=0.01), and legs (Δ 0.05 g/cm^2, p=0.003), after adjusting for age at baseline, changes in weight, height, and active years. BMD gains in badminton players were also higher compared to controls at all sites (Δ 0.06-0.17 g/cm^2, p<0.01 for all). Eleven badminton players and 37 ice hockey players stopped their active career a mean of 6 years before the final follow-up. Both these groups lost significantly more BMD at the femoral neck and lumbar spine compared to the control group (Δ 0.05-0.12 g/cm^2, p<0.05 for all). At the final follow-up, badminton players had significantly higher BMD of the femoral neck, humerus, lumbar spine, and legs (Δ 0.08-0.20 g/cm^2, p<0.01 for all) than both ice hockey players and controls. In summary, the present study suggests that badminton is a more osteogenic sport than ice hockey. The BMD benefits from previous training were partially sustained with reduced activity. Copyright 2010 Elsevier Inc. All rights reserved.
[Warm needling combined with element calcium for postmenopausal osteoporosis].
Cai, Guowei; Li, Jing; Xue, Yuazhi; Li, Gang; Wu, Man; Li, Pengfei
2015-09-01
To observe the clinical effectiveness of warm needling combined with element calcium on postmenopausal osteoporosis, and to explore its action mechanism. Eighty-five postmenopausal patients were randomly divided into an observation group (43 cases) and a control group (42 cases). Both groups were treated with oral administration of caltrate-D tablets, 600 mg per day, once a day before sleep for one year. Patients in the observation group were additionally treated with warm needling at Dazhu (BL 11), Shenshu (BL 23), and Xuanzhong (GB 39), once a day; 30 days of treatment were taken as a course, and a total of 4 courses were given with an interval of 60 days between courses. The bone mineral density (BMD) of the lumbar vertebra and hip joint, and the levels of serum bone gla protein (S-BGP) and hydroxyproline/creatinine (Hyp/Cr), were observed before and after treatment in the two groups. (1) After treatment, the BMD in the observation group was significantly increased [lumbar vertebra (0.811±0.024) g/cm^2 vs (0.892±0.019) g/cm^2, femoral neck (0.512±0.014) g/cm^2 vs (0.554±0.015) g/cm^2, femoral trochanter (0.716±0.028) g/cm^2 vs (0.769±0.026) g/cm^2, Ward's trigonum (0.590±0.013) g/cm^2 vs (0.660±0.017) g/cm^2, all P<0.05]; the improvement in the observation group was more significant than that in the control group (all P<0.05). (2) After treatment, the indices of bone metabolism in the control group were increased, and the serum S-BGP and Hyp/Cr in the control group were higher than those in the observation group (both P<0.05). The effect of warm needling combined with element calcium on postmenopausal osteoporosis is significant, and is likely achieved by reducing the bone metabolism of postmenopausal patients.
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
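The first technique listed, replacing linear searches with binary versions, is the kind of change that pays off in inner loops such as sorted-grid lookups. A hedged sketch of the idea (the energy grid below is invented and the original modification was in FORTRAN, not Python):

```python
import bisect

# Hedged sketch: locating the interval of a sorted grid (e.g. a cross-section
# energy grid) by linear scan vs. binary search; both must agree.
energy_grid = [0.01, 0.1, 0.5, 1.0, 5.0, 10.0, 20.0]  # MeV, invented values

def find_bin_linear(grid, e):
    """O(n) scan, as in the original code."""
    for i in range(len(grid) - 1):
        if grid[i] <= e < grid[i + 1]:
            return i
    raise ValueError("energy outside grid")

def find_bin_binary(grid, e):
    """O(log n) replacement using binary search."""
    i = bisect.bisect_right(grid, e) - 1
    if 0 <= i < len(grid) - 1:
        return i
    raise ValueError("energy outside grid")
```

The requirement stated in the abstract, that results stay identical, corresponds here to the two functions returning the same bin for every query.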
HBOI Underwater Imaging and Communication Research - Phase 1
2012-04-19
validation of one-way pulse stretching radiative transfer code The objective was to develop and validate time-resolved radiative transfer models that...and validation of one-way pulse stretching radiative transfer code The models were subjected to a series of validation experiments over 12.5 meter...about the theoretical basis of the model together with validation results can be found in Dalgleish et al. (2010). Forward scattering Mueller
A multigroup radiation diffusion test problem: Comparison of code results with analytic solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shestakov, A I; Harte, J A; Bolstad, J H
2006-12-21
We consider a 1D, slab-symmetric test problem for the multigroup radiation diffusion and matter energy balance equations. The test simulates diffusion of energy from a hot central region. Opacities vary with the cube of the frequency and radiation emission is given by a Wien spectrum. We compare results from two LLNL codes, Raptor and Lasnex, with tabular data that define the analytic solution.
Possibility of weather and climate change by active experiments
NASA Astrophysics Data System (ADS)
Avakyan, Sergey; Voronin, Nikolai; Troitsky, Arkadil; Chernouss, Sergey
The possibility of anonymous remote impact on weather and climatic characteristics has been permanently discussed over the last decade, despite the fact that a UN Convention has forbidden the use of weather as a weapon since the 1970s. For example, Ross N. Hoffman proposed to control weather conditions with a direct flux of microwave radiation from space; this flux could affect water vapor in the troposphere. The development of an optically thin cirrus cloud is an especially promising situation, because even the formation of an aeroplane cirrus track can stimulate the disturbance necessary for the development of an initial cyclone stage. Our studies confirmed the results of NIRFI experiments on the sporadic appearance of microwave radiation of ionospheric origin during periods of solar flares and geomagnetic storms, and also during operation of the "Sura" ionospheric heating facility. Such microwave radiation also occurs when precipitation of particles from the radiation belts is stimulated by powerful (~1 MW) navigation transmitters at frequencies of ~5-22 kHz. This effect was discovered in measurements by the Intercosmos satellite Bulgaria-1300 in 1982, and was recently confirmed by measurements from the DEMETER spacecraft. Leningrad State University measurements in 1990-1991, at an altitude of about 2100 m, proved the impact of microwave radiation from solar radio bursts on the amount of water vapor in the upper-troposphere column: 25-40% of the vapour was involved in the formation of clusters, which decreased atmospheric transparency. Papers of the State Optical Institute (2008) proposed to account for electron precipitation from the radiation belts stimulated over powerful radio transmitters (registered by the DEMETER spacecraft) as an additional source of microwave radiation of the ionosphere. This source can participate in the condensation-cluster mechanism of atmospheric transparency changes in the same way as a natural geomagnetic storm.
Grach et al. also recorded a microwave ionospheric disturbance stimulated by HF heating in an experiment at the "Sura" facility even earlier (2002); this led to the appearance of Rydberg states excited by accelerated-electron impact. In a special experiment at the "Sura" facility to study the cluster-condensation mechanism, Troitskii et al. found, at the radiometric sensitivity threshold of 0.006 g/cm^2, a decrease in the tropospheric water vapor content of 0.05 g/cm^2 against a total natural content of 1.8-2.1 g/cm^2. These reductions were observed almost simultaneously with the operation of the facility, with a time delay of about 1 minute. It should be noted that the heating power was 20 times less than the maximal power reached in such facilities. Extending the experimental possibilities for studying clustering in the troposphere by ionospheric microwave radiation (SPbSU) can supposedly give the same result as an active impact on the ionosphere by heating facilities and powerful transmitters. We believe that the described effects contribute to changes of climatic characteristics: cloud formation, cyclogenesis, temperature anomalies, and precipitation. This follows from the analysis of correlations between cloud cover, temperature, precipitation, and solar-geomagnetic activity over secular and annual (2-5 year) scales. The authors propose to use an optical method for detecting emissions of atomic oxygen in those electronic transitions between Rydberg states whose wavelengths are located in the atmospheric spectral windows in the visible and IR ranges. This will be a test of the contribution of Rydberg excitation processes to the formation of the microwave flux in active effects on the ionosphere.
The corresponding lines in the visible region of the spectrum, for low-lying Rydberg levels (with principal quantum number n of about 10), are in the blue region: 448.4 nm (the 11d-3p electronic transition), 452.3 nm (10d-3p), and 457.7 nm (9d-3p). Application of the optical recording channel in active experiments (i.e., at fixed space-time artificial ionospheric disturbances) will allow us: to confirm experimentally the Rydberg channel of generating microwave fluxes from the ionosphere under its perturbations; and to offer remote monitoring as international control of sources of artificial influence on weather and climatic characteristics.
Arking, A.; Ridgeway, B.; Clough, T.; Iacono, M.; Fomin, B.; Trotsenko, A.; Freidenreich, S.; Schwarzkopf, D.
1994-01-01
The Intercomparison of Radiation Codes in Climate Models (ICRCCM) study was launched under the auspices of the World Meteorological Organization and with the support of the U.S. Department of Energy to document differences in results obtained with various radiation codes and radiation parameterizations in general circulation models (GCMs). ICRCCM produced benchmark longwave line-by-line (LBL) fluxes that may be compared against each other and against models of lower spectral resolution. During ICRCCM, infrared fluxes and cooling rates for several standard model atmospheres with varying concentrations of water vapor, carbon dioxide, and ozone were calculated with LBL methods at resolutions of 0.01 cm^-1 or higher. For comparison with other models, values were summed over the IR spectrum and given at intervals of 5 or 10 cm^-1. This archive contains fluxes for ICRCCM-prescribed clear-sky cases. Radiative flux and cooling-rate profiles are given for specified atmospheric profiles of temperature, water vapor, and ozone mixing ratios. The archive contains 328 files, including spectral summaries, formatted data files, and a variety of programs (i.e., C-shell scripts, FORTRAN codes, and IDL programs) to read, reformat, and display data. Collectively, these files require approximately 59 MB of disk space.
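The coarsening step described, summing 0.01 cm^-1 line-by-line fluxes into 5 or 10 cm^-1 intervals, is a simple spectral binning. A hedged sketch with a toy flat spectrum (the values are invented; the archive's actual files carry real LBL fluxes):

```python
# Hedged sketch: sum high-resolution spectral fluxes into coarse intervals.
def bin_spectrum(wavenumbers, fluxes, width):
    """Sum fluxes into intervals of the given width (cm^-1), keyed by left edge."""
    bins = {}
    for wn, f in zip(wavenumbers, fluxes):
        key = int(wn // width) * width
        bins[key] = bins.get(key, 0.0) + f
    return bins

wn = [100.0 + 0.01 * i for i in range(2000)]  # 100-120 cm^-1 at 0.01 cm^-1 steps
fx = [0.001] * 2000                           # flat toy spectrum, per-line flux
coarse = bin_spectrum(wn, fx, 10.0)           # two 10 cm^-1 intervals
```

Binning to a common coarse grid is what makes the LBL benchmarks directly comparable with lower-resolution band models.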
ASTRORAY: General relativistic polarized radiative transfer code
NASA Astrophysics Data System (ADS)
Shcherbakov, Roman V.
2014-07-01
ASTRORAY employs a method of ray tracing and performs polarized radiative transfer of (cyclo-)synchrotron radiation. The radiative transfer is conducted in curved space-time near rotating black holes described by the Kerr-Schild metric. Three-dimensional general relativistic magnetohydrodynamic (3D GRMHD) simulations, in particular those performed with variations of the HARM code, serve as input to ASTRORAY. The code has been applied to reproduce the sub-mm synchrotron bump in the spectrum of Sgr A*, and to test the detectability of quasi-periodic oscillations in its light curve. ASTRORAY can be readily applied to model radio/sub-mm polarized spectra of jets and cores of other low-luminosity active galactic nuclei. For example, ASTRORAY is uniquely suitable to self-consistently model Faraday rotation measure and circular polarization fraction in jets.
An integrated radiation physics computer code system.
NASA Technical Reports Server (NTRS)
Steyn, J. J.; Harris, D. W.
1972-01-01
An integrated computer code system for the semi-automatic and rapid analysis of experimental and analytic problems in gamma photon and fast neutron radiation physics is presented. Such problems as the design of optimum radiation shields and radioisotope power source configurations may be studied. The system codes allow for the unfolding of complex neutron and gamma photon experimental spectra. Monte Carlo and analytic techniques are used for the theoretical prediction of radiation transport. The system includes a multichannel pulse-height analyzer scintillation and semiconductor spectrometer coupled to an on-line digital computer with appropriate peripheral equipment. The system is geometry generalized as well as self-contained with respect to material nuclear cross sections and the determination of the spectrometer response functions. Input data may be either analytic or experimental.
Validity of the Aluminum Equivalent Approximation in Space Radiation Shielding
NASA Technical Reports Server (NTRS)
Badavi, Francis F.; Adams, Daniel O.; Wilson, John W.
2009-01-01
The origin of the aluminum equivalent shield approximation in space radiation analysis can be traced back to its roots in the early years of the NASA space programs (Mercury, Gemini, and Apollo), wherein the primary radiobiological concern was intense sources of ionizing radiation causing short-term effects that were thought to jeopardize the safety of the crew and hence the mission. Herein, it is shown that the aluminum equivalent shield approximation, although reasonably well suited for that time period and the application for which it was developed, is of questionable usefulness for the radiobiological concerns of routine space operations of the 21st century, which will include long stays onboard the International Space Station (ISS) and perhaps the moon. This is especially true for a risk-based protection system, as appears imminent for deep space exploration, where the long-term effects of Galactic Cosmic Ray (GCR) exposure are of primary concern. The present analysis demonstrates that sufficiently large errors in the interior particle environment of a spacecraft result from the use of the aluminum equivalent approximation, and such approximations should be avoided in future astronaut risk estimates. In this study, the aluminum equivalent approximation is evaluated as a means for estimating the particle environment within a spacecraft structure induced by the GCR radiation field. For comparison, the two extremes of the GCR environment, the 1977 solar minimum and the 2001 solar maximum, are considered. These environments are coupled to the Langley Research Center (LaRC) deterministic ionized particle transport code High charge (Z) and Energy TRaNsport (HZETRN), which propagates the GCR spectra for elements with charges (Z) in the range 1 <= Z <= 28 (H through Ni) and secondary neutrons through selected target materials.
The coupling of the GCR extremes to HZETRN allows for the examination of the induced environment within the interior of an idealized spacecraft as approximated by a spherical shell shield, and the effects of the aluminum equivalent approximation for a good polymeric shield material such as generic polyethylene (PE). The shield thickness is represented by a 25 g/cm^2 spherical shell. Although one could imagine the progression to greater thickness, the current range will be sufficient to evaluate the qualitative usefulness of the aluminum equivalent approximation. Upon establishing the inaccuracies of the aluminum equivalent approximation through numerical simulations of the GCR radiation field attenuation for PE and aluminum equivalent PE spherical shells, we further present results for a limited set of commercially available, hydrogen-rich, multifunctional polymeric constituents to assess the effect of the aluminum equivalent approximation on their radiation attenuation response as compared to the generic PE.
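The bookkeeping behind the approximation this abstract critiques can be sketched in a few lines. This is a hypothetical illustration assuming the simplest equivalence rule (equal areal density) and handbook densities; it is not the HZETRN-based transport analysis itself, which is precisely what exposes the rule's errors.

```python
# Illustrative sketch of the "aluminum equivalent" bookkeeping, assuming
# equivalence by equal areal density (g/cm^2). Densities are handbook values.
AL_DENSITY = 2.70   # g/cm^3, aluminum
PE_DENSITY = 0.94   # g/cm^3, generic polyethylene

def areal_density(thickness_cm: float, density: float) -> float:
    """Areal density (g/cm^2) of a slab of given thickness and density."""
    return thickness_cm * density

def aluminum_equivalent_thickness(areal_g_cm2: float) -> float:
    """Aluminum slab thickness (cm) with the same areal density."""
    return areal_g_cm2 / AL_DENSITY

# The 25 g/cm^2 polyethylene shell of the study corresponds to very
# different physical thicknesses of PE and of "equivalent" aluminum:
pe_thickness = 25.0 / PE_DENSITY                     # cm of polyethylene
al_thickness = aluminum_equivalent_thickness(25.0)   # cm of aluminum
```

Equal areal density says nothing about the very different nuclear fragmentation and secondary-neutron production of the two materials, which is the source of the errors the study quantifies.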
Development of a Space Radiation Monte-Carlo Computer Simulation Based on the FLUKA and ROOT Codes
NASA Technical Reports Server (NTRS)
Pinsky, L. S.; Wilson, T. L.; Ferrari, A.; Sala, Paola; Carminati, F.; Brun, R.
2001-01-01
The radiation environment in space is a complex problem to model. Trying to extrapolate the projections of that environment into all areas of the internal spacecraft geometry is even more daunting. With the support of our CERN colleagues, our research group in Houston is embarking on a project to develop a radiation transport tool that is tailored to the problem of taking the external radiation flux incident on any particular spacecraft and simulating the evolution of that flux through a geometrically accurate model of the spacecraft material. The output will be a prediction of the detailed nature of the resulting internal radiation environment within the spacecraft as well as its secondary albedo. Beyond doing the physics transport of the incident flux, the software tool we are developing will provide a self-contained, stand-alone, object-oriented analysis and visualization infrastructure. It will also include a graphical user interface and a set of input tools to facilitate the simulation of space missions in terms of nominal radiation models and mission trajectory profiles. The goal of this project is to produce a code that is considerably more accurate and user-friendly than existing Monte-Carlo-based tools for the evaluation of the space radiation environment. Furthermore, the code will be an essential complement to the currently existing analytic codes in the BRYNTRN/HZETRN family for the evaluation of radiation shielding. The code will be directly applicable to the simulation of environments in low Earth orbit, on the lunar surface, on planetary surfaces (including the Earth) and in the interplanetary medium such as on a transit to Mars (and even in the interstellar medium). The software will include modules whose underlying physics content can continue to be enhanced and updated as future data become available beyond the timeframe of the initial development now foreseen.
This future maintenance will be available from the authors of FLUKA as part of their continuing efforts to support the users of the FLUKA code within the particle physics community. In keeping with the spirit of developing an evolving physics code, we are planning, as part of this project, to participate in the efforts to validate the core FLUKA physics in ground-based accelerator test runs. The emphasis of these test runs will be the physics of greatest interest in the simulation of the space radiation environment. Such a tool will be of great value to planners, designers and operators of future space missions, as well as for the design of the vehicles and habitats to be used on such missions. It will also be of aid to future experiments of various kinds that may be affected at some level by the ambient radiation environment, or in the analysis of hybrid experiment designs that have been discussed for space-based astronomy and astrophysics. The tool will be of value to the Life Sciences personnel involved in the prediction and measurement of radiation doses experienced by the crewmembers on such missions. In addition, the tool will be of great use to the planners of experiments to measure and evaluate the space radiation environment itself. It can likewise be useful in the analysis of safe havens, hazard mitigation plans, and NASA's call for new research in composites, and to NASA engineers modeling the radiation exposure of electronic circuits. This code will provide an important complementary check on the predictions of analytic codes such as BRYNTRN/HZETRN that are presently used for many similar applications, and which have shortcomings that are more easily overcome with Monte Carlo type simulations. Finally, it is acknowledged that there are similar efforts based around the use of the GEANT4 Monte-Carlo transport code currently under development at CERN.
It is our intention to make our software modular and sufficiently flexible to allow the parallel use of either FLUKA or GEANT4 as the physics transport engine.
NASA Technical Reports Server (NTRS)
Pawson, Steven; Stolarski, Richard S.; Nielsen, J. Eric; Duncan, Bryan N.
2008-01-01
Version 1 of the Goddard Earth Observing System Chemistry-Climate Model (GEOS CCM) was used in the first CCMVal model evaluation and forms the basis for several studies of links between ozone and the circulation. That version of the CCM was based on the GEOS-4 GCM. Versions 2 and 3 of the GEOS CCM are based on the GEOS-5 GCM, which retains the "Lin-Rood" dynamical core but has a totally different set of physical parameterizations from GEOS-4. In Version 2 of the GEOS CCM the Goddard stratospheric chemistry module is retained. Differences between Versions 1 and 2 thus reflect the physics changes of the underlying GCMs. Several comparisons between these two models are made, a number of which reveal improvements in Version 2 (including a more realistic representation of the interannual variability of the Antarctic vortex). In Version 3 of the GEOS CCM, the stratospheric chemistry mechanism is replaced by the "GMI COMBO" code, which includes tropospheric chemistry and different computational approaches. An advantage of this model version is the reduction of high ozone biases that prevail at low chlorine loadings in Versions 1 and 2. This poster will compare and contrast various aspects of the three model versions that are relevant for understanding interactions between ozone and climate.
A System of Conservative Regridding for Ice-Atmosphere Coupling in a General Circulation Model (GCM)
NASA Technical Reports Server (NTRS)
Fischer, R.; Nowicki, S.; Kelley, M.; Schmidt, G. A.
2014-01-01
The method of elevation classes, in which the ice surface model is run at multiple elevations within each grid cell, has proven to be a useful way for a low-resolution atmosphere inside a general circulation model (GCM) to produce high-resolution downscaled surface mass balance fields for use in one-way studies coupling atmospheres and ice flow models. Past uses of elevation classes have failed to conserve mass and energy because the transformation used to regrid to the atmosphere was inconsistent with the transformation used to downscale to the ice model. This would cause problems for two-way coupling. A strategy that resolves this conservation issue has been designed and is presented here. The approach identifies three grids between which data must be regridded and five transformations between those grids required by a typical coupled atmosphere-ice flow model. This paper develops a theoretical framework for the problem and shows how each of these transformations may be achieved in a consistent, conservative manner. These transformations are implemented in Glint2, a library used to couple atmosphere models with ice models. Source code and documentation are available for download. Confounding real-world issues are discussed, including the use of projections for ice modeling, how to handle dynamically changing ice geometry, and modifications required for finite element ice models.
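The central requirement described above, that the same transformation conserve the regridded quantity in both directions, can be illustrated with a minimal 1-D sketch. The grids, first-order overlap weighting, and function names below are illustrative assumptions, far simpler than Glint2's spherical and projected grids, but they show why a consistent overlap matrix conserves the total.

```python
import numpy as np

def overlap_matrix(src_edges, dst_edges):
    """Overlap lengths between two 1-D grids (rows: dst cells, cols: src cells)."""
    W = np.zeros((len(dst_edges) - 1, len(src_edges) - 1))
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            lo = max(dst_edges[i], src_edges[j])
            hi = min(dst_edges[i + 1], src_edges[j + 1])
            W[i, j] = max(0.0, hi - lo)
    return W

def regrid_conservative(field, src_edges, dst_edges):
    """First-order conservative remap of cell-mean values.

    Each source cell contributes mass in proportion to its overlap with
    the destination cell; dividing by the destination cell width recovers
    a cell mean, so the grid-total quantity is unchanged.
    """
    W = overlap_matrix(src_edges, dst_edges)
    return (W @ np.asarray(field, float)) / np.diff(dst_edges)
```

Because the same matrix W (or its transpose, with the appropriate width normalization) can serve both regridding directions, the downscaling and upscaling steps stay mutually consistent, which is the property the paper's five transformations are designed to preserve.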
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klasky, Marc Louis; Myers, Steven Charles; James, Michael R.
To facilitate the timely execution of System Threat Reviews (STRs) for DNDO, and also to develop a methodology for performing STRs, LANL performed comparisons of several radiation transport codes (MCNP, GADRAS, and Gamma-Designer) that have been previously utilized to compute radiation signatures. While each of these codes has strengths, it is of paramount interest to determine the limitations of each of the respective codes and also to identify the most time-efficient means by which to produce computational results, given the large number of parametric cases that are anticipated in performing STRs. These comparisons serve to identify regions of applicability for each code and provide estimates of uncertainty that may be anticipated. Furthermore, while performing these comparisons, the sensitivity of the results to modeling assumptions was also examined. These investigations enable the creation of the LANL methodology for performing STRs. Given the wide variety of radiation test sources, scenarios, and detectors, LANL calculated comparisons of the following parameters: decay data, multiplicity, device (n,γ) leakages, and radiation transport through representative scenes and shielding. This investigation was performed to understand potential limitations of utilizing specific codes for different aspects of the STR challenges.
MODTRAN6: a major upgrade of the MODTRAN radiative transfer code
NASA Astrophysics Data System (ADS)
Berk, Alexander; Conforti, Patrick; Kennett, Rosemary; Perkins, Timothy; Hawes, Frederick; van den Bosch, Jeannette
2014-06-01
The MODTRAN6 radiative transfer (RT) code is a major advancement over earlier versions of the MODTRAN atmospheric transmittance and radiance model. This version of the code incorporates modern software architecture including an application programming interface, enhanced physics features including a line-by-line algorithm, a supplementary physics toolkit, and new documentation. The application programming interface has been developed for ease of integration into user applications. The MODTRAN code has been restructured towards a modular, object-oriented architecture to simplify upgrades as well as facilitate integration with other developers' codes. MODTRAN now includes a line-by-line algorithm for high resolution RT calculations as well as coupling to optical scattering codes for easy implementation of custom aerosols and clouds.
NASA Astrophysics Data System (ADS)
Grosskopf, M. J.; Drake, R. P.; Trantham, M. R.; Kuranz, C. C.; Keiter, P. A.; Rutter, E. M.; Sweeney, R. M.; Malamud, G.
2012-10-01
The radiation hydrodynamics code developed by the Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan has been used to model experimental designs for high-energy-density physics campaigns on OMEGA and other high-energy laser facilities. This code is an Eulerian, block-adaptive AMR hydrodynamics code with implicit multigroup radiation transport and electron heat conduction. CRASH model results have shown good agreement with experimental results from a variety of applications, including radiative shock, Kelvin-Helmholtz, and Rayleigh-Taylor experiments on the OMEGA laser, as well as laser-driven ablative plumes in experiments by the Astrophysical Collisionless Shocks Experiments with Lasers (ACSEL) collaboration. We report a series of results with the CRASH code in support of design work for upcoming high-energy-density physics experiments, as well as comparison between existing experimental data and simulation results. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DE-FC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.
A Radiation Solver for the National Combustion Code
NASA Technical Reports Server (NTRS)
Sockol, Peter M.
2015-01-01
A methodology is given that converts an existing finite volume radiative transfer method that requires input of local absorption coefficients to one that can treat a mixture of combustion gases and compute the coefficients on the fly from the local mixture properties. The full-spectrum k-distribution method is used to transform the radiative transfer equation (RTE) to an alternate wave number variable, g. The coefficients in the transformed equation are calculated at discrete temperatures and participating species mole fractions that span the values of the problem for each value of g. These results are stored in a table, and interpolation is used to find the coefficients at every cell in the field. Finally, the transformed RTE is solved for each g, and Gaussian quadrature is used to find the radiant heat flux throughout the field. The present implementation is in an existing Cartesian/cylindrical grid radiative transfer code, and the local mixture properties are given by a solution of the National Combustion Code (NCC) on the same grid. Based on this work, the intention is to apply this method to an existing unstructured grid radiation code, which can then be coupled directly to NCC.
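The reordering step at the heart of the full-spectrum k-distribution method can be sketched in a few lines: spectral absorption coefficients are sorted into a smooth k(g) curve and the band transmissivity is evaluated by quadrature over g. This is an illustrative toy, with midpoint weights standing in for the Gaussian quadrature named in the abstract and made-up absorption data.

```python
import numpy as np

def k_distribution(k_spectral, n_g=8):
    """Reorder spectral absorption coefficients into a smooth k(g) curve.

    Sorting turns an erratic spectrum into a monotone function of the
    cumulative fraction g, which a few quadrature points can resolve.
    """
    k_sorted = np.sort(np.asarray(k_spectral, float))
    g = (np.arange(len(k_sorted)) + 0.5) / len(k_sorted)
    g_quad = (np.arange(n_g) + 0.5) / n_g          # midpoint abscissae
    return np.interp(g_quad, g, k_sorted), np.full(n_g, 1.0 / n_g)

def transmissivity(k_spectral, path):
    """Band transmissivity from the k-distribution vs. line-by-line."""
    k_g, w = k_distribution(k_spectral, n_g=len(k_spectral))
    t_kdist = np.sum(w * np.exp(-k_g * path))
    t_lbl = np.mean(np.exp(-np.asarray(k_spectral) * path))
    return t_kdist, t_lbl
```

With as many quadrature points as spectral samples the two answers coincide exactly; the practical payoff is that far fewer points on the smooth k(g) curve suffice, which is what makes the tabulate-and-interpolate scheme in the abstract affordable.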
Kotchenova, Svetlana Y; Vermote, Eric F
2007-07-10
This is the second part of the validation effort of the recently developed vector version of the 6S (Second Simulation of a Satellite Signal in the Solar Spectrum) radiative transfer code (6SV1), primarily used for the calculation of look-up tables in the Moderate Resolution Imaging Spectroradiometer (MODIS) atmospheric correction algorithm. The 6SV1 code was tested against a Monte Carlo code and Coulson's tabulated values for molecular and aerosol atmospheres bounded by different Lambertian and anisotropic surfaces. The code was also tested in scalar mode against the scalar code SHARM to resolve the previous 6S accuracy issues in the case of an anisotropic surface. All test cases were characterized by good agreement between the 6SV1 and the other codes: The overall relative error did not exceed 0.8%. The study also showed that ignoring the effects of radiation polarization in the atmosphere led to large errors in the simulated top-of-atmosphere reflectances: The maximum observed error was approximately 7.2% for both Lambertian and anisotropic surfaces.
Modeling radiation belt dynamics using a 3-D layer method code
NASA Astrophysics Data System (ADS)
Wang, C.; Ma, Q.; Tao, X.; Zhang, Y.; Teng, S.; Albert, J. M.; Chan, A. A.; Li, W.; Ni, B.; Lu, Q.; Wang, S.
2017-08-01
A new 3-D diffusion code using a recently published layer method has been developed to analyze radiation belt electron dynamics. The code guarantees the positivity of the solution even when mixed diffusion terms are included. Unlike most of the previous codes, our 3-D code is developed directly in equatorial pitch angle (α0), momentum (p), and L shell coordinates; this eliminates the need to transform back and forth between (α0,p) coordinates and adiabatic invariant coordinates. Using (α0,p,L) is also convenient for direct comparison with satellite data. The new code has been validated by various numerical tests, and we apply the 3-D code to model the rapid electron flux enhancement following the geomagnetic storm on 17 March 2013, which is one of the Geospace Environment Modeling Focus Group challenge events. An event-specific global chorus wave model, an AL-dependent statistical plasmaspheric hiss wave model, and a recently published radial diffusion coefficient formula from Time History of Events and Macroscale Interactions during Substorms (THEMIS) statistics are used. The simulation results show good agreement with satellite observations, in general, supporting the scenario that the rapid enhancement of radiation belt electron flux for this event results from an increased level of the seed population by radial diffusion, with subsequent acceleration by chorus waves. Our results prove that the layer method can be readily used to model global radiation belt dynamics in three dimensions.
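For orientation, the kind of equation such codes solve can be illustrated with one explicit finite-difference step of the classical 1-D radial (L-shell) diffusion operator. This is emphatically not the layer method of the paper (whose point is handling mixed 3-D diffusion terms while guaranteeing positivity); it is just a hypothetical minimal sketch of the radial term.

```python
import numpy as np

def radial_diffusion_step(f, L, D_LL, dt):
    """One explicit step of df/dt = L^2 d/dL (D_LL / L^2 * df/dL).

    Uniform L grid, fluxes evaluated at cell faces, Dirichlet boundaries
    (end values held fixed). Illustrative only; stability requires a
    sufficiently small dt.
    """
    L, D_LL = np.asarray(L, float), np.asarray(D_LL, float)
    dL = L[1] - L[0]
    D_face = 0.5 * (D_LL[1:] + D_LL[:-1])
    L_face = 0.5 * (L[1:] + L[:-1])
    flux = D_face / L_face**2 * (f[1:] - f[:-1]) / dL   # face fluxes
    f_new = f.copy()
    f_new[1:-1] += dt * L[1:-1]**2 * (flux[1:] - flux[:-1]) / dL
    return f_new
```

A constant phase-space density is a steady state of this operator (all face fluxes vanish), which makes a convenient sanity check for any implementation.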
NASA Astrophysics Data System (ADS)
Kotchenova, Svetlana Y.; Vermote, Eric F.; Matarrese, Raffaella; Klemm, Frank J., Jr.
2006-09-01
A vector version of the 6S (Second Simulation of a Satellite Signal in the Solar Spectrum) radiative transfer code (6SV1), which enables accounting for radiation polarization, has been developed and validated against a Monte Carlo code, Coulson's tabulated values, and MOBY (Marine Optical Buoy System) water-leaving reflectance measurements. The developed code was also tested against the scalar codes SHARM, DISORT, and MODTRAN to evaluate its performance in scalar mode and the influence of polarization. The obtained results have shown a good agreement of 0.7% in comparison with the Monte Carlo code, 0.2% for Coulson's tabulated values, and 0.001-0.002 for the 400-550 nm region for the MOBY reflectances. Ignoring the effects of polarization led to large errors in calculated top-of-atmosphere reflectances: more than 10% for a molecular atmosphere and up to 5% for an aerosol atmosphere. This new version of 6S is intended to replace the previous scalar version used for calculation of lookup tables in the MODIS (Moderate Resolution Imaging Spectroradiometer) atmospheric correction algorithm.
Decadal Variations in Surface Solar Radiation
NASA Astrophysics Data System (ADS)
Wild, M.
2007-05-01
Satellite estimates provide some information on the amount of solar radiation absorbed by the planet back to the 1980s. The amount of solar radiation reaching the Earth's surface can be traced further back in time, until the 1960s at widespread locations and into the first half of the 20th century at selected sites. These surface sites suggest significant decadal variations in solar radiation incident at the surface, with indications of widespread dimming from the 1960s up to the mid 1980s and a recovery thereafter. Indications of changes in surface solar radiation may also be seen in observational records of diurnal temperature range, which provide better global coverage than the radiation measurements. Trends in diurnal temperature ranges over global land surfaces show, after decades of decline, a distinct tendency to level off since the mid 1980s. This provides further support for a significant shift in surface solar radiation during the 1980s. There is evidence that the changes in surface solar radiation are linked to associated changes in atmospheric aerosol. Variations in scattering sulfur and absorbing black carbon aerosols are in line with the variations in surface solar radiation. This suggests that at least a part of the variations in surface solar radiation should also be seen in the clear-sky planetary albedo. Model simulations with a GCM that includes a sophisticated interactive treatment of aerosols and their emission histories (ECHAM5-HAM) can be used to address this issue. The model is shown to be capable of reproducing the reversal from dimming to brightening under cloud-free conditions in many parts of the world, in line with observational evidence. Associated changes can also be seen in the clear-sky planetary albedo, albeit of smaller magnitude.
Applications and Improvement of a Coupled, Global and Cloud-Resolving Modeling System
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Chern, J.; Atlas, R.
2005-01-01
Recently Grabowski (2001) and Khairoutdinov and Randall (2001) have proposed the use of 2D CRMs (cloud-resolving models) as a "super parameterization" [or multi-scale modeling framework (MMF)] to represent cloud processes within atmospheric general circulation models (GCMs). In the MMF, a fine-resolution 2D CRM takes the place of the single-column parameterization used in conventional GCMs. A prototype Goddard MMF based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite volume general circulation model (fvGCM) is now being developed. The prototype includes the fvGCM run at 2.5° x 2° horizontal resolution with 32 vertical layers from the surface to 1 mb and the 2D (x-z) GCE using 64 horizontal and 32 vertical grid points with 4 km horizontal resolution and a cyclic lateral boundary. The time step for the 2D GCE would be 15 seconds, and the fvGCM-GCE coupling frequency would be 30 minutes (i.e., the fvGCM physical time step). We have successfully developed an fvGCM-GCE coupler for this prototype. Because the vertical coordinate of the fvGCM (a terrain-following floating Lagrangian coordinate) is different from that of the GCE (a z coordinate), vertical interpolations between the two coordinates are needed in the coupler. In interpolating fields from the GCE to the fvGCM, we use an existing fvGCM finite-volume piecewise parabolic mapping (PPM) algorithm, which conserves the mass, momentum, and total energy. A new finite-volume PPM algorithm, which conserves the mass, momentum, and moist static energy in the z coordinate, is being developed for interpolating fields from the fvGCM to the GCE. In the meeting, we will discuss the major differences between the two MMFs (i.e., the CSU MMF and the Goddard MMF). We will also present performance and critical issues related to the MMFs.
In addition, we will present multi-dimensional cloud datasets (i.e., a cloud data library) generated by the Goddard MMF that will be provided to the global modeling community to help improve the representation and performance of moist processes in climate models and to improve our understanding of cloud processes globally (the software tools needed to produce cloud statistics and to identify various types of clouds and cloud systems from both high-resolution satellite and model data will be also presented).
User's manual for the Heat Pipe Space Radiator design and analysis Code (HEPSPARC)
NASA Technical Reports Server (NTRS)
Hainley, Donald C.
1991-01-01
A heat pipe space radiator code (HEPSPARC) was written for the NASA Lewis Research Center and is used for the design and analysis of a radiator that is constructed from a pumped fluid loop that transfers heat to the evaporator section of heat pipes. This manual is designed to familiarize the user with this new code and to serve as a reference for its use. This manual documents the completed work and is intended to be the first step towards verification of the HEPSPARC code. Details are furnished to provide a description of all the requirements and variables used in the design and analysis of a combined pumped loop/heat pipe radiator system. A description of the subroutines used in the program is furnished for those interested in understanding its detailed workings.
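The basic sizing relation underlying any space radiator design code is the Stefan-Boltzmann rejection law. The one-function sketch below is illustrative only, not HEPSPARC's actual model, which additionally resolves the pumped loop, heat pipe evaporator/condenser behavior, and fin conduction.

```python
STEFAN_BOLTZMANN = 5.670374419e-8  # W m^-2 K^-4

def radiator_heat_rejection(area_m2, emissivity, t_surface_k, t_sink_k=0.0):
    """Net power (W) radiated by a panel of given area and emissivity.

    t_sink_k is the effective radiative sink temperature of the
    environment (0 K for deep space, higher in Earth orbit).
    """
    return (emissivity * STEFAN_BOLTZMANN * area_m2
            * (t_surface_k**4 - t_sink_k**4))
```

The strong T^4 dependence is why codes like HEPSPARC iterate on the coupled fluid-loop and surface temperature distribution rather than assuming a single panel temperature.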
NASA Astrophysics Data System (ADS)
Remillard, J.
2015-12-01
Two low-cloud periods from the CAP-MBL deployment of the ARM Mobile Facility at the Azores are selected through a cluster analysis of ISCCP cloud property matrices, so as to represent two low-cloud weather states that the GISS GCM severely underpredicts not only in that region but also globally. The two cases represent (1) shallow cumulus clouds occurring in a cold-air outbreak behind a cold front, and (2) stratocumulus clouds occurring when the region was dominated by a high-pressure system. Observations and MERRA reanalysis are used to derive specifications used for large-eddy simulations (LES) and single-column model (SCM) simulations. The LES captures the major differences in horizontal structure between the two low-cloud fields, but there are unconstrained uncertainties in cloud microphysics and challenges in reproducing W-band Doppler radar moments. The SCM run on the vertical grid used for CMIP-5 runs of the GCM does a poor job of representing the shallow cumulus case and is unable to maintain an overcast deck in the stratocumulus case, providing some clues regarding problems with low-cloud representation in the GCM. SCM sensitivity tests with a finer vertical grid in the boundary layer show substantial improvement in the representation of cloud amount for both cases. GCM simulations with CMIP-5 versus finer vertical gridding in the boundary layer are compared with observations. The adoption of a two-moment cloud microphysics scheme in the GCM is also tested in this framework. The methodology followed in this study, with the process-based examination of different time and space scales in both models and observations, represents a prototype for GCM cloud parameterization improvements.
NASA Technical Reports Server (NTRS)
Ruane, Alex C.; Mcdermid, Sonali P.
2017-01-01
We present the Representative Temperature and Precipitation (T&P) GCM Subsetting Approach developed within the Agricultural Model Intercomparison and Improvement Project (AgMIP) to select a practical subset of global climate models (GCMs) for regional integrated assessment of climate impacts when resource limitations do not permit the full ensemble of GCMs to be evaluated given the need to also focus on impacts sector and economics models. Subsetting inherently leads to a loss of information but can free up resources to explore important uncertainties in the integrated assessment that would otherwise be prohibitive. The Representative T&P GCM Subsetting Approach identifies five individual GCMs that capture a profile of the full ensemble of temperature and precipitation change within the growing season while maintaining information about the probability that basic classes of climate changes (relatively cool/wet, cool/dry, middle, hot/wet, and hot/dry) are projected in the full GCM ensemble. We demonstrate the selection methodology for maize impacts in Ames, Iowa, and discuss limitations and situations when additional information may be required to select representative GCMs. We then classify 29 GCMs over all land areas to identify regions and seasons with characteristic diagonal skewness related to surface moisture as well as extreme skewness connected to snow-albedo feedbacks and GCM uncertainty. Finally, we employ this basic approach to recognize that GCM projections demonstrate coherence across space, time, and greenhouse gas concentration pathway. The Representative T&P GCM Subsetting Approach provides a quantitative basis for the determination of useful GCM subsets, provides a practical and coherent approach where previous assessments selected solely on availability of scenarios, and may be extended for application to a range of scales and sectoral impacts.
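The classification step of the subsetting approach can be sketched as follows. The function name is hypothetical and the thresholds (temperature terciles, precipitation median) are illustrative stand-ins for the percentile targets of the published method, but the five-class scheme mirrors the cool/wet, cool/dry, middle, hot/wet, hot/dry partition described above.

```python
import numpy as np

def classify_gcms(dT, dP):
    """Assign each GCM to one of five T&P classes from its growing-season
    temperature change dT and precipitation change dP.

    Illustrative thresholds: ensemble terciles split cool/middle/hot,
    the ensemble median of dP splits wet from dry.
    """
    dT, dP = np.asarray(dT, float), np.asarray(dP, float)
    t_lo, t_hi = np.percentile(dT, [33, 67])
    p_med = np.percentile(dP, 50)
    classes = []
    for t, p in zip(dT, dP):
        if t <= t_lo:
            classes.append("cool/wet" if p >= p_med else "cool/dry")
        elif t >= t_hi:
            classes.append("hot/wet" if p >= p_med else "hot/dry")
        else:
            classes.append("middle")
    return classes
```

Picking one GCM per occupied class then yields the five-member representative subset while the class populations preserve the ensemble's probability information.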
The Tropical Upper Troposphere and Lower Stratosphere in the GEOS-2 GCM
NASA Technical Reports Server (NTRS)
Pawson, S.; Takacs, L.; Molod, A.; Nebuda, S.; Chen, M.; Rood, R.; Read, W. L.; Fiorino, M.
1999-01-01
The structure of the tropical upper troposphere and lower stratosphere in the GEOS-2 General Circulation Model (GCM) is discussed. The emphasis of this study is on the realism of monthly-mean temperature and water vapor distributions in the model, compared to reasonable observational estimates. It is shown that although the zonal-mean temperature is in good agreement with observations, the GCM supports an excessive zonal asymmetry near the tropopause compared to the ECMWF Reanalyses. In reality there is a QBO-related variability in the zonally averaged lower stratospheric temperature that is not captured by the model. The observed upper tropospheric temperature and humidity fields show variations related to those in the sea surface temperature, which are not incorporated in the GCM; nevertheless, there is some interannual variability in the GCM, indicating a component arising from internal processes. The model is too moist in the middle troposphere (500 hPa) but too dry in the upper troposphere, suggesting that there is too little vertical transport or too much drying in the GCM. Transport into the stratosphere shows a pronounced annual cycle, with drier air entering the tropical stratosphere when the tropopause is coldest in northern winter; while the alternating dry and moist air masses can be traced ascending through the tropical lower stratosphere, the progression of the anomalies is too rapid.
NASA Astrophysics Data System (ADS)
Mehrotra, Rajeshwar; Sharma, Ashish
2012-12-01
The quality of the absolute estimates of general circulation models (GCMs) calls into question the direct use of GCM outputs for climate change impact assessment studies, particularly at regional scales. Statistical correction of GCM output is often necessary when significant systematic biases occur between the modeled output and observations. A common procedure is to correct the GCM output by removing the systematic biases in low-order moments relative to observations or to reanalysis data at daily, monthly, or seasonal timescales. In this paper, we present an extension of a recently published nested bias correction (NBC) technique to correct for the low- as well as higher-order moment biases in the GCM-derived variables across selected multiple timescales. The proposed recursive nested bias correction (RNBC) approach offers an improved basis for applying bias correction at multiple timescales over the original NBC procedure. The method ensures that the bias-corrected series exhibits improvements that are consistently spread over all of the timescales considered. Different variations of the approach, starting from the standard NBC to the more complex recursive alternatives, are tested to assess their impacts on a range of GCM-simulated atmospheric variables of interest in downscaling applications related to hydrology and water resources. Results of the study suggest that RNBCs with three to five iterations are the most effective in removing distributional and persistence-related biases across the timescales considered.
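The elementary building block that NBC applies in a nested fashion at each timescale is a moment correction: the modeled series is rescaled so its low-order moments match observations. A minimal single-timescale sketch (mean and standard deviation only; the published technique nests this across daily, monthly, and annual scales and also addresses persistence):

```python
import numpy as np

def moment_correct(model, obs):
    """Rescale a modeled series so its mean and standard deviation
    match an observed series (single-timescale moment correction)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return (model - model.mean()) / model.std() * obs.std() + obs.mean()
```

Applying this independently at each timescale is what breaks conservation between scales; the nested (and here, recursive) formulation exists precisely to keep corrections at one scale from undoing those at another.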
Regional climate model downscaling may improve the prediction of alien plant species distributions
NASA Astrophysics Data System (ADS)
Liu, Shuyan; Liang, Xin-Zhong; Gao, Wei; Stohlgren, Thomas J.
2014-12-01
Distributions of invasive species are commonly predicted with species distribution models that build upon the statistical relationships between observed species presence data and climate data. We used field observations, climate station data, and Maximum Entropy species distribution models for 13 invasive plant species in the United States, and then compared the models with inputs from a General Circulation Model (hereafter GCM-based models) and a downscaled Regional Climate Model (hereafter, RCM-based models). We also compared species distributions based on either GCM-based or RCM-based models for the present (1990-1999) to the future (2046-2055). RCM-based species distribution models replicated observed distributions remarkably better than GCM-based models for all invasive species under the current climate. This was shown for the presence locations of the species, and by using four common statistical metrics to compare modeled distributions. For two widespread invasive taxa (Bromus tectorum or cheatgrass, and Tamarix spp. or tamarisk), GCM-based models failed miserably to reproduce observed species distributions. In contrast, RCM-based species distribution models closely matched observations. Future species distributions may be significantly affected by using GCM-based inputs. Because invasive plant species often show high resilience and low rates of local extinction, RCM-based species distribution models may perform better than GCM-based species distribution models for planning containment programs for invasive species.
NASA Astrophysics Data System (ADS)
Samaniego, Luis; Kumar, Rohini; Pechlivanidis, Illias; Breuer, Lutz; Wortmann, Michel; Vetter, Tobias; Flörke, Martina; Chamorro, Alejandro; Schäfer, David; Shah, Harsh; Zeng, Xiaofan
2016-04-01
The quantification of the predictive uncertainty of hydrologic models and its attribution to its main sources is of particular interest in climate change studies. In recent years, a number of studies have assessed the ability of hydrologic models (HMs) to reproduce extreme hydrologic events. Disentangling the overall uncertainty of streamflow (including its derived low-flow characteristics) into individual contributions stemming from forcings and model structure has also been studied. Based on recent literature, there is a controversy with respect to which source is the largest (e.g., Teng et al. 2012, Bosshard et al. 2013, Prudhomme et al. 2014). Little has been done to estimate the relative impact of the parametric uncertainty of the HMs with respect to the overall uncertainty of low-flow characteristics. The ISI-MIP2 project provides a unique opportunity to understand the propagation of forcing and model structure uncertainties into century-long time series of drought characteristics. This project defines a consistent framework to deal with compatible initial conditions for the HMs and a set of standardized historical and future forcings. Moreover, the ensemble of hydrologic model predictions varies across a broad range of climate scenarios and regions. To achieve this goal, we use six preconditioned hydrologic models (HYPE, HBV, mHM, SWIM, VIC, and WaterGAP3) set up in seven large continental river basins: Amazon, Blue Nile, Ganges, Niger, Mississippi, Rhine, and Yellow. These models are forced with bias-corrected outputs of five CMIP5 general circulation models (GCMs) under four representative concentration pathway (RCP) scenarios (i.e., 2.6, 4.5, 6.0, and 8.5 W/m^2) for the period 1971-2099. Simulated streamflow is transformed into a monthly runoff index (RI) to analyze the attribution of the GCM and HM uncertainty to drought magnitude and duration over time.
Uncertainty contributions are investigated over three periods: 2006-2035, 2036-2065, and 2070-2099. Results presented in Samaniego et al. 2015 (submitted) indicate that GCM uncertainty mostly dominates over HM uncertainty for predictions of runoff drought characteristics, irrespective of the selected RCP and region. For the mHM model in particular, GCM uncertainty always dominates over parametric uncertainty. In general, the overall uncertainty increases with time. The larger the radiative forcing of the RCP, the larger the uncertainty in drought characteristics; however, the propagation of the GCM uncertainty onto a drought characteristic depends largely upon the hydro-climatic regime. While our study emphasizes the need for multi-model ensembles for the assessment of future drought projections, the agreement between GCM forcings is still too weak to draw conclusive recommendations. References: L. Samaniego, R. Kumar, I. G. Pechlivanidis, L. Breuer, M. Wortmann, T. Vetter, M. Flörke, A. Chamorro, D. Schäfer, H. Shah, X. Zeng: Propagation of forcing and model uncertainty into hydrological drought characteristics in a multi-model century-long experiment in continental river basins. Submitted to Climatic Change in December 2015. Bosshard et al. 2013, doi:10.1029/2011WR011533. Prudhomme et al. 2014, doi:10.1073/pnas.1222473110. Teng et al. 2012, doi:10.1175/JHM-D-11-058.1.
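As an illustration only (the paper's exact RI definition is not given here), a per-calendar-month standardized runoff index with a simple drought flag could be sketched as:

```python
import numpy as np

def runoff_index(monthly_runoff):
    """Per-calendar-month z-score of runoff, a simple stand-in for a
    monthly runoff index (RI) used to track drought magnitude and
    duration; the study's actual RI formulation may differ."""
    q = np.asarray(monthly_runoff, dtype=float).reshape(-1, 12)
    return ((q - q.mean(axis=0)) / q.std(axis=0)).ravel()

# Synthetic example: 30 years of monthly runoff.
rng = np.random.default_rng(0)
runoff = rng.gamma(3.0, 10.0, size=30 * 12)
ri = runoff_index(runoff)
drought = ri < -1.0            # flag drought months
duration = int(drought.sum())  # total months in drought
```

Drought magnitude for an event would then be the accumulated RI deficit over consecutive flagged months, and duration their count.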
Faden, R R; Lederer, S E; Moreno, J D
1996-11-27
The Advisory Committee on Human Radiation Experiments (ACHRE), established to review allegations of abuses of human subjects in federally sponsored radiation research, was charged with identifying appropriate standards to evaluate the ethics of cold war radiation experiments. One central question for ACHRE was to determine what role, if any, the Nuremberg Code played in the norms and practices of US medical researchers. Based on the evidence from ACHRE's Ethics Oral History Project and extensive archival research, we conclude that the Code, at the time it was promulgated, had little effect on mainstream medical researchers engaged in human subjects research. Although some clinical investigators raised questions about the conduct of research involving human beings, the medical profession did not pursue this issue until the 1960s.
Space shuttle rendezvous, radiation and reentry analysis code
NASA Technical Reports Server (NTRS)
Mcglathery, D. M.
1973-01-01
A preliminary space shuttle mission design and analysis tool is reported, emphasizing versatility, flexibility, and user interaction through the use of a relatively small computer (IBM-7044). The Space Shuttle Rendezvous, Radiation and Reentry Analysis Code is used to perform mission and space radiation environmental analyses for four typical space shuttle missions. Also included is a version of the proposed Apollo/Soyuz rendezvous and docking test mission. Tangential-steering, circle-to-circle, low-thrust tug orbit raising and the effects of the trapped radiation environment on trajectory shaping due to solar electric power losses are also features of this mission analysis code. The computational results include a parametric study on single-impulse versus double-impulse deorbiting for relatively low space shuttle orbits as well as some definitive data on the magnetically trapped protons and electrons encountered on a particular mission.
Sensitivity of CO2 Simulation in a GCM to the Convective Transport Algorithms
NASA Technical Reports Server (NTRS)
Zhu, Z.; Pawson, S.; Collatz, G. J.; Gregg, W. W.; Kawa, S. R.; Baker, D.; Ott, L.
2014-01-01
Convection plays an important role in the transport of heat, moisture, and trace gases. In this study, we simulated CO2 concentrations with an atmospheric general circulation model (GCM). Three different convective transport algorithms were used. One is a modified Arakawa-Schubert scheme that was native to the GCM; two others, used in two off-line chemical transport models (CTMs), were added to the GCM here for comparison purposes. Advanced CO2 surface fluxes were used for the simulations. The results were compared to a large set of CO2 observations. We find that the simulation results are sensitive to the convective transport algorithms. Overall, the three simulations are quite realistic and similar to each other in the remote marine regions, but are significantly different in some land regions with strong fluxes, such as the Amazon and Siberia, during the convection seasons. Large biases against CO2 measurements are found in these regions in the control run, which uses the original GCM. The simulation with the simple diffusive algorithm is better. The difference between the two simulations is related to their very different convective transport speeds.
A Goddard Multi-Scale Modeling System with Unified Physics
NASA Technical Reports Server (NTRS)
Tao, W.K.; Anderson, D.; Atlas, R.; Chern, J.; Houser, P.; Hou, A.; Lang, S.; Lau, W.; Peters-Lidard, C.; Kakar, R.;
2008-01-01
Numerical cloud resolving models (CRMs), which are based on the non-hydrostatic equations of motion, have been extensively applied to cloud-scale and mesoscale processes during the past four decades. Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that CRMs agree with observations in simulating various types of clouds and cloud systems from different geographic locations. Cloud resolving models now provide statistical information useful for developing more realistic physically based parameterizations for climate models and numerical weather prediction models. Numerical Weather Prediction (NWP) and regional-scale models are also expected to run at grid sizes similar to those of cloud resolving models through nesting techniques. Current and future NASA satellite programs can provide cloud, precipitation, aerosol, and other data at very fine spatial and temporal scales. Using these satellite data to improve the understanding of the physical processes responsible for the variation in global and regional climate and hydrological systems requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF). The use of a GCM will enable global coverage, and the use of a CRM will allow for better and more sophisticated physical parameterization. NASA satellites and field campaigns can provide initial conditions as well as validation through the use of Earth satellite simulators. At Goddard, we have developed a multi-scale modeling system with unified physics. The modeling system consists of a coupled GCM-CRM (or MMF), a state-of-the-art Weather Research and Forecasting model (WRF), and a cloud-resolving model (the Goddard Cumulus Ensemble model). In these models, the same microphysical schemes (2ICE, several 3ICE), radiation (including explicitly calculated cloud optical properties), and surface models are applied. In addition, a comprehensive unified Earth satellite simulator has been developed at GSFC, which is designed to fully utilize the multi-scale modeling system. A brief review of the multi-scale modeling system with unified physics/simulator, along with examples, is presented in this article.
Computer aided radiation analysis for manned spacecraft
NASA Technical Reports Server (NTRS)
Appleby, Matthew H.; Griffin, Brand N.; Tanner, Ernest R., II; Pogue, William R.; Golightly, Michael J.
1991-01-01
To assist in the design of radiation shielding, an analytical tool is presented that can be employed in combination with CAD facilities and NASA transport codes. The nature of radiation in space is described, and the operational requirements for protection are listed as background information for the use of the technique. The method is based on the Boeing radiation exposure model (BREM) for combining NASA radiation transport codes and CAD facilities, and the output is given as contour maps of the radiation-shield distribution so that dangerous areas can be identified. Computational models are used to solve the 1D Boltzmann transport equation and determine the shielding needs for the worst-case scenario. BREM can be employed directly with the radiation computations to assess radiation protection during all phases of design, which saves time and, ultimately, spacecraft weight.
NASA Technical Reports Server (NTRS)
Ferrare, R. A.; Whiteman, D. N.; Melfi, S. H.; Evans, K. D.; Holben, B. N.
1995-01-01
The first Atmospheric Radiation Measurement (ARM) Remote Cloud Study (RCS) Intensive Operations Period (IOP) was held during April 1994 at the Southern Great Plains (SGP) Cloud and Radiation Testbed (CART) site near Lamont, Oklahoma. This experiment was conducted to evaluate and calibrate state-of-the-art, ground-based remote sensing instruments and to use the data acquired by these instruments to validate retrieval algorithms developed under the ARM program. These activities are part of an overall plan to assess general circulation model (GCM) parameterization research. Since radiation processes are one of the key areas included in this parameterization research, measurements of water vapor and aerosols are required because of the important roles these atmospheric constituents play in radiative transfer. Two instruments were deployed during this IOP to measure water vapor and aerosols and study their relationship. The NASA/Goddard Space Flight Center (GSFC) Scanning Raman Lidar (SRL) acquired water vapor and aerosol profile data during 15 nights of operations. The lidar acquired vertical profiles as well as nearly horizontal profiles directed near an instrumented 60 meter tower. Aerosol optical thickness, phase function, size distribution, and integrated water vapor were derived from measurements with a multiband automatic sun and sky scanning radiometer deployed at this site.
The effects of cloud radiative forcing on an ocean-covered planet
NASA Technical Reports Server (NTRS)
Randall, David A.
1990-01-01
Cumulus anvil clouds, whose importance has been emphasized by observationalists in recent years, exert a very powerful influence on deep tropical convection by tending to radiatively destabilize the troposphere. In addition, they radiatively warm the column in which they reside. Their strong influence on the simulated climate argues for a much more refined parameterization in the General Circulation Model (GCM). For Seaworld, the atmospheric cloud radiative forcing (ACRF) has a powerful influence on such basic climate parameters as the strength of the Hadley circulation, the existence of a single narrow InterTropical Convergence Zone (ITCZ), and the precipitable water content of the atmosphere. It seems likely, however, that in the real world the surface CRF feeds back negatively to suppress moist convection and the associated cloudiness, and so tends to counteract the effects of the ACRF. Many current climate models have fixed sea surface temperatures but variable land-surface temperatures. The tropical circulations of such models may experience a positive feedback due to ACRF over the oceans, and a negative or weak feedback due to surface CRF over the land. The overall effects of the CRF on the climate system can only be firmly established through much further analysis, which can benefit greatly from the use of a coupled ocean-atmosphere model.
Fuego/Scefire MPMD Coupling L2 Milestone Executive Summary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierce, Flint; Tencer, John; Pautz, Shawn D.
2017-09-01
This milestone campaign was focused on coupling the Sandia physics codes SIERRA low Mach module Fuego and the RAMSES Boltzmann transport code Sceptre (Scefire). Fuego enables simulation of low-Mach, turbulent, reacting, particle-laden flows on unstructured meshes using CVFEM for abnormal thermal environments throughout SNL and the larger national security community. Sceptre provides simulation of photon, neutron, and charged-particle transport on unstructured meshes using discontinuous Galerkin methods for radiation effects calculations at SNL and elsewhere. Coupling these "best of breed" codes enables efficient modeling of thermal/fluid environments with radiation transport, including fires (pool, propellant, composite) as well as those with directed radiant fluxes. We seek to improve the experience of Fuego users who require radiation transport capabilities in two ways. The first is performance. We achieve this by leveraging additional computational resources for Scefire, reducing calculation times while leaving unaffected resources for fluid physics. This approach is new to Fuego, which previously utilized the same resources for both fluid and radiation solutions. The second improvement enables new radiation capabilities, including spectral (banded) radiation, beam boundary sources, and alternate radiation solvers (e.g., Pn). This summary provides an overview of these achievements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flacco, A.; Fairchild, M.; Reiche, S.
2004-12-07
The coherent radiation emitted by electrons in high-brightness beam-based experiments is important from the viewpoints of both radiation source development and the understanding and diagnosis of the basic physical processes important in beam manipulations at high intensity. While much theoretical work has been developed to aid in calculating aspects of this class of radiation, these methods do not often produce accurate information concerning the experimentally relevant aspects of the radiation. At UCLA, we are particularly interested in coherent synchrotron radiation and the related phenomenon of coherent edge radiation, in the context of a fs-beam chicane compression experiment at the BNL ATF. To analyze this and related problems, we have developed a program that acts as an extension to the Lienard-Wiechert-based 3D simulation code TREDI, termed FieldEye. This program allows the evaluation of electromagnetic fields in the time and frequency domain in an arbitrary 2D planar detector area. We discuss here the implementation of the FieldEye code and give examples of results relevant to the case of the ATF chicane compressor experiment.
NASA Technical Reports Server (NTRS)
Gronoff, Guillaume; Norman, Ryan B.; Mertens, Christopher J.
2014-01-01
The ability to evaluate the cosmic ray environment at Mars is of interest for future manned exploration. To support exploration, tools must be developed to accurately assess the radiation environment in both free space and on planetary surfaces. The primary tool NASA uses to quantify radiation exposure behind shielding materials is the space radiation transport code, HZETRN. In order to build confidence in HZETRN, code benchmarking against Monte Carlo radiation transport codes is often used. This work compares the dose calculations at Mars by HZETRN and the Geant4 application Planetocosmics. The dose at ground and the energy deposited in the atmosphere by galactic cosmic ray protons and alpha particles has been calculated for the Curiosity landing conditions. In addition, this work has considered Solar Energetic Particle events, allowing for the comparison of varying input radiation environments. The results for protons and alpha particles show very good agreement between HZETRN and Planetocosmics.
Convection and thermal radiation analytical models applicable to a nuclear waste repository room
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, B.W.
1979-01-17
Time-dependent temperature distributions in a deep geologic nuclear waste repository have a direct impact on the physical integrity of the emplaced canisters and on the design of retrievability options. This report (1) identifies the thermodynamic properties and physical parameters of three convection regimes - forced, natural, and mixed; (2) defines the convection correlations applicable to calculating heat flow in a ventilated (forced-air) and in a nonventilated nuclear waste repository room; and (3) delineates a computer code that (a) computes and compares the floor-to-ceiling heat flow by convection and radiation, and (b) determines the nonlinear equivalent conductivity table for a repository room. (The tables permit the use of the ADINAT code to model surface-to-surface radiation and the TRUMP code to employ two different emissivity properties when modeling radiation exchange between the surfaces of two different materials.) The analysis shows that thermal radiation dominates the heat flow modes in a nuclear waste repository room.
Forward Monte Carlo Computations of Polarized Microwave Radiation
NASA Technical Reports Server (NTRS)
Battaglia, A.; Kummerow, C.
2000-01-01
Microwave radiative transfer computations continue to acquire greater importance as the emphasis in remote sensing shifts towards understanding the microphysical properties of clouds and, with these, the nonlinear relation between rainfall rates and satellite-observed radiance. A first step toward realistic radiative simulations has been the introduction of techniques capable of treating the 3-dimensional geometry generated by ever more sophisticated cloud resolving models. To date, a series of numerical codes have been developed to treat spherical and randomly oriented axisymmetric particles. Backward and backward-forward Monte Carlo methods are, indeed, efficient in this field. These methods, however, cannot deal properly with oriented particles, which seem to play an important role in polarization signatures over stratiform precipitation. Moreover, beyond the polarization channel, the next generation of fully polarimetric radiometers challenges us to better understand the behavior of the last two Stokes parameters as well. In order to solve the vector radiative transfer equation, one-dimensional numerical models have been developed. These codes, unfortunately, treat the atmosphere as horizontally homogeneous with horizontally infinite plane-parallel layers. The next development step for microwave radiative transfer codes must be fully polarized 3-D methods. Recently, a 3-D polarized radiative transfer model based on the discrete ordinate method was presented. A forward MC code was developed that treats oriented nonspherical hydrometeors, but only for plane-parallel situations.
Stationary radiation hydrodynamics of accreting magnetic white dwarfs.
NASA Astrophysics Data System (ADS)
Woelk, U.; Beuermann, K.
1996-02-01
Using an artificial viscosity, we solved the one-dimensional, time-independent, two-fluid hydrodynamic equations simultaneously with the fully frequency- and angle-dependent radiation transport in an accretion flow directed towards the surface of a magnetic white dwarf. We consider energy transfer from ions to electrons by Coulomb encounters and cooling by bremsstrahlung and by cyclotron radiation in fields between B = 5 and 70 MG. Electron and ion temperatures relax in the post-shock regime and the cooling flow settles onto the white dwarf surface. For high mass flow rates ṁ (in g/cm^2/s), cooling takes place mainly by bremsstrahlung and the solutions approach the non-magnetic case. For low ṁ and high B, cooling is dominated by cyclotron radiation, which causes the thickness of the cooling region to collapse by 1-2 orders of magnitude compared to the non-magnetic case. The electron temperature behind the shock drops from a few 10^8 to a few 10^7 K and the ratio of cyclotron to total radiative flux approaches unity. For high ṁ and low B values, bremsstrahlung dominates, but cyclotron losses can never be neglected. We find a smooth transition from particle-heated to shock-heated atmospheres in the maximum electron temperature and also in the thickness of the heated layer. With these results, the stationary radiation hydrodynamics of accreting magnetic white dwarfs with cyclotron and bremsstrahlung cooling has been solved for the whole range of observed mass flow rates and field strengths.
Shortwave and longwave radiative contributions to global warming under increasing CO2
Donohoe, Aaron; Armour, Kyle C.; Pendergrass, Angeline G.; Battisti, David S.
2014-01-01
In response to increasing concentrations of atmospheric CO2, high-end general circulation models (GCMs) simulate an accumulation of energy at the top of the atmosphere not through a reduction in outgoing longwave radiation (OLR)—as one might expect from greenhouse gas forcing—but through an enhancement of net absorbed solar radiation (ASR). A simple linear radiative feedback framework is used to explain this counterintuitive behavior. It is found that the timescale over which OLR returns to its initial value after a CO2 perturbation depends sensitively on the magnitude of shortwave (SW) feedbacks. If SW feedbacks are sufficiently positive, OLR recovers within merely several decades, and any subsequent global energy accumulation is because of enhanced ASR only. In the GCM mean, this OLR recovery timescale is only 20 y because of robust SW water vapor and surface albedo feedbacks. However, a large spread in the net SW feedback across models (because of clouds) produces a range of OLR responses; in those few models with a weak SW feedback, OLR takes centuries to recover, and energy accumulation is dominated by reduced OLR. Observational constraints of radiative feedbacks—from satellite radiation and surface temperature data—suggest an OLR recovery timescale of decades or less, consistent with the majority of GCMs. Altogether, these results suggest that, although greenhouse gas forcing predominantly acts to reduce OLR, the resulting global warming is likely caused by enhanced ASR. PMID:25385628
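The linear radiative feedback framework described above can be sketched with a one-box energy balance model. All parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

# One-box energy balance, C dT/dt = F - lam * T, with the net
# feedback lam split into longwave and shortwave parts.
C = 8.0        # heat capacity, W yr m^-2 K^-1 (illustrative)
F = 3.7        # CO2-doubling forcing, W m^-2
lam_lw = 1.8   # longwave feedback strength, W m^-2 K^-1
lam_sw = -0.6  # shortwave feedback entry; negative here = amplifying
lam = lam_lw + lam_sw                         # net feedback

t = np.linspace(0.0, 200.0, 2001)             # years
T = (F / lam) * (1.0 - np.exp(-lam * t / C))  # warming toward F/lam
olr_anom = lam_lw * T - F  # OLR anomaly: cut by forcing, restored by warming
asr_anom = -lam_sw * T     # enhanced absorbed solar radiation (ASR)
recovery_year = t[np.argmax(olr_anom >= 0.0)]  # when OLR recovers
```

With a stronger (more positive) SW feedback, lam shrinks, warming per unit forcing grows, and the OLR anomaly crosses zero sooner, after which continued energy accumulation is carried entirely by enhanced ASR, which is the paper's central point.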
2006-09-30
disturbances from the lower atmosphere and ocean affect the upper atmosphere and how this variability interacts with the variability generated by solar and...represents “ general circulation model.” Both models include self-consistent ionospheric electrodynamics, that is, a calculation of the electric fields...and currents generated by the ionospheric dynamo, and consideration of their effects on the neutral dynamics. The TIE-GCM is used for studies that
Computer model of catalytic combustion/Stirling engine heater head
NASA Technical Reports Server (NTRS)
Chu, E. K.; Chang, R. L.; Tong, H.
1981-01-01
The basic Acurex HET code was modified to analyze specific problems for Stirling engine heater head applications. Specifically, the code can model: an adiabatic catalytic monolith reactor, an externally cooled catalytic cylindrical reactor/flat plate reactor, a coannular tube radiatively cooled reactor, and a monolithic reactor radiating to upstream and downstream heat exchangers.
NASA Technical Reports Server (NTRS)
Mashnik, S. G.; Gudima, K. K.; Sierk, A. J.; Moskalenko, I. V.
2002-01-01
Space radiation shield applications and studies of cosmic ray propagation in the Galaxy require reliable cross sections to calculate spectra of secondary particles and yields of the isotopes produced in nuclear reactions induced both by particles and nuclei at energies from threshold to hundreds of GeV per nucleon. Since the data often exist in a very limited energy range or sometimes not at all, the only way to obtain an estimate of the production cross sections is to use theoretical models and codes. Recently, we have developed improved versions of the Cascade-Exciton Model (CEM) of nuclear reactions: the codes CEM97 and CEM2k for description of particle-nucleus reactions at energies up to about 5 GeV. In addition, we have developed a LANL version of the Quark-Gluon String Model (LAQGSM) to describe reactions induced both by particles and nuclei at energies up to hundreds of GeV/nucleon. We have tested and benchmarked the CEM and LAQGSM codes against a large variety of experimental data and have compared their results with predictions by other currently available models and codes. Our benchmarks show that CEM and LAQGSM codes have predictive powers no worse than other currently used codes and describe many reactions better than other codes; therefore both our codes can be used as reliable event-generators for space radiation shield and cosmic ray propagation applications. The CEM2k code is being incorporated into the transport code MCNPX (and several other transport codes), and we plan to incorporate LAQGSM into MCNPX in the near future. Here, we present the current status of the CEM2k and LAQGSM codes, and show results and applications to studies of cosmic ray propagation in the Galaxy.
Radiation in Space and Its Control of Equilibrium Temperatures in the Solar System
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.
2004-01-01
The problem of determining equilibrium temperatures for reradiating surfaces in space vacuum was analyzed and the resulting mathematical relationships were incorporated in a code to determine space sink temperatures in the solar system. A brief treatment of planetary atmospheres is also included. Temperature values obtained with the code are in good agreement with available spacecraft telemetry and meteorological measurements for Venus and Earth. The code has been used in the design of space power system radiators for future interplanetary missions.
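The core relationship behind such a code is the Stefan-Boltzmann balance between absorbed solar flux and reradiated power. A minimal sketch (the actual NASA code, with its planetary atmosphere treatment, is more involved):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0             # solar constant at 1 AU, W m^-2

def equilibrium_temp(d_au, absorptivity=1.0, emissivity=1.0, f=4.0):
    """Equilibrium temperature (K) of a surface reradiating to space
    vacuum: absorbed flux = emitted flux. f = 4 for a rapidly rotating
    sphere (emits over 4x its absorbing cross-section); f = 1 for a
    flat plate facing the Sun."""
    flux = S0 / d_au ** 2
    return (absorptivity * flux / (f * emissivity * SIGMA)) ** 0.25

earth = equilibrium_temp(1.0)    # roughly 278 K, ignoring albedo
mars = equilibrium_temp(1.524)
```

Setting the absorptivity below 1 to account for albedo, or lowering f for surfaces that radiate from only one side, reproduces the range of sink temperatures a radiator designer must consider.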
Grote, Stefan; Noeldeke, Tatjana; Blauth, Michael; Mutschler, Wolf; Bürklein, Dominik
2013-06-07
Knowledge of local bone quality is essential for surgeons to determine operation techniques. A device for intraoperative measurement of local bone quality has been developed by the AO Research Foundation (DensiProbe®). We used this device to experimentally measure the peak breakaway torque of trabecular bone in the proximal femur and correlated this with local bone mineral density (BMD) and failure load. Bone mineral density of 160 cadaver femurs was measured by ex situ dual-energy X-ray absorptiometry. The failure load of all femurs was analyzed by side-impact analysis. Femur fractures were fixed and mechanical peak torque was measured with the DensiProbe® device. Correlation coefficients and their significance were calculated using Fisher's Z-transformation. Moreover, linear regression analysis was carried out. The unpaired Student's t-test was used to assess the significance of differences. The Ward triangle region had the lowest BMD with 0.511 g/cm^2 (±0.17 g/cm^2), followed by the upper neck region with 0.546 g/cm^2 (±0.16 g/cm^2), the trochanteric region with 0.685 g/cm^2 (±0.19 g/cm^2), and the femoral neck with 0.813 g/cm^2 (±0.2 g/cm^2). Peak torque of DensiProbe® in the femoral head was 3.48 Nm (±2.34 Nm). Load to failure was 4050.2 N (±1586.7 N). The highest correlation between peak torque measured by DensiProbe® and load to failure was found in the femoral neck (r=0.64, P<0.001). The overall correlation of mechanical peak torque with T-score was r=0.60 (P<0.001). A correlation was found between mechanical peak torque, load to failure of bone, and BMD in vitro. Trabecular strength of bone and bone mineral density are different aspects of bone strength, but a correlation was found between them. Mechanical peak torque as measured may contribute additional information about bone strength, especially in perioperative testing.
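Fisher's Z-transformation, used above to test the significance of the correlations, can be sketched as follows (a standard normal approximation; the study's exact procedure may differ):

```python
import math

def fisher_z_pvalue(r, n):
    """Two-sided p-value for a Pearson correlation r from n samples,
    via Fisher's Z-transformation (normal approximation)."""
    z = math.atanh(r)            # 0.5 * ln((1 + r) / (1 - r))
    se = 1.0 / math.sqrt(n - 3)  # standard error of z
    stat = abs(z) / se
    return math.erfc(stat / math.sqrt(2.0))  # two-sided normal tail

# e.g. the reported r = 0.64 over 160 femurs
p = fisher_z_pvalue(0.64, 160)
```

The transform makes the sampling distribution of the correlation approximately normal with variance 1/(n-3), so a z-test on the transformed value gives the significance.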
NASA Astrophysics Data System (ADS)
De Geyter, G.; Baes, M.; Fritz, J.; Camps, P.
2013-02-01
We present FitSKIRT, a method to efficiently fit radiative transfer models to UV/optical images of dusty galaxies. These images have the advantage of better spatial resolution compared to FIR/submm data. FitSKIRT uses the GAlib genetic algorithm library to optimize the output of the SKIRT Monte Carlo radiative transfer code. Genetic algorithms prove to be a valuable tool in handling the multi-dimensional search space as well as the noise induced by the random nature of the Monte Carlo radiative transfer code. FitSKIRT is tested on artificial images of a simulated edge-on spiral galaxy, where we gradually increase the number of fitted parameters. We find that we can recover all model parameters, even if all 11 model parameters are left unconstrained. Finally, we apply the FitSKIRT code to a V-band image of the edge-on spiral galaxy NGC 4013. This galaxy has been modeled previously by other authors using different combinations of radiative transfer codes and optimization methods. Given the different models and techniques and the complexity and degeneracies in the parameter space, we find reasonable agreement between the different models. We conclude that the FitSKIRT method allows comparison between different models and geometries in a quantitative manner and minimizes the need for human intervention and bias. The high level of automation makes it an ideal tool to use on larger sets of observed data.
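A toy version of the genetic-algorithm fit that FitSKIRT performs, using a simple noisy Gaussian profile in place of a SKIRT Monte Carlo run (all names and parameter values here are illustrative, not FitSKIRT's actual interface):

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_model(params):
    """Stand-in for one Monte Carlo radiative transfer run: the
    'image' depends on the parameters but carries stochastic noise,
    which is what makes a genetic algorithm attractive here."""
    amplitude, width = params
    x = np.linspace(-1.0, 1.0, 50)
    return amplitude * np.exp(-(x / width) ** 2) + rng.normal(0.0, 0.01, x.size)

target = noisy_model([2.0, 0.5])  # the 'observed' image

def fitness(params):
    return -np.mean((noisy_model(params) - target) ** 2)

# Minimal genetic loop: keep the best half, add mutated children.
pop = rng.uniform([0.5, 0.1], [4.0, 1.0], size=(40, 2))
for _ in range(60):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]
    children = parents + rng.normal(0.0, 0.05, parents.shape)
    children[:, 1] = np.clip(children[:, 1], 0.05, None)  # keep width > 0
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(p) for p in pop])]
```

Because the fitness is re-evaluated with fresh noise each generation, a population-based search like this degrades gracefully where gradient methods would chase the noise.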
Modeling Laser-Driven Laboratory Astrophysics Experiments Using the CRASH Code
NASA Astrophysics Data System (ADS)
Grosskopf, Michael; Keiter, P.; Kuranz, C. C.; Malamud, G.; Trantham, M.; Drake, R.
2013-06-01
Laser-driven, laboratory astrophysics experiments can provide important insight into the physical processes relevant to astrophysical systems. The radiation hydrodynamics code developed by the Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan has been used to model experimental designs for high-energy-density laboratory astrophysics campaigns on OMEGA and other high-energy laser facilities. This code is an Eulerian, block-adaptive-mesh hydrodynamics code with implicit multigroup radiation transport and electron heat conduction. The CRASH model has been used in many applications, including radiative shocks and Kelvin-Helmholtz and Rayleigh-Taylor experiments on the OMEGA laser, as well as laser-driven ablative plumes in experiments by the Astrophysical Collisionless Shocks Experiments with Lasers (ACSEL) collaboration. We report a series of results with the CRASH code in support of design work for upcoming high-energy-density physics experiments, as well as comparisons between existing experimental data and simulation results. This work is funded by the Predictive Science Academic Alliance Program in NNSA-ASC via grant DE-FC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.
DOT National Transportation Integrated Search
1996-12-31
The GCM Corridor is one of four "Priority Corridors" throughout the country. These corridors have been selected for special federal transportation funding based on very specific transportation and environmental criteria. The corridor includes the gre...
Use of Existing CAD Models for Radiation Shielding Analysis
NASA Technical Reports Server (NTRS)
Lee, K. T.; Barzilla, J. E.; Wilson, P.; Davis, A.; Zachman, J.
2015-01-01
The utility of a radiation exposure analysis depends not only on the accuracy of the underlying particle transport code, but also on the accuracy of the geometric representations of both the vehicle used as radiation shielding mass and the phantom representation of the human form. The current NASA/Space Radiation Analysis Group (SRAG) process to determine crew radiation exposure in a vehicle design incorporates both output from the analytic High Z and Energy Particle Transport (HZETRN) code and the properties (i.e., material thicknesses) of a previously processed drawing. This geometry pre-process can be time-consuming, and the results are less accurate than those determined using a Monte Carlo-based particle transport code. The current work aims to improve this process. Although several Monte Carlo programs (FLUKA, Geant4) are readily available, most use an internal geometry engine. The lack of an interface with the standard CAD formats used by the vehicle designers limits the ability of the user to communicate complex geometries. Translation of native CAD drawings into a format readable by these transport programs is time-consuming and prone to error. The Direct Accelerated Geometry-United (DAGU) project is intended to provide an interface between the native vehicle or phantom CAD geometry and multiple particle transport codes to minimize problem setup, computing time and analysis error.
A fast code for channel limb radiances with gas absorption and scattering in a spherical atmosphere
NASA Astrophysics Data System (ADS)
Eluszkiewicz, Janusz; Uymin, Gennady; Flittner, David; Cady-Pereira, Karen; Mlawer, Eli; Henderson, John; Moncet, Jean-Luc; Nehrkorn, Thomas; Wolff, Michael
2017-05-01
We present a radiative transfer code capable of accurately and rapidly computing channel limb radiances in the presence of gaseous absorption and scattering in a spherical atmosphere. The code has been prototyped for the Mars Climate Sounder measuring limb radiances in the thermal part of the spectrum (200-900 cm-1) where absorption by carbon dioxide and water vapor and absorption and scattering by dust and water ice particles are important. The code relies on three main components: 1) The Gauss Seidel Spherical Radiative Transfer Model (GSSRTM) for scattering, 2) The Planetary Line-By-Line Radiative Transfer Model (P-LBLRTM) for gas opacity, and 3) The Optimal Spectral Sampling (OSS) for selecting a limited number of spectral points to simulate channel radiances and thus achieving a substantial increase in speed. The accuracy of the code has been evaluated against brute-force line-by-line calculations performed on the NASA Pleiades supercomputer, with satisfactory results. Additional improvements in both accuracy and speed are attainable through incremental changes to the basic approach presented in this paper, which would further support the use of this code for real-time retrievals and data assimilation. Both newly developed codes, GSSRTM/OSS for MCS and P-LBLRTM, are available for additional testing and user feedback.
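The OSS step (component 3 above) can be shown in miniature: given a set of training spectra and a channel response function, fit weights at a few fixed spectral nodes so that their weighted sum reproduces the channel-integrated radiance. Real OSS also searches over which nodes to retain; everything below is synthetic and only illustrates the weight-fitting stage:

```python
def channel_radiance(srf, spectrum):
    """Channel radiance: spectral response function times spectrum, summed."""
    return sum(f * r for f, r in zip(srf, spectrum))

def fit_two_node_weights(spectra, srf, i, j):
    """Least-squares weights w1, w2 so that w1*R[i] + w2*R[j] approximates
    the true channel radiance over the training spectra (2x2 normal equations)."""
    b = [channel_radiance(srf, row) for row in spectra]
    a11 = sum(row[i] * row[i] for row in spectra)
    a12 = sum(row[i] * row[j] for row in spectra)
    a22 = sum(row[j] * row[j] for row in spectra)
    r1 = sum(row[i] * bb for row, bb in zip(spectra, b))
    r2 = sum(row[j] * bb for row, bb in zip(spectra, b))
    det = a11 * a22 - a12 * a12
    return (r1 * a22 - r2 * a12) / det, (a11 * r2 - a12 * r1) / det

# Synthetic "line-by-line" spectra (3 scenes, 5 spectral points), flat response:
spectra = [[1, 1, 1, 1, 1], [0, 1, 2, 3, 4], [2, 1, 0, 1, 2]]
srf = [0.2] * 5
w1, w2 = fit_two_node_weights(spectra, srf, 0, 4)  # keep only the end points
```

With these two nodes the fit reproduces the linearly varying scene exactly and approximates the others; operational implementations choose nodes to bound the worst-case channel-radiance error over a large, representative training set, which is the source of the speedup reported above.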
Novel paint design based on nanopowder for protection against X and gamma rays
Movahedi, Mohammad Mehdi; Abdi, Adibe; Mehdizadeh, Alireza; Dehghan, Naser; Heidari, Emad; Masumi, Yusef; Abbaszadeh, Mojtaba
2014-01-01
Background: Lead-based shields are the standard method of intraoperative radiation protection in the radiology and nuclear medicine departments. Human lead toxicity is well documented. The lead used is heavy, lacks durability, is difficult to launder, and its disposal is associated with environmental hazards. The aim of this study was to design a lead-free paint for protection against X and gamma rays. Materials and Methods: In this pilot study, we evaluated several types of nano metal powders that seemed to have good absorption. The Monte Carlo code MCNP4C was used to model the attenuation of X-ray photons in paints with different designs. Experimental measurements were carried out to assess the attenuation properties of each paint design. Results: Among the different nano metal powders, nano tungsten trioxide and nano tin dioxide were the two most appropriate candidates for making paint in the diagnostic photon energy range. Nano tungsten trioxide (15%) and nano tin dioxide (85%) provided the best protection in both simulation and experiments. After this step, attempts were made to produce appropriate nano tungsten trioxide-nano tin dioxide paints. The density of this nano tungsten trioxide-nano tin dioxide paint was 4.2 g/cm3. The MCNP simulation and experimental measurements for the HVL (half-value layer) of this shield at 100 kVp were 0.25 and 0.23 mm, respectively. Conclusions: The results showed that this cost-effective lead-free paint strongly absorbs X-rays and gamma rays and can be used instead of lead. PMID:24591777
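The HVL values reported above imply a simple exponential narrow-beam attenuation model; a minimal sketch (ignoring broad-beam buildup and beam hardening, which a real shielding assessment would have to address):

```python
import math

def transmission(thickness_mm, hvl_mm):
    """Fraction of a narrow beam transmitted through a shield,
    assuming simple exponential attenuation: T = 0.5**(t / HVL)."""
    mu = math.log(2) / hvl_mm          # linear attenuation coefficient, mm^-1
    return math.exp(-mu * thickness_mm)

# Measured HVL of the paint at 100 kVp was 0.23 mm:
hvl = 0.23
print(transmission(hvl, hvl))          # one HVL, about 0.5
print(transmission(2 * hvl, hvl))      # two HVLs, about 0.25
```

Each additional HVL of paint halves the transmitted intensity, so the required coating thickness for a target attenuation factor follows directly from the measured 0.23 mm value.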
3D Cloud Field Prediction using A-Train Data and Machine Learning Techniques
NASA Astrophysics Data System (ADS)
Johnson, C. L.
2017-12-01
Validation of cloud process parameterizations used in global climate models (GCMs) would greatly benefit from observed 3D cloud fields at a size comparable to that of a GCM grid cell. For the highest-resolution simulations, surface grid cells are on the order of 100 km by 100 km. CloudSat/CALIPSO data provide detailed vertical cloud fraction profiles (CFP) and liquid and ice water content (LWC/IWC) along a track 1 km wide. This work utilizes four machine learning algorithms to create nonlinear regressions of CFP, LWC, and IWC data using radiances, surface type and location of measurement as predictors, and applies the regression equations to off-track locations, generating 3D cloud fields for 100 km by 100 km domains. The CERES-CloudSat-CALIPSO-MODIS (C3M) merged data set for February 2007 is used. Support Vector Machines, Artificial Neural Networks, Gaussian Processes and Decision Trees are trained on 1000 km of continuous C3M data. Accuracy is computed using existing vertical profiles that are excluded from the training data and occur within 100 km of the training data, and the accuracy of the four algorithms is compared. Average accuracy for one day of predicted data is 86% for the most successful algorithm. The methodology for training the algorithms, determining valid prediction regions and applying the equations off-track is discussed. Predicted 3D cloud fields are provided as inputs to the Ed4 NASA LaRC Fu-Liou radiative transfer code, and the resulting TOA radiances are compared to observed CERES/MODIS radiances.
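The regression setup above, radiance and ancillary predictors mapped to vertical cloud profiles at off-track locations, can be illustrated with a deliberately minimal nearest-neighbor stand-in for the four algorithms actually used; all numbers below are toy values:

```python
def nearest_neighbor_profile(train_X, train_Y, query):
    """Predict a vertical cloud-fraction profile for an off-track pixel by
    copying the profile of the training pixel with the closest predictors."""
    def dist2(a, b):
        # squared Euclidean distance in predictor space
        return sum((u - v) ** 2 for u, v in zip(a, b))
    best = min(range(len(train_X)), key=lambda i: dist2(train_X[i], query))
    return train_Y[best]

# Toy training set: (radiance features) -> (3-level cloud fraction profile)
train_X = [(0.9, 0.2), (0.4, 0.7), (0.1, 0.1)]
train_Y = [(0.0, 0.1, 0.8), (0.2, 0.6, 0.1), (0.0, 0.0, 0.0)]
profile = nearest_neighbor_profile(train_X, train_Y, (0.85, 0.25))
```

The real algorithms fit smooth nonlinear regressions rather than copying neighbors, but the interface is the same: predictors at an off-track pixel in, a full vertical profile out.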
Time Resolved Phonon Spectroscopy, Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goett, Johnny; Zhu, Brian
The TRPS code was developed for the project "Time Resolved Phonon Spectroscopy". The routines contained in this piece of software were specially created to model phonon generation and tracking within materials that interact with ionizing radiation, and are particularly applicable to the modeling of cryogenic radiation detectors for dark matter and neutrino research. These routines were created to link seamlessly with the open-source Geant4 framework for the modeling of radiation transport in matter, with the explicit intent of open sourcing them for eventual integration into that code base.
Parameterization of turbulence and the planetary boundary layer in the GLA Fourth Order GCM
NASA Technical Reports Server (NTRS)
Helfand, H. M.
1985-01-01
A new scheme has been developed to model the planetary boundary layer in the GLA Fourth Order GCM through explicit resolution of its vertical structure into two or more vertical layers. This involves packing the lowest layers of the GCM close to the ground and developing new parameterization schemes that can express the turbulent vertical fluxes of heat, momentum and moisture at the earth's surface and between the layers contained within the PBL region. Offline experiments indicate that the combination of the modified level-2.5 second-order turbulence closure scheme and the 'extended surface layer' similarity scheme should work well to simulate the behavior of the turbulent PBL, even at the coarsest vertical resolution with which such schemes will conceivably be used in the GLA Fourth Order GCM.
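The surface fluxes such a parameterization must supply are commonly written in bulk-aerodynamic form; the sketch below uses that generic textbook form with illustrative transfer coefficients, not the GLA scheme's actual similarity functions:

```python
def surface_fluxes(rho, c_h, c_e, wind, theta_sfc, theta_air,
                   q_sfc, q_air, cp=1004.0, lv=2.5e6):
    """Bulk-aerodynamic turbulent surface fluxes (W m-2):
    sensible heat H = rho * cp * C_H * U * (theta_s - theta_a)
    latent heat  LE = rho * Lv * C_E * U * (q_s - q_a)"""
    h = rho * cp * c_h * wind * (theta_sfc - theta_air)
    le = rho * lv * c_e * wind * (q_sfc - q_air)
    return h, le

# Illustrative values: 5 m/s wind, 2 K air-sea temperature difference,
# 3 g/kg humidity difference, typical neutral transfer coefficients.
h, le = surface_fluxes(rho=1.2, c_h=1.2e-3, c_e=1.3e-3, wind=5.0,
                       theta_sfc=300.0, theta_air=298.0,
                       q_sfc=0.015, q_air=0.012)
```

A similarity scheme such as the 'extended surface layer' approach effectively replaces the fixed transfer coefficients here with stability-dependent functions of the near-surface profile.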
Lee, R.W.
1997-01-01
The research site at Otis Air Base, Cape Cod, Massachusetts, has been developed for hydrogeological and geochemical studies of sewage-effluent-contaminated groundwater since 1982. Research on hydrologic properties, transport, and chemical and biological processes is ongoing, but the origin of the background water chemistry has not been determined. The principal geochemical process giving rise to the observed background water chemistry is CO2-controlled hydrolysis of Na feldspar. Geochemical modeling demonstrated that CO2 sources could vary over the project area. Analyses of unsaturated-zone gases showed variations in CO2 that depended on land use and vegetative cover in the area of groundwater recharge. Measurements of CO2 in unsaturated-zone gases showed that concentrations of total inorganic C in recharge water should range from about 0.035 to 1.0 mmol/L in the vicinity of Otis Air Base. The flux of CO2 from the unsaturated zone varied among the principal land uses, ranging from 86 gC/m2/yr for areas of low vegetation to 1630 gC/m2/yr for a golf course. The carbon dioxide flux from woodlands was 220 gC/m2/yr, lower than reported fluxes of 500 to 600 gC/m2/yr for woodlands in a similar climate. The carbon dioxide flux from grassy areas was 540 gC/m2/yr, higher than reported fluxes of 230 to 490 gC/m2/yr for grasslands in a similar climate.
Analysis of the anomalous mean-field like properties of Gaussian core model in terms of entropy
NASA Astrophysics Data System (ADS)
Nandi, Manoj Kumar; Maitra Bhattacharyya, Sarika
2018-01-01
Studies of the Gaussian core model (GCM) have shown that it behaves like a mean-field model and that its properties are quite different from those of standard glass formers. In this work, we investigate the entropies, namely the excess entropy (Sex) and the configurational entropy (Sc), and their different components to address these anomalies. Our study corroborates most of the earlier observations and also sheds new light on the high- and low-temperature dynamics. We find that, unlike in standard glass formers, where high-temperature dynamics is dominated by two-body correlations and low-temperature dynamics by many-body correlations, in the GCM both high- and low-temperature dynamics are dominated by many-body correlations. We also find that the many-body entropy, which is usually positive at low temperatures and is associated with activated dynamics, is negative in the GCM, suggesting suppression of activation. Interestingly, despite the suppression of activation, the Adam-Gibbs (AG) relation that describes activated dynamics holds in the GCM, thus suggesting a non-activated contribution to the AG relation. We also find an overlap between the AG relation and the mode-coupling power-law regime, leading to a power-law behavior of Sc. From our analysis of this power-law behavior, we predict that in the GCM the high-temperature dynamics will disappear at the dynamical transition temperature and below that there will be a transition to the activated regime. Our study further reveals that the activated regime in the GCM is quite narrow.
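The Adam-Gibbs relation invoked above ties the relaxation time to the configurational entropy; in reduced units (tau0 and A set to 1 purely for illustration):

```python
import math

def adam_gibbs_tau(T, S_c, tau0=1.0, A=1.0):
    """Adam-Gibbs relation: tau = tau0 * exp(A / (T * S_c)).
    T and S_c in reduced units; tau0 and A are material constants."""
    return tau0 * math.exp(A / (T * S_c))

# At fixed temperature, a drop in configurational entropy slows relaxation:
print(adam_gibbs_tau(1.0, 1.0))   # higher S_c, faster
print(adam_gibbs_tau(1.0, 0.5))   # lower S_c, slower
```

A power-law S_c, as found above in the mode-coupling regime, therefore translates directly into a specific super-Arrhenius form for tau(T) through this exponential dependence.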
Patel, Purvi SD; Shepherd, Duncan ET; Hukins, David WL
2008-01-01
Background Polyurethane (PU) foam is widely used as a model for cancellous bone. The higher-density foams are used as standard biomechanical test materials, but none of the low-density PU foams are universally accepted as models for osteoporotic (OP) bone. The aim of this study was to determine whether low-density PU foam might be suitable for mimicking human OP cancellous bone. Methods Quasi-static compression tests were performed on PU foam cylinders of different lengths (3.9 and 7.7 mm) and different densities (0.09, 0.16 and 0.32 g.cm-3) to determine the Young's modulus, yield strength and energy absorbed to yield. Results Young's modulus values ranged from 0.08 to 0.93 MPa for the 0.09 g.cm-3 foam and from 15.1 to 151.4 MPa for the 0.16 and 0.32 g.cm-3 foams. Yield strength values ranged from 0.01 to 0.07 MPa for the 0.09 g.cm-3 foam and from 0.9 to 4.5 MPa for the 0.16 and 0.32 g.cm-3 foams. The energy absorbed to yield was found to be negligible for all foam cylinders. Conclusion Based on these results, it is concluded that 0.16 g.cm-3 PU foam may prove suitable as an OP cancellous bone model when fracture stress, but not energy dissipation, is of concern. PMID:18844988
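The Young's modulus values above come from the initial linear region of the quasi-static stress-strain curves; a minimal least-squares slope estimate, using toy data rather than the paper's measurements:

```python
def youngs_modulus(stress, strain):
    """Young's modulus as the least-squares slope of stress vs. strain
    through the origin, over the initial linear (elastic) region."""
    num = sum(s * e for s, e in zip(stress, strain))
    den = sum(e * e for e in strain)
    return num / den

# Toy elastic-region data: stress in MPa, strain dimensionless.
strain = [0.001, 0.002, 0.003, 0.004]
stress = [0.05, 0.10, 0.15, 0.20]
E = youngs_modulus(stress, strain)   # about 50 MPa for this toy data
```

The energy absorbed to yield, reported as negligible above, would correspondingly be the area under this curve up to the yield point.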
Warming ancient Mars with water clouds
NASA Astrophysics Data System (ADS)
Hartwick, V.; Toon, B.
2017-12-01
High clouds in the present-day Mars atmosphere nucleate on interplanetary dust particles (IDPs) that burn up on entry into the Mars atmosphere. Clouds form when supersaturated water vapor condenses on suspended aerosols. Radiatively active water ice clouds may play a crucial role in warming the early Mars climate. Urata and Toon (2011) simulate a stable warm paleoclimate for Mars if clouds form high in the atmosphere and if particles are sufficiently large (r > 10 μm). The annual fluence of micrometeoroids at Mars was larger early in the evolution of our solar system. Additionally, the water vapor budget throughout the middle and high atmosphere was likely elevated. Both factors should contribute to enhanced nucleation and growth of water ice cloud particles at high altitudes. Here, we use the MarsCAM-CARMA general circulation model (GCM) to examine the radiative impact of high-altitude water ice clouds on the early Mars climate and as a possible solution to the faint young sun problem for Mars.
Black carbon aerosol-induced Northern Hemisphere tropical expansion
Kovilakam, Mahesh; Mahajan, Salil
2015-06-23
Global climate models (GCMs) underestimate the observed trend in tropical expansion. Recent studies attribute this partly to black carbon (BC) aerosols, which are poorly represented in GCMs. In this paper, we conduct a suite of idealized experiments with the Community Atmosphere Model version 4 coupled to a slab ocean model, forced with increasing BC concentrations covering a large swath of the estimated range of current BC radiative forcing while maintaining their spatial distribution. The Northern Hemisphere (NH) tropics expand poleward nearly linearly as BC radiative forcing increases (0.7° per W m-2), indicating that a realistic representation of BC could reduce GCM biases. We find support for the mechanism whereby BC-induced midlatitude tropospheric heating shifts the maximum meridional tropospheric temperature gradient poleward, resulting in tropical expansion. Finally, we also find that the NH poleward tropical edge is nearly linearly correlated with the location of the Intertropical Convergence Zone, which shifts northward in response to increasing BC.
Atlas of Seasonal Means Simulated by the NSIPP 1 Atmospheric GCM. Volume 17
NASA Technical Reports Server (NTRS)
Suarez, Max J. (Editor); Bacmeister, Julio; Pegion, Philip J.; Schubert, Siegfried D.; Busalacchi, Antonio J. (Technical Monitor)
2000-01-01
This atlas documents the climate characteristics of version 1 of the NASA Seasonal-to-Interannual Prediction Project (NSIPP) Atmospheric General Circulation Model (AGCM). The AGCM includes an interactive land model (the Mosaic scheme) and is part of the NSIPP coupled atmosphere-land-ocean model. The results presented here are based on a 20-year (December 1979-November 1999) "AMIP-style" integration of the AGCM in which the monthly-mean sea-surface temperature and sea ice are specified from observations. The climate characteristics of the AGCM are compared with the National Centers for Environmental Prediction (NCEP) and the European Centre for Medium-Range Weather Forecasts (ECMWF) reanalyses. Other verification data include Special Sensor Microwave/Imager (SSM/I) total precipitable water, the Xie-Arkin estimates of precipitation, and Earth Radiation Budget Experiment (ERBE) measurements of shortwave and longwave radiation. The atlas is organized by season. The basic quantities include seasonal-mean global maps and zonal and vertical averages of circulation, variance/covariance statistics, and selected physics quantities.
RAPTOR. I. Time-dependent radiative transfer in arbitrary spacetimes
NASA Astrophysics Data System (ADS)
Bronzwaer, T.; Davelaar, J.; Younsi, Z.; Mościbrodzka, M.; Falcke, H.; Kramer, M.; Rezzolla, L.
2018-05-01
Context. Observational efforts to image the immediate environment of a black hole at the scale of the event horizon benefit from the development of efficient imaging codes that are capable of producing synthetic data, which may be compared with observational data. Aims: We aim to present RAPTOR, a new public code that produces accurate images, animations, and spectra of relativistic plasmas in strong gravity by numerically integrating the equations of motion of light rays and performing time-dependent radiative transfer calculations along the rays. The code is compatible with any analytical or numerical spacetime. It is hardware-agnostic and may be compiled and run on both GPUs and CPUs. Methods: We describe the algorithms used in RAPTOR and test the code's performance. We have performed a detailed comparison of RAPTOR output with that of other radiative-transfer codes and demonstrate convergence of the results. We then applied RAPTOR to study accretion models of supermassive black holes, performing time-dependent radiative transfer through general relativistic magneto-hydrodynamical (GRMHD) simulations and investigating the expected observational differences between the so-called fast-light and slow-light paradigms. Results: Using RAPTOR to produce synthetic images and light curves of a GRMHD model of an accreting black hole, we find that the relative difference between fast-light and slow-light light curves is less than 5%. Using two distinct radiative-transfer codes to process the same data, we find integrated flux densities with a relative difference of less than 0.01%. Conclusions: For two-dimensional GRMHD models, such as those examined in this paper, the fast-light approximation suffices as long as errors of a few percent are acceptable. The convergence of the results of two different codes demonstrates that they are, at a minimum, consistent. The public version of RAPTOR is available at https://github.com/tbronzwaer/raptor
An approach for assessing the sensitivity of floods to regional climate change
NASA Astrophysics Data System (ADS)
Hughes, James P.; Lettenmaier, Dennis P.; Wood, Eric F.
1992-06-01
The high visibility afforded climate change issues in recent years has led to conflicts between and among decision makers and scientists. Decision makers inevitably feel pressure to assess the effect of climate change on the public welfare, while most climate modelers are, to a greater or lesser degree, concerned about the extent to which known inaccuracies in their models limit or preclude the use of modeling results for policy making. The water resources sector affords a good example of the limitations of the use of alternative climate scenarios derived from GCMs for decision making. GCM simulations of precipitation agree poorly among GCMs, and GCM predictions of runoff and evapotranspiration are even more uncertain. Further, water resources managers must be concerned about hydrologic extremes (floods and droughts), which are much more difficult to predict than ``average'' conditions. Most studies of the sensitivity of water resource systems and operating policies to climate change to date have been based on simple perturbations of historic hydroclimatological time series to reflect the difference between large-area GCM simulations for an altered climate (e.g., CO2 doubling) and a GCM simulation of the present climate. Such approaches are especially limited for assessment of the sensitivity of water resources systems under extreme conditions, since the distribution of storm inter-arrival times, for instance, is kept identical to that observed in the historic past. Further, such approaches have generally been based on the difference between the GCM altered and present climates for a single grid cell, primarily because the GCM spatial scale is often much larger than the scale at which climate interpretations are desired. The use of single grid cell GCM results is considered inadvisable by many GCM modelers, who feel the spatial scale for which interpretation of GCM results is most reasonable is on the order of several grid cells.
In this paper, we demonstrate an alternative approach to assessing the implications of altered climates as predicted by GCMs for extreme (flooding) conditions. The approach is based on the characterization of regional atmospheric circulation patterns through a weather-typing procedure, from which a stochastic model of the weather class occurrences is formulated. Weather types are identified through a CART (Classification and Regression Tree) approach. Precipitation occurrence/non-occurrence at multiple precipitation stations is then predicted through a second-stage stochastic model. Precipitation amounts are predicted conditional on the weather class identified from the large-area circulation information.
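The two-stage structure described above, weather classes derived from circulation patterns and precipitation occurrence conditional on class, might be sketched as follows; the splitting rule and conditional probabilities are invented placeholders, not fitted values from the study:

```python
import random

def weather_type(slp_anomaly, flow_dir):
    """Toy CART-like rule assigning a weather class from circulation indices.
    (Illustrative splits only; real weather types come from a fitted tree.)"""
    if slp_anomaly < -5.0:
        return "cyclonic"
    return "zonal" if flow_dir == "W" else "blocked"

# Stage 2: station precipitation occurrence conditional on weather class.
P_WET = {"cyclonic": 0.8, "zonal": 0.4, "blocked": 0.1}

def simulate_occurrence(wclass, n_days, rng):
    """Bernoulli draws of wet/dry days given the weather class."""
    return [rng.random() < P_WET[wclass] for _ in range(n_days)]

rng = random.Random(42)  # fixed seed for reproducibility
days = simulate_occurrence(weather_type(-8.0, "W"), 1000, rng)
wet_frac = sum(days) / len(days)   # should sit near 0.8 for the cyclonic class
```

Because storm inter-arrival statistics emerge from the simulated class sequence rather than being copied from the historic record, this construction avoids the limitation of perturbed-time-series approaches noted above.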
Python Radiative Transfer Emission code (PyRaTE): non-LTE spectral lines simulations
NASA Astrophysics Data System (ADS)
Tritsis, A.; Yorke, H.; Tassis, K.
2018-05-01
We describe PyRaTE, a new, non-local thermodynamic equilibrium (non-LTE) line radiative transfer code developed specifically for post-processing astrochemical simulations. Population densities are estimated using the escape probability method. When computing the escape probability, the optical depth is calculated towards all directions, with density, molecular abundance, temperature and velocity variations all taken into account. A very easy-to-use interface, capable of importing data from simulation outputs produced by all major astrophysical codes, is also provided. The code is written in PYTHON using an "embarrassingly parallel" strategy and can handle all geometries and projection angles. We benchmark the code by comparing our results with those from RADEX (van der Tak et al. 2007) and against analytical solutions, and present case studies using hydrochemical simulations. The code will be released for public use.
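A common form of the escape probability for a homogeneous medium is beta = (1 - exp(-tau)) / tau; the exact expression PyRaTE evaluates along each direction may differ, so the following is only a generic sketch:

```python
import math

def escape_probability(tau):
    """Escape probability beta = (1 - exp(-tau)) / tau for a homogeneous
    medium; photons escape freely as tau -> 0 and are trapped as tau grows."""
    if abs(tau) < 1e-6:
        return 1.0 - tau / 2.0      # series expansion avoids 0/0 at small tau
    return (1.0 - math.exp(-tau)) / tau

print(escape_probability(0.0))    # optically thin limit: beta = 1
print(escape_probability(10.0))   # optically thick: beta is roughly 1/tau
```

In the escape-probability formalism this beta multiplies the spontaneous emission term in the statistical equilibrium equations, which is how the level populations acquire their dependence on the local optical depth.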
Counterpropagating Radiative Shock Experiments on the Orion Laser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suzuki-Vidal, F.; Clayson, T.; Stehlé, C.
We present new experiments to study the formation of radiative shocks and the interaction between two counterpropagating radiative shocks. The experiments are performed at the Orion laser facility, which is used to drive shocks in xenon inside large-aspect-ratio gas cells. The collision between the two shocks and their respective radiative precursors, combined with the formation of inherently three-dimensional shocks, provides a novel platform particularly suited for the benchmarking of numerical codes. The dynamics of the shocks before and after the collision are investigated using point-projection x-ray backlighting while, simultaneously, the electron density in the radiative precursor was measured via optical laser interferometry. Modeling of the experiments using the 2D radiation hydrodynamic codes NYM and PETRA shows very good agreement with the experimental results.
NASA Astrophysics Data System (ADS)
Badavi, Francis F.; Blattnig, Steve R.; Atwell, William; Nealy, John E.; Norman, Ryan B.
2011-02-01
A Langley Research Center (LaRC)-developed deterministic suite of radiation transport codes describing the propagation of electrons, photons, protons and heavy ions in condensed media is used to simulate the exposure from the spectral distribution of the aforementioned particles in the Jovian radiation environment. Based on measurements by the Galileo probe (1995-2003) heavy ion counter (HIC), the choice of trapped heavy ions is limited to carbon, oxygen and sulfur (COS). The deterministic particle transport suite consists of a coupled electron-photon algorithm (CEPTRN) and a coupled light/heavy ion algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide the spacecraft design community with a means to rapidly perform the numerous repetitive calculations essential for electron, photon, proton and heavy ion exposure assessment in a complex space structure. In this paper, the reference radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron and proton spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003, for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM), the JPL-provided Europa mission fluence spectrum is used to produce the corresponding depth-dose curve in silicon behind a default aluminum shield of 100 mils (~0.7 g/cm2). The transport suite can also accept a ray-traced thickness file describing the geometry from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point within the interior of the vehicle.
In that regard, using a low-fidelity CAD model of the Galileo probe generated by the authors, the transport suite was verified against Monte Carlo (MC) simulation for orbits JOI-J35 of the Galileo probe extended mission. For the upcoming EJSM mission, with an expected launch date of 2020, the transport suite is used to compute the depth-dose profile for the traditional aluminum-silicon shield-target combination, as well as to simulate the shielding response of a high-atomic-number (Z) material such as tantalum (Ta). Finally, a shield optimization algorithm is discussed which can guide instrument designers and fabrication personnel in the selection and analysis of graded-Z shields.
NASA Astrophysics Data System (ADS)
Fiorella, R.; Poulsen, C. J.
2013-12-01
The enigmatic Neoproterozoic geological record suggests the potential for a fully glaciated 'Snowball Earth.' Low-latitude continental position has been invoked as a potential Snowball Earth trigger by increasing surface albedo and decreasing atmospheric CO2 concentrations through increased silicate weathering. Herein, climate response to reduction of total solar irradiance (TSI) and CO2 concentration is tested using four different land configurations (aquaplanet, modern, Neoproterozoic, and low-latitude supercontinent) with uniform topography in the NCAR Community Atmosphere Model (CAM, version 3.1) GCM with a mixed-layer ocean. Despite a lower global mean surface albedo at 100% TSI for the aquaplanet scenario, the threshold for global glaciation decreases from 92% TSI in the aquaplanet configuration to 85% TSI with a low-latitude supercontinent. Climate sensitivity, as measured by the equilibrium temperature response to TSI and CO2 changes, varied across all four geographies at each forcing pair. The range of sensitivities observed suggests that climate feedback strengths are strongly dependent on both paleogeography and forcing. To identify the mechanisms responsible for the observed breadth in climate sensitivities, we calculate radiative kernels for four different TSI and CO2 forcing pairs in order to assess the strengths of the water vapor, albedo, lapse rate, Planck, and cloud feedbacks and how they vary with both forcing and paleogeography. Radiative kernels are calculated using an uncoupled version of the CAM3.1 radiation code and then perturbing climate fields of interest (surface albedo, specific humidity, and temperature) by a standard amount. No cloud kernels are calculated; instead, the cloud feedback is calculated by correcting the change in cloud radiative forcing to account for cloud masking. We find that paleogeography strongly controls how the water vapor and lapse rate feedbacks respond to different forcings. 
In particular, low latitude continents diminish the change in water vapor feedback strengths resulting from changes in forcing. Continental heating intensifies the Walker circulation, enhancing surface evaporation and moistening the marine troposphere. Additionally, dehumidification of the troposphere over large tropical continents in CAM3.1 increases direct heating by decreasing cloud cover. As a result, in the absence of potential silicate weathering feedbacks, large tropical landmasses raise the barrier to initiation of Snowball events. More generally, these simulations demonstrate the substantial influence of geography on climate sensitivity and climate feedback mechanisms, and challenge the notion that reduced continental area early in Earth history might provide a solution to the Faint Young Sun Paradox.
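The radiative-kernel technique described above can be illustrated with a minimal sketch: the feedback strength is the kernel-weighted field change divided by the surface temperature change. The numbers below are hypothetical single-column values, not the study's 3-D kernels.

```python
# Toy illustration of the radiative-kernel technique (hypothetical
# single-column numbers; real kernels are 3-D fields).

def kernel_feedback(kernel, delta_field, delta_ts):
    """Feedback strength (W m^-2 K^-1) from a radiative kernel.

    kernel      : dR/dx per layer (W m^-2 per unit of field x)
    delta_field : change in field x per layer between two climates
    delta_ts    : global-mean surface temperature change (K)
    """
    dR = sum(k * dx for k, dx in zip(kernel, delta_field))
    return dR / delta_ts

# Hypothetical 3-layer water-vapor kernel and humidity change:
K_q = [0.3, 0.5, 0.2]    # W m^-2 per (g/kg), per layer
dq  = [0.4, 0.6, 0.1]    # g/kg change between climate states
dTs = 2.0                # K

lam_q = kernel_feedback(K_q, dq, dTs)
print(round(lam_q, 3))   # -> 0.22 (W m^-2 K^-1)
```

Comparing such kernel-derived feedbacks across the four paleogeographies is what isolates the geography dependence the abstract describes.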
Seinfeld, John H; Bretherton, Christopher; Carslaw, Kenneth S; Coe, Hugh; DeMott, Paul J; Dunlea, Edward J; Feingold, Graham; Ghan, Steven; Guenther, Alex B; Kahn, Ralph; Kraucunas, Ian; Kreidenweis, Sonia M; Molina, Mario J; Nenes, Athanasios; Penner, Joyce E; Prather, Kimberly A; Ramanathan, V; Ramaswamy, Venkatachalam; Rasch, Philip J; Ravishankara, A R; Rosenfeld, Daniel; Stephens, Graeme; Wood, Robert
2016-05-24
The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth's clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.
Regional climate change predictions from the Goddard Institute for Space Studies high resolution GCM
NASA Technical Reports Server (NTRS)
Crane, Robert G.; Hewitson, B. C.
1991-01-01
A new diagnostic tool is developed for examining relationships between the synoptic scale circulation and regional temperature distributions in GCMs. The 4 x 5 deg GISS GCM is shown to produce accurate simulations of the variance in the synoptic scale sea level pressure distribution over the U.S. An analysis of the observational data set from the National Meteorological Center (NMC) also shows a strong relationship between the synoptic circulation and grid point temperatures. This relationship is demonstrated by deriving transfer functions between a time series of circulation parameters and temperatures at individual grid points. The circulation parameters are derived using rotated principal components analysis, and the temperature transfer functions are based on multivariate polynomial regression models. The application of these transfer functions to the GCM circulation indicates that there is considerable spatial bias present in the GCM temperature distributions. The transfer functions are also used to indicate the possible changes in U.S. regional temperatures that could result from differences in synoptic scale circulation between a 1xCO2 and a 2xCO2 climate, using a doubled-CO2 version of the same GISS GCM.
Global environmental effects of impact-generated aerosols: Results from a general circulation model
NASA Technical Reports Server (NTRS)
Covey, Curt; Ghan, Steven J.; Walton, John J.; Weissman, Paul R.
1989-01-01
Interception of sunlight by the high altitude worldwide dust cloud generated by impact of a large asteroid or comet would lead to substantial land surface cooling, according to a three-dimensional atmospheric general circulation model (GCM). This result is qualitatively similar to conclusions drawn from an earlier study that employed a one-dimensional atmospheric model, but in the GCM simulation the heat capacity of the oceans, not included in the one-dimensional model, substantially mitigates land surface cooling. On the other hand, the low heat capacity of the GCM's land surface allows temperatures to drop more rapidly in the initial stages of cooling than in the one-dimensional model study. GCM-simulated climatic changes in the asteroid/comet winter scenario are more severe than in nuclear winter because the assumed aerosol amount is large enough to intercept all sunlight falling on Earth. Impacts of smaller objects could also lead to dramatic, though of course less severe, climatic changes, according to the GCM. An asteroid or comet impact would not lead to anything approaching complete global freezing, but it is quite reasonable to assume that impacts would dramatically alter the climate in at least a patchy sense.
NASA Astrophysics Data System (ADS)
Blecic, Jasmina; Harrington, Joseph; Bowman, Matthew O.; Cubillos, Patricio E.; Stemm, Madison; Foster, Andrew
2014-11-01
We present a new, open-source, Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. TEA uses the Gibbs-free-energy minimization method with an iterative Lagrangian optimization scheme. It initializes the radiative-transfer calculation in our Bayesian Atmospheric Radiative Transfer (BART) code. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. The code is tested against the original method developed by White et al. (1958), the analytic method developed by Burrows and Sharp (1999), and the Newton-Raphson method implemented in the open-source Chemical Equilibrium with Applications (CEA) code. TEA is written in Python and is available to the community via the open-source development site GitHub.com. We also present BART applied to eclipse depths of the exoplanet WASP-43b, constraining atmospheric thermal and chemical parameters. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
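A minimal sketch of the kind of equilibrium problem such codes solve (this is not the TEA algorithm itself, which minimizes the total Gibbs free energy over all species simultaneously): a Newton-Raphson solution of a single dissociation equilibrium with a hypothetical equilibrium constant.

```python
# Minimal sketch (not TEA's Lagrangian scheme): Newton-Raphson solution
# of one dissociation equilibrium, H2 <-> 2H, at total pressure 1 bar,
# starting from 1 mol of H2. K is a hypothetical equilibrium constant.

def dissociation_extent(K, tol=1e-12):
    """Solve K = (2x)^2 / ((1-x)(1+x)) for the reaction extent x in (0,1)."""
    x = 0.5                                # initial guess
    for _ in range(100):
        f = 4 * x * x - K * (1 - x * x)    # K(1 - x^2) = 4x^2, rearranged
        df = 8 * x + 2 * K * x
        step = f / df
        x -= step
        if abs(step) < tol:
            break
    return x

x = dissociation_extent(K=1.0)
n_H2, n_H = 1 - x, 2 * x                   # moles of each species
print(round(2 * n_H2 + n_H, 12))           # H-atom balance -> 2.0
```

The element-balance check at the end mirrors the mass-balance constraints that the Lagrangian scheme in TEA enforces for every element at once.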
New Parallel computing framework for radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kostin, M.A.; Mokhov, N.V.
A new parallel computing framework has been developed for use with general-purpose radiation transport codes. The framework was implemented as a C++ module that uses MPI for message passing. The module is largely independent of the radiation transport codes it can be used with, and is connected to them by means of a number of interface functions. The framework was integrated with the MARS15 code, and an effort is under way to deploy it in PHITS. Besides the parallel computing functionality, the framework offers a checkpoint facility that allows restarting calculations from a saved checkpoint file. The checkpoint facility can be used in single-process calculations as well as in the parallel regime. Several checkpoint files can be merged into one, thus combining the results of several calculations. The framework also corrects some of the known problems with scheduling and load balancing found in the original implementations of the parallel computing functionality in MARS15 and PHITS. The framework can be used efficiently on homogeneous systems and on networks of workstations, where interference from other users is possible.
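The checkpoint-merging idea can be sketched as follows. Monte Carlo tallies are per-history averages, so merging checkpoints amounts to a history-weighted average; the dictionary layout below is a hypothetical stand-in, not the actual MARS15/PHITS file format.

```python
# Hypothetical sketch of merging Monte Carlo checkpoint results: tallies
# are per-history means, so merged tallies are history-weighted averages.
# (The real MARS15/PHITS checkpoint file layout is not reproduced here.)

def merge_checkpoints(checkpoints):
    """checkpoints: list of dicts {'histories': int, 'tally': [float, ...]}"""
    total = sum(c['histories'] for c in checkpoints)
    nbins = len(checkpoints[0]['tally'])
    merged = [0.0] * nbins
    for c in checkpoints:
        w = c['histories'] / total
        for i, t in enumerate(c['tally']):
            merged[i] += w * t
    return {'histories': total, 'tally': merged}

a = {'histories': 1000, 'tally': [2.0, 4.0]}
b = {'histories': 3000, 'tally': [4.0, 8.0]}
print(merge_checkpoints([a, b]))  # -> {'histories': 4000, 'tally': [3.5, 7.0]}
```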
Shaping electromagnetic waves using software-automatically-designed metasurfaces.
Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie
2017-06-15
We present a fully digital procedure for designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic, controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g. two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of macro coding unit is formed by a discretely random arrangement of micro coding units. By combining an optimization algorithm and commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess a constant phase difference for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units in a certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in an array to constitute a periodic coding metasurface generating the required four-beam radiation with specific directions. Two complicated functional metasurfaces with circularly and elliptically shaped radiation beams are realized by automatically designing 4-bit macro coding units, showing the excellent performance of the automatic software design. The proposed method provides a smart tool to realize various functional devices and systems automatically.
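Why a periodic 1-bit coding sequence steers the reflected beam can be seen from a linear array factor: alternating 0/pi phase columns move the peak from broadside to sin(theta) = +/- lambda/(2d). The geometry values below are illustrative, not the fabricated design.

```python
# Sketch of beam steering by a periodic 1-bit phase code: the array
# factor of an alternating 0/pi sequence peaks at sin(theta) = lam/(2d).
# Element spacing d and wavelength are illustrative values.

import cmath
import math

def array_factor(phases, d, wavelength, theta):
    """|sum_n exp(i(k*d*n*sin(theta) + phase_n))| for a linear array."""
    k = 2 * math.pi / wavelength
    return abs(sum(cmath.exp(1j * (k * d * n * math.sin(theta) + p))
                   for n, p in enumerate(phases)))

wavelength, d, N = 1.0, 0.6, 16
phases = [math.pi * (n % 2) for n in range(N)]    # 1-bit code: 0, pi, 0, pi, ...

theta_peak = math.asin(wavelength / (2 * d))      # predicted beam direction
print(round(array_factor(phases, d, wavelength, 0.0), 6))         # broadside: ~0
print(round(array_factor(phases, d, wavelength, theta_peak), 6))  # peak: ~N
```

The same array-factor reasoning, applied in two dimensions, underlies the four-beam pattern the fabricated metasurface produces.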
New high performing scintillators: RbSr2Br5:Eu and RbSr2I5:Eu
NASA Astrophysics Data System (ADS)
Stand, L.; Zhuravleva, M.; Johnson, J.; Koschan, M.; Lukosi, E.; Melcher, C. L.
2017-11-01
We report the crystal growth and scintillation properties of two new ternary metal halide scintillators, RbSr2Br5 and RbSr2I5, activated with divalent europium. Transparent 7 mm diameter single crystals with 2.5% Eu2+ were grown in evacuated quartz ampoules via the Bridgman technique. RbSr2Br5 and RbSr2I5 have monoclinic crystal structures with densities of 4.18 g/cm3 and 4.55 g/cm3, respectively. These materials are hygroscopic and have some intrinsic radioactivity due to the presence of 87Rb. Luminescence properties typical of the 5d-4f radiative transition in Eu2+ were observed. The X-ray excited emissions consisted of single peaks centered at 429 nm for RbSr2Br5:Eu 2.5% and 445 nm for RbSr2I5:Eu 2.5%. RbSr2Br5:Eu 2.5% had a light yield of 64,700 photons/MeV with an energy resolution of 4.0%, and RbSr2I5:Eu 2.5% had a light yield of 90,400 ph/MeV with an energy resolution of 3.0% at 662 keV. Both crystals have an excellent proportional response over a wide range of gamma-ray energies.
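As a back-of-envelope check (not from the paper), the Poisson counting-statistics floor on energy resolution, R = 2.355/sqrt(N), can be computed from the reported light yields; the measured 4.0% and 3.0% values sit above this floor, as expected, since real resolution also includes non-proportionality and light-collection terms.

```python
# Counting-statistics floor on FWHM energy resolution, R = 2.355/sqrt(N),
# for the reported light yields at 662 keV. Measured resolutions are
# larger because of non-proportionality and light-collection losses.

import math

def statistical_resolution(light_yield_ph_per_MeV, energy_MeV):
    n_photons = light_yield_ph_per_MeV * energy_MeV
    return 2.355 / math.sqrt(n_photons)   # FWHM / mean

for name, ly in [('RbSr2Br5:Eu', 64700), ('RbSr2I5:Eu', 90400)]:
    r = statistical_resolution(ly, 0.662)
    print(f'{name}: {100 * r:.2f}% statistical floor')
```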
Global Carbon Cycle Modeling in GISS ModelE2 GCM
NASA Astrophysics Data System (ADS)
Aleinov, I. D.; Kiang, N. Y.; Romanou, A.; Romanski, J.
2014-12-01
Consistent and accurate modeling of the Global Carbon Cycle remains one of the main challenges for the Earth System Models. NASA Goddard Institute for Space Studies (GISS) ModelE2 General Circulation Model (GCM) was recently equipped with a complete Global Carbon Cycle algorithm, consisting of three integrated components: Ent Terrestrial Biosphere Model (Ent TBM), Ocean Biogeochemistry Module and atmospheric CO2 tracer. Ent TBM provides CO2 fluxes from the land surface to the atmosphere. Its biophysics utilizes the well-known photosynthesis functions of Farquhar, von Caemmerer, and Berry and Farquhar and von Caemmerer, and the stomatal conductance of Ball and Berry. Its phenology is based on temperature, drought, and radiation fluxes, and growth is controlled via allocation of carbon from labile carbohydrate reserve storage to different plant components. Soil biogeochemistry is based on the Carnegie-Ames-Stanford (CASA) model of Potter et al. The ocean biogeochemistry module (the NASA Ocean Biogeochemistry Model, NOBM) computes prognostic distributions for biotic and abiotic fields that influence the air-sea flux of CO2 and the deep ocean carbon transport and storage. Atmospheric CO2 is advected with a quadratic upstream algorithm implemented in the atmospheric part of ModelE2. Here we present the results for pre-industrial equilibrium and modern transient simulations and provide comparison to available observations. We also discuss the process of validation and tuning of particular algorithms used in the model.
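The Ball-Berry stomatal conductance relation cited above has the canonical form g_s = m * A * h_s / c_s + b. A minimal sketch, with illustrative textbook-scale parameter values rather than ModelE2's tuned ones:

```python
# Canonical Ball-Berry stomatal conductance relation used in land models
# of this type: g_s = m * A * h_s / c_s + b. Parameter values here are
# illustrative magnitudes, not ModelE2's tuned values.

def ball_berry(A, h_s, c_s, m=9.0, b=0.01):
    """Stomatal conductance (mol m^-2 s^-1).

    A   : net photosynthesis (mol CO2 m^-2 s^-1)
    h_s : relative humidity at the leaf surface (0..1)
    c_s : CO2 mole fraction at the leaf surface (mol/mol)
    m,b : empirical slope and intercept
    """
    return m * A * h_s / c_s + b

g = ball_berry(A=10e-6, h_s=0.7, c_s=380e-6)
print(round(g, 4))   # -> 0.1758
```

The slope m and intercept b are the two empirically fitted parameters, which is what makes this relation attractive for tuning within a GCM land model.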
Realistic dust and water cycles in the MarsWRF GCM using coupled two-moment microphysics
NASA Astrophysics Data System (ADS)
Lee, Christopher; Richardson, Mark Ian; Mischna, Michael A.; Newman, Claire E.
2017-10-01
Dust and water ice aerosols significantly complicate the Martian climate system because the evolution of the two aerosol fields is coupled through microphysics and because both aerosols strongly interact with visible and thermal radiation. The combination of strong forcing feedback and coupling has led to various problems in understanding and modeling of the Martian climate: in reconciling cloud abundances at different locations in the atmosphere, in generating a stable dust cycle, and in preventing numerical instability within models. Using a new microphysics model inside the MarsWRF GCM, we show that fully coupled simulations produce a more realistic simulation of the Martian climate system than dry, dust-only simulations. In the coupled simulations, interannual and intra-annual variability are increased, strong 'solstitial pause' features are produced in both winter high-latitude regions, and dust storm seasons are more varied, with early southern summer (Ls 180) dust storms and/or more than one storm occurring in some seasons. A new microphysics scheme was developed as part of this work and has been included in the MarsWRF model. The scheme uses split spectral/spatial size-distribution numerics with adaptive bin sizes to track particle size evolution. Significantly, this scheme is highly accurate, numerically stable, and capable of running with time steps commensurate with those of the parent atmospheric model.
The interpretation of remotely sensed cloud properties from a model parameterization perspective
NASA Technical Reports Server (NTRS)
HARSHVARDHAN; Wielicki, Bruce A.; Ginger, Kathryn M.
1994-01-01
A study has been made of the relationship between mean cloud radiative properties and cloud fraction in stratocumulus cloud systems. The analysis is of several Land Resources Satellite System (LANDSAT) images and three-hourly International Satellite Cloud Climatology Project (ISCCP) C-1 data during daylight hours for two grid boxes covering an area typical of a general circulation model (GCM) grid increment. Cloud properties were inferred from the LANDSAT images using two thresholds and several pixel resolutions ranging from roughly 0.0625 km to 8 km. At the finest resolution, the analysis shows that mean cloud optical depth (or liquid water path) increases somewhat with increasing cloud fraction up to 20% cloud coverage. More striking, however, is the lack of correlation between the two quantities for cloud fractions between roughly 0.2 and 0.8. When the scene is essentially overcast, the mean cloud optical depth tends to be higher. Coarse resolution LANDSAT analysis and the ISCCP 8-km data show a lack of correlation between mean cloud optical depth and cloud fraction for coverage less than about 90%. This study shows that there is perhaps a local mean liquid water path (LWP) associated with partly cloudy areas of stratocumulus clouds. A method has been suggested to use this property to construct the cloud fraction parameterization in a GCM when the model computes a grid-box-mean LWP.
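The suggested parameterization can be sketched directly: if partly cloudy stratocumulus regions have a roughly constant in-cloud LWP, a GCM can diagnose cloud fraction from its grid-box-mean LWP. The in-cloud value below is an arbitrary placeholder, not a number from the study.

```python
# Sketch of the suggested parameterization: with a roughly constant
# in-cloud LWP for partly cloudy stratocumulus, cloud fraction follows
# from the grid-box-mean LWP. The in-cloud value is a placeholder.

def cloud_fraction(lwp_gridmean, lwp_incloud=100.0):
    """Cloud fraction from grid-mean LWP (g m^-2), capped at overcast."""
    return min(1.0, lwp_gridmean / lwp_incloud)

print(cloud_fraction(40.0))    # -> 0.4
print(cloud_fraction(250.0))   # -> 1.0 (overcast)
```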
NASA Technical Reports Server (NTRS)
Kahre, Melinda A.; Hollingsworth, Jeffery
2012-01-01
The dust cycle is a critically important component of Mars' current climate system. Dust is present in the atmosphere of Mars year-round but the dust loading varies with season in a generally repeatable manner. Dust has a significant influence on the thermal structure of the atmosphere and thus greatly affects atmospheric circulation. The dust cycle is the most difficult of the three climate cycles (CO2, water, and dust) to model realistically with general circulation models. Until recently, numerical modeling investigations of the dust cycle have typically not included the effects of couplings to the water cycle through cloud formation. In the Martian atmosphere, dust particles likely provide the seed nuclei for heterogeneous nucleation of water ice clouds. As ice coats atmospheric dust grains, the newly formed cloud particles exhibit different physical and radiative characteristics. Thus, the coupling between the dust and water cycles likely affects the distributions of dust, water vapor and water ice, and thus atmospheric heating and cooling and the resulting circulations. We use the NASA Ames Mars GCM to investigate the effects of radiatively active water ice clouds on surface stress and the potential for dust lifting. The model includes a state-of-the-art water ice cloud microphysics package and a radiative transfer scheme that accounts for the radiative effects of CO2 gas, dust, and water ice clouds. We focus on simulations that are radiatively forced by a prescribed dust map, and we compare simulations that do and do not include radiatively active clouds. Preliminary results suggest that the magnitude and spatial patterns of surface stress (and thus dust lifting potential) are substantially influenced by the radiative effects of water ice clouds.
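The link between surface stress and dust lifting is usually expressed as a threshold rule, sketched below in generic form (this is an illustration of the idea, not the NASA Ames GCM's actual lifting scheme or its parameter values).

```python
# Generic threshold-style dust-lifting rule of the kind used in Mars
# GCMs (a sketch, not the NASA Ames scheme or its tuned parameters):
# no lifting until surface stress exceeds a threshold, then the lifted
# flux grows with the excess stress.

def dust_lifting_flux(tau, tau_threshold=0.01, alpha=1.0):
    """Lifted-dust flux (arbitrary units) from surface stress tau (Pa)."""
    excess = tau - tau_threshold
    return alpha * max(0.0, excess)

print(dust_lifting_flux(0.005))            # below threshold -> 0.0
print(round(dust_lifting_flux(0.03), 6))   # above threshold -> 0.02
```

Because of the threshold, small changes in the spatial pattern of surface stress, such as those induced by radiatively active clouds, can switch lifting on or off entirely at a given location.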
NASA Astrophysics Data System (ADS)
Duribreux, I.; Saadi, M.; Obbade, S.; Dion, C.; Abraham, F.
2003-05-01
Two new alkali uranyl oxychloro vanadates M7(UO2)8(VO4)2O8Cl with M = Rb, Cs have been synthesized by solid-state reactions and their structures determined from single-crystal X-ray diffraction data. They crystallize in the orthorhombic system with space groups Pmcn and Pmmn, respectively. The a and b unit-cell parameters are almost identical in both compounds, while the c parameter in the Rb compound is doubled: Rb: a = 21.427(5) Å, b = 11.814(3) Å, c = 14.203(3) Å, V = 3595.1(1) Å3, Z = 4, ρmes = 5.93(2) g/cm3, ρcal = 5.82(1) g/cm3; Cs: a = 21.458(3) Å, b = 11.773(2) Å, c = 7.495(1) Å, V = 1893.6(5) Å3, Z = 2, ρmes = 6.09(2) g/cm3, ρcal = 6.11(1) g/cm3. A full-matrix least-squares refinement yielded R1 = 0.0221, wR2 = 0.0562 for 2675 independent reflections (Rb) and R1 = 0.0386, wR2 = 0.1042 for 2446 independent reflections (Cs). Data were collected with Mo(Kα) radiation and a charge-coupled device (CCD) detector on a Bruker diffractometer. Both structures are characterized by [(UO2)8(VO4)2O8Cl]n7n- layers parallel to the (001) plane. The layers are built up from VO4 tetrahedra, UO7 and UO6Cl pentagonal bipyramids, and distorted UO6 octahedra. The UO7 and UO6Cl pentagonal bipyramids are associated by sharing opposite equatorial edges to form infinite (UO5-UO4Cl-UO5)n chains parallel to the a axis. These chains are linked together by VO4 tetrahedra and UO6 octahedra, through corner sharing with UO7 and Cl sharing with UO6Cl. The two structures differ simply in the symmetry of the layers: the unit cell contains one centrosymmetric layer in the Cs compound, whereas in the two-layer unit cell of the Rb compound two non-centrosymmetric consecutive layers are related by an inversion center. The layers appear to be held together by the alkali ions. The mobility of the M+ ions within the interlayer space in M7(UO2)8(VO4)2O8Cl and carnotite-analog compounds is compared.
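The reported calculated density can be verified from the cell data via ρ = Z·M/(N_A·V). Using standard atomic masses and the Rb-compound cell above, the sketch below reproduces ρcal = 5.82 g/cm3.

```python
# Consistency check of the reported calculated density via
# rho = Z * M / (N_A * V), using standard atomic masses for
# Rb7(UO2)8(VO4)2O8Cl and the Rb-compound cell volume above.

N_A = 6.02214e23   # Avogadro constant, mol^-1
mass = {'Rb': 85.468, 'U': 238.029, 'V': 50.942, 'O': 15.999, 'Cl': 35.453}

# Rb7(UO2)8(VO4)2O8Cl -> 7 Rb, 8 U, 2 V, 16 + 8 + 8 = 32 O, 1 Cl
M = (7 * mass['Rb'] + 8 * mass['U'] + 2 * mass['V']
     + 32 * mass['O'] + 1 * mass['Cl'])   # g/mol

Z, V_cell = 4, 3595.1e-24                 # formula units, cell volume in cm^3
rho = Z * M / (N_A * V_cell)
print(round(rho, 2))                      # -> 5.82 g/cm^3, matching rho_cal
```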
NASA Technical Reports Server (NTRS)
Stanfield, Ryan E.; Dong, Xiquan; Xi, Baike; Kennedy, Aaron; Del Genio, Anthony D.; Minnis, Patrick; Jiang, Jonathan H.
2014-01-01
Although many improvements have been made in phase 5 of the Coupled Model Intercomparison Project (CMIP5), clouds remain a significant source of uncertainty in general circulation models (GCMs) because their structural and optical properties are strongly dependent upon interactions between aerosol/cloud microphysics and dynamics that are unresolved in such models. Recent changes to the planetary boundary layer (PBL) turbulence and moist convection parameterizations in the NASA GISS Model E2 atmospheric GCM (post-CMIP5, hereafter P5) have improved cloud simulations significantly compared to its CMIP5 (hereafter C5) predecessor. A study has been performed to evaluate these changes between the P5 and C5 versions of the GCM, both of which used prescribed sea surface temperatures. P5 and C5 simulated cloud fraction (CF), liquid water path (LWP), ice water path (IWP), cloud water path (CWP), precipitable water vapor (PWV), and relative humidity (RH) have been compared to multiple satellite observations including the Clouds and the Earth's Radiant Energy System-Moderate Resolution Imaging Spectroradiometer (CERES-MODIS, hereafter CM), CloudSat-Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO; hereafter CC), Atmospheric Infrared Sounder (AIRS), and Advanced Microwave Scanning Radiometer for Earth Observing System (AMSR-E). Although some improvements are observed in the P5 simulation on a global scale, large improvements have been found over the southern midlatitudes (SMLs), where correlations increased and both bias and root-mean-square error (RMSE) significantly decreased, in relation to the previous C5 simulation, when compared to observations. Changes to the PBL scheme have resulted in improved total column CFs, particularly over the SMLs where marine boundary layer (MBL) CFs have increased by nearly 20% relative to the previous C5 simulation. Globally, the P5 simulated CWPs are 25 g m-2 lower than the previous C5 results.
The P5 version of the GCM simulates PWV and RH higher than its C5 counterpart and agrees well with the AMSR-E and AIRS observations. The moister atmospheric conditions simulated by P5 are consistent with the CF comparison and provide a strong support for the increase in MBL clouds over the SMLs. Over the tropics, the P5 version of the GCM simulated total column CFs and CWPs are slightly lower than the previous C5 results, primarily as a result of the shallower tropical boundary layer in P5 relative to C5 in regions outside the marine stratocumulus decks.
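Minimal versions of the evaluation metrics used above (bias, RMSE, Pearson correlation) for comparing model output against observations; the sample values are made up for illustration.

```python
# Minimal implementations of the model-evaluation metrics named above:
# bias, root-mean-square error (RMSE), and Pearson correlation.

import math

def bias(model, obs):
    return sum(m - o for m, o in zip(model, obs)) / len(model)

def rmse(model, obs):
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(model))

def correlation(model, obs):
    n = len(model)
    mm, mo = sum(model) / n, sum(obs) / n
    cov = sum((m - mm) * (o - mo) for m, o in zip(model, obs))
    sm = math.sqrt(sum((m - mm) ** 2 for m in model))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    return cov / (sm * so)

model = [0.6, 0.7, 0.5, 0.8]   # illustrative cloud fractions
obs   = [0.5, 0.6, 0.4, 0.7]
print(round(bias(model, obs), 3))          # -> 0.1 (model too cloudy)
print(round(correlation(model, obs), 3))   # -> 1.0 (perfectly correlated)
```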
A novel conversion method for radiographic guide into surgical guide.
Peng, Yao-Te; Tseng, Chung-Chih; Du, Yi-Chun; Chen, Yen-Nien; Chang, Chih-Han
2017-06-01
The study proposed a novel method for converting a radiographic guide into a surgical guide and evaluated its accuracy. The radiographic guide was modified with the addition of index rods for the geometric conversion method (GCM). Planning implants were projected on geometric projection planes, and the implant positions were measured. The radiographic guide was converted into a surgical guide using a generic bench drill machine with the GCM data. Two experiments were designed to validate the GCM. (1) In vitro test: Twenty implants were placed in five edentulous dental models using the GCM (group 1) and the stereolithography (SLA) method (group 2), respectively. The deviations between planned and placed implants were calculated, and the precision error (PE) value was calculated to evaluate the stability of the GCM and SLA. (2) In vivo test: Nine edentulous subjects were selected for clinical implant surgery with the GCM guide. Two levels of index rods on the radiographic guides were prepared for surgical guide forming. The differences between the planned and actual implants were calculated at the implant head, apex, and angulation. The in vitro test revealed no significant differences in the planned and placed angulations between groups 1 and 2 (P > .05). The PE was not significantly different between groups 1 and 2 (P > .05). The in vivo test revealed successful treatment of the subjects, and 16 implant sites were evaluated. The results indicated that the GCM guide could achieve three-dimensional (3D) offset deviations of 1.03 ± 0.27 mm and 1.17 ± 0.24 mm at the implant head and apex, respectively, and 1.37° ± 0.21° for the 3D angulation. The novel method for converting a radiographic guide into a surgical guide appears accurate and stable compared with SLA. © 2017 Wiley Periodicals, Inc.
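The deviation measures reported above reduce to two standard computations: the Euclidean 3D offset between planned and placed points, and the angle between the two implant axes. A sketch with made-up illustrative coordinates:

```python
# Sketch of the deviation measures reported above: 3-D offset between
# planned and placed implant points, and 3-D angulation between the two
# implant axes. Coordinates are made-up illustrative values (mm).

import math

def offset(p, q):
    return math.dist(p, q)                 # Euclidean 3-D distance

def angulation_deg(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

planned_head, placed_head = (0.0, 0.0, 0.0), (0.6, 0.8, 0.0)
planned_axis = (0.0, 0.0, 1.0)
placed_axis = (0.0, math.sin(math.radians(1.4)), math.cos(math.radians(1.4)))

print(round(offset(planned_head, placed_head), 2))          # -> 1.0 mm
print(round(angulation_deg(planned_axis, placed_axis), 1))  # -> 1.4 deg
```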
Main functions, recent updates, and applications of Synchrotron Radiation Workshop code
NASA Astrophysics Data System (ADS)
Chubar, Oleg; Rakitin, Maksim; Chen-Wiegart, Yu-Chen Karen; Chu, Yong S.; Fluerasu, Andrei; Hidas, Dean; Wiegart, Lutz
2017-08-01
The paper presents an overview of the main functions and new application examples of the "Synchrotron Radiation Workshop" (SRW) code. SRW supports high-accuracy calculations of different types of synchrotron radiation, and simulations of propagation of fully-coherent radiation wavefronts, partially-coherent radiation from a finite-emittance electron beam of a storage ring source, and time-/frequency-dependent radiation pulses of a free-electron laser, through X-ray optical elements of a beamline. An extended library of physical-optics "propagators" for different types of reflective, refractive and diffractive X-ray optics with its typical imperfections, implemented in SRW, enable simulation of practically any X-ray beamline in a modern light source facility. The high accuracy of calculation methods used in SRW allows for multiple applications of this code, not only in the area of development of instruments and beamlines for new light source facilities, but also in areas such as electron beam diagnostics, commissioning and performance benchmarking of insertion devices and individual X-ray optical elements of beamlines. Applications of SRW in these areas, facilitating development and advanced commissioning of beamlines at the National Synchrotron Light Source II (NSLS-II), are described.
Zhang, Kun; Tang, Wenhui; Fu, Kunkun
2018-01-16
Carbon fiber-reinforced polymer (CFRP) composites have been increasingly used in spacecraft applications. Spacecraft may encounter high-energy-density X-ray radiation in outer space that can cause severe damage. To protect spacecraft from such unexpected damage, it is essential to predict the dynamic behavior of CFRP composites under X-ray radiation. In this study, we developed an in-house three-dimensional explicit finite element (FEM) code to investigate the dynamic responses of CFRP composite under X-ray radiation for the first time, by incorporating a modified PUFF equation of state. First, the blow-off impulse (BOI) momentum of an aluminum panel was predicted by our FEM code and compared with an existing radiation experiment. Then, the FEM code was utilized to determine the dynamic behavior of a CFRP composite under various radiation conditions. It was found that the numerical result was comparable with the experimental one. Furthermore, the CFRP composite was more effective than the aluminum panel in reducing radiation-induced pressure and BOI momentum. The numerical results also revealed that a 1 keV X-ray led to vaporization of surface materials and a high-magnitude compressive stress wave, whereas a low-magnitude stress wave was generated with no surface vaporization when a 3 keV X-ray was applied.
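The photon-energy dependence described above follows from exponential attenuation of the deposited fluence: softer X-rays have a larger mass attenuation coefficient, so they concentrate energy near the surface. The sketch below uses an illustrative depth-dose relation and made-up coefficients, not the paper's FEM/PUFF model.

```python
# Illustrative depth-dose sketch (not the paper's FEM/PUFF model): for an
# exponentially attenuated X-ray fluence, specific energy deposition is
# D(x) = Phi * mu * exp(-mu * rho * x). Softer X-rays (larger mu)
# concentrate energy near the surface, which is why a 1 keV pulse can
# vaporize surface material while a 3 keV pulse does not.

import math

def depth_dose(phi, mu, rho, x):
    """Deposited energy per unit mass at depth x.

    phi : fluence (J/cm^2), mu : mass attenuation coefficient (cm^2/g),
    rho : density (g/cm^3), x : depth (cm)
    """
    return phi * mu * math.exp(-mu * rho * x)

phi, rho = 100.0, 1.6        # hypothetical fluence and CFRP density
soft, hard = 2.0e3, 1.0e2    # illustrative mu for ~1 keV vs ~3 keV photons

surface_soft = depth_dose(phi, soft, rho, 0.0)   # J/g at the surface
surface_hard = depth_dose(phi, hard, rho, 0.0)
print(surface_soft > surface_hard)   # -> True: soft X-rays dose surface harder
```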
Understanding Accretion Disks through Three Dimensional Radiation MHD Simulations
NASA Astrophysics Data System (ADS)
Jiang, Yan-Fei
I study the structures and thermal properties of black hole accretion disks in the radiation pressure dominated regime. Angular momentum transfer in the disk is provided by the turbulence generated by the magneto-rotational instability (MRI), which is calculated self-consistently with a recently developed 3D radiation magneto-hydrodynamics (MHD) code based on Athena. This code, developed by my collaborators and myself, couples both the radiation momentum and energy source terms with the ideal MHD equations by modifying the standard Godunov method to handle the stiff radiation source terms. We solve the two momentum equations of the radiation transfer equations with a variable Eddington tensor (VET), which is calculated with a time independent short characteristic module. This code is well tested and accurate in both optically thin and optically thick regimes. It is also accurate for both radiation pressure and gas pressure dominated flows. With this code, I find that when photon viscosity becomes significant, the ratio between Maxwell stress and Reynolds stress from the MRI turbulence can increase significantly with radiation pressure. The thermal instability of the radiation pressure dominated disk is then studied with vertically stratified shearing box simulations. Unlike the previous results claiming that the radiation pressure dominated disk with MRI turbulence can reach a steady state without showing any unstable behavior, I find that the radiation pressure dominated disks always either collapse or expand until we have to stop the simulations. During the thermal runaway, the heating and cooling rates from the simulations are consistent with the general criterion of thermal instability. However, details of the thermal runaway are different from the predictions of the standard alpha disk model, as many assumptions in that model are not satisfied in the simulations. We also identify the key reasons why previous simulations do not find the instability. 
The thermal instability has many important implications for understanding the observations of both X-ray binaries and Active Galactic Nuclei (AGNs). However, direct comparisons between observations and the simulations require global radiation MHD simulations, which will be the main focus of my future work.
NASA Technical Reports Server (NTRS)
Helfand, H. M.
1985-01-01
Methods being used to increase the horizontal and vertical resolution and to implement more sophisticated parameterization schemes for general circulation models (GCMs) run on newer, more powerful computers are described. Attention is focused on the NASA Goddard Laboratory for Atmospheres fourth-order GCM. A new planetary boundary layer (PBL) model has been developed which features explicit resolution of two or more layers. Numerical models are presented for parameterizing the turbulent vertical heat, momentum, and moisture fluxes at the earth's surface and between the layers in the PBL model. An extended Monin-Obukhov similarity scheme is applied to express the relationships between the lowest levels of the GCM and the surface fluxes. On-line weather prediction experiments are to be run to test the effects of the higher resolution thereby obtained for dynamic atmospheric processes.
Yilmaz, Mehmet; Isaoglu, Unal; Uslu, Turan; Yildirim, Kadir; Seven, Bedri; Akcay, Fatih; Hacimuftuoglu, Ahmet
2013-01-01
Objectives: In this study, the effect of methylprednisolone on bone mineral density (BMD) was investigated in rats with ovariectomy-induced bone loss and suppressed endogenous adrenaline levels, and compared with alendronate. Materials and Methods: The severity of bone loss in the examined material (femur bones) was evaluated by BMD measurement. Results: The group with the highest BMD value was the metyrosine + methylprednisolone combination (0.151 g/cm2), while that with the lowest BMD was methylprednisolone (0.123 g/cm2). Alendronate was effective only when used alone in ovariectomized rats (0.144 g/cm2), not when used in combination with methylprednisolone (0.124 g/cm2). In the ovariectomized rat group which received only metyrosine, the BMD value was statistically indistinguishable from that of the ovariectomized control group. Conclusions: Methylprednisolone protected against bone loss in rats whose adrenaline levels were suppressed by metyrosine. PMID:24014908
DOE Office of Scientific and Technical Information (OSTI.GOV)
Modest, Michael
The effects of radiation in particle-laden flows were the object of the present research. The presence of particles increases optical thickness substantially, making the “optically thin” approximation in most cases a very poor assumption. However, since radiation fluxes peak at intermediate optical thicknesses, overall radiative effects may not necessarily be stronger than in gas combustion. Also, the spectral behavior of particle radiation properties is much more benign, making spectral models simpler (and making the assumption of a gray radiator halfway acceptable, at least for fluidized beds when gas radiation is not large). On the other hand, particles scatter radiation, making the radiative transfer equation (RTE) much more difficult to solve. The research carried out in this project encompassed three general areas: (i) assessment of relevant radiation properties of particle clouds encountered in fluidized bed and pulverized coal combustors, (ii) development of proper spectral models for gas–particulate mixtures for various types of two-phase combustion flows, and (iii) development of a Radiative Transfer Equation (RTE) solution module for such applications. The resulting models were validated against artificial cases since open-literature experimental data were not available. The final models are in modular form tailored toward maximum portability, and were incorporated into two research codes: (i) the open-source CFD code OpenFOAM, which we have extensively used in our previous work, and (ii) the open-source multi-phase flow code MFIX, which is maintained by NETL.
Importance biasing scheme implemented in the PRIZMA code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kandiev, I.Z.; Malyshkin, G.N.
1997-12-31
The PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has extensive capabilities for describing geometry, sources, and material composition, and for computing user-specified parameters. It can follow particle cascades (including neutrons, photons, electrons, positrons, and heavy charged particles), taking possible transmutations into account. An importance biasing scheme was implemented to solve problems that require calculating functionals related to small probabilities (for example, radiation shielding and detection problems). The scheme adapts the trajectory-building algorithm to the peculiarities of the problem.
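PRIZMA's specific biasing algorithm is not given in the abstract; the sketch below is a generic, minimal illustration of the two weight-control moves that importance biasing schemes of this kind combine, splitting and Russian roulette (function names and thresholds are illustrative assumptions, not PRIZMA code):

```python
import numpy as np

def russian_roulette(w, w_min, w_survive, rng):
    """Terminate a low-weight history without bias: the particle survives
    with probability w / w_survive and then carries weight w_survive, so
    the expected weight (w / w_survive) * w_survive equals the input w."""
    if w >= w_min:
        return w
    return w_survive if rng.random() < w / w_survive else 0.0

def split(w, w_max):
    """Split a high-weight history into n copies of equal weight; the
    total weight n * (w / n) = w is preserved exactly."""
    if w <= w_max:
        return [w]
    n = int(np.ceil(w / w_max))
    return [w / n] * n
```

In an importance-biased walk, histories headed toward the region of interest are split (more samples where they matter) and histories headed away are rouletted; both moves leave every estimator unbiased because the expected weight is conserved.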
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Harrington, Joseph; Blecic, Jasmina; Stemm, Madison M.; Lust, Nate B.; Foster, Andrew S.; Rojo, Patricio M.; Loredo, Thomas J.
2014-11-01
Multi-wavelength secondary-eclipse and transit depths probe the thermo-chemical properties of exoplanets. In recent years, several research groups have developed retrieval codes to analyze the existing data and study the prospects of future facilities. However, the scientific community has limited access to these packages. Here we premiere the open-source Bayesian Atmospheric Radiative Transfer (BART) code. We discuss the key aspects of the radiative-transfer algorithm and the statistical package. The radiation code includes line databases for all HITRAN molecules, high-temperature H2O, TiO, and VO, as well as a preprocessor for adding further line databases without recompiling the radiation code. Collision-induced absorption lines are available for H2-H2 and H2-He. The parameterized thermal and molecular abundance profiles can be modified arbitrarily without recompilation. The generated spectra are integrated over arbitrary bandpasses for comparison to data. BART's statistical package, Multi-core Markov-chain Monte Carlo (MC3), is a general-purpose MCMC module. MC3 implements the Differential-Evolution Markov-chain Monte Carlo algorithm (ter Braak 2006, 2009). MC3 converges 20-400 times faster than the usual Metropolis-Hastings MCMC algorithm, and in addition uses the Message Passing Interface (MPI) to parallelize the MCMC chains. We apply the BART retrieval code to the HD 209458b data set to estimate the planet's temperature profile and molecular abundances. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
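The differential-evolution move of ter Braak (2006) that MC3 implements is compact enough to sketch; the snippet below is a minimal illustration of that proposal, not BART/MC3 source code (the function name is invented, and the MPI parallelization is omitted):

```python
import numpy as np

def demc_step(chains, log_post, gamma=None, eps=1e-4, rng=None):
    """One Differential-Evolution MCMC update of every chain (ter Braak 2006).

    chains   : (n_chains, n_dim) array of current positions
    log_post : callable returning the log-posterior of one position
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = chains.shape
    if gamma is None:
        gamma = 2.38 / np.sqrt(2 * d)  # near-optimal jump scale for Gaussian targets
    new = chains.copy()
    for i in range(n):
        # the jump vector is the difference of two other, randomly chosen chains
        r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
        proposal = chains[i] + gamma * (chains[r1] - chains[r2]) \
                   + eps * rng.standard_normal(d)
        # standard Metropolis accept/reject keeps the target invariant
        if np.log(rng.random()) < log_post(proposal) - log_post(chains[i]):
            new[i] = proposal
    return new
```

Because the jump vector is built from the current spread of the chain ensemble, the proposal scale and orientation self-tune to the posterior, which is the source of the quoted speedup over fixed-step Metropolis-Hastings.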
Numerical Tests and Properties of Waves in Radiating Fluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, B M; Klein, R I
2009-09-03
We discuss the properties of an analytical solution for waves in radiating fluids, with a view towards its implementation as a quantitative test of radiation hydrodynamics codes. A homogeneous radiating fluid in local thermodynamic equilibrium is periodically driven at the boundary of a one-dimensional domain, and the solution describes the propagation of the waves thus excited. Two modes are excited for a given driving frequency, generally referred to as a radiative acoustic wave and a radiative diffusion wave. While the analytical solution is well known, several features are highlighted here that require care during its numerical implementation. We compare the solution in a wide range of parameter space to a numerical integration with a Lagrangian radiation hydrodynamics code. Our most significant observation is that flux-limited diffusion does not preserve causality for waves on a homogeneous background.
NASA Technical Reports Server (NTRS)
Reddell, Brandon
2015-01-01
Designing hardware to operate in the space radiation environment is a very difficult and costly activity. Ground-based particle accelerators can be used to test for exposure to the radiation environment, one species at a time; however, the actual space environment cannot be duplicated because of the range of energies and isotropic nature of space radiation. The FLUKA Monte Carlo code is an integrated physics package based at CERN that has been under development for the last 40+ years and includes the most up-to-date fundamental physics theory and particle physics data. This work presents an overview of FLUKA and how it has been used in conjunction with ground-based radiation testing for NASA to improve our understanding of secondary particle environments resulting from the interaction of space radiation with matter.
A unified radiative magnetohydrodynamics code for lightning-like discharge simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Qiang, E-mail: cq0405@126.com; Chen, Bin, E-mail: emcchen@163.com; Xiong, Run
2014-03-15
A two-dimensional Eulerian finite difference code is developed for solving the non-ideal magnetohydrodynamic (MHD) equations, including the effects of a self-consistent magnetic field, thermal conduction, resistivity, gravity, and radiation transfer, which, when combined with specified pulse-current models and plasma equations of state, can be used as a unified lightning return-stroke solver. The differential equations are written in covariant form in cylindrical geometry and kept in conservative form, which allows high-accuracy shock-capturing schemes to be applied naturally to the lightning channel configuration. In this code, the fifth-order weighted essentially non-oscillatory (WENO) scheme combined with the Lax-Friedrichs flux-splitting method is used for computing the convection terms of the MHD equations. A third-order total variation diminishing (TVD) Runge-Kutta integrator is used to maintain consistent time-space accuracy. The numerical algorithms for the non-ideal terms, e.g., artificial viscosity, resistivity, and thermal conduction, are incorporated via operator splitting. The code assumes the radiation is in local thermodynamic equilibrium with the plasma components, and a flux-limited diffusion algorithm with grey opacities is implemented for computing the radiation transfer. The transport coefficients and equation of state are obtained from detailed particle population distribution calculations, which makes the numerical model self-consistent. The code is validated against Sedov blast solutions and then used for lightning return-stroke simulations with peak currents of 20 kA, 30 kA, and 40 kA. The results show that the numerical model is consistent with observations and previous numerical results. The evolution of the population distributions and energy conservation are also discussed.
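The third-order TVD Runge-Kutta integrator named in the abstract is the standard Shu-Osher scheme, compact enough to sketch; below, `L` stands in for the spatial operator (in the paper's setting, the WENO-discretised flux divergence). The coefficients are the scheme's; the rest is an illustrative sketch, not the paper's code:

```python
def tvd_rk3_step(u, dt, L):
    """One step of the third-order TVD (strong-stability-preserving)
    Runge-Kutta scheme of Shu & Osher. Each stage is a convex combination
    of forward-Euler updates, which is what preserves the TVD property."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))
```

Because every stage is a convex combination of forward-Euler steps, the integrator inherits whatever stability bound forward Euler has for the chosen spatial discretisation, which is why it pairs naturally with WENO fluxes.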
The Helicopter Antenna Radiation Prediction Code (HARP)
NASA Technical Reports Server (NTRS)
Klevenow, F. T.; Lynch, B. G.; Newman, E. H.; Rojas, R. G.; Scheick, J. T.; Shamansky, H. T.; Sze, K. Y.
1990-01-01
The first nine months' effort in the development of a user-oriented computer code, referred to as the HARP code, for analyzing the radiation from helicopter antennas is described. The HARP code uses modern computer graphics to aid in the description and display of the helicopter geometry. At low frequencies the helicopter is modeled by polygonal plates, and the method of moments is used to compute the desired patterns. At high frequencies the helicopter is modeled by a composite ellipsoid and flat plates, and computations are made using the geometrical theory of diffraction. The HARP code will provide a user-friendly interface, employing modern computer graphics, to help the user describe the helicopter geometry, select the method of computation, construct the desired high- or low-frequency model, and display the results.
VizieR Online Data Catalog: Radiative forces for stellar envelopes (Seaton, 1997)
NASA Astrophysics Data System (ADS)
Seaton, M. J.; Yan, Y.; Mihalas, D.; Pradhan, A. K.
2000-02-01
(1) Primary data files, stages.zz. These files give data for the calculation of radiative accelerations, GRAD, for elements with nuclear charge zz. Data are available for zz = 06, 07, 08, 10, 11, 12, 13, 14, 16, 18, 20, 24, 25, 26 and 28. Calculations are made using data from the Opacity Project (see papers SYMP and IXZ). The data are given for each ionisation stage, j. They are tabulated on a mesh of (T, Ne, CHI), where T is temperature, Ne is electron density and CHI is an abundance multiplier. The files include data for ionisation fractions for each (T, Ne). The file contents are described in the paper ACC and as comments in the code add.f.
(2) Code add.f. This reads a file stages.zz and creates a file acc.zz giving radiative accelerations averaged over ionisation stages. The code prompts for the names of input and output files. The code, as provided, gives equal weights (as defined in the paper ACC) to all stages. The weights are set in SUBROUTINE WEIGHTS, which could be changed to give any weights preferred by the user. The dependence of diffusion coefficients on ionisation stage is given by a function ZET, which is defined in SUBROUTINE ZETA. The expressions used for ZET are as given in the paper. The user can change that subroutine if other expressions are preferred. The output file contains values, ZETBAR, of ZET averaged over ionisation stages.
(3) Files acc.zz. Radiative accelerations computed using add.f as provided. The user will need to run the code add.f only if it is required to change the subroutines WEIGHTS or ZETA. The contents of the files acc.zz are described in the paper ACC and in comments contained in the code add.f.
(4) Code accfit.f. This code gives radiative accelerations, and some related data, for a stellar model. Methods used to interpolate data to the values of (T, RHO) for the stellar model are based on those used in the code opfit.for (see the paper OPF). The executable file accfit.com runs accfit.f. It uses a list of files given in accfit.files (see that file for further description). The mesh used for the abundance multiplier CHI on the output file will generally be finer than that used in the input files acc.zz. The mesh to be used is specified in a file chi.dat. For a test run, the stellar model used is given in the file 10000_4.2 (Teff = 10000 K, LOG10(g) = 4.2). The output file from that test run is acc100004.2. The contents of the output file are described in the paper ACC and as comments in the code accfit.f.
(5) The code diff.f. This code reads the output file (e.g. acc100004.2) created by accfit.f. For any specified depth point in the model and value of CHI, it gives values of radiative accelerations, the quantity ZETBAR required for calculation of diffusion coefficients, and Rosseland-mean opacities. The code prompts for input data. It creates a file recording all data calculated. The code diff.f is intended for incorporation, as a set of subroutines, in codes for diffusion calculations. (1 data file).
Simulations of Rayleigh Taylor Instabilities in the presence of a Strong Radiative shock
NASA Astrophysics Data System (ADS)
Trantham, Matthew; Kuranz, Carolyn; Shvarts, Dov; Drake, R. P.
2016-10-01
Recent supernova Rayleigh-Taylor experiments on the National Ignition Facility (NIF) are relevant to the evolution of core-collapse supernovae in which red supergiant stars explode. Here we report simulations of these experiments using the CRASH code. The CRASH code, developed at the University of Michigan to design and analyze high-energy-density experiments, is an Eulerian code with block-adaptive mesh refinement, multigroup diffusive radiation transport, and electron heat conduction. We explore two cases, one in which the shock is strongly radiative, and another with negligible radiation. The experiments in all cases produced structures at embedded interfaces by the Rayleigh-Taylor instability. The more weakly shocked environment is cooler, and the instability grows classically. The strongly radiative shock produces a warm environment near the instability, ablates the interface, and alters the growth. We compare the simulated results with the experimental data and attempt to explain the differences. This work is funded by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, Grant Number DE-NA0002956.
Radiation Coupling with the FUN3D Unstructured-Grid CFD Code
NASA Technical Reports Server (NTRS)
Wood, William A.
2012-01-01
The HARA radiation code is fully-coupled to the FUN3D unstructured-grid CFD code for the purpose of simulating high-energy hypersonic flows. The radiation energy source terms and surface heat transfer, under the tangent slab approximation, are included within the fluid dynamic flow solver. The Fire II flight test, at the Mach-31 1643-second trajectory point, is used as a demonstration case. Comparisons are made with an existing structured-grid capability, the LAURA/HARA coupling. The radiative surface heat transfer rates from the present approach match the benchmark values within 6%. Although radiation coupling is the focus of the present work, convective surface heat transfer rates are also reported, and are seen to vary depending upon the choice of mesh connectivity and FUN3D flux reconstruction algorithm. On a tetrahedral-element mesh the convective heating matches the benchmark at the stagnation point, but under-predicts by 15% on the Fire II shoulder. Conversely, on a mixed-element mesh the convective heating over-predicts at the stagnation point by 20%, but matches the benchmark away from the stagnation region.
SEURAT: SPH scheme extended with ultraviolet line radiative transfer
NASA Astrophysics Data System (ADS)
Abe, Makito; Suzuki, Hiroyuki; Hasegawa, Kenji; Semelin, Benoit; Yajima, Hidenobu; Umemura, Masayuki
2018-05-01
We present a novel Lyman alpha (Lyα) radiative transfer code, SEURAT (SPH scheme Extended with Ultraviolet line RAdiative Transfer), where line scatterings are solved adaptively with the resolution of the smoothed particle hydrodynamics (SPH). The radiative transfer method implemented in SEURAT is based on a Monte Carlo algorithm in which the scattering and absorption by dust are also incorporated. We perform standard test calculations to verify the validity of the code: (i) emergent spectra from a static uniform sphere, (ii) emergent spectra from an expanding uniform sphere, and (iii) escape fraction from a dusty slab. Thereby, we demonstrate that our code solves the Lyα radiative transfer with sufficient accuracy. We emphasize that SEURAT can treat the transfer of Lyα photons even in highly complex systems that have significantly inhomogeneous density fields. The high adaptivity of SEURAT is desirable to solve the propagation of Lyα photons in the interstellar medium of young star-forming galaxies like Lyα emitters (LAEs). Thus, SEURAT provides a powerful tool to model the emergent spectra of Lyα emission, which can be compared to the observations of LAEs.
NASA Astrophysics Data System (ADS)
Buldyrev, S.; Davis, A.; Marshak, A.; Stanley, H. E.
2001-12-01
Two-stream radiation transport models, as used in all current GCM parameterization schemes, are mathematically equivalent to ``standard'' diffusion theory where the physical picture is a slow propagation of the diffuse radiation by Gaussian random walks. The space/time spread (technically, the Green function) of this diffusion process is described exactly by a Gaussian distribution; from the statistical physics viewpoint, this follows from the convergence of the sum of many (rescaled) steps between scattering events with a finite variance. This Gaussian picture follows directly from first principles (the radiative transfer equation) under the assumptions of horizontal uniformity and large optical depth, i.e., there is a homogeneous plane-parallel cloud somewhere in the column. The first-order effect of 3D variability of cloudiness, the main source of scattering, is to perturb the distribution of single steps between scatterings which, modulo the ``1-g'' rescaling, can be assumed effectively isotropic. The most natural generalization of the Gaussian distribution is the 1-parameter family of symmetric Lévy-stable distributions because the sum of many zero-mean random variables with infinite variance, but finite moments of order q < α (0 < α < 2), converge to them. It has been shown on heuristic grounds that for these Lévy-based random walks the typical number of scatterings is now ((1-g)τ)^α for transmitted light. The appearance of a non-rational exponent is why this is referred to as ``anomalous'' diffusion. Note that standard/Gaussian diffusion is retrieved in the limit α → 2. Lévy transport theory has been successfully used in the statistical physics literature to investigate a wide variety of systems with strongly nonlinear dynamics; these applications range from random advection in turbulent fluids to the erratic behavior of financial time-series and, most recently, self-regulating ecological systems.
We will briefly survey the state-of-the-art observations that offer compelling empirical support for the Lévy/anomalous diffusion model in atmospheric radiation: (1) high-resolution spectroscopy of differential absorption in the O2 A-band from ground; (2) temporal transient records of lightning strokes transmitted through clouds to a sensitive detector in space; and (3) the Gamma-distributions of optical depths derived from Landsat cloud scenes at 30-m resolution. We will then introduce a rigorous analytical formulation of Lévy/anomalous transport through finite media based on fractional derivatives and Sonin calculus. A remarkable result from this new theoretical development is an extremal property of the α = 1+ case (divergent mean-free-path), as is observed in the cloudy atmosphere. Finally, we will discuss the implications of anomalous transport theory for bulk 3D effects on the current enhanced absorption problem as well as its role as the basis of a next-generation GCM radiation parameterization.
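The Gaussian-versus-Lévy picture sketched above can be illustrated numerically: the snippet below samples symmetric α-stable steps with the Chambers-Mallows-Stuck formula and counts scatterings before a 1-D walker escapes a slab. This is a toy model under simplifying assumptions (isotropy, no absorption, unit scale); α = 2 reproduces the Gaussian case:

```python
import numpy as np

def stable_step(alpha, rng):
    """Symmetric alpha-stable deviate via the Chambers-Mallows-Stuck
    formula (valid for 0 < alpha <= 2, alpha != 1); alpha = 2 is Gaussian."""
    v = rng.uniform(-np.pi / 2, np.pi / 2)
    w = rng.exponential()
    return (np.sin(alpha * v) / np.cos(v) ** (1.0 / alpha)
            * (np.cos(v - alpha * v) / w) ** ((1.0 - alpha) / alpha))

def escape_steps(alpha, half_width, rng):
    """Scatterings before a walker started at the slab centre leaves [-L, L]."""
    x, n = 0.0, 0
    while abs(x) < half_width:
        x += stable_step(alpha, rng)
        n += 1
    return n
```

Heavy-tailed (α < 2) steps let a few long flights carry the photon out of the slab, so the mean escape count grows like L^α instead of L², i.e. fewer scatterings for transmitted light, as stated above.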
NASA Technical Reports Server (NTRS)
Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Govindaraju, Ravi C.
2002-01-01
The variable-resolution stretched-grid (SG) GEOS (Goddard Earth Observing System) GCM has been used for limited ensemble integrations with a relatively coarse, 60 to 100 km, regional resolution over the U.S. The experiments have been run for the 12-year period 1987-1998, which includes the recent ENSO cycles. Initial conditions 1-2 days apart are used for ensemble members. The goal of the experiments is to analyze the long-term SG-GCM ensemble integrations in terms of their potential for reducing the uncertainties of regional climate simulation while producing realistic mesoscales. The ensemble integration results are analyzed for both prognostic and diagnostic fields. Special attention is devoted to analyzing the variability of precipitation over the U.S. The internal variability of the SG-GCM has been assessed. The ensemble means appear to be closer to the verifying analyses than the individual ensemble members. The ensemble means capture realistic mesoscale patterns, especially those induced by orography. Two ENSO cycles have been analyzed in terms of their impact on the U.S. climate, especially on precipitation. The ability of the SG-GCM simulations to produce regional climate anomalies has been confirmed. However, the optimal size of the ensembles, which depends on the fine regional resolution used, is still to be determined. The SG-GCM ensemble simulations are performed as a preliminary stage of the international SGMIP (Stretched-Grid Model Intercomparison Project), which is under way with participation of the major centers and groups employing the SG approach for regional climate modeling.
NASA Astrophysics Data System (ADS)
Wan, Z.; Fatimah, S.; Shahar, S.; Noor, A. C.
2017-09-01
Mixed-oxide chromium-based catalysts were synthesized via the sol-gel method for the esterification of palm fatty acid distillate (PFAD) to produce fatty acid methyl ester (FAME). The reactions were conducted in a batch reactor at a reaction temperature of 160 °C for 4 h with a methanol-to-PFAD molar ratio of 3:1. The effects of the catalyst preparation conditions, namely the mixed-metal ratio and the calcination temperature, were studied. The various metal ratios of Cr:Mn (1:0, 0:1, 1:1, 1:2 and 2:1) and Cr:Ti (0:1, 1:1, 1:2 and 2:1) resulted in FAME densities ranging from 1.041 g/cm3 to 0.853 g/cm3 and from 1.107 g/cm3 to 0.836 g/cm3, respectively. The best catalysts were found to be those with a Cr:Ti metal ratio of 1:2 and a Cr:Mn metal ratio of 1:1. The calcination temperature of the mixed oxides, varied between 300 °C and 700 °C, affected the FAME density obtained in the reaction. Calcination at 500 °C gave the lowest FAME densities of 0.836 g/cm3 and 0.853 g/cm3 for the Cr:Ti and Cr:Mn mixed oxides, respectively. The density of the FAME is within the value range of the biodiesel fuel property. Thus, mixed oxides of Cr-Ti and Cr-Mn have good potential as heterogeneous catalysts for FAME synthesis from high-acid-value oils such as PFAD.
Aeolian dunes as ground truth for atmospheric modeling on Mars
Hayward, R.K.; Titus, T.N.; Michaels, T.I.; Fenton, L.K.; Colaprete, A.; Christensen, P.R.
2009-01-01
Martian aeolian dunes preserve a record of atmosphere/surface interaction on a variety of scales, serving as ground truth for both Global Climate Models (GCMs) and mesoscale climate models, such as the Mars Regional Atmospheric Modeling System (MRAMS). We hypothesize that the location of dune fields, expressed globally by geographic distribution and locally by dune centroid azimuth (DCA), may record the long-term integration of atmospheric activity across a broad area, preserving GCM-scale atmospheric trends. In contrast, individual dune morphology, as expressed in slipface orientation (SF), may be more sensitive to localized variations in circulation, preserving topographically controlled mesoscale trends. We test this hypothesis by comparing the geographic distribution, DCA, and SF of dunes with output from the Ames Mars GCM and, at a local study site, with output from MRAMS. When compared to the GCM: 1) dunes generally lie adjacent to areas with strongest winds, 2) DCA agrees fairly well with GCM modeled wind directions in smooth-floored craters, and 3) SF does not agree well with GCM modeled wind directions. When compared to MRAMS modeled winds at our study site: 1) DCA generally coincides with the part of the crater where modeled mean winds are weak, and 2) SFs are consistent with some weak, topographically influenced modeled winds. We conclude that: 1) geographic distribution may be valuable as ground truth for GCMs, 2) DCA may be useful as ground truth for both GCM and mesoscale models, and 3) SF may be useful as ground truth for mesoscale models. Copyright 2009 by the American Geophysical Union.
The role and contributions of geriatric care managers: care recipients' views.
Ortiz, Judith; Horne, Mary Ann
2013-01-01
To assess the value of geriatric care management (GCM) services from the perspective of the individuals who receive the care, the "care recipients," the opinions of these older adults, the current users of GCM services, were investigated by means of a cross-sectional mail survey. The study setting was the home of the care recipient of GCM services. This cross-sectional descriptive study applied a survey research design. Survey questions were developed around the following themes of the GCM role and function: (1) overall role, (2) health assistance function, (3) community resources assistance function, (4) advocacy function, and (5) contribution to the care recipients' quality of life. Survey questionnaires were distributed by mail to 179 care recipients of member organizations of the Florida Geriatric Care Management Association during the spring of 2012; a second mailing was completed in the fall of 2012. The survey results were analyzed using descriptive statistics. The respondents most frequently described the role of their GCMs as one of a health care professional, and more frequently described the GCM as providing the health assistance and advocacy functions. They indicated that the GCM greatly contributed to their quality of life. Geriatric care managers appear to be very valuable in assisting their clients with critical health-related situations, as well as with more routine health care matters. Not only are they called upon to assist with health care emergencies and their clients' hospital stays, but they also appear to serve an important role in facilitating physician-patient communications during the care recipient's routine visits to the doctor's office.
Schmidt, Mariane; Priemé, Anders; Stougaard, Peter
2007-04-01
A novel aerobic, Gram-negative, non-pigmented bacterium, GCM72(T), was isolated from the alkaline, low-saline ikaite columns in the Ikka Fjord, SW Greenland. Strain GCM72(T) is a motile, non-pigmented, amylase- and protease-producing, oxidase-positive, and catalase-negative bacterium, showing optimal growth at pH 9.2-10.0, at 15 degrees C, and at 3% (w/v) NaCl. Major fatty acids were C(12:0) 3-OH (12.2+/-0.1%), C(16:0) (18.0+/-0.1%), C(18:1)omega7c (10.7+/-0.5%), and summed feature 3 comprising C(16:1)omega7c and/or iso-C(15:0) 2-OH (36.3+/-0.7%). Phylogenetic analysis based on 16S rRNA gene sequences showed that isolate GCM72(T) was most closely related to Rheinheimera baltica and Alishewanella fetalis of the gamma-Proteobacteria, with 93% sequence similarity to both. The G+C content of DNA isolated from GCM72(T) was 49.9 mol%, and DNA-DNA hybridization between GCM72(T) and R. baltica was 9.5%. Fatty acid analysis and G+C content support a relationship primarily to R. baltica, but several distinguishing features, such as a negative catalase response and optimal growth at low temperature and high pH, together with the large phylogenetic distance and low DNA similarity to its closest relatives, lead us to propose a new genus, Arsukibacterium gen. nov., with the new species Arsukibacterium ikkense sp. nov. (type strain GCM72(T)).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohsuga, Ken; Takahashi, Hiroyuki R.
2016-02-20
We develop a numerical scheme for solving the equations of fully special relativistic radiation magnetohydrodynamics (MHD), in which the frequency-integrated, time-dependent radiation transfer equation is solved to calculate the specific intensity. The radiation energy density, the radiation flux, and the radiation stress tensor are obtained by angular quadrature of the intensity. In the present method, conservation of the total mass, momentum, and energy of the radiation magnetofluid is guaranteed. We treat not only isotropic scattering but also Thomson scattering. The numerical method for the MHD part is the same as in our previous work. The advection terms are solved explicitly, and the source terms, which describe the gas-radiation interaction, are integrated implicitly. Our code is suitable for massively parallel computing. We show that our code gives reasonable results in several numerical tests of propagating radiation and radiation hydrodynamics. In particular, the correct solution is obtained even in the optically very thin or moderately thin regimes, and the special relativistic effects are reproduced well.
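The moment step described above, taking the angular quadrature of the specific intensity to recover the radiation energy density, flux, and stress tensor, can be sketched as follows. This is a minimal illustration, not the authors' scheme: the Monte Carlo angle set, equal solid-angle weights, and code units (c = 1) are assumptions for the example.

```python
import numpy as np

c = 1.0  # speed of light in code units (assumption)

def radiation_moments(I, n_hat, w):
    """Angular quadrature of specific intensity I (shape [M]) over M
    discrete directions n_hat (shape [M, 3]) with solid-angle weights w.
    Returns radiation energy density E, flux vector F, stress tensor P."""
    E = np.sum(w * I) / c
    F = np.sum(w[:, None] * I[:, None] * n_hat, axis=0)
    P = np.einsum("m,m,mi,mj->ij", w, I, n_hat, n_hat) / c
    return E, F, P

# Illustrative check with an isotropic intensity field: the net flux
# should vanish and the stress tensor should satisfy trace(P) = E.
M = 2000
rng = np.random.default_rng(0)
v = rng.normal(size=(M, 3))
n_hat = v / np.linalg.norm(v, axis=1, keepdims=True)  # random unit vectors
w = np.full(M, 4.0 * np.pi / M)                       # equal solid-angle weights
I = np.ones(M)                                        # isotropic intensity

E, F, P = radiation_moments(I, n_hat, w)
```

Because every direction is a unit vector, trace(P) = E holds exactly for any quadrature set, which makes it a handy sanity check on a discretized angle grid.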
Benchmarking of Neutron Production of Heavy-Ion Transport Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Remec, Igor; Ronningen, Reginald M.; Heilbronn, Lawrence
Accurate prediction of radiation fields generated by heavy ion interactions is important in medical applications, space missions, and the design and operation of rare isotope research facilities. In recent years, several well-established computer codes in widespread use for particle and radiation transport calculations have been equipped with the capability to simulate heavy ion transport and interactions. To assess and validate these capabilities, we performed simulations of a series of benchmark-quality heavy ion experiments with the computer codes FLUKA, MARS15, MCNPX, and PHITS. We focus on comparisons of secondary neutron production. Results are encouraging; however, further improvements in models and codes and additional benchmarking are required.
Simulations of Laboratory Astrophysics Experiments using the CRASH code
NASA Astrophysics Data System (ADS)
Trantham, Matthew; Kuranz, Carolyn; Fein, Jeff; Wan, Willow; Young, Rachel; Keiter, Paul; Drake, R. Paul
2015-11-01
Computer simulations can assist in the design and analysis of laboratory astrophysics experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport, electron heat conduction and laser ray tracing. This poster will demonstrate some of the experiments the CRASH code has helped design or analyze including: Kelvin-Helmholtz, Rayleigh-Taylor, magnetized flows, jets, and laser-produced plasmas. This work is funded by the following grants: DEFC52-08NA28616, DE-NA0001840, and DE-NA0002032.
SU-A-210-01: Why Should We Learn Radiation Oncology Billing?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, H.
The purpose of this student annual meeting session is to address topics that are becoming more relevant to medical physicists but are not frequently addressed, especially for students and trainees just entering the field. The talk is divided into two parts: medical billing and regulations. Hsinshun Wu – Why should we learn radiation oncology billing? Many medical physicists do not like to be involved with medical billing or coding during their career. They believe billing is not their responsibility, and sometimes they even refuse to participate in the billing process if given the chance. This presentation will draw on a physicist's long career and share his experience that knowing medical billing is not only important and necessary for every young medical physicist, but that good billing knowledge can also be a valuable contribution to his or her development in medical physics. Learning Objectives: The audience will learn the basic definitions of the Current Procedural Terminology (CPT) codes used in a Radiation Oncology Department. Understand the differences between hospital coding and physician-based or freestanding coding. Apply proper CPT coding for each Radiation Oncology procedure. Each procedure with its specific CPT code will be discussed in detail. The talk will focus on the process of care and use actual workflow to understand each CPT code. Example coding of a typical Radiation Oncology procedure. Special procedure coding such as brachytherapy, proton therapy, radiosurgery, and SBRT. Maryann Abogunde – Medical physics opportunities at the Nuclear Regulatory Commission (NRC). The NRC's responsibilities include the regulation of medical uses of byproduct (radioactive) materials and oversight of medical use end-users (licensees) through a combination of regulatory requirements, licensing, safety oversight including inspection and enforcement, operational experience evaluation, and regulatory support activities.
This presentation will explore the career options for medical physicists in the NRC, how the NRC interacts with clinical medical physicists, and a physicist's experience as a regulator. Learning Objectives: Explore non-clinical career pathways for medical physics students and trainees at the Nuclear Regulatory Commission. Overview of NRC medical applications and medical use regulations. Understand the skills needed for physicists as regulators. Abogunde is funded to attend the meeting by her employer, the NRC.
Multi-Modal Traveler Information System - GCM Corridor Architecture Interface Control Requirements
DOT National Transportation Integrated Search
1997-10-31
The Multi-Modal Traveler Information System (MMTIS) project involves a large number of Intelligent Transportation System (ITS) related tasks. It involves research of all ITS initiatives in the Gary-Chicago-Milwaukee (GCM) Corridor which are currently...
Multi-Modal Traveler Information System - GCM Corridor Architecture Functional Requirements
DOT National Transportation Integrated Search
1997-11-17
The Multi-Modal Traveler Information System (MMTIS) project involves a large number of Intelligent Transportation System (ITS) related tasks. It involves research of all ITS initiatives in the Gary-Chicago-Milwaukee (GCM) Corridor which are currently...
Wang, H; Liu, C
2012-11-01
This meta-analysis investigated the association of the C677T polymorphism in the MTHFR gene with bone mineral density (BMD) and fracture risk. The results suggested that the C677T polymorphism was marginally associated with fracture risk. In addition, this polymorphism was modestly associated with BMD of the lumbar spine, femoral neck, total hip, and total body. The methylenetetrahydrofolate reductase (MTHFR) gene has been implicated in the regulation of BMD and, thus, may serve as a potential risk factor for the development of fracture. However, results have been inconsistent. In this study, a meta-analysis was performed to clarify the association of the C677T polymorphism in the MTHFR gene with BMD and fracture risk. Published literature from PubMed and EMBASE was searched for eligible publications. Pooled odds ratios (OR) or weighted mean differences (WMD) and 95% confidence intervals (CI) were calculated using a fixed- or random-effects model. Twenty studies (3,525 cases and 17,909 controls) were included in this meta-analysis. The TT genotype of the C677T polymorphism was marginally associated with an increased risk of fracture under the recessive model (TT vs. TC + CC: OR = 1.23, 95% CI 1.04-1.47). Using this model, similar results were found among East Asians (OR = 1.40, 95% CI 1.07-1.83), the female subpopulation (OR = 1.27, 95% CI 1.04-1.55), cohort studies (OR = 1.24, 95% CI 1.08-1.44), and subjects younger than 60 years (OR = 1.51, 95% CI 1.10-2.07). In addition, under the homozygous co-dominant model, there was a modest association of the C677T polymorphism with BMD of the lumbar spine (WMD = -0.017 g/cm^2; 95% CI -0.030 to -0.005 g/cm^2), femoral neck (WMD = -0.010 g/cm^2; 95% CI -0.017 to -0.003 g/cm^2), total hip (WMD = -0.013 g/cm^2; 95% CI -0.022 to -0.004 g/cm^2), and total body (WMD = -0.020 g/cm^2; 95% CI -0.027 to -0.013 g/cm^2). This meta-analysis suggested that the C677T polymorphism was marginally associated with fracture risk.
In addition, this polymorphism was modestly associated with BMD of lumbar spine, femoral neck, total hip, and total body, respectively.
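The pooled odds ratios quoted above come from standard inverse-variance meta-analysis. A minimal sketch of that machinery follows; the three-study input is synthetic and illustrative, not the data analyzed in the abstract.

```python
import math

def pool_odds_ratios(ors, cis):
    """Fixed-effect inverse-variance pooling of per-study odds ratios.
    Each 95% CI (lo, hi) implies a standard error on the log-OR scale:
    se = (ln(hi) - ln(lo)) / (2 * 1.96)."""
    log_ors = [math.log(o) for o in ors]
    ses = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for lo, hi in cis]
    weights = [1.0 / se**2 for se in ses]          # inverse-variance weights
    pooled_log = sum(w * x for w, x in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    pooled = math.exp(pooled_log)
    ci = (math.exp(pooled_log - 1.96 * pooled_se),
          math.exp(pooled_log + 1.96 * pooled_se))
    return pooled, ci

# Hypothetical three-study example (synthetic numbers)
pooled, (lo, hi) = pool_odds_ratios(
    [1.10, 1.35, 1.20],
    [(0.90, 1.34), (1.05, 1.74), (0.95, 1.52)],
)
```

The pooled estimate lands between the individual study ORs, weighted toward the most precise studies; a random-effects model would additionally widen the interval to absorb between-study heterogeneity.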
RADHOT: A Radiation Hydrodynamics Code for Weapon Effects Calculation.
1981-03-01
The scanned report listing is largely unrecoverable OCR fragments. Legible portions identify the source as AD-A100 637, Air Force Institute of Technology, Wright-Patterson AFB, "RADHOT: A Radiation Hydrodynamics Code for Weapon Effects Calculation," and define F+ and F- as the inward- and outward-going monochromatic fluxes at a cell boundary, along with a term for the change of internal energy due to radiation.
Radiative Transfer Modeling in Proto-planetary Disks
NASA Astrophysics Data System (ADS)
Kasper, David; Jang-Condell, Hannah; Kloster, Dylan
2016-01-01
Young Stellar Objects (YSOs) are rich astronomical research environments. Planets form in circumstellar disks of gas and dust around YSOs. With the ever-increasing capabilities of the observational instruments designed to look at these proto-planetary disks, most notably GPI, SPHERE, and ALMA, more accurate interfaces are needed to connect disk modeling with observation. PaRTY (Parallel Radiative Transfer in YSOs) is a previously developed code that models the observable density and temperature structure of such a disk by self-consistently calculating the disk structure from radiative transfer physics. We present upgrades we are implementing to the PaRTY code to improve its accuracy and flexibility: a two-sided disk model, a spherical coordinate system, and wavelength-dependent opacities. These upgrades will address, respectively, the PaRTY code's problems of infinite optical thickness, calculation under/over-resolution, and wavelength-independent photon penetration depths. The upgraded code will be used to better model disk perturbations resulting from planet formation.
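The wavelength-dependent opacity upgrade changes one core quantity: the optical depth along a ray, tau_lambda = integral of kappa_lambda * rho ds, which sets how deep photons of each wavelength penetrate. A minimal sketch follows; the power-law opacity, toy density profile, and parameter values are illustrative assumptions, not values from the PaRTY code.

```python
import numpy as np

def optical_depth(kappa, rho, s):
    """Trapezoid-rule integration of kappa * rho along path coordinates s."""
    integrand = kappa * rho
    return 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(s))

s = np.linspace(0.0, 1.0, 1001)             # path length, arbitrary units
rho = np.exp(-5.0 * s)                      # toy exponential density profile
wavelengths = np.array([0.5, 10.0, 100.0])  # microns (illustrative)
kappa0, lam0, beta = 10.0, 1.0, 1.0         # kappa ~ lambda^-beta (assumption)

taus = np.array([optical_depth(kappa0 * (lam0 / lam)**beta, rho, s)
                 for lam in wavelengths])
# Longer wavelengths see lower opacity, hence smaller optical depth,
# so they probe deeper layers of the disk.
```

With a grey (wavelength-independent) opacity, all three values would be identical, which is exactly the penetration-depth limitation the upgrade removes.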
Modeling Laboratory Astrophysics Experiments using the CRASH code
NASA Astrophysics Data System (ADS)
Trantham, Matthew; Drake, R. P.; Grosskopf, Michael; Bauerle, Matthew; Kuranz, Carolyn; Keiter, Paul; Malamud, Guy; Crash Team
2013-10-01
The understanding of high-energy-density systems can be advanced by laboratory astrophysics experiments, and computer simulations can assist in the design and analysis of these experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport and electron heat conduction. This poster/talk will demonstrate some of the experiments the CRASH code has helped design or analyze, including radiative shock, Kelvin-Helmholtz, Rayleigh-Taylor, plasma sheet, and interacting jet experiments. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DEFC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.
NASA Technical Reports Server (NTRS)
Baumeister, Joseph F.
1994-01-01
A non-flowing, electrically heated test rig was developed to verify computer codes that calculate radiant energy propagation from nozzle geometries that represent aircraft propulsion nozzle systems. Since there are a variety of analysis tools used to evaluate thermal radiation propagation from partially enclosed nozzle surfaces, an experimental benchmark test case was developed for code comparison. This paper briefly describes the nozzle test rig and the developed analytical nozzle geometry used to compare the experimental and predicted thermal radiation results. A major objective of this effort was to make available the experimental results and the analytical model in a format to facilitate conversion to existing computer code formats. For code validation purposes this nozzle geometry represents one validation case for one set of analysis conditions. Since each computer code has advantages and disadvantages based on scope, requirements, and desired accuracy, the usefulness of this single nozzle baseline validation case can be limited for some code comparisons.
The radiation fields around a proton therapy facility: A comparison of Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Ottaviano, G.; Picardi, L.; Pillon, M.; Ronsivalle, C.; Sandri, S.
2014-02-01
A proton therapy test facility with an average beam current below 10 nA and an energy up to 150 MeV is planned to be sited at the Frascati ENEA Research Center in Italy. The accelerator is composed of a sequence of linear sections. The first is a commercial 7 MeV proton linac, from which the beam is injected into a SCDTL (Side Coupled Drift Tube Linac) structure, reaching an energy of 52 MeV. A conventional CCL (Coupled Cavity Linac) with side coupling cavities then completes the accelerator. The linear structure has the important advantage that the main radiation losses during the acceleration process occur for protons with energy below 20 MeV, with a consequently low production of neutrons and secondary radiation. From the radiation protection point of view, the source of radiation for this facility is therefore almost completely located at the final target. Physical and geometrical models of the device have been developed and implemented in radiation transport computer codes based on the Monte Carlo method, with the aim of assessing the radiation field around the main source to support the safety analysis. For this assessment, independent researchers used two different Monte Carlo computer codes, FLUKA (FLUktuierende KAskade) and MCNPX (Monte Carlo N-Particle eXtended). Both are general-purpose tools for calculating particle transport and interactions with matter, covering an extended range of applications including proton beam analysis; nevertheless, each uses its own nuclear cross-section libraries and specific physics models for particular particle types and energies. The models implemented in the codes are described and the results are presented. The differences between the two calculations are reported and discussed, pointing out the disadvantages and advantages of each code in this specific application.
FINAL REPORT (DE-FG02-97ER62338): Single-column modeling, GCM parameterizations, and ARM data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard C. J. Somerville
2009-02-27
Our overall goal is the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have compared SCM (single-column model) output with ARM observations at the SGP, NSA and TWP sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art three-dimensional atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.
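Comparisons of SCM output against ARM measurements of quantities like downwelling surface shortwave radiation typically reduce to simple summary statistics such as mean bias and RMSE. A minimal sketch, with synthetic arrays standing in for the model and observation time series (not ARM data):

```python
import numpy as np

def bias_rmse(model, obs):
    """Mean bias and root-mean-square error of model against observations."""
    diff = model - obs
    return diff.mean(), np.sqrt((diff**2).mean())

# Synthetic illustration: downwelling surface shortwave in W m^-2
obs = np.array([620.0, 480.0, 300.0, 150.0, 90.0])
model = np.array([600.0, 500.0, 320.0, 140.0, 100.0])
b, rmse = bias_rmse(model, obs)
```

Bias shows systematic over- or under-prediction, while RMSE also penalizes scatter; a scheme can have near-zero bias yet a large RMSE, which is why evaluations like the one described above examine a suite of radiative quantities rather than a single score.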