Sample records for simulation code cloudy

  1. Cloudy - simulating the non-equilibrium microphysics of gas and dust, and its observed spectrum

    NASA Astrophysics Data System (ADS)

    Ferland, Gary J.

    2014-01-01

    Cloudy is an open-source plasma/spectral simulation code, last described in the open-access journal Revista Mexicana (Ferland et al. 2013, 2013RMxAA..49..137F). The project goal is a complete simulation of the microphysics of gas and dust over the full range of density, temperature, and ionization that we encounter in astrophysics, together with a prediction of the observed spectrum. Cloudy is one of the more widely used theory codes in astrophysics, with roughly 200 papers citing its documentation each year. It is developed by graduate students, postdocs, and an international network of collaborators. Cloudy is freely available on the web at trac.nublado.org; the user community can post questions at http://groups.yahoo.com/neo/groups/cloudy_simulations/info, and summer schools are organized to learn more about Cloudy and its use (http://cloud9.pa.uky.edu/~gary/cloudy/CloudySummerSchool/). The code's widespread use is possible because of extensive automatic testing. The code is exercised over its full range of applicability whenever the source is changed; changes in predicted quantities are automatically detected, along with any newly introduced problems. The code is designed to be autonomous and self-aware: it generates a report at the end of a calculation that summarizes any problems encountered, along with suggestions of potentially incorrect boundary conditions. This self-monitoring is a core feature, since the code is now often used to generate large MPI grids of simulations, making it impossible for a user to verify each calculation by hand. I will describe some challenges in developing a large physics code, with its many interconnected physical processes, many at the frontier of research in atomic or molecular physics, all in an open environment.
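    The automatic detection of changed predictions described above amounts to comparing freshly computed quantities against stored reference values. The following is an illustrative sketch only, not Cloudy's actual test harness; the quantity names, values, and tolerance are invented.

```python
# Hedged sketch of an automatic regression check: compare newly predicted
# quantities against stored reference values and flag any that drift beyond
# a relative tolerance. All names and numbers below are hypothetical.

def detect_regressions(reference, current, rtol=1e-3):
    """Return the names of quantities whose predictions changed."""
    changed = []
    for name, ref in reference.items():
        cur = current.get(name)
        if cur is None or abs(cur - ref) > rtol * abs(ref):
            changed.append(name)
    return changed

reference = {"H-beta intensity": 1.000, "electron temperature [K]": 9870.0}
current   = {"H-beta intensity": 1.000, "electron temperature [K]": 9905.0}

# the temperature moved by ~0.35%, well past the 0.1% tolerance
print(detect_regressions(reference, current))
```

    A real harness would additionally record newly introduced warnings and write the summary report the abstract mentions.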

  2. Cloudy's Journey from FORTRAN to C, Why and How

    NASA Astrophysics Data System (ADS)

    Ferland, G. J.

    Cloudy is a large-scale plasma simulation code that is widely used across the astronomical community as an aid in the interpretation of spectroscopic data. The cover of the ADAS VI book featured predictions of the code. The FORTRAN 77 source code has always been freely available on the Internet, contributing to its widespread use. The coming of PCs and Linux has fundamentally changed the computing environment. Modern Fortran compilers (F90 and F95) are not freely available. A common-use code must be written in either FORTRAN 77 or C to be Open Source/GNU/Linux friendly. F77 has serious drawbacks - modern language constructs cannot be used, students do not have skills in this language, and it does not contribute to their future employability. It became clear that the code would have to be ported to C to have a viable future. I describe the approach I used to convert Cloudy from FORTRAN 77 with MILSPEC extensions to ANSI/ISO 89 C. Cloudy is now openly available as a C code, and will evolve to C++ as gcc and standard C++ mature. Cloudy looks to a bright future with a modern language.

  3. A CLOUDY/XSPEC Interface

    NASA Technical Reports Server (NTRS)

    Porter, R. L.; Ferland, G. J.; Kraemer, S. B.; Armentrout, B. K.; Arnaud, K. A.; Turner, T. J.

    2007-01-01

    We discuss new functionality of the spectral simulation code CLOUDY which allows the user to calculate grids with one or more initial parameters varied and formats the predicted spectra in the standard FITS format. These files can then be imported into the x-ray spectral analysis software XSPEC and used as theoretical models for observations. We present and verify a test case. Finally, we consider a few observations and discuss our results.
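    The grid-to-XSPEC workflow described above amounts to tabulating predicted spectra on a parameter grid and letting the fitting package interpolate between grid nodes. The sketch below shows only that interpolation step, with synthetic power-law spectra; the grid values and the simple linear scheme are illustrative assumptions, not the actual CLOUDY or XSPEC implementation.

```python
import numpy as np

# Synthetic stand-in for a model grid: one spectrum per value of a single
# varied parameter (here a hypothetical log density). A table-model fit
# evaluates the model at arbitrary parameter values by interpolating
# between the bracketing tabulated spectra.

log_density = np.array([2.0, 3.0, 4.0])      # grid parameter values
energies = np.linspace(0.5, 10.0, 6)         # keV, deliberately coarse
# synthetic spectra: power laws whose slope depends on the grid point
spectra = np.array([energies ** (-0.5 * (i + 1)) for i in range(3)])

def interpolate_spectrum(param, grid, spectra):
    """Linearly interpolate between the two bracketing grid spectra."""
    i = np.clip(np.searchsorted(grid, param) - 1, 0, len(grid) - 2)
    w = (param - grid[i]) / (grid[i + 1] - grid[i])
    return (1 - w) * spectra[i] + w * spectra[i + 1]

model = interpolate_spectrum(2.5, log_density, spectra)
print(model.shape)  # one interpolated spectrum on the energy grid
```

    Multi-parameter grids work the same way, with multilinear interpolation over all varied axes.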

  4. Plasma simulations that meet the challenges of HST & JWST Active Nuclei & Starburst observations

    NASA Astrophysics Data System (ADS)

    Ferland, Gary

    2017-08-01

    Recent HST AGN monitoring programs, such as the STORM Campaign, have resulted in the definitive set of emission-line-continuum lag measurements. The goals are to measure the structure of the inner regions of an AGN, understand the physics driving the variability, and use this to place black hole mass determinations on an even firmer footing. Photoionization models make it possible to convert these observations into physical parameters such as cloud density or location. Here I propose to improve the treatment of emission from species like C IV, C III], Mg II, or Fe II in the spectral / plasma simulation code Cloudy. Like all plasma codes, Cloudy uses a modified two-level approximation to solve for the ionization of many-electron ions. I have participated in meetings on modeling Tokamak plasmas, which share many of the properties of the BLR of AGN and have the advantage of being a controlled laboratory environment. These discussions have led to the development of tests to show the density range over which the two-level approximation is valid. It fails at the densities where the strong UV lines form. I will use the atomic data available within the fusion modeling community, along with the methods they have developed, to improve Cloudy models so that they can better inform us of the message in the UV spectrum. The improvements will be part of future releases of Cloudy, which is openly available and updated on a regular basis.

  5. Cloudy 94 and Applications to Quasar Emission Line Regions

    NASA Technical Reports Server (NTRS)

    Ferland, Gary J.

    2000-01-01

    This review discusses the most recent developments of the plasma simulation code Cloudy and its application to the emission-line regions of quasars. The long-term goal is to develop the tools needed to determine the chemical composition of the emitting gas and the luminosity of the central engine for any emission-line source. Emission lines and the underlying thermal continuum are formed in plasmas that are far from thermodynamic equilibrium. Their thermal and ionization states are the result of a balance of a vast set of microphysical processes. Once produced, radiation must propagate out of the (usually) optically thick source. No analytic solutions are possible, and recourse to numerical simulations is necessary. I am developing the large-scale plasma simulation code Cloudy as an investigative tool for this work, much as an observer might build a spectrometer. This review describes the current version of Cloudy, version 94, and the improvements made since the release of the previous version, C90. The major recent application has been the development of the "Locally Optimally-Emitting Cloud" (LOC) model of AGN emission line regions. Powerful selection effects, introduced by the atomic physics and line formation process, permit individual lines to form most efficiently only near certain selected parameters. These selection effects, together with the presence of gas with a wide range of conditions, are enough to reproduce the spectrum of a typical quasar with little dependence on details. The spectrum actually carries little information about the identity of the emitters. I view this as a major step forward, since it provides a method to handle accidental details at the source, so that we can concentrate on essential information such as the luminosity or chemical composition of the quasar.

  6. Studies in the parameterization of cloudiness in climate models and the analysis of radiation fields in general circulation models

    NASA Technical Reports Server (NTRS)

    HARSHVARDHAN

    1990-01-01

    Broad-band parameterizations for atmospheric radiative transfer were developed for clear and cloudy skies. These were in the shortwave and longwave regions of the spectrum. These models were compared with other models in an international effort called ICRCCM (Intercomparison of Radiation Codes for Climate Models). The radiation package developed was used for simulations of a General Circulation Model (GCM). A synopsis is provided of the research accomplishments in the two areas separately. Details are available in the published literature.

  7. FSFE: Fake Spectra Flux Extractor

    NASA Astrophysics Data System (ADS)

    Bird, Simeon

    2017-10-01

    The fake spectra flux extractor generates simulated quasar absorption spectra from a particle-based or adaptive-mesh-based hydrodynamic simulation. It is implemented as a Python module. It can produce both hydrogen and metal-line spectra, if the simulation includes metals; the Cloudy table for metal ionization fractions is included. Unlike earlier spectral generation codes, it produces absorption from each particle close to the sight-line individually, rather than first producing an average density in each spectral pixel, thus preserving substantially more of the small-scale velocity structure of the gas. The code supports both Gadget (ascl:0003.001) and AREPO.
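    The design point in the abstract, per-particle deposition versus pixel-averaged deposition, can be illustrated with a toy optical-depth calculation. This is not the FSFE API; the particle velocities, weights, and the Gaussian profile below are invented (a real code would use Voigt profiles and physical units).

```python
import numpy as np
from collections import defaultdict

# Toy sight-line: three absorbing particles, two of them close in velocity.
vel = np.linspace(0.0, 100.0, 200)   # km/s, fine output grid
b = 2.0                              # thermal width, km/s (assumed)
particles = [(40.0, 1.0), (45.0, 0.8), (60.0, 0.5)]  # (velocity, weight)

# Per-particle deposition: each particle keeps its own line centre.
tau_particle = np.zeros_like(vel)
for v, w in particles:
    tau_particle += w * np.exp(-0.5 * ((vel - v) / b) ** 2)

# Pixel-averaged deposition: merge particles within coarse 20 km/s pixels,
# then deposit one profile at each pixel's weighted-mean velocity.
pixels = defaultdict(list)
for v, w in particles:
    pixels[int(v // 20.0)].append((v, w))
tau_binned = np.zeros_like(vel)
for members in pixels.values():
    wsum = sum(w for _, w in members)
    vmean = sum(v * w for v, w in members) / wsum
    tau_binned += wsum * np.exp(-0.5 * ((vel - vmean) / b) ** 2)

# Total absorption is conserved, but averaging merges the 40 and 45 km/s
# components into a single artificially deep blend.
print(tau_binned.max() > tau_particle.max())
```

    The per-particle profile resolves the two blended components that the averaged version collapses, which is the velocity structure the abstract says is preserved.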

  8. Performance analysis of a parallel Monte Carlo code for simulating solar radiative transfer in cloudy atmospheres using CUDA-enabled NVIDIA GPU

    NASA Astrophysics Data System (ADS)

    Russkova, Tatiana V.

    2017-11-01

    One tool for improving the performance of Monte Carlo methods for numerical simulation of light transport in the Earth's atmosphere is parallel computing. A new algorithm oriented to parallel execution on CUDA-enabled NVIDIA graphics processors is discussed. The efficiency of parallelization is analyzed on the basis of calculated upward and downward fluxes of solar radiation in both vertically homogeneous and inhomogeneous models of the atmosphere. Results of testing the new code under various atmospheric conditions, including continuous single-layered and multilayered clouds and selective molecular absorption, are presented, and results obtained using video cards of different compute capability are analyzed. It is shown that moving the computation from conventional PCs to the architecture of graphics processors gives more than a hundredfold increase in performance and fully reveals the capabilities of the technology used.
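    What makes Monte Carlo radiative transfer map so well onto GPUs is that each photon history is independent, so millions of histories can run on separate threads. The serial sketch below shows the kernel in its simplest form, estimating only direct (unscattered) transmission through a homogeneous layer, which has the analytic answer exp(-tau); the layer optical thickness is an arbitrary illustrative value.

```python
import numpy as np

# Minimal Monte Carlo photon-transport kernel: sample each photon's
# extinction optical depth from an exponential distribution; the photon
# escapes the layer unscattered if that depth exceeds the layer thickness.
rng = np.random.default_rng(seed=1)
tau_cloud = 1.5          # optical thickness of the layer (assumed)
n_photons = 200_000

tau_sampled = -np.log(1.0 - rng.random(n_photons))
transmission = np.mean(tau_sampled > tau_cloud)

# The estimate converges to Beer's law, exp(-tau_cloud) ~= 0.223.
print(abs(transmission - np.exp(-tau_cloud)) < 0.01)
```

    A CUDA implementation assigns each photon (or batch of photons) to a thread and reduces the escape counts across threads, which is the structure the abstract's hundredfold speedup exploits.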

  9. The 2017 Release Cloudy

    NASA Astrophysics Data System (ADS)

    Ferland, G. J.; Chatzikos, M.; Guzmán, F.; Lykins, M. L.; van Hoof, P. A. M.; Williams, R. J. R.; Abel, N. P.; Badnell, N. R.; Keenan, F. P.; Porter, R. L.; Stancil, P. C.

    2017-10-01

    We describe the 2017 release of the spectral synthesis code Cloudy, summarizing the many improvements to the scope and accuracy of the physics which have been made since the previous release. Exporting the atomic data into external data files has enabled many new large datasets to be incorporated into the code. The use of the complete datasets is not realistic for most calculations, so we describe the limited subset of data used by default, which predicts significantly more lines than the previous release of Cloudy. This version is nevertheless faster than the previous release, as a result of code optimizations. We give examples of the accuracy limits using small models, and the performance requirements of large complete models. We summarize several advances in the H- and He-like iso-electronic sequences and use our complete collisional-radiative models to establish the densities where the coronal and local thermodynamic equilibrium approximations work.

  10. The Cloud Feedback Model Intercomparison Project Observational Simulator Package: Version 2

    NASA Astrophysics Data System (ADS)

    Swales, Dustin J.; Pincus, Robert; Bodas-Salcedo, Alejandro

    2018-01-01

    The Cloud Feedback Model Intercomparison Project Observational Simulator Package (COSP) gathers together a collection of observation proxies or satellite simulators that translate model-simulated cloud properties to synthetic observations as would be obtained by a range of satellite observing systems. This paper introduces COSP2, an evolution focusing on more explicit and consistent separation between host model, coupling infrastructure, and individual observing proxies. Revisions also enhance flexibility by allowing for model-specific representation of sub-grid-scale cloudiness, provide greater clarity by clearly separating tasks, support greater use of shared code and data including shared inputs across simulators, and follow more uniform software standards to simplify implementation across a wide range of platforms. The complete package including a testing suite is freely available.

  11. Ultrafast High Accuracy PCRTM_SOLAR Model for Cloudy Atmosphere

    NASA Technical Reports Server (NTRS)

    Yang, Qiguang; Liu, Xu; Wu, Wan; Yang, Ping; Wang, Chenxi

    2015-01-01

    An ultrafast, high-accuracy PCRTM_SOLAR model is developed based on principal component analysis (PCA) compression and the principal-component-based radiative transfer model (PCRTM). A fast algorithm for simulating the multiple-scattering properties of clouds and/or aerosols is integrated into the fast infrared PCRTM. We completed radiance simulation and training for instruments such as IASI, AIRS, CrIS, NASTI, and SHIS, under diverse conditions. The new model is five orders of magnitude faster than 52-stream DISORT, with very high accuracy for cloudy-sky radiative transfer simulation. It is suitable for hyperspectral remote sensing data assimilation and cloudy-sky retrievals.
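    The PCA compression underlying PCRTM-type models can be sketched briefly: high-resolution spectra are represented by a handful of principal-component scores, so the expensive radiative transfer only has to predict those scores. The training spectra below are synthetic, generated from three latent parameters so that three components suffice; a real model trains on line-by-line calculations.

```python
import numpy as np

rng = np.random.default_rng(0)
wavenumber = np.linspace(600.0, 2700.0, 500)

# Synthetic training set: smooth spectra controlled by 3 latent parameters.
latent = rng.standard_normal((100, 3))
basis_true = np.vstack([np.sin(wavenumber / 300.0),
                        np.cos(wavenumber / 150.0),
                        wavenumber / 2700.0])
training = latent @ basis_true

mean = training.mean(axis=0)
# Principal components via SVD of the centered training matrix.
_, _, vt = np.linalg.svd(training - mean, full_matrices=False)
pcs = vt[:3]                         # keep the 3 leading components

spectrum = training[0]
scores = pcs @ (spectrum - mean)     # compress: 500 numbers -> 3
reconstructed = mean + scores @ pcs  # decompress

# By construction the data lie in a 3-dimensional subspace, so the
# reconstruction is exact to numerical precision.
print(np.allclose(spectrum, reconstructed, atol=1e-8))
```

    Real spectra need more components, but the compression ratio, and hence the speedup, remains large because atmospheric spectra are highly correlated across channels.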

  12. Simulations of cloudy hyperspectral infrared radiances using the HT-FRTC, a fast PC-based multipurpose radiative transfer code

    NASA Astrophysics Data System (ADS)

    Havemann, S.; Aumann, H. H.; Desouza-Machado, S. G.

    2017-12-01

    The HT-FRTC uses principal components which cover the spectrum at very high spectral resolution, allowing very fast line-by-line-like, hyperspectral, and broadband simulations for satellite-based, airborne, and ground-based sensors. Using data from IASI and from the Airborne Research Interferometer Evaluation System (ARIES) on board the FAAM BAE 146 aircraft, variational retrievals in principal component space with HT-FRTC as the forward model have demonstrated that valuable information on temperature and humidity profiles and on cirrus cloud properties can be obtained simultaneously. The NASA/JPL/UMBC cloudy RTM inter-comparison project has been working on a global dataset consisting of 7377 AIRS spectra. Initial simulations with HT-FRTC for this dataset have been promising. A next step taken here is to investigate how sensitive the results are to different assumptions in the cloud modelling. One aspect is to study how assumptions about the microphysical and related optical properties of liquid/ice clouds affect the statistics of the agreement between model and observations. The other aspect concerns the cloud overlap scheme: different schemes have been tested (maximum, random, maximum-random). As the computational cost increases linearly with the number of cloud columns, it will be investigated whether there is an optimal number of columns beyond which there is little additional benefit to be gained. During daytime the high-wavenumber channels of AIRS are affected by solar radiation. With full scattering calculations using a monochromatic version of the Edwards-Slingo radiation code, the HT-FRTC can model solar radiation reasonably well, but full scattering calculations are relatively expensive. Pure Chou scaling, on the other hand, cannot properly describe scattering of solar radiation by clouds and requires additional refinements.

  13. Systematic Comparison of Photoionized Plasma Codes with Application to Spectroscopic Studies of AGN in X-Rays

    NASA Technical Reports Server (NTRS)

    Mehdipour, M.; Kaastra, J. S.; Kallman, T.

    2016-01-01

    Atomic data and plasma models play a crucial role in the diagnosis and interpretation of astrophysical spectra, thus influencing our understanding of the Universe. In this investigation we present a systematic comparison of the leading photoionization codes to determine how much their intrinsic differences impact X-ray spectroscopic studies of hot plasmas in photoionization equilibrium. We carry out our computations using the Cloudy, SPEX, and XSTAR photoionization codes, and compare their derived thermal and ionization states for various ionizing spectral energy distributions. We examine the resulting absorption-line spectra from these codes for the case of ionized outflows in active galactic nuclei. By comparing the ionic abundances as a function of the ionization parameter ξ, we find that on average there is about 30% deviation between the codes in where ionic abundances peak. For H-like to B-like sequence ions alone, this deviation in ξ is smaller, at about 10% on average. The comparison of the absorption-line spectra in the X-ray band shows that there is on average about 30% deviation between the codes in the optical depth of the lines produced at log ξ ≈ 1 to 2, reducing to about 20% deviation at log ξ ≈ 3. We also simulate spectra of the ionized outflows with the current and upcoming high-resolution X-ray spectrometers on board XMM-Newton, Chandra, Hitomi, and Athena. From these simulations we obtain the deviation in the best-fit model parameters arising from the use of different photoionization codes, which is about 10% to 40%. We compare the modeling uncertainties with the observational uncertainties from the simulations. The results highlight the importance of continuous development and enhancement of photoionization codes for the upcoming era of X-ray astronomy with Athena.
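    The comparison metric used above, the spread between codes in the ionization parameter at which an ion's abundance peaks, is simple to compute once each code's abundance curve is tabulated. The sketch below uses synthetic Gaussian curves in log ξ as stand-ins for real Cloudy/SPEX/XSTAR output; the peak positions are invented for illustration.

```python
import numpy as np

log_xi = np.linspace(-2.0, 4.0, 601)   # ionization-parameter grid, 0.01 dex

def abundance(center, width=0.6):
    """Synthetic ionic-abundance curve peaking at the given log xi."""
    return np.exp(-0.5 * ((log_xi - center) / width) ** 2)

# hypothetical peak positions for one ion as predicted by three codes
curves = {"code A": abundance(1.50),
          "code B": abundance(1.65),
          "code C": abundance(1.40)}

peaks = {name: log_xi[np.argmax(curve)] for name, curve in curves.items()}
spread = max(peaks.values()) - min(peaks.values())
print(round(spread, 2))  # spread in peak log xi between the codes, in dex
```

    Repeating this over all ions and ionizing continua yields the average deviations the abstract quotes.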

  14. A Fast Visible-Infrared Imaging Radiometer Suite Simulator for Cloudy Atmospheres

    NASA Technical Reports Server (NTRS)

    Liu, Chao; Yang, Ping; Nasiri, Shaima L.; Platnick, Steven; Meyer, Kerry G.; Wang, Chen Xi; Ding, Shouguo

    2015-01-01

    A fast instrument simulator is developed to simulate the observations made in cloudy atmospheres by the Visible Infrared Imaging Radiometer Suite (VIIRS). The correlated k-distribution (CKD) technique is used to compute the transmissivity of absorbing atmospheric gases. The bulk scattering properties of ice clouds used in this study are based on the ice model used for the MODIS Collection 6 ice cloud products. Two fast radiative transfer models based on pre-computed ice cloud look-up tables are used for the VIIRS solar and infrared channels. The accuracy and efficiency of the fast simulator are quantified by comparison with a combination of the rigorous line-by-line (LBLRTM) and discrete ordinate radiative transfer (DISORT) models. Relative errors are less than 2% for simulated TOA reflectances for the solar channels, and the brightness temperature differences for the infrared channels are less than 0.2 K. The simulator is over three orders of magnitude faster than the benchmark LBLRTM+DISORT model. Furthermore, the cloudy-atmosphere reflectances and brightness temperatures from the fast VIIRS simulator compare favorably with those from VIIRS observations.

  15. A Framework for Cloudy Model Optimization and Database Storage

    NASA Astrophysics Data System (ADS)

    Calvén, Emilia; Helton, Andrew; Sankrit, Ravi

    2018-01-01

    We present a framework for producing Cloudy photoionization models of the nebular emission from novae ejecta and storing a subset of the results in an SQL database for later use. The database can be searched for the models best fitting observed spectral line ratios. Additionally, the framework includes an optimization feature that can be used in tandem with the database to search for and improve on models by creating new Cloudy models while varying the parameters. The database search and optimization can be used to explore the structures of nebulae by deriving their properties from the best-fit models. The goal is to provide the community with a large database of Cloudy photoionization models, generated from parameters reflecting conditions within novae ejecta, that can be easily fitted to observed spectral lines, either by directly accessing the database using the framework code or by using a website specifically made for this purpose.

  16. A STUDY OF THE X-RAYED OUTFLOW OF APM 08279+5255 THROUGH PHOTOIONIZATION CODES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saez, Cristian; Chartas, George

    2011-08-20

    We present new results from our study of the X-rayed outflow of the z = 3.91 gravitationally lensed broad absorption line quasar APM 08279+5255. These results are based on spectral fits to all the long-exposure observations of APM 08279+5255 using a new quasar-outflow model. This model is based on CLOUDY simulations of a near-relativistic quasar outflow; CLOUDY is a photoionization code designed to simulate conditions in interstellar matter under a broad range of conditions. We have used version 08.00 of the code, last described by Ferland et al. (1998); the atomic database used by CLOUDY is described in Ferguson et al. (2001) and at http://www.pa.uky.edu/~verner/atom.html. The main conclusions from our multi-epoch spectral re-analysis of Chandra, XMM-Newton, and Suzaku observations of APM 08279+5255 are the following. (1) In every observation, we confirm the presence of two strong features, one at rest-frame energies between 1-4 keV and the other between 7-18 keV. (2) We confirm that the low-energy absorption (1-4 keV rest frame) arises from a low-ionization absorber with log(N_H/cm^-2) ≈ 23, and the high-energy absorption (7-18 keV rest frame) arises from highly ionized (3 ≲ log ξ ≲ 4, where ξ is the ionization parameter) iron in a near-relativistic outflowing wind. Assuming this interpretation, we find that velocities in the outflow can reach up to ≈0.7c. (3) We confirm a correlation between the maximum outflow velocity and the photon index, and find possible trends between the maximum outflow velocity and the X-ray luminosity, and between the total column density and the photon index. We performed calculations of the force multipliers of material illuminated by absorbed power laws and a Mathews-Ferland spectral energy distribution (SED). We found that variations of the X-ray and UV parts of the SEDs and the presence of a moderate absorbing shield will produce important changes in the strength of the radiative driving force. These results support the observed trend found between the outflow velocity and X-ray photon index in APM 08279+5255. If this result is confirmed, it will imply that radiation pressure is an important mechanism in producing quasar outflows.

  17. Insights into low-latitude cloud feedbacks from high-resolution models.

    PubMed

    Bretherton, Christopher S

    2015-11-13

    Cloud feedbacks are a leading source of uncertainty in the climate sensitivity simulated by global climate models (GCMs). Low-latitude boundary-layer and cumulus cloud regimes are particularly problematic, because they are sustained by tight interactions between clouds and unresolved turbulent circulations. Turbulence-resolving models better simulate such cloud regimes and support the GCM consensus that they contribute to positive global cloud feedbacks. Large-eddy simulations using sub-100 m grid spacings over small computational domains elucidate marine boundary-layer cloud response to greenhouse warming. Four observationally supported mechanisms contribute: 'thermodynamic' cloudiness reduction from warming of the atmosphere-ocean column, 'radiative' cloudiness reduction from CO2- and H2O-induced increase in atmospheric emissivity aloft, 'stability-induced' cloud increase from increased lower tropospheric stratification, and 'dynamical' cloudiness increase from reduced subsidence. The cloudiness reduction mechanisms typically dominate, giving positive shortwave cloud feedback. Cloud-resolving models with horizontal grid spacings of a few kilometres illuminate how cumulonimbus cloud systems affect climate feedbacks. Limited-area simulations and superparameterized GCMs show upward shift and slight reduction of cloud cover in a warmer climate, implying positive cloud feedbacks. A global cloud-resolving model suggests tropical cirrus increases in a warmer climate, producing positive longwave cloud feedback, but results are sensitive to subgrid turbulence and ice microphysics schemes.

  18. Meeting the Challenge of Webb - Spectroscopic Simulations of Star-Forming Regions and Active Galactic Nuclei

    NASA Astrophysics Data System (ADS)

    Ferland, Gary

    Understanding the chemical evolution of the universe, together with closely related questions concerning the formation of cosmic structure, is a major theme running across current astrophysics. The James Webb Space Telescope (JWST) will offer a unique perspective on this activity, with its high sensitivity and superb resolution. Basic questions include the role of feedback in the formation and evolution of galaxies, interactions between the AGN and the surrounding intracluster medium, and their effects on the metagalactic background. The central theme in this proposal is the development of the theoretical tools needed to realize the diagnostic potential of the 0.6 to 5 micron NIRSpec and 5 to 28 micron MIRI spectroscopic windows offered by JWST, with correspondingly shorter wavelengths at higher redshift. The particular regimes to be addressed include ionic and molecular emission in an evolving environment with a mix of star formation and AGN activity, the physics of dust emission in gas-rich surroundings, in environments that are optically thick to portions of the radiation field. The gas and dust are far from equilibrium, so their spectra depend on detailed atomic and molecular physics. This is a complication, but is also why quantitative spectroscopy reveals so much about the emitting environment. This project supports the development and application of the spectral synthesis code Cloudy. Cloudy is designed to solve the coupled plasma, chemistry, radiation transport, and dynamics problems simultaneously and self consistently, building from a foundation of ab initio atomic and molecular cross sections and rate coefficients. By treating the microphysics without compromise, the macrophysics, including the observed spectrum, will be correct. 
This makes the code suitable for application to a very wide range of astronomical problems, ranging from the intracluster medium in cool-core clusters, to the innermost regions of an AGN, including the accretion disk and molecular torus. It treats the full range of physical state, from fully ionized to molecular, that JWST will study. All this is done self-consistently with a minimum of free parameters. Cloudy is openly available with its documentation being cited by roughly 200 papers per year. This open access and widespread applicability ensures that the results produced by this project will see broad application. These improvements will facilitate community use of Cloudy in such diverse phenomena as starburst galaxies, gamma ray bursts, and the intergalactic medium, over the spectral bands JWST will cover.

  19. Assessing 1D Atmospheric Solar Radiative Transfer Models: Interpretation and Handling of Unresolved Clouds.

    NASA Astrophysics Data System (ADS)

    Barker, H. W.; Stephens, G. L.; Partain, P. T.; Bergman, J. W.; Bonnel, B.; Campana, K.; Clothiaux, E. E.; Clough, S.; Cusack, S.; Delamere, J.; Edwards, J.; Evans, K. F.; Fouquart, Y.; Freidenreich, S.; Galin, V.; Hou, Y.; Kato, S.; Li, J.;  Mlawer, E.;  Morcrette, J.-J.;  O'Hirok, W.;  Räisänen, P.;  Ramaswamy, V.;  Ritter, B.;  Rozanov, E.;  Schlesinger, M.;  Shibata, K.;  Sporyshev, P.;  Sun, Z.;  Wendisch, M.;  Wood, N.;  Yang, F.

    2003-08-01

    The primary purpose of this study is to assess the performance of 1D solar radiative transfer codes that are currently used both for research and in weather and climate models. Emphasis is on the interpretation and handling of unresolved clouds. Answers are sought to the following questions: (i) How well do 1D solar codes interpret and handle columns of information pertaining to partly cloudy atmospheres? (ii) Regardless of the adequacy of their assumptions about unresolved clouds, do 1D solar codes perform as intended? One clear-sky and two plane-parallel, homogeneous (PPH) overcast cloud cases serve to elucidate 1D model differences due to varying treatments of gaseous transmittances, cloud optical properties, and basic radiative transfer. The remaining four cases involve 3D distributions of cloud water and water vapor as simulated by cloud-resolving models. Results for 25 1D codes, which included two line-by-line (LBL) models (clear and overcast only) and four 3D Monte Carlo (MC) photon transport algorithms, were submitted by 22 groups. Benchmark, domain-averaged irradiance profiles were computed by the MC codes. For the clear and overcast cases, all MC estimates of top-of-atmosphere albedo, atmospheric absorptance, and surface absorptance agree with one of the LBL codes to within ±2%. Most 1D codes underestimate atmospheric absorptance by typically 15-25 W m-2 at overhead sun for the standard tropical atmosphere, regardless of clouds. Depending on assumptions about unresolved clouds, the 1D codes were partitioned into four genres: (i) horizontal variability, (ii) exact overlap of PPH clouds, (iii) maximum/random overlap of PPH clouds, and (iv) random overlap of PPH clouds. A single MC code was used to establish conditional benchmarks applicable to each genre, and all MC codes were used to establish the full 3D benchmarks. 
    There is a tendency for 1D codes to cluster near their respective conditional benchmarks, though intragenre variances typically exceed those for the clear and overcast cases. The majority of 1D codes fall into the extreme category of maximum/random overlap of PPH clouds and thus generally disagree with full 3D benchmark values. Given the fairly limited scope of these tests and the inability of any one code to perform well for all cases, the question arises whether a paradigm shift is due in the modeling of 1D solar fluxes for cloudy atmospheres.
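    The maximum/random overlap rule (genre iii above) has a standard closed form, the Geleyn-Hollingsworth product: adjacent cloudy layers overlap maximally, while layers separated by clear air overlap randomly. The sketch below implements that rule for a column of layer cloud fractions; the profiles used are illustrative only.

```python
def total_cloud_fraction_maxran(fracs):
    """Total cloud cover under maximum/random overlap.

    fracs: layer cloud fractions ordered top to bottom.
    Uses the product form 1 - C = prod_k (1 - max(c_k, c_{k-1})) / (1 - c_{k-1}),
    with c_0 = 0.
    """
    clear = 1.0
    prev = 0.0
    for c in fracs:
        denom = 1.0 - prev
        if denom <= 0.0:          # a fully overcast layer above
            return 1.0
        clear *= (1.0 - max(c, prev)) / denom
        prev = c
    return 1.0 - clear

# two adjacent layers: maximum overlap, cover equals the larger fraction
print(round(total_cloud_fraction_maxran([0.3, 0.5]), 6))       # 0.5
# same layers separated by clear air: random overlap, 1 - 0.7 * 0.5
print(round(total_cloud_fraction_maxran([0.3, 0.0, 0.5]), 6))  # 0.65
```

    The two limiting cases show why this genre tends toward the smallest total cloud cover consistent with the layer fractions, one source of its disagreement with the full 3D benchmarks.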

  20. Fast Monte Carlo-assisted simulation of cloudy Earth backgrounds

    NASA Astrophysics Data System (ADS)

    Adler-Golden, Steven; Richtsmeier, Steven C.; Berk, Alexander; Duff, James W.

    2012-11-01

    A calculation method has been developed for rapidly synthesizing radiometrically accurate ultraviolet through long-wavelength-infrared spectral imagery of the Earth for arbitrary locations and cloud fields. The method combines cloud-free surface reflectance imagery with cloud radiance images calculated from a first-principles 3-D radiation transport model. The MCScene Monte Carlo code [1-4] is used to build a cloud image library; a data fusion method is incorporated to speed convergence. The surface and cloud images are combined with an upper-atmospheric description with the aid of solar and thermal radiation transport equations that account for atmospheric inhomogeneity. The method enables a wide variety of sensor and sun locations, cloud fields, and surfaces to be combined on the fly, and provides hyperspectral wavelength resolution with minimal computational effort. The simulations agree very well with much more time-consuming direct Monte Carlo calculations of the same scene.

  1. SLUG - stochastically lighting up galaxies - III. A suite of tools for simulated photometry, spectroscopy, and Bayesian inference with stochastic stellar populations

    NASA Astrophysics Data System (ADS)

    Krumholz, Mark R.; Fumagalli, Michele; da Silva, Robert L.; Rendahl, Theodore; Parra, Jonathan

    2015-09-01

    Stellar population synthesis techniques for predicting the observable light emitted by a stellar population have extensive applications in numerous areas of astronomy. However, accurate predictions for small populations of young stars, such as those found in individual star clusters, star-forming dwarf galaxies, and small segments of spiral galaxies, require that the population be treated stochastically. Conversely, accurate deductions of the properties of such objects also require consideration of stochasticity. Here we describe a comprehensive suite of modular, open-source software tools for tackling these related problems. These include the following: a greatly-enhanced version of the SLUG code introduced by da Silva et al., which computes spectra and photometry for stochastically or deterministically sampled stellar populations with nearly arbitrary star formation histories, clustering properties, and initial mass functions; CLOUDY_SLUG, a tool that automatically couples SLUG-computed spectra with the CLOUDY radiative transfer code in order to predict stochastic nebular emission; BAYESPHOT, a general-purpose tool for performing Bayesian inference on the physical properties of stellar systems based on unresolved photometry; and CLUSTER_SLUG and SFR_SLUG, a pair of tools that use BAYESPHOT on a library of SLUG models to compute the mass, age, and extinction of mono-age star clusters, and the star formation rate of galaxies, respectively. The latter two tools make use of an extensive library of pre-computed stellar population models, which are included in the software. The complete package is available at http://www.slugsps.com.

  2. A synthetic data set of high-spectral-resolution infrared spectra for the Arctic atmosphere

    NASA Astrophysics Data System (ADS)

    Cox, Christopher J.; Rowe, Penny M.; Neshyba, Steven P.; Walden, Von P.

    2016-05-01

    Cloud microphysical and macrophysical properties are critical for understanding the role of clouds in climate. These properties are commonly retrieved from ground-based and satellite-based infrared remote sensing instruments. However, retrieval uncertainties are difficult to quantify without a standard for comparison. This is particularly true over the polar regions, where surface-based data for a cloud climatology are sparse, yet clouds represent a major source of uncertainty in weather and climate models. We describe a synthetic high-spectral-resolution infrared data set that is designed to facilitate validation and development of cloud retrieval algorithms for surface- and satellite-based remote sensing instruments. Since the data set is calculated using pre-defined cloudy atmospheres, the properties of the cloud and atmospheric state are known a priori. The atmospheric state used for the simulations is drawn from radiosonde measurements made at the North Slope of Alaska (NSA) Atmospheric Radiation Measurement (ARM) site at Barrow, Alaska (71.325° N, 156.615° W), a location that is generally representative of the western Arctic. The cloud properties for each simulation are selected from statistical distributions derived from past field measurements. Upwelling (at 60 km) and downwelling (at the surface) infrared spectra are simulated for 260 cloudy cases from 50 to 3000 cm-1 (3.3 to 200 µm) at monochromatic (line-by-line) resolution at a spacing of ~0.01 cm-1 using the Line-by-line Radiative Transfer Model (LBLRTM) and the discrete-ordinate-method radiative transfer code (DISORT). These spectra are freely available for interested researchers from the NSF Arctic Data Center data repository (doi:10.5065/D61J97TT).
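The band limits quoted above follow from the standard wavenumber-to-wavelength conversion, wavelength[µm] = 10^4 / wavenumber[cm^-1]; a minimal check:

```python
def wavenumber_to_wavelength_um(nu_cm1):
    """Convert a wavenumber in cm^-1 to a wavelength in micrometres."""
    return 1.0e4 / nu_cm1

# 50 cm^-1 corresponds to 200 um and 3000 cm^-1 to about 3.3 um,
# matching the spectral range of the simulated data set.
print(wavenumber_to_wavelength_um(50.0))    # 200.0
print(wavenumber_to_wavelength_um(3000.0))  # ~3.33
```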

  3. One year of downwelling spectral radiance measurements from 100 to 1400 cm-1 at Dome Concordia: Results in clear conditions

    NASA Astrophysics Data System (ADS)

    Rizzi, R.; Arosio, C.; Maestri, T.; Palchetti, L.; Bianchini, G.; Del Guasta, M.

    2016-09-01

    The present work examines downwelling radiance spectra measured at the ground during 2013 by a Far Infrared Fourier Transform Spectrometer at Dome C, Antarctica. A tropospheric backscatter and depolarization lidar is also deployed at the same site, and a radiosonde system is routinely operative. The measurements allow characterization of the infrared properties of water vapor and clouds in Antarctica under all sky conditions. In this paper we specifically discuss cloud detection and the analysis in clear sky conditions, which is required for the discussion of the results obtained in cloudy conditions. First, the paper discusses the procedures adopted for the quality control of spectra acquired automatically. Then it describes the classification procedure used to discriminate spectra measured in clear sky from those in cloudy conditions. Finally, a selection is performed and 66 clear cases, spanning the whole year, are compared to simulations. The computation of layer molecular optical depth is performed with line-by-line techniques and a convolution to simulate the Radiation Explorer in the Far InfraRed-Prototype for Applications and Development (REFIR-PAD) measurements; the downwelling radiance for selected clear cases is computed with a state-of-the-art adding-doubling code. The mean difference over all selected cases between simulated and measured radiance is within experimental error for all the selected microwindows except for the negative residuals found for all microwindows in the range 200 to 400 cm-1, with the largest values around 295.1 cm-1. The paper discusses possible reasons for the discrepancy and identifies the incorrect magnitude of the water vapor total absorption coefficient as the cause of such a large negative radiance bias below 400 cm-1.

  4. 3D Cloud Radiative Effects on Aerosol Optical Thickness Retrievals in Cumulus Cloud Fields in the Biomass Burning Region in Brazil

    NASA Technical Reports Server (NTRS)

    Wen, Guo-Yong; Marshak, Alexander; Cahalan, Robert F.

    2004-01-01

    Aerosol amount in clear regions of a cloudy atmosphere is a critical parameter in studying the interaction between aerosols and clouds. Since the global cloud cover is about 50%, cloudy scenes are often encountered in satellite images. Aerosols are more or less transparent, while clouds are extremely reflective in the visible spectrum of solar radiation. The radiative transfer in mixed clear-cloudy conditions is highly three-dimensional (3D). This paper focuses on estimating the 3D effects on aerosol optical thickness retrievals using Monte Carlo simulations. An ASTER image of cumulus cloud fields in the biomass burning region in Brazil is simulated in this study. The MODIS products (i.e., cloud optical thickness, particle effective radius, cloud top pressure, surface reflectance, etc.) are used to construct the cloud property and surface reflectance fields. To estimate the cloud 3D effects, we assume a plane-parallel stratification of aerosol properties in the 60 km x 60 km ASTER image. The simulated solar radiation at the top of the atmosphere is compared with plane-parallel calculations. Furthermore, the 3D cloud radiative effects on aerosol optical thickness retrieval are estimated.

  5. Observation of the spectrally invariant properties of clouds in cloudy-to-clear transition zones during the MAGIC field campaign

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Weidong; Marshak, Alexander; McBride, Patrick J.

    2016-12-01

    We use the spectrally invariant method to study the variability of cloud optical thickness τ and droplet effective radius reff in transition zones (between the cloudy and clear sky columns) observed from the Solar Spectral Flux Radiometer (SSFR) and Shortwave Array Spectroradiometer-Zenith (SASZe) during the Marine ARM GPCI Investigation of Clouds (MAGIC) field campaign. The measurements from the SSFR and the SASZe are different; however, inter-instrument differences of self-normalized measurements (divided by their own spectra at a fixed time) are small. The spectrally invariant method approximates the spectra in the cloud transition zone as a linear combination of definitely clear and cloudy spectra, where the coefficients, slope and intercept, characterize the spectrally invariant properties of the transition zone. Simulation results from the SBDART (Santa Barbara DISORT Atmospheric Radiative Transfer) model demonstrate that (1) the slope of the visible band is positively correlated with the cloud optical thickness τ while the intercept of the near-infrared band has a high negative correlation with the cloud drop effective radius reff even without the exact knowledge of τ; (2) the above relations hold for all Solar Zenith Angles (SZA) and for cloud-contaminated skies. In observations using redundant measurements from SSFR and SASZe, we find that during cloudy-to-clear transitions, (a) the slopes of the visible band decrease, and (b) the intercepts of the near-infrared band remain almost constant near cloud edges. The findings in simulations and observations suggest that, while the optical thickness decreases during the cloudy-to-clear transition, the cloud drop effective radius does not change when cloud edges are approached. These results support the hypothesis that inhomogeneous mixing dominates near cloud edges in the studied cases.
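The linear-combination approximation described above can be sketched as an ordinary least-squares fit of a transition-zone spectrum against reference cloudy and clear spectra. The spectra and mixing coefficients below are synthetic illustrations, not MAGIC data, and the fit is only a schematic stand-in for the published method:

```python
import numpy as np

def fit_transition_spectrum(i_trans, i_cloudy, i_clear):
    """Fit a transition-zone spectrum as a linear combination of reference
    cloudy and clear spectra; the two coefficients are the spectrally
    invariant quantities characterizing the transition zone."""
    a = np.column_stack([i_cloudy, i_clear])
    coeffs, *_ = np.linalg.lstsq(a, i_trans, rcond=None)
    return coeffs

# Synthetic check: a 60/40 cloudy/clear mixture is recovered exactly.
lam = np.linspace(0.4, 1.7, 200)            # wavelength grid (um)
i_cloudy = 1.0 + 0.2 * np.sin(5.0 * lam)    # mock cloudy spectrum
i_clear = 0.3 + 0.1 * lam                   # mock clear spectrum
i_trans = 0.6 * i_cloudy + 0.4 * i_clear
c_cloudy, c_clear = fit_transition_spectrum(i_trans, i_cloudy, i_clear)
print(round(c_cloudy, 3), round(c_clear, 3))  # 0.6 0.4
```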

  6. Observation of the spectrally invariant properties of clouds in cloudy-to-clear transition zones during the MAGIC field campaign

    DOE PAGES

    Yang, Weidong; Marshak, Alexander; McBride, Patrick J.; ...

    2016-08-11

    We use the spectrally invariant method to study the variability of cloud optical thickness τ and droplet effective radius reff in transition zones (between the cloudy and clear sky columns) observed from the Solar Spectral Flux Radiometer (SSFR) and Shortwave Array Spectroradiometer-Zenith (SASZe) during the Marine ARM GPCI Investigation of Clouds (MAGIC) field campaign. The measurements from the SSFR and the SASZe are different; however, inter-instrument differences of self-normalized measurements (divided by their own spectra at a fixed time) are small. The spectrally invariant method approximates the spectra in the cloud transition zone as a linear combination of definitely clear and cloudy spectra, where the coefficients, slope and intercept, characterize the spectrally invariant properties of the transition zone. Simulation results from the SBDART (Santa Barbara DISORT Atmospheric Radiative Transfer) model demonstrate that (1) the slope of the visible band is positively correlated with the cloud optical thickness τ while the intercept of the near-infrared band has a high negative correlation with the cloud drop effective radius reff even without the exact knowledge of τ; (2) the above relations hold for all Solar Zenith Angles (SZA) and for cloud-contaminated skies. In observations using redundant measurements from SSFR and SASZe, we find that during cloudy-to-clear transitions, (a) the slopes of the visible band decrease, and (b) the intercepts of the near-infrared band remain almost constant near cloud edges. The findings in simulations and observations suggest that, while the optical thickness decreases during the cloudy-to-clear transition, the cloud drop effective radius does not change when cloud edges are approached. Furthermore, these results support the hypothesis that inhomogeneous mixing dominates near cloud edges in the studied cases.

  7. Observation of the Spectrally Invariant Properties of Clouds in Cloudy-to-Clear Transition Zones During the MAGIC Field Campaign

    NASA Technical Reports Server (NTRS)

    Yang, Weidong; Marshak, Alexander; McBride, Patrick; Chiu, J. Christine; Knyazikhin, Yuri; Schmidt, K. Sebastian; Flynn, Connor; Lewis, Ernie R.; Eloranta, Edwin W.

    2016-01-01

    We use the spectrally invariant method to study the variability of cloud optical thickness tau and droplet effective radius r(sub eff) in transition zones (between the cloudy and clear sky columns) observed from Solar Spectral Flux Radiometer (SSFR) and Shortwave Array Spectroradiometer-Zenith (SASZe) during the Marine ARM GPCI Investigation of Clouds (MAGIC) field campaign. The measurements from the SSFR and the SASZe are different; however, inter-instrument differences of self-normalized measurements (divided by their own spectra at a fixed time) are small. The spectrally invariant method approximates the spectra in the cloud transition zone as a linear combination of definitely clear and cloudy spectra, where the coefficients, slope and intercept, characterize the spectrally invariant properties of the transition zone. Simulation results from the SBDART (Santa Barbara DISORT Atmospheric Radiative Transfer) model demonstrate that (1) the slope of the visible band is positively correlated with the cloud optical thickness tau while the intercept of the near-infrared band has a high negative correlation with the cloud drop effective radius r(sub eff) even without the exact knowledge of tau; (2) the above relations hold for all Solar Zenith Angles (SZA) and for cloud-contaminated skies. In observations using redundant measurements from SSFR and SASZe, we find that during cloudy-to-clear transitions, (a) the slopes of the visible band decrease, and (b) the intercepts of the near-infrared band remain almost constant near cloud edges. The findings in simulations and observations suggest that, while the optical thickness decreases during the cloudy-to-clear transition, the cloud drop effective radius does not change when cloud edges are approached. These results support the hypothesis that inhomogeneous mixing dominates near cloud edges in the studied cases.

  8. The effect of clouds on the earth's radiation balance

    NASA Technical Reports Server (NTRS)

    Herman, G. F.; Wu, M. L. C.; Johnson, W. T.

    1979-01-01

    The effect of global cloudiness on the radiation balance at the top of the atmosphere is studied in general circulation model experiments. Wintertime simulations were conducted with clouds that had realistic optical properties, and were compared with simulations in which the clouds were transparent to either solar or thermal radiation. Clouds increase the net balance by limiting longwave loss to space, but decrease it by reflecting solar radiation. It is found that the net result of cloudiness is to maintain a net radiation that is less than would be realized under clear conditions: clouds cause the net radiation at the top of the atmosphere to increase due to reduced longwave loss, but to decrease even more due to cloud reflectance of solar radiation.

  9. The Exoplanet Cloud Atlas

    NASA Astrophysics Data System (ADS)

    Gao, Peter; Marley, Mark S.; Morley, Caroline; Fortney, Jonathan J.

    2017-10-01

    Clouds have been readily inferred from observations of exoplanet atmospheres, and there exists great variability in cloudiness between planets, such that no clear trend in exoplanet cloudiness has so far been discerned. Equilibrium condensation calculations suggest a myriad of species - salts, sulfides, silicates, and metals - could condense in exoplanet atmospheres, but how they behave as clouds is uncertain. The behavior of clouds - their formation, evolution, and equilibrium size distribution - is controlled by cloud microphysics, which includes processes such as nucleation, condensation, and evaporation. In this work, we explore the cloudy exoplanet phase space by using a cloud microphysics model to simulate a suite of cloud species ranging from cooler condensates such as KCl/ZnS, to hotter condensates like perovskite and corundum. We investigate how the cloudiness and cloud particle sizes of exoplanets change due to variations in temperature, metallicity, gravity, and cloud formation mechanisms, and how these changes may be reflected in current and future observations. In particular, we will evaluate where in phase space cloud spectral features could be observable using JWST MIRI at long wavelengths, which will depend on the cloud particle size distribution and cloud species.

  10. Overlap Properties of Clouds Generated by a Cloud Resolving Model

    NASA Technical Reports Server (NTRS)

    Oreopoulos, L.; Khairoutdinov, M.

    2002-01-01

    In order for General Circulation Models (GCMs), one of our most important tools to predict future climate, to correctly describe the propagation of solar and thermal radiation through the cloudy atmosphere, a realistic description of the vertical distribution of cloud amount is needed. Actually, one needs not only the cloud amounts at different levels of the atmosphere, but also how these cloud amounts are related, in other words, how they overlap. Currently GCMs make some idealized assumptions about cloud overlap, for example that contiguous cloud layers overlap maximally and non-contiguous cloud layers overlap in a random fashion. Since there are difficulties in obtaining the vertical profile of cloud amount from observations, the realism of the overlap assumptions made in GCMs has not yet been rigorously investigated. Recently, however, cloud observations from a relatively new type of ground radar have been used to examine the vertical distribution of cloudiness. These observations suggest that the GCM overlap assumptions are dubious. Our study uses cloud fields from sophisticated models dedicated to simulating cloud formation, maintenance, and dissipation, called Cloud Resolving Models (CRMs). These models are generally considered capable of producing realistic three-dimensional representations of cloudiness. Using numerous cloud fields produced by such a CRM, we show that the degree of overlap between cloud layers is a function of their separation distance, and is in general described by a combination of the maximum and random overlap assumptions, with random overlap dominating as separation distances increase. We show that it is possible to parameterize this behavior in a way that can eventually be incorporated in GCMs. Our results bear a significant resemblance to the results from the radar observations despite the completely different nature of the datasets. This consistency is encouraging and will promote development of new radiative transfer codes that will estimate the radiation effects of multi-layer cloud fields more accurately.
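The distance-dependent blend of maximum and random overlap described above is commonly parameterized with an exponential decorrelation weight. The sketch below follows that widely used form; the 2 km decorrelation length and the cloud fractions are illustrative values, not results from this study:

```python
import math

def combined_cloud_fraction(c1, c2, dz_km, decorr_km=2.0):
    """Total cloud cover of two layers blended between the maximum and
    random overlap assumptions; the weight alpha -> 1 (maximum overlap)
    for small separations and alpha -> 0 (random overlap) as the
    separation distance grows."""
    c_max = max(c1, c2)                   # maximum overlap limit
    c_ran = c1 + c2 - c1 * c2             # random overlap limit
    alpha = math.exp(-dz_km / decorr_km)  # blending weight
    return alpha * c_max + (1.0 - alpha) * c_ran

# Nearby layers overlap nearly maximally; distant layers nearly randomly.
print(combined_cloud_fraction(0.3, 0.4, dz_km=0.1))   # close to 0.4
print(combined_cloud_fraction(0.3, 0.4, dz_km=10.0))  # close to 0.58
```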

  11. The radiative impact of cumulus cloudiness in a general circulation model

    NASA Technical Reports Server (NTRS)

    Moeng, C. H.; Randall, D. A.

    1982-01-01

    The effect of cumulus cloudiness on the radiational heating, and on other aspects of the climate, was simulated by the GLAS Climate Model. An experiment in which the cumulus cloudiness is neglected completely for purposes of the solar and terrestrial radiation parameterizations was performed. The results are compared with those of a control run, in which 100% cumulus cloud cover is assumed. The net solar radiation input into the Earth-atmosphere system is more realistic in the experiment, and the model's underprediction of the global mean outgoing thermal radiation at the top of the atmosphere is reduced. The results suggest that there is a positive feedback between cumulus convection and the radiation field. The upper troposphere is warmer in the experiment, the surface air temperature increases over land, and the thermal lows over the continents intensify.

  12. Cirrus and Water Vapor Transport in the Tropical Tropopause Layer

    NASA Astrophysics Data System (ADS)

    Dinh, Tra Phuong

    Simulations of tropical-tropopause-layer (TTL) cirrus under the influence of a large-scale equatorial Kelvin wave have been performed in two dimensions. These simulations show that, even under the influence of the large-scale wave, radiatively induced dynamics in TTL cirrus plays an important role in the transport of water vapor in the vertical direction. In a typical TTL cirrus, the heating that results from absorption of radiation by ice crystals induces a mesoscale circulation. Advection of ice and water vapor by the radiatively induced circulation leads to the persistence of the cloud and upward advection of the cloudy air. Upward advection of the cloudy air is equivalent to upward transport of water vapor when the air above the cloud is drier than the cloudy air, and downward transport otherwise. In TTL cirrus, microphysical processes also contribute to transport of water vapor in the vertical direction. Ice nucleation and growth, followed by sedimentation and sublimation, always lead to downward transport of water vapor. The magnitude of the downward transport by microphysical processes increases with the relative humidity of the air surrounding the cloud. Moisture in the surrounding environment is important because there are continuous interactions between the cloudy and environmental air throughout the cloud boundary. In our simulations, when the air surrounding the cloud is subsaturated, hence drier than the cloudy air, the magnitude of the downward transport due to microphysical processes is smaller than that of the upward transport due to the radiatively induced advection of water vapor. The net result is upward transport of water vapor, and equivalently hydration of the lower stratosphere. On the other hand, when the surrounding air is supersaturated, hence moister than the cloudy air, microphysical and radiatively induced dynamical processes work in concert to induce downward transport of water vapor, that is, dehydration of the lower stratosphere.
TTL cirrus processes also depend sensitively on the deposition coefficient of water vapor on ice crystals. The deposition coefficient determines the depositional growth rate of ice crystals, and hence the microphysical and radiative properties of the cloud. In our simulations, larger values of the deposition coefficient correspond to fewer ice crystals nucleated during homogeneous freezing, larger ice crystal sizes, faster ice sedimentation, a smaller radiative heating rate, and weaker dynamics. These results indicate that detailed observations of the relative humidity in the vicinity of TTL cirrus and accurate laboratory measurements of the deposition coefficient are necessary to quantify the impact of TTL cirrus on the dehydration of the stratosphere. This research highlights the complex role of microphysical, radiative and dynamical processes in the transport of water vapor within TTL cirrus. It shows that under certain realistic conditions, TTL cirrus may lead to upward transport of water vapor, which results in moistening of the lower stratosphere. Thus it is not accurate to always associate TTL cirrus with stratospheric dehydration.

  13. High redshift quasars and high metallicities

    NASA Technical Reports Server (NTRS)

    Ferland, Gary J.

    1997-01-01

    A large-scale code called Cloudy was designed to simulate non-equilibrium plasmas and predict their spectra. The goal was to apply it to studies of galactic and extragalactic emission line objects in order to reliably deduce abundances and luminosities. Quasars are of particular interest because they are the most luminous objects in the universe and the highest redshift objects that can be observed spectroscopically, and their emission lines can reveal the composition of the interstellar medium (ISM) of the universe when it was well under a billion years old. The lines are produced by warm (approximately 10(sup 4) K) gas with moderate to low density (n less than or equal to 10(sup 12) cm(sup -3)). Cloudy has been extended to include approximately 10(sup 4) resonance lines from the 495 possible stages of ionization of the lightest 30 elements, an extension that required several steps. The charge transfer database was expanded to complete the needed reactions between hydrogen and the first four ions and fit all reactions with a common approximation. Radiative recombination rate coefficients were derived for recombination from all closed shells, where this process should dominate. Analytical fits to Opacity Project (OP) and other recent photoionization cross sections were produced. Finally, rescaled OP oscillator strengths were used to compile a complete set of data for 5971 resonance lines. The major discovery has been that high redshift quasars have very high metallicities and there is strong evidence that the quasar phenomenon is associated with the birth of massive elliptical galaxies.

  14. The Validity of 21 cm Spin Temperature as a Kinetic Temperature Indicator in Atomic and Molecular Gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Gargi; Ferland, G. J.; Hubeny, I., E-mail: gargishaw@gmail.com, E-mail: gary@uky.edu, E-mail: hubeny@as.arizona.edu

    The gas kinetic temperature (T {sub K}) of various interstellar environments is often inferred from observations that can deduce level populations of atoms, ions, or molecules using spectral line observations; H i 21 cm is perhaps the most widely used, and has a long history. Usually the H i 21 cm line is assumed to be in thermal equilibrium and the populations are given by the Boltzmann distribution. A variety of processes, many involving Ly α, can affect the 21 cm line. Here we show how this is treated in the spectral simulation code Cloudy, and present numerical simulations of environments where this temperature indicator is used, with a detailed treatment of the physical processes that determine level populations within H{sup 0}. We discuss situations where this temperature indicator traces T {sub K}, cases where it fails, as well as the effects of Ly α pumping on the 21 cm spin temperature. We also show that the Ly α excitation temperature rarely traces the gas kinetic temperature.
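The thermal-equilibrium assumption mentioned above ties the 21 cm level populations to a single spin temperature through the Boltzmann distribution, with statistical-weight ratio 3 and equivalent transition temperature T* = hν/k ≈ 0.0682 K. A minimal sketch of that textbook relation (not Cloudy's internal treatment, which solves the full level balance):

```python
import math

T_STAR = 0.0682  # K, equivalent temperature of the 21 cm transition (h*nu/k)

def level_ratio(t_spin):
    """Upper-to-lower population ratio at spin temperature t_spin:
    n_upper/n_lower = 3 * exp(-T_STAR / t_spin)."""
    return 3.0 * math.exp(-T_STAR / t_spin)

def spin_temperature(n_upper, n_lower):
    """Invert the Boltzmann relation to recover the spin temperature."""
    return T_STAR / math.log(3.0 * n_lower / n_upper)

# At typical ISM temperatures T_spin >> T_STAR, so the ratio sits just
# below the statistical-weight value of 3: the line is nearly thermalized.
print(round(level_ratio(100.0), 4))                       # 2.998
print(round(spin_temperature(level_ratio(100.0), 1.0), 1))  # 100.0
```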

  15. Global warming: Clouds cooled the Earth

    NASA Astrophysics Data System (ADS)

    Mauritsen, Thorsten

    2016-12-01

    The slow instrumental-record warming is consistent with lower-end climate sensitivity. Simulations and observations now show that changing sea surface temperature patterns could have affected cloudiness and thereby dampened the warming.

  16. Simulation of low clouds in the Southeast Pacific by the NCEP GFS: sensitivity to vertical mixing

    NASA Astrophysics Data System (ADS)

    Sun, R.; Moorthi, S.; Xiao, H.; Mechoso, C. R.

    2010-12-01

    The NCEP Global Forecast System (GFS) model has an important systematic error shared by many other models: stratocumuli are missed over the subtropical eastern oceans. It is shown that this error can be alleviated in the GFS by introducing a consideration of the low-level inversion and making two modifications in the model's representation of vertical mixing. The modifications consist of (a) the elimination of background vertical diffusion above the inversion and (b) the incorporation of a stability parameter based on the cloud-top entrainment instability (CTEI) criterion, which limits the strength of shallow convective mixing across the inversion. A control simulation and three experiments are performed in order to examine both the individual and combined effects of the modifications on the generation of the stratocumulus clouds. Individually, both modifications result in enhanced cloudiness in the Southeast Pacific (SEP) region, although the cloudiness is still low compared to the ISCCP climatology. If the modifications are applied together, however, the total cloudiness produced in the southeast Pacific has realistic values. This nonlinearity arises as the effects of both modifications reinforce each other in reducing the leakage of moisture across the inversion. More moisture trapped below the inversion than in the control run without modifications leads to an increase in cloud amount and cloud-top radiative cooling. A positive feedback due to enhanced turbulent mixing in the planetary boundary layer by cloud-top radiative cooling then leads to and maintains the stratocumulus cover. Although the amount of total cloudiness obtained with both modifications has realistic values, the relative contributions of low, middle, and high layers tend to differ from the observations.
These results demonstrate that it is possible to simulate realistic marine boundary clouds in large-scale models by implementing direct and physically based improvements in the model parameterizations.
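The CTEI criterion referred to above is commonly written as a threshold on the jump in equivalent potential temperature across the inversion relative to the total-water jump. The sketch below uses one common form with the Randall/Deardorff critical value; the jump values are illustrative, not taken from the GFS experiments, and sign conventions vary between papers:

```python
CP = 1004.0     # J kg^-1 K^-1, specific heat of dry air
LV = 2.5e6      # J kg^-1, latent heat of vaporization
K_CRIT = 0.23   # Randall/Deardorff critical value

def ctei_unstable(d_theta_e, d_q_t):
    """Cloud-top entrainment instability test: the cloud layer is prone
    to entrainment-driven breakup when the equivalent-potential-
    temperature jump is small relative to the total-water jump,
        d_theta_e < K_CRIT * (LV / CP) * d_q_t,
    where jumps are free-troposphere minus boundary-layer values
    (d_q_t is typically negative: drier air above the inversion)."""
    return d_theta_e < K_CRIT * (LV / CP) * d_q_t

# Typical stratocumulus-topped boundary layer: warm, dry air above.
print(ctei_unstable(d_theta_e=5.0, d_q_t=-0.008))   # False (stable)
print(ctei_unstable(d_theta_e=-5.0, d_q_t=-0.008))  # True (unstable)
```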

  17. Simulation of low clouds in the Southeast Pacific by the NCEP GFS: sensitivity to vertical mixing

    NASA Astrophysics Data System (ADS)

    Sun, R.; Moorthi, S.; Xiao, H.; Mechoso, C.-R.

    2010-08-01

    The NCEP Global Forecast System (GFS) model has an important systematic error shared by many other models: stratocumuli are missed over the subtropical eastern oceans. It is shown that this error can be alleviated in the GFS by introducing a consideration of the low-level inversion and making two modifications in the model's representation of vertical mixing. The modifications consist of (a) the elimination of background vertical diffusion above the inversion and (b) the incorporation of a stability parameter based on the cloud-top entrainment instability (CTEI) criterion, which limits the strength of shallow convective mixing across the inversion. A control simulation and three experiments are performed in order to examine both the individual and combined effects of the modifications on the generation of the stratocumulus clouds. Individually, both modifications result in enhanced cloudiness in the Southeast Pacific (SEP) region, although the cloudiness is still low compared to the ISCCP climatology. If the modifications are applied together, however, the total cloudiness produced in the southeast Pacific has realistic values. This nonlinearity arises as the effects of both modifications reinforce each other in reducing the leakage of moisture across the inversion. More moisture trapped below the inversion than in the control run without modifications leads to an increase in cloud amount and cloud-top radiative cooling. A positive feedback due to enhanced turbulent mixing in the planetary boundary layer by cloud-top radiative cooling then leads to and maintains the stratocumulus cover. Although the amount of total cloudiness obtained with both modifications has realistic values, the relative contributions of low, middle, and high layers tend to differ from the observations.
These results demonstrate that it is possible to simulate realistic marine boundary clouds in large-scale models by implementing direct and physically based improvements in the model parameterizations.

  18. A novel method to improve MODIS AOD retrievals in cloudy pixels using an analog ensemble approach

    NASA Astrophysics Data System (ADS)

    Kumar, R.; Raman, A.; Delle Monache, L.; Alessandrini, S.; Cheng, W. Y. Y.; Gaubert, B.; Arellano, A. F.

    2016-12-01

    Particulate matter (PM) concentrations are one of the fundamental indicators of air quality. Earth-orbiting satellite platforms acquire column aerosol abundance that can in turn provide information about PM concentrations. One of the serious limitations of column aerosol retrievals from low-earth-orbiting satellites is that the algorithms are based on clear-sky assumptions: they do not retrieve AOD in cloudy pixels. After filtering cloudy pixels, these algorithms also arbitrarily remove the brightest and darkest 25% of the remaining pixels over ocean, and the brightest and darkest 50% over land, to filter any residual contamination from clouds. This becomes a critical issue especially in regions that experience a monsoon, such as Asia and North America. In North America, the monsoon season brings a wide variety of extreme air quality events, such as fires in California and dust storms in Arizona. Assessment of these episodic events warrants frequent monitoring of aerosol observations from remote sensing retrievals. In this study, we demonstrate a method to fill in cloudy pixels in Moderate Resolution Imaging Spectroradiometer (MODIS) AOD retrievals based on ensembles generated using an analog-based approach (AnEn). It provides a probabilistic distribution of AOD in cloudy pixels using historical records of model simulations of meteorological predictors such as AOD, relative humidity, and wind speed, and past observational records of MODIS AOD at a given target site. We use simulations from the coupled community Weather Research and Forecasting model with chemistry (WRF-Chem) run at a resolution comparable to MODIS AOD. Analogs selected from the summer months (June, July) of 2011-2013 from the model and corresponding observations are used as a training dataset. Then, missing AOD retrievals in cloudy pixels in the last 31 days of the selected period are estimated. Here, we use AERONET stations as target sites to facilitate comparison against in-situ measurements.
We use two approaches to evaluate the estimated AOD: 1) by comparing against reanalysis AOD, 2) by inverting AOD to PM10 concentrations and then comparing those with measured PM10. AnEn is an efficient approach to generate an ensemble as it involves only one model run and provides an estimate of uncertainty that complies with the physical and chemical state of the atmosphere.
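The analog search at the heart of the AnEn approach can be sketched as a weighted nearest-neighbour lookup in predictor space: find the historical forecasts most similar to the current one and return their paired observations as the ensemble. The predictors, weights, and toy records below are illustrative, not the WRF-Chem/MODIS training data:

```python
import numpy as np

def analog_ensemble(current, hist_predictors, hist_obs, k=10, weights=None):
    """Return the k observed values whose historical predictor vectors
    (e.g. model AOD, relative humidity, wind speed) are closest to the
    current forecast's predictors; the spread of the returned ensemble
    gives a flow-dependent uncertainty estimate."""
    hist_predictors = np.asarray(hist_predictors, dtype=float)
    current = np.asarray(current, dtype=float)
    if weights is None:
        weights = np.ones(current.size)
    # Weighted Euclidean distance in predictor space (standardize the
    # predictors first in any real application).
    d = np.sqrt((((hist_predictors - current) ** 2) * weights).sum(axis=1))
    idx = np.argsort(d)[:k]
    return np.asarray(hist_obs, dtype=float)[idx]

# Toy history: three past cases of (model AOD, RH %, wind m/s) with the
# MODIS AOD that was actually observed on each occasion.
hist_x = [[0.20, 60.0, 3.0], [0.50, 80.0, 5.0], [0.21, 62.0, 3.1]]
hist_y = [0.18, 0.55, 0.22]
ens = analog_ensemble([0.20, 60.0, 3.0], hist_x, hist_y, k=2)
print(ens)  # the two closest cases: 0.18 and 0.22
```

Only one deterministic model run is needed; the ensemble comes entirely from the historical archive, which is why AnEn is cheap compared with running many perturbed forecasts.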

  19. Regime-based evaluation of cloudiness in CMIP5 models

    NASA Astrophysics Data System (ADS)

    Jin, Daeho; Oreopoulos, Lazaros; Lee, Dongmin

    2017-01-01

    The concept of cloud regimes (CRs) is used to develop a framework for evaluating the cloudiness of 12 models from phase 5 of the Coupled Model Intercomparison Project (CMIP5). Reference CRs come from existing global International Satellite Cloud Climatology Project (ISCCP) weather states. The evaluation is made possible by the implementation in several CMIP5 models of the ISCCP simulator, which generates in each grid cell daily joint histograms of cloud optical thickness and cloud top pressure. Model performance is assessed with several metrics, such as CR global cloud fraction (CF), CR relative frequency of occurrence (RFO), their product [the long-term average total cloud amount (TCA)], cross-correlations of CR RFO maps, and a metric of resemblance between model and ISCCP CRs. In terms of CR global RFO, arguably the most fundamental metric, the models perform unsatisfactorily overall, except for CRs representing thick storm clouds. Because model CR CF is internally constrained by our method, RFO discrepancies also yield substantial TCA errors. Our results support previous findings that CMIP5 models underestimate cloudiness. The multi-model mean performs well in matching observed RFO maps for many CRs, but is still not the best for this or other metrics. When overall performance across all CRs is assessed, some models, despite shortcomings, apparently outperform Moderate Resolution Imaging Spectroradiometer (MODIS) cloud observations evaluated against ISCCP as if they were another model output. Lastly, contrasting cloud simulation performance against each model's equilibrium climate sensitivity, in order to gain insight on whether good cloud simulation pairs with particular values of this parameter, yields no clear conclusions.
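The TCA metric defined above is simply the RFO-weighted sum of the regime cloud fractions; a minimal sketch with three illustrative regimes (the numbers are made up, not ISCCP values):

```python
def total_cloud_amount(rfo, cf):
    """Long-term average total cloud amount: TCA = sum_i RFO_i * CF_i,
    where the regime relative frequencies of occurrence sum to 1."""
    assert abs(sum(rfo) - 1.0) < 1e-9, "RFOs must sum to 1"
    return sum(r * c for r, c in zip(rfo, cf))

# Three illustrative regimes: thick storm, shallow cumulus, near-clear.
tca = total_cloud_amount([0.2, 0.5, 0.3], [0.95, 0.40, 0.10])
print(round(tca, 2))  # 0.42
```

This decomposition is what lets RFO errors propagate directly into TCA errors even when the per-regime cloud fractions are constrained.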

  20. Photoionization modeling of the LWS fine-structure lines in IR bright galaxies

    NASA Technical Reports Server (NTRS)

    Satyapal, S.; Luhman, M. L.; Fischer, J.; Greenhouse, M. A.; Wolfire, M. G.

    1997-01-01

    The long wavelength spectrometer (LWS) fine structure line spectra from infrared luminous galaxies were modeled using stellar evolutionary synthesis models combined with photoionization and photodissociation region models. The calculations were carried out by using the computational code CLOUDY. Starburst and active galactic nuclei models are presented. The effects of dust in the ionized region are examined.

  1. Simultaneous Variational Retrievals of Temperature, Humidity, Surface and Cloud Properties from Satellite and Airborne Hyperspectral Infrared Sounder Data using the Havemann-Taylor Fast Radiative Transfer Code (HT-FRTC) as the Forward Model Operator

    NASA Astrophysics Data System (ADS)

    Havemann, S.; Thelen, J. C.; Harlow, R. C.

    2016-12-01

    Full scattering radiative transfer simulations for hyperspectral infrared and shortwave sounders are essential in order to extract the maximal information content from these instruments for cloudy scenes and those with significant aerosol loading, but have rarely been done because of the high computational demands. The Havemann-Taylor Fast Radiative Transfer Code works in Principal Component space, reducing the computational demand by orders of magnitude and thereby making fast simultaneous retrievals of vertical profiles of temperature and humidity, surface temperature and emissivity, and cloud and aerosol properties feasible. Results of successful retrievals using IASI sounder data as well as data taken during flights of the Airborne Research Interferometer Evaluation System (ARIES) on board the FAAM BAe 146 aircraft will be presented. These will demonstrate that the use of all the instrument channels in PC space can provide valuable information both on temperature and humidity profiles relevant for NWP and on cirrus cloud properties at the same time. There is very significant information on the humidity profile below semi-transparent cirrus to be gained from IR sounder data. The retrieved ice water content is in good agreement with airborne in-situ measurements during Lagrangian spiral descents. In addition to the full scattering calculations, the HT-FRTC has also been trained with a fast approximation to the scattering problem which reduces it to a clear-sky calculation with a modified extinction (Chou scaling). Chou scaling is a reasonable approximation in the infrared but is very poor where the solar contribution becomes significant. Comparing the retrieval performance of the full scattering solution and the Chou scaling solution in the forward model operator for infrared sounders shows that temperature and humidity profiles are only marginally degraded by the use of the Chou scaling approximation. Retrievals of the specific cloud parameters (ice water content, cirrus cloud thickness, and cirrus cloud horizontal fraction) are, however, strongly degraded under the Chou scaling approximation. The aim is also to use HT-FRTC to run clear and cloudy simulations for the atmospheric state test set which has been prepared by the NASA/JPL/AIRS project.

  2. Changes in Extratropical Storm Track Cloudiness 1983-2008: Observational Support for a Poleward Shift

    NASA Technical Reports Server (NTRS)

    Bender, Frida A-M.; Ramanathan, V.; Tselioudis, G.

    2012-01-01

    Climate model simulations suggest that the extratropical storm tracks will shift poleward as a consequence of global warming. In this study the northern and southern hemisphere storm tracks over the Pacific and Atlantic ocean basins are studied using observational data, primarily from the International Satellite Cloud Climatology Project, ISCCP. Potential shifts in the storm tracks are examined using the observed cloud structures as proxies for cyclone activity. Different data analysis methods are employed, with the objective of addressing difficulties and uncertainties in using ISCCP data for regional trend analysis. In particular, three data filtering techniques are explored: excluding specific problematic regions from the analysis, regressing out a spurious viewing geometry effect, and excluding specific cloud types from the analysis. These adjustments all, to varying degrees, moderate the cloud trends in the original data but leave the qualitative aspects of those trends largely unaffected. Therefore, our analysis suggests that ISCCP data can be used to interpret regional trends in cloudiness, provided that data and instrumental artefacts are recognized and accounted for. The variation in magnitude between trends emerging from the application of different data correction methods allows us to estimate possible ranges for the observational changes. It is found that the storm tracks, here represented by the extent of the midlatitude-centered band of maximum cloud cover over the studied ocean basins, experience a poleward shift as well as a narrowing over the 25-year period covered by ISCCP. The observed magnitudes of these effects are larger than in current-generation climate models (CMIP3). The magnitude of the shift is particularly large in the northern hemisphere Atlantic. This is also the one region, of the four, in which imperfect data primarily prevents us from drawing firm conclusions.
The shifted path and reduced extent of the storm track cloudiness are accompanied by a regional reduction in total cloud cover. This decrease in cloudiness can primarily be ascribed to low-level clouds, whereas the upper-level cloud fraction actually increases, according to ISCCP. Independent satellite observations of radiative fluxes at the top of the atmosphere are consistent with the changes in total cloud cover. The shift in cloudiness is also supported by a shift in the central position of the mid-troposphere meridional temperature gradient. We do not find support for aerosols playing a significant role in the satellite-observed changes in cloudiness. The observed changes in storm track cloudiness can be related to local cloud-induced changes in radiative forcing, using ERBE and CERES radiative fluxes. The shortwave and longwave components are found to act together, leading to a positive (warming) net radiative effect in response to the cloud changes in the storm track regions, indicative of positive cloud feedback. Among the CMIP3 models that simulate poleward shifts in all four storm track areas, all but one show decreasing cloud amount on a global mean scale in response to increased CO2 forcing, further consistent with positive cloud feedback. Models with low equilibrium climate sensitivity simulate a poleward shift of the storm tracks to a lesser extent than higher-sensitivity models do.

  3. 3MdB: the Mexican Million Models database

    NASA Astrophysics Data System (ADS)

    Morisset, C.; Delgado-Inglada, G.

    2014-10-01

    The 3MdB is an original effort to construct a large multipurpose database of photoionization models. This is a more modern version of a previous attempt based on Cloudy3D and IDL tools. It is accessed by MySQL requests. The models are obtained using the well known and widely used Cloudy photoionization code (Ferland et al, 2013). The database is aimed to host grids of models with different references to identify each project and to facilitate the extraction of the desired data. We present here a description of the way the database is managed and some of the projects that use 3MdB. Anybody can ask for a grid to be run and stored in 3MdB, to increase the visibility of the grid and the potential side applications of it.
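Grids in 3MdB are tagged with a project reference and retrieved with MySQL requests. The pattern can be sketched as below; the table and column names are hypothetical stand-ins for the 3MdB schema, and sqlite3 replaces the MySQL server only so the sketch is self-contained:

```python
import sqlite3

# Hypothetical, simplified mirror of a 3MdB-style results table; the real
# database is served over MySQL and its schema and column names may differ.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE tab (
    ref TEXT,          -- project reference identifying each grid
    com1 TEXT,         -- free-form comment describing the model
    hbeta REAL,        -- H-beta line intensity
    oiii_5007 REAL     -- [O III] 5007 line intensity
)""")
con.executemany("INSERT INTO tab VALUES (?, ?, ?, ?)", [
    ("PNe_grid", "low Z", 1.0, 3.2),
    ("PNe_grid", "high Z", 1.0, 7.5),
    ("HII_grid", "test", 1.0, 2.1),
])

# Extract one project's models, as one would with a MySQL request to 3MdB.
rows = con.execute(
    "SELECT hbeta, oiii_5007 FROM tab WHERE ref = ?", ("PNe_grid",)
).fetchall()
```

Tagging every grid with a reference is what lets many unrelated projects share one database while each user pulls out only the models relevant to their own question.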

  4. Integrated Efforts for Analysis of Geophysical Measurements and Models.

    DTIC Science & Technology

    1997-09-26

    This contract supported investigations of integrated applications of physics, ephemerides... [table-of-contents fragments: regions and GPS data validations; PL-SCINDA: visualization and analysis techniques; view controls; map selection] ...and IR data, about cloudy pixels. Clustering and maximum likelihood classification algorithms categorize up to four cloud layers into stratiform or

  5. Revealing the sub-nanometre three-dimensional microstructure of a metallic meteorite

    NASA Astrophysics Data System (ADS)

    Einsle, J. F.; Harrison, R.; Blukis, R.; Eggeman, A.; Saghi, Z.; Martineau, B.; Bagot, P.; Collins, S. M.; Midgley, P. A.

    2017-12-01

    Coming from the cores of differentiated planetesimals, iron-nickel meteorites provide some of the only direct material artefacts from planetary cores. Iron-nickel meteorites contain a record of their thermal and magnetic history, written in the intergrowth of iron-rich and nickel-rich phases that formed during slow cooling over millions of years. Of intense interest for understanding the thermal and magnetic history is the "cloudy zone", a nanoscale intergrowth that has recently been used to provide a record of magnetic activity on the parent body of stony-iron meteorites. The cloudy zone consists of islands of tetrataenite surrounded by a matrix phase. Here we use a multi-scale, multidimensional comparative study combining high-resolution electron diffraction, scanning transmission electron tomography with chemical mapping, atom probe tomography, and micromagnetic simulations to reveal the three-dimensional architecture of the cloudy zone with sub-nanometre spatial resolution. Machine learning data deconvolution strategies enable the three microanalytical techniques to converge on a consistent microstructural description of the cloudy zone. Isolated islands of tetrataenite are found, embedded in a continuous matrix of an FCC-supercell Fe27Ni5 structure never before identified in nature. The tetrataenite islands are arranged in clusters of three crystallographic variants, which control how magnetic information is encoded into the nanostructure during slow cooling. The new compositional, crystallographic, and micromagnetic data have profound implications for how the cloudy zone acquires magnetic remanence, and require a revision of the low-temperature metastable phase diagram of the Fe-Ni system. This can lead to a refinement of core dynamics in small planetoids.

  6. Regime-Based Evaluation of Cloudiness in CMIP5 Models

    NASA Technical Reports Server (NTRS)

    Jin, Daeho; Oreopoulos, Lazaros; Lee, Dongmin

    2016-01-01

    The concept of Cloud Regimes (CRs) is used to develop a framework for evaluating the cloudiness of 12 models from phase 5 of the Coupled Model Intercomparison Project (CMIP5). Reference CRs come from existing global International Satellite Cloud Climatology Project (ISCCP) weather states. The evaluation is made possible by the implementation in several CMIP5 models of the ISCCP simulator, which generates for each grid cell daily joint histograms of cloud optical thickness and cloud top pressure. Model performance is assessed with several metrics such as CR global cloud fraction (CF), CR relative frequency of occurrence (RFO), their product (long-term average total cloud amount [TCA]), cross-correlations of CR RFO maps, and a metric of resemblance between model and ISCCP CRs. In terms of CR global RFO, arguably the most fundamental metric, the models perform unsatisfactorily overall, except for CRs representing thick storm clouds. Because model CR CF is internally constrained by our method, RFO discrepancies also yield substantial TCA errors. Our findings support previous studies showing that CMIP5 models underestimate cloudiness. The multi-model mean performs well in matching observed RFO maps for many CRs, but is not the best for this or other metrics. When overall performance across all CRs is assessed, some models, despite their shortcomings, apparently outperform Moderate Resolution Imaging Spectroradiometer (MODIS) cloud observations evaluated against ISCCP as if they were another model output. Lastly, cloud simulation performance is contrasted with each model's equilibrium climate sensitivity (ECS) in order to gain insight into whether good cloud simulation pairs with particular values of this parameter.
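The cross-correlation of CR RFO maps used above can be sketched as an area-weighted pattern correlation. The cosine-latitude weighting and the map handling below are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

def pattern_corr(model_rfo, obs_rfo, lats_deg):
    """Area-weighted cross-correlation of two RFO maps (lat x lon)."""
    # Cosine-latitude weights, broadcast across longitudes and normalized.
    w = np.cos(np.deg2rad(lats_deg))[:, None] * np.ones_like(model_rfo)
    w = w / w.sum()
    mm = (w * model_rfo).sum()                       # weighted means
    om = (w * obs_rfo).sum()
    cov = (w * (model_rfo - mm) * (obs_rfo - om)).sum()
    sm = np.sqrt((w * (model_rfo - mm) ** 2).sum())  # weighted std devs
    so = np.sqrt((w * (obs_rfo - om) ** 2).sum())
    return cov / (sm * so)
```

A value near 1 means the model places a regime's occurrence in the right locations, regardless of its absolute frequency; the RFO and TCA metrics then test the frequencies themselves.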

  7. Use of a GCM to Explore Sampling Issues in Connection with Satellite Remote Sensing of the Earth Radiation Budget

    NASA Technical Reports Server (NTRS)

    Fowler, Laura D.; Wielicki, Bruce A.; Randall, David A.; Branson, Mark D.; Gibson, Gary G.; Denn, Fredrick M.

    2000-01-01

    Collocated in time and space, top-of-the-atmosphere measurements of the Earth radiation budget (ERB) and cloudiness from passive scanning radiometers, and lidar- and radar-in-space measurements of multilayered cloud systems, are the required combination to improve our understanding of the role of clouds and radiation in climate. Experiments to fly multiple satellites "in formation" to measure simultaneously the radiative and optical properties of overlapping cloud systems are being designed. Because satellites carrying ERB experiments and satellites carrying lidars or radars in space have different orbital characteristics, the number of simultaneous measurements of radiation and clouds is reduced relative to the number of measurements made by each satellite independently. Monthly averaged coincident observations of radiation and cloudiness are biased when compared against more frequently sampled observations due, in particular, to the undersampling of their diurnal cycle. Using the Colorado State University General Circulation Model (CSU GCM), the goal of this study is to measure the impact of using simultaneous observations from the Earth Observing System (EOS) platform and companion satellites flying lidars or radars on monthly averaged diagnostics of longwave radiation, cloudiness, and cloud optical properties. To do so, the hourly varying geographical distributions of coincident locations between the afternoon EOS (EOS-PM) orbit and the orbit of the ICESAT satellite set to fly at an altitude of 600 km, and between the EOS-PM orbit and the orbits of the PICASSO satellite proposed to fly at altitudes of 485 km (PICA485) or 705 km (PICA705), are simulated in the CSU GCM for a 60-month time period starting at the idealized July 1, 2001, launch date. 
Monthly averaged diagnostics of the top-of-the-atmosphere, atmospheric, and surface longwave radiation budgets and clouds accumulated over grid boxes corresponding to satellite overpasses are compared against monthly averaged diagnostics obtained from hourly samplings over the entire globe. Results show that differences between irregularly (satellite) and regularly (true) sampled diagnostics of the longwave net radiative budgets are the greatest at the surface and the smallest in the atmosphere and at the top-of-the-atmosphere, under both cloud-free and cloudy conditions. In contrast, differences between the satellite and the true diagnostics of the longwave cloud radiative forcings are the largest in the atmosphere and at the top-of-the-atmosphere, and the smallest at the surface. A poorer diurnal sampling of the surface temperature in the satellite simulations relative to the true simulation contributes a major part to sampling biases in the longwave net radiative budgets, while a poorer diurnal sampling of cloudiness and its optical properties directly affects diagnostics of the longwave cloud radiative forcings. A factor of 8 difference in the number of satellite overpasses between PICA705 and PICA485 and ICESAT leads to a systematic factor of 3 difference in the spatial standard deviations of all radiative and cloudiness diagnostics.

  8. Performance of greenhouse gas profiling by infrared-laser and microwave occultation in cloudy air

    NASA Astrophysics Data System (ADS)

    Proschek, V.; Kirchengast, G.; Emde, C.; Schweitzer, S.

    2012-12-01

    ACCURATE is a proposed future satellite mission enabling simultaneous measurements of greenhouse gases (GHGs), wind, and thermodynamic variables from Low Earth Orbit (LEO). The measurement principle is a combination of LEO-LEO infrared-laser occultation (LIO) and microwave occultation (LMO), the LMIO method, where the LIO signals are very sensitive to clouds. The GHG retrieval will therefore be strongly influenced by clouds in parts of the troposphere. The IR-laser signals, at wavelengths within 2-2.5 µm, are chosen to measure six GHGs (H2O, CO2, CH4, N2O, O3, CO; including the key isotopes 13CO2, C18OO, HDO). The LMO signals enable co-measurement of the thermodynamic variables. In this presentation we introduce the algorithm to retrieve GHG profiles under cloudy-air conditions by using quasi-realistic forward simulations, including also the influence of Rayleigh scattering, scintillations, and aerosols. Data from CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations), with a vertical resolution of about 60 m and a horizontal resolution of about 330 m, were used for the simulation of clouds. For each GHG, the IR-laser signals consist of a GHG-sensitive signal and a close-by reference signal. The key process, "differencing" of these two signals, removes the atmospheric "broadband" effects, resulting in a pure GHG transmission profile. Very thin ice clouds, like sub-visible cirrus, are fairly transparent to the IR-laser signals; thicker ice and liquid water clouds block them. The reference signal is used to produce a cloud layering profile, from zero to blocking clouds, and is smoothed in a preprocess to suppress scintillations. Sufficiently small gaps in the cloud layering profile, of width <2 km, are found to enable a decent retrieval of entire GHG profiles over the UTLS under broken cloudiness and are therefore bridged by interpolation. Otherwise, in the case of essentially continuous cloudiness, the profiles are found to terminate at cloud-top level. 
The accuracy of retrieved GHG profiles is found to be better than 1% to 4% for single profiles in the UTLS region outside clouds and through broken cloudiness, and the profiles are essentially unbiased. Cloud-gap interpolation increases the tropospheric penetration of GHG profiles for scientific applications. The associated cloud layering profile provides quality-control information on cloud-gap interpolations, if they occurred, and on cloud-top altitude for cloud-blocking cases. The LMIO technique shows promising prospects for GHG monitoring even under cloudy-air conditions.
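The "differencing" step and the cloud-gap bridging described above can be sketched as follows; the transmission profiles and the gap geometry are toy numbers chosen only to illustrate the two operations:

```python
import numpy as np

# Toy transmission profiles on a UTLS altitude grid (km); values illustrative.
z = np.arange(5.0, 15.0, 0.5)
t_broad = np.exp(-0.05 * (15.0 - z))   # broadband loss: clouds, aerosols, Rayleigh
t_sens = 0.8 * t_broad                 # GHG-sensitive channel: extra line absorption
t_ref = t_broad                        # close-by reference channel: broadband only

# "Differencing" the two close-by signals removes the broadband effects,
# leaving a pure GHG transmission profile.
t_ghg = t_sens / t_ref

# Cloud bridging: levels blocked by broken cloud are interpolated across,
# provided the gap is sufficiently small (< 2 km in the retrieval).
blocked = (z > 8.0) & (z < 9.5)        # a 1.5 km broken-cloud gap
t_bridged = t_ghg.copy()
t_bridged[blocked] = np.interp(z[blocked], z[~blocked], t_ghg[~blocked])
```

Because both channels sit close in wavelength, the cloud, aerosol, and Rayleigh terms cancel in the ratio, which is why the method degrades gracefully under broken cloudiness rather than failing outright.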

  9. A Study of the Role of Clouds in the Relationship Between Land Use/Land Cover and the Climate and Air Quality of the Atlanta Area

    NASA Technical Reports Server (NTRS)

    Kidder, Stanley Q.; Hafner, Jan

    1997-01-01

    The goal of Project ATLANTA is to derive a better scientific understanding of how land cover changes associated with urbanization affect local and regional climate and air quality. Clouds play a significant role in this relationship. Using GOES images, we found that in a 63-day period (5 July-5 September 1996) there were zero days that were clear for the entire daylight period. Days that are cloud-free in the morning become partly cloudy with small cumulus clouds in the afternoon in response to solar heating. This result casts doubt on the applicability of California-style air quality models, which run under perpetual clear skies. Days that are clear in the morning have higher ozone than those that are cloudy in the morning. Using the RAMS model, we found that urbanization increases the skin surface temperature by about 1.0-1.5 C on average under cloudy conditions, with an extreme of +3.5 C. Clouds cool the surface through their shading effect by 1.5-2.0 C on average, with an extreme of 5.0 C. RAMS simulates the building stage of the cumulus cloud field well, but does poorly in the decaying phase. Next year's work: a detailed cloud climatology and improved RAMS cloud simulations.

  10. Investigating the Temperature Problem in Narrow Line Emitting AGN

    NASA Astrophysics Data System (ADS)

    Jenkins, Sam; Richardson, Chris T.

    2018-06-01

    Our research investigates the physical conditions in gas clouds around the narrow line region of AGN. Specifically, we explore the necessary conditions for anomalously high electron temperatures, Te, in those clouds. Our 321-galaxy data set was acquired from SDSS DR14 after requiring S/N > 5.0 in [OIII] 4363 and S/N > 3.0 in all BPT diagram emission lines, to ensure both accurate Te and galaxy classification, with 0.04 < z < 1.0. Interestingly, our data set contained no LINERs. We ran simulations using the simulation code Cloudy, focusing on matching the emission exhibited by the hottest of the 70 AGN in our data set. We used multicore computing to cut down on run time, which drastically improved the efficiency of our simulations. We varied hydrogen density, ionization parameter, and metallicity, Z, and found that these three parameters alone were incapable of recreating anomalously high Te, although they successfully matched galaxies showing low to moderate Te. The highest-temperature simulations were at low Z, which facilitates higher temperatures by avoiding the cooling effects of high Z. Our most successful simulations varied Z and grain content, and matched approximately 10% of our high-temperature data. Our simulations with the highest grain content produced the highest Te because of the photoelectric heating that grains provide, which we confirmed by monitoring each heating mechanism as a function of depth. In the near future, we plan to run simulations varying grain content and ionization parameter in order to study the effects these conditions have on gas cloud Te.
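A multicore Cloudy parameter-grid run of this kind might be organized as below. The deck-building follows standard Cloudy input syntax, but the chosen continuum, grid values, and save command are illustrative assumptions, and the sketch only builds the decks rather than invoking the cloudy executable:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def make_deck(log_hden, log_u, log_z):
    """Build a Cloudy input deck for one grid point.

    The command names follow standard Cloudy input syntax, but the chosen
    continuum, file names, and grid values are illustrative."""
    return "\n".join([
        "table agn",                       # a generic AGN ionizing continuum
        f"hden {log_hden}",                # log hydrogen density [cm^-3]
        f"ionization parameter {log_u}",   # log U
        f"metals {log_z} log",             # log metallicity relative to solar
        "iterate to convergence",
        'save line list "lines.txt" "LineList.dat"',
    ])

# The grid of (log n_H, log U, log Z) values explored; values illustrative.
grid = list(product([2, 3, 4], [-3.0, -2.0], [-0.5, 0.0]))

# Each worker would normally pipe its deck into the cloudy executable;
# threads suffice because the real work happens in the external process.
with ThreadPoolExecutor(max_workers=4) as pool:
    decks = list(pool.map(lambda p: make_deck(*p), grid))
```

Since each grid point is independent, the wall-clock time scales down nearly linearly with the number of cores available to run Cloudy instances.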

  11. Hyperspectral retrieval of surface reflectances: A new scheme

    NASA Astrophysics Data System (ADS)

    Thelen, Jean-Claude; Havemann, Stephan

    2013-05-01

    Here we present a new prototype algorithm for the simultaneous retrieval of atmospheric profiles (temperature, humidity, ozone, and aerosol) and the surface reflectance from hyperspectral radiance measurements obtained from airborne or spaceborne hyperspectral imagers. The new scheme consists of a fast radiative transfer code, based on empirical orthogonal functions (EOFs), in conjunction with a 1D-Var retrieval scheme. The inclusion of an 'exact' scattering code based on spherical harmonics allows for an accurate treatment of Rayleigh scattering and of scattering by aerosols, water droplets, and ice crystals, thus making it possible to also retrieve cloud and aerosol optical properties, although here we concentrate on non-cloudy scenes.
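The EOF compression at the heart of such a fast radiative transfer scheme can be sketched with a plain SVD; the synthetic spectra and the number of retained EOFs below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "hyperspectral" training set: 200 spectra of 500 channels that
# lie on a 5-dimensional subspace plus small noise (illustrative only).
basis = rng.normal(size=(5, 500))
spectra = rng.normal(size=(200, 5)) @ basis + 0.01 * rng.normal(size=(200, 500))

# Empirical orthogonal functions from an SVD of the mean-removed training set.
mean = spectra.mean(axis=0)
_, _, vt = np.linalg.svd(spectra - mean, full_matrices=False)
eofs = vt[:10]                       # keep the 10 leading EOFs

# A new spectrum is now represented by 10 PC scores instead of 500 radiances;
# a fast forward model and the 1D-Var iteration can work on the scores directly.
new = rng.normal(size=5) @ basis
scores = (new - mean) @ eofs.T
recon = mean + scores @ eofs
max_err = np.abs(recon - new).max()
```

The speedup comes from the dimensionality drop: the radiative transfer and the retrieval operate on a handful of scores, yet the reconstruction error stays at the noise level because real spectra are highly correlated across channels.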

  12. Simulating the escaping atmospheres of hot gas planets in the solar neighborhood

    NASA Astrophysics Data System (ADS)

    Salz, M.; Czesla, S.; Schneider, P. C.; Schmitt, J. H. M. M.

    2016-02-01

    Absorption of high-energy radiation in planetary thermospheres is generally believed to lead to the formation of planetary winds. The resulting mass-loss rates can affect the evolution, particularly of small gas planets. We present 1D, spherically symmetric hydrodynamic simulations of the escaping atmospheres of 18 hot gas planets in the solar neighborhood. Our sample only includes strongly irradiated planets, whose expanded atmospheres may be detectable via transit spectroscopy using current instrumentation. The simulations were performed with the PLUTO-CLOUDY interface, which couples a detailed photoionization and plasma simulation code with a general MHD code. We study the thermospheric escape and derive improved estimates for the planetary mass-loss rates. Our simulations reproduce the temperature-pressure profile measured via sodium D absorption in HD 189733 b, but show still unexplained differences in the case of HD 209458 b. In contrast to general assumptions, we find that the gravitationally more tightly bound thermospheres of massive and compact planets, such as HAT-P-2 b, are hydrodynamically stable. Compact planets dispose of the radiative energy input through hydrogen Lyα and free-free emission. Radiative cooling is also important in HD 189733 b, but it decreases toward smaller planets like GJ 436 b. Computing the planetary Lyα absorption and emission signals from the simulations, we find that the strong and cool winds of smaller planets mainly cause strong Lyα absorption but little emission. Compact and massive planets with hot, stable thermospheres cause small absorption signals but are strong Lyα emitters, possibly detectable with current instrumentation. The absorption and emission signals provide a possible means of distinguishing between these two classes of thermospheres in hot gas planets. 
According to our results, WASP-80 and GJ 3470 are currently the most promising targets for observational follow-up aimed at detecting atmospheric Lyα absorption signals. Simulated atmospheres are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/586/A75

  13. Fast All-Sky Radiation Model for Solar Applications (FARMS): A Brief Overview of Mechanisms, Performance, and Applications: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Yu; Sengupta, Manajit

    Solar radiation can be computed using radiative transfer models, such as the Rapid Radiative Transfer Model (RRTM) and its general circulation model applications, and used for various energy applications. Due to the complexity of computing radiation fields in aerosol-laden and cloudy atmospheres, simulating solar radiation can be extremely time-consuming, but many approximations (e.g., the two-stream approach and the delta-M truncation scheme) can be utilized. To provide a new fast option for computing solar radiation, we developed the Fast All-sky Radiation Model for Solar applications (FARMS) by parameterizing the diffuse horizontal irradiance and direct normal irradiance simulated for cloudy conditions by RRTM runs using a 16-stream discrete ordinates radiative transfer method. The solar irradiance at the surface was simulated by combining the cloud irradiance parameterizations with a fast clear-sky model, REST2. To understand the accuracy and efficiency of the newly developed fast model, we analyzed FARMS runs using cloud optical and microphysical properties retrieved from GOES data from 2009-2012. The global horizontal irradiance for cloudy conditions was simulated using FARMS and using RRTM for global circulation modeling with a two-stream approximation, and compared to measurements taken at the U.S. Department of Energy's Atmospheric Radiation Measurement Climate Research Facility Southern Great Plains site. Our results indicate that the accuracy of FARMS is comparable to or better than the two-stream approach; however, FARMS is approximately 400 times more efficient because it does not explicitly solve the radiative transfer equation for each individual cloud condition. Radiative transfer model runs are computationally expensive, so this fast model is promising for broad applications in solar resource assessment and forecasting. 
It is currently being used in the National Solar Radiation Database, which is publicly available from the National Renewable Energy Laboratory at http://nsrdb.nrel.gov.
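The idea of replacing an explicit radiative transfer solution with a cloud-transmittance parameterization can be illustrated with a toy model; the functional forms and coefficients below are invented for illustration and are not the published FARMS fits:

```python
import math

def toy_allsky_ghi(ghi_clear, dni_clear, cos_sza, tau):
    """Toy FARMS-like estimate of all-sky global horizontal irradiance (W/m^2).

    Instead of solving the radiative transfer equation per cloud condition,
    scale a clear-sky result by a parameterized cloud transmittance.
    Coefficients are invented for illustration, not the FARMS fits."""
    t_direct = math.exp(-tau / max(cos_sza, 1e-3))  # Beer-Lambert direct beam
    t_diffuse = 1.0 / (1.0 + 0.75 * tau)            # crude diffuse transmittance
    direct = dni_clear * cos_sza * t_direct
    diffuse = (ghi_clear - dni_clear * cos_sza) * t_diffuse \
        + dni_clear * cos_sza * (1.0 - t_direct) * t_diffuse  # scattered beam
    return direct + diffuse
```

For tau = 0 the expression collapses to the clear-sky GHI, and it decays toward a purely diffuse value as the cloud thickens; the large speedup in a model of this type comes from such closed-form evaluations replacing a multi-stream solver at every cloud condition.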

  14. High excitation rovibrational molecular analysis in warm environments

    NASA Astrophysics Data System (ADS)

    Zhang, Ziwei; Stancil, Phillip C.; Cumbee, Renata; Ferland, Gary J.

    2017-06-01

    Inspired by advances in infrared observation (e.g., Spitzer, Herschel, and ALMA), we investigate rovibrational emission of CO and SiO in warm astrophysical environments. With recent innovations in collisional rate coefficients and rescaling methods, we are able to construct more comprehensive collisional data with high rovibrational states (vibration up to v=5 and rotation up to J=40) and multiple colliders (H2, H, and He). These comprehensive data sets are used in spectral simulations with the radiative transfer codes RADEX and Cloudy. We obtained line-ratio diagnostic plots and line spectra for both near- and far-infrared emission lines over a broad range of density and temperature for the case of a uniform medium. Considering the importance of both molecules in probing the conditions and activity of UV-irradiated interstellar gas, we model rovibrational emission in photodissociation regions (PDRs) and AGB star envelopes (such as VY Canis Majoris, IK Tau, and IRC +10216) with Cloudy. Rotational diagrams, energy distribution diagrams, and spectra are produced to examine relative state abundances, line emission intensity, and other properties. With these diverse models, we expect to gain a better understanding of PDRs and to expand our scope in the chemical architecture and evolution of AGB stars and other UV-irradiated regions. The soon-to-be-launched James Webb Space Telescope (JWST) will provide high-resolution observations at near- to mid-infrared wavelengths, opening a new window on molecular vibrational emission and calling for more detailed chemical modeling and comprehensive laboratory astrophysics data on more molecules. This work was partially supported by NASA grants NNX12AF42G and NNX15AI61G. We thank Benhui Yang, Kyle Walker, Robert Forrey, and N. Balakrishnan for collaborating on the collisional data adopted in the current work.

  15. Impact of radiation frequency, precipitation radiative forcing, and radiation column aggregation on convection-permitting West African monsoon simulations

    NASA Astrophysics Data System (ADS)

    Matsui, Toshi; Zhang, Sara Q.; Lang, Stephen E.; Tao, Wei-Kuo; Ichoku, Charles; Peters-Lidard, Christa D.

    2018-03-01

    In this study, the impact of different configurations of the Goddard radiation scheme on convection-permitting simulations (CPSs) of the West African monsoon (WAM) is investigated using the NASA-Unified WRF (NU-WRF). These CPSs had 3 km grid spacing to explicitly simulate the evolution of mesoscale convective systems (MCSs) and their interaction with radiative processes across the WAM domain and were able to reproduce realistic precipitation and energy budget fields when compared with satellite data, although low clouds were overestimated. Sensitivity experiments reveal that (1) lowering the radiation update frequency (i.e., longer radiation update time) increases precipitation and cloudiness over the WAM region by enhancing the monsoon circulation, (2) deactivation of precipitation radiative forcing suppresses cloudiness over the WAM region, and (3) aggregating radiation columns reduces low clouds over ocean and tropical West Africa. The changes in radiation configuration immediately modulate the radiative heating and low clouds over ocean. On the 2nd day of the simulations, patterns of latitudinal air temperature profiles were already similar to the patterns of monthly composites for all radiation sensitivity experiments. Low cloud maintenance within the WAM system is tightly connected with radiation processes; thus, proper coupling between microphysics and radiation processes must be established for each modeling framework.

  16. Performance assessment of retrospective meteorological inputs for use in air quality modeling during TexAQS 2006

    NASA Astrophysics Data System (ADS)

    Ngan, Fong; Byun, Daewon; Kim, Hyuncheol; Lee, Daegyun; Rappenglück, Bernhard; Pour-Biazar, Arastoo

    2012-07-01

    To achieve more accurate meteorological inputs than were used in the daily forecast for studying TexAQS 2006 air quality, retrospective simulations were conducted using objective analysis and 3D/surface analysis nudging with surface and upper-air observations. Modeled ozone driven by the assimilated meteorological fields, with their improved winds, agrees better with observations than the forecast results do. In post-frontal conditions, the important factors for ozone modeling in terms of wind patterns are the weak easterlies in the morning, which bring industrial emissions into the city, and the subsequent clockwise turning of the wind direction induced by the Coriolis force superimposed on the sea breeze, which keeps pollutants in the urban area. Objective analysis and nudging employed in the retrospective simulation minimize the wind bias but are not able to compensate for general flow-pattern biases inherited from large-scale inputs. By using alternative analysis data to initialize the meteorological simulation, the model can reproduce the flow pattern and place the ozone peak closer to its observed location. Inaccurate simulation of precipitation and cloudiness occasionally causes over-prediction of ozone. Since the meteorological model has limitations in simulating precipitation and cloudiness in a fine-scale domain (less than 4-km grid), satellite-based cloud data are an alternative way to provide the necessary inputs for the retrospective study of air quality.

  17. Estimating risks of heat strain by age and sex: a population-level simulation model.

    PubMed

    Glass, Kathryn; Tait, Peter W; Hanna, Elizabeth G; Dear, Keith

    2015-05-18

    Individuals living in hot climates face health risks from hyperthermia. Heat strain is influenced by weather exposure and by individual characteristics such as age, sex, body size, and occupation. To explore the population-level drivers of heat strain, we developed a simulation model that scales up individual risks of heat storage (estimated using Myrup and Morgan's man model "MANMO") to a large population. Using Australian weather data, we identify high-risk weather conditions together with individual characteristics that increase the risk of heat stress under these conditions. The model identifies elevated risks in children and the elderly, with females aged 75 and older the most likely to experience heat strain. Risk of heat strain in males does not increase as rapidly with age, but is greatest on hot days with high solar radiation. Although cloudy days are less dangerous for the wider population, older women still have an elevated risk of heat strain on hot cloudy days or when indoors during high temperatures. Simulation models provide a valuable method for exploring population-level risks of heat strain, and a tool for evaluating public health and other government policy interventions.

  18. The impact of different interstellar medium structures on the dynamical evolution of supernova remnants

    NASA Astrophysics Data System (ADS)

    Wang, Yueyang; Bao, Biwen; Yang, Chuyuan; Zhang, Li

    2018-05-01

    The dynamical properties of supernova remnants (SNRs) evolving in different interstellar medium structures are investigated through extensive two-dimensional magnetohydrodynamic (MHD) simulations in cylindrical symmetry. Three interstellar medium structures are considered: a uniform medium, a turbulent medium, and a cloudy medium. Large-scale density and magnetic fluctuations are calculated and mapped into the computational domain before the simulations, and clouds are placed at random positions in advance. This configuration allows us to study the time-dependent dynamical properties and morphological evolution of an SNR evolving in the different ambient structures, along with the development of instabilities at the contact discontinuity. Our simulation results indicate that the remnant morphology deviates from symmetry if the interstellar medium contains clouds or turbulent density fluctuations. In the cloudy medium case, interactions between the shock wave and the clouds lead to cloud fragmentation. The magnetic field can be greatly enhanced by the stretching of field lines combined with instabilities, while the width of the amplification region differs considerably among the three cases. Moreover, both the width of the amplification region and the maximum magnetic-field strength are closely related to the cloud density.

  19. Influences of drizzle on stratocumulus cloudiness and organization

    DOE PAGES

    Zhou, Xiaoli; Heus, Thijs; Kollias, Pavlos

    2017-06-06

    Large-eddy simulations are used to study the influence of drizzle on stratocumulus organization, based on measurements made as part of the Second Dynamics and Chemistry of Marine Stratocumulus field study (DYCOMS-II). Cloud droplet number concentration (Nc) is prescribed and treated as a proxy for different aerosol loadings. Our study shows that the amount of cloudiness does not decrease linearly with precipitation rate. An Nc threshold is observed below which the removal of cloud water via precipitation efficiently reduces cloud depth, allowing evaporation to become efficient and quickly remove the remaining thin clouds, facilitating a fast transition from closed cells to open cells. Using Fourier analysis, stratocumulus length scales are found to increase with drizzle rate. Raindrop evaporation below 300 m lowers the cloud bases and amplifies moisture variances in the subcloud layer, while it does not alter the horizontal scales in the cloud layer, suggesting that moist cold-pool dynamic forcings are not essential for the mesoscale organization of stratocumulus. Furthermore, the cloud scales are greatly increased when the boundary layer is too deep to remain well mixed.
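    The spectral length-scale diagnosis mentioned above can be illustrated with a short sketch: given a one-dimensional slice of a cloud field (here a synthetic liquid water path signal; the data, grid spacing, and peak-power criterion are illustrative assumptions, not the authors' exact procedure), the dominant horizontal scale is the wavelength carrying the most spectral power.

```python
import numpy as np

# Estimate a characteristic horizontal length scale of a cloud field
# via Fourier analysis: take the wavelength at the peak of the 1-D
# power spectrum. Synthetic data below is an assumed illustration.

def dominant_length_scale(field_1d, dx):
    """Wavelength (same units as dx) of the spectral power peak."""
    f = field_1d - field_1d.mean()
    power = np.abs(np.fft.rfft(f)) ** 2
    freqs = np.fft.rfftfreq(f.size, d=dx)
    k = power[1:].argmax() + 1          # skip the zero-frequency bin
    return 1.0 / freqs[k]

x = np.arange(0, 25600.0, 100.0)         # 25.6 km domain, 100 m grid
lwp = 0.1 + 0.05 * np.sin(2 * np.pi * x / 3200)  # cells ~3.2 km apart
print(dominant_length_scale(lwp, 100.0))  # ~3200 m
```

    On real model output one would apply this to each row of the liquid water path field and track how the recovered scale grows with drizzle rate.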

  1. Cloudy Greenhouse on Noachian Mars

    NASA Astrophysics Data System (ADS)

    Toon, Owen B.; Wolf, E.; Urata, R. A.

    2013-10-01

    Urata and Toon (2013, Icarus 226, 229-250, "Simulations of the martian hydrologic cycle with a general circulation model: Implications for the ancient martian climate") show that a cloudy greenhouse, which likely needs to be induced by a large impact, can create a stable Martian climate during the Noachian with global average temperatures just below the freezing point. We also find that, if frozen seas or extensive snowfields were present at mid-latitudes, precipitation rates can reach about 10 cm/yr, roughly 10% of current terrestrial values, in certain regions. The regions favored with high precipitation rates vary with obliquity, so over time they sweep across the regions observed to have river valley networks. More than 200 mbar of CO2 must be present to maintain the greenhouse, mainly because efficient heat transport to the poles is required to prevent the water from being cold-trapped there. The era of extensive precipitation thus ended when CO2 pressures dropped below 200 mbar. In this talk we discuss the results of this modeling work for Mars and contrast it with similar work for the Archean Earth, where we are not able to create a cloudy greenhouse and water clouds instead cool the planet.

  2. A single field of view method for retrieving tropospheric temperature profiles from cloud-contaminated radiance data

    NASA Technical Reports Server (NTRS)

    Hodges, D. B.

    1976-01-01

    An iterative method is presented to retrieve single field of view (FOV) tropospheric temperature profiles directly from cloud-contaminated radiance data. A well-defined temperature profile may be calculated from the radiative transfer equation (RTE) for a partly cloudy atmosphere when the average fractional cloud amount and cloud-top height for the FOV are known. A cloud model is formulated to calculate the fractional cloud amount from an estimated cloud-top height. The method is then examined through use of simulated radiance data calculated through vertical integration of the RTE for a partly cloudy atmosphere using known values of cloud-top height(s) and fractional cloud amount(s). Temperature profiles are retrieved from the simulated data assuming various errors in the cloud parameters. Temperature profiles are retrieved from NOAA-4 satellite-measured radiance data obtained over an area dominated by an active cold front and with considerable cloud cover and compared with radiosonde data. The effects of using various guessed profiles and the number of iterations are considered.
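    The core of such single-FOV retrievals is the partly cloudy radiative transfer relation, in which the observed radiance is a weighted mix of clear-column and overcast-column radiances. A minimal sketch (radiance values are illustrative, not NOAA-4 data) inverts that relation for the effective fractional cloud amount N:

```python
import numpy as np

# Partly cloudy RTE closure: the observed radiance in a single FOV is
#   R_obs = (1 - N) * R_clear + N * R_cloud(p_c),
# where R_cloud is the overcast radiance for an estimated cloud-top
# pressure p_c. Given the three radiances, N follows by inversion.
# All numbers below are illustrative, not NOAA-4 measurements.

def cloud_fraction(r_obs, r_clear, r_cloud):
    """Effective fractional cloud amount N from single-FOV radiances."""
    n = (r_clear - r_obs) / (r_clear - r_cloud)
    return float(np.clip(n, 0.0, 1.0))    # physical bounds 0 <= N <= 1

r_clear, r_cloud = 95.0, 40.0   # channel radiances (illustrative units)
r_obs = 78.5
print(cloud_fraction(r_obs, r_clear, r_cloud))  # 0.3
```

    In the iterative scheme described above, this inversion would be repeated as the estimated cloud-top height, and hence R_cloud, is refined.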

  3. The Sensitivity of Tropical Squall Lines (GATE and TOGA COARE) to Surface Fluxes: Cloud Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Wang, Yansen; Tao, Wei-Kuo; Simpson, Joanne; Lang, Stephen

    1999-01-01

    Two tropical squall lines from TOGA COARE and GATE were simulated using a two-dimensional cloud-resolving model to examine the impact of surface fluxes on tropical squall line development and the associated precipitation processes. The important question of how CAPE in clear and cloudy areas is maintained in the tropics is also investigated. Although cloud structure and precipitation intensity differ between the TOGA COARE and GATE squall line cases, the effects of the surface fluxes on the amount of rainfall and on cloud development are quite similar. The simulated total surface rainfall in the runs without surface fluxes is about 67% of that simulated with surface fluxes. The area where the surface fluxes originated was categorized into clear and cloudy regions according to whether there was cloud in the vertical column. The model results indicate that surface fluxes from the large clear-air environment are the dominant moisture source for tropical squall line development, even though the surface fluxes in the cloudy region display a large peak. The high-energy air from the boundary layer in the clear area is what feeds the convection, while the CAPE is removed by the convection. The surface rainfall was reduced by only 8 to 9% in the simulations without surface fluxes in the cloudy region. Trajectory and water budget analyses also indicated that most of the moisture (92%) came from the boundary layer of the clear-air environment.

  4. Surface retrievals from Hyperion EO1 using a new, fast, 1D-Var based retrieval code

    NASA Astrophysics Data System (ADS)

    Thelen, Jean-Claude; Havemann, Stephan; Wong, Gerald

    2015-05-01

    We have developed a new algorithm for the simultaneous retrieval of atmospheric profiles (temperature, humidity, ozone and aerosol) and surface reflectance from hyperspectral radiance measurements obtained by air- or space-borne hyperspectral imagers such as Hyperion EO-1. The new scheme consists of a fast radiative transfer code, based on empirical orthogonal functions (EOFs), in conjunction with a 1D-Var retrieval scheme. The inclusion of an 'exact' scattering code based on spherical harmonics allows an accurate treatment of Rayleigh scattering and scattering by aerosols, water droplets and ice crystals, making it possible to also retrieve cloud and aerosol optical properties, although here we concentrate on non-cloudy scenes. We successfully tested this new approach on hyperspectral images taken by Hyperion EO-1, an experimental pushbroom imaging spectrometer operated by NASA.

  5. Interactions among Radiation, Convection, and Large-Scale Dynamics in a General Circulation Model.

    NASA Astrophysics Data System (ADS)

    Randall, David A.; Harshvardhan; Dazlich, Donald A.; Corsetti, Thomas G.

    1989-07-01

    We have analyzed the effects of radiatively active clouds on the climate simulated by the UCLA/GLA GCM, with particular attention to the effects of the upper-tropospheric stratiform clouds associated with deep cumulus convection, and the interactions of these clouds with convection and the large-scale circulation.

    Several numerical experiments were performed to investigate the mechanisms through which the clouds influence the large-scale circulation. In the 'NODETLQ' experiment, no liquid water or ice was detrained from cumulus clouds into the environment; all of the condensate was rained out. Upper-level supersaturation cloudiness was drastically reduced, the atmosphere dried, and tropical outgoing longwave radiation increased. In the 'NOANVIL' experiment, the radiative effects of the optically thick upper-level cloud sheets associated with deep cumulus convection were neglected. The land surface received more solar radiation in regions of convection, leading to enhanced surface fluxes and a dramatic increase in precipitation. In the 'NOCRF' experiment, the longwave atmospheric cloud radiative forcing (ACRF) was omitted, paralleling the recent experiment of Slingo and Slingo. The results suggest that the ACRF enhances deep penetrative convection and precipitation, while suppressing shallow convection. They also indicate that the ACRF warms and moistens the tropical troposphere. The results of this experiment are somewhat ambiguous, however; for example, the ACRF suppresses precipitation in some parts of the tropics and enhances it in others.

    To isolate the effects of the ACRF in a simpler setting, we have analyzed the climate of an ocean-covered Earth, which we call Seaworld. The key simplicities of Seaworld are the fixed boundary temperature with no land points, the lack of mountains, and the zonal uniformity of the boundary conditions. Results are presented from two Seaworld simulations. The first includes a full suite of physical parameterizations, while the second omits all radiative effects of the clouds. The differences between the two runs are, therefore, entirely due to the direct and indirect effects of the ACRF. Results show that the ACRF in the cloudy run accurately represents the radiative heating perturbation relative to the cloud-free run. The cloudy run is warmer in the middle troposphere, contains much more precipitable water, and has about 15% more globally averaged precipitation. There is a double tropical rain band in the cloud-free run, and a single, more intense tropical rain band in the cloudy run. The cloud-free run produces relatively weak but frequent cumulus convection, while the cloudy run produces relatively intense but infrequent convection. The mean meridional circulation transports nearly twice as much mass in the cloudy run. The increased tropical rising motion in the cloudy run leads to a deeper boundary layer and to more moisture in the troposphere above the boundary layer, which accounts for the increased precipitable water content of the atmosphere. The clouds increase the intensity of the tropical easterlies and cause the midlatitude westerly jets to shift equatorward.

    Taken together, our results show that upper-tropospheric clouds associated with moist convection, whose importance has recently been emphasized in observational studies, play a very complex and powerful role in determining the model results. This points to a need to develop more realistic parameterizations of these clouds.

  6. A code for optically thick and hot photoionized media

    NASA Astrophysics Data System (ADS)

    Dumont, A.-M.; Abrassart, A.; Collin, S.

    2000-05-01

    We describe a code designed for hot media (T >= a few 10^4 K) that are optically thick to Compton scattering. It computes the structure of a plane-parallel slab of gas in thermal and ionization equilibrium, illuminated on one or both sides by a given spectrum. Contrary to other photoionization codes, it solves the transfer of the continuum and of the lines in a two-stream approximation, without using the local escape probability formalism to approximate the line transfer. We stress the importance of taking into account the returning flux even for small column densities (10^22 cm^-2), and we show that the escape probability approximation can lead to strong errors in the thermal and ionization structure, as well as in the emitted spectrum, for Thomson thicknesses larger than a few tenths. The transfer code is coupled with a Monte Carlo code which takes into account Compton and inverse Compton scattering and computes the spectrum emitted up to MeV energies, in any geometry. Comparisons with Cloudy show that it gives similar results for small column densities. Several applications are mentioned.

  7. A novel hybrid scattering order-dependent variance reduction method for Monte Carlo simulations of radiative transfer in cloudy atmosphere

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo

    2017-03-01

    We present a novel hybrid scattering-order-dependent variance reduction method to accelerate the convergence rate of both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase functions. This method is built upon a newly developed theoretical framework that not only unifies forward and backward radiative transfer in a scattering-order-dependent integral equation, but also generalizes the variance reduction formalism to a wide range of simulation scenarios. In previous studies, variance reduction was achieved either by the scattering phase function forward truncation technique or by the target directional importance sampling technique; our method combines both. A novel feature of our method is that all the tuning parameters used for the phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by scattering-order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm that remodels the integral radiative transfer kernel for the phase function truncation method. The method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. Its main advantage is that it greatly improves the trade-off between numerical efficiency and accuracy, order by order.
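    To illustrate the forward-truncation idea in isolation (this is a generic sketch using a Henyey-Greenstein phase function, not MSCART's actual sampling scheme or tuning), one can restrict the inverse-CDF sample to the portion of the phase function below a truncation angle and carry the retained probability mass as a photon weight:

```python
import numpy as np

# Toy phase-function forward truncation: scattering cosines above a
# truncation value mu_max are excluded from sampling, and the photon
# weight is scaled by the retained probability mass. The asymmetry
# parameter g and mu_max below are assumed illustrative values.

rng = np.random.default_rng(0)

def sample_hg(g, u):
    """Inverse-CDF sample of cos(theta) for Henyey-Greenstein."""
    s = (1 - g * g) / (1 - g + 2 * g * u)
    return (1 + g * g - s * s) / (2 * g)

def hg_cdf(g, mu):
    """P(cos(theta) <= mu) for Henyey-Greenstein."""
    return (1 - g * g) / (2 * g) * (
        1.0 / np.sqrt(1 + g * g - 2 * g * mu) - 1.0 / (1 + g))

def truncated_sample(g, mu_max):
    """Sample below the truncation angle; return (mu, weight)."""
    p_keep = hg_cdf(g, mu_max)        # retained probability mass
    u = rng.uniform(0.0, p_keep)      # restrict to retained portion
    return sample_hg(g, u), p_keep

mu, w = truncated_sample(0.99, 0.9)
print(mu <= 0.9, 0.0 < w < 1.0)  # True True
```

    In an actual variance-reduced transport code the truncation angle would vary with scattering order, which is precisely the tuning the paper automates.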

  8. An investigation of the role of current and future remote sensing data systems in numerical meteorology

    NASA Technical Reports Server (NTRS)

    Diak, George R.; Smith, William L.

    1992-01-01

    A flexible system for performing observing system simulation experiments (OSSEs) was developed, contributing to meteorology across all components of the OSSE framework. Future work will seek a better understanding of the links between satellite-measured radiation and radiative transfer in the clear, cloudy and precipitating atmosphere, and will investigate how that understanding might be applied to improve the depiction of the initial state and the treatment of physical processes in atmospheric forecast models.

  9. Representation of Clear and Cloudy Boundary Layers in Climate Models. Chapter 14

    NASA Technical Reports Server (NTRS)

    Randall, D. A.; Shao, Q.; Branson, M.

    1997-01-01

    The atmospheric general circulation models which are being used as components of climate models rely on their boundary layer parameterizations to produce realistic simulations of the surface turbulent fluxes of sensible heat, moisture, and momentum; of the boundary-layer depth over which these fluxes converge; of boundary layer cloudiness; and of the interactions of the boundary layer with the deep convective clouds that grow upwards from it. Two current atmospheric general circulation models are used as examples to show how these requirements are being addressed: version 3 of the Community Climate Model, developed at the U.S. National Center for Atmospheric Research, and the Colorado State University atmospheric general circulation model. The formulations and results of both models are discussed. Finally, areas for future research are suggested.

  10. Numerical simulations of significant orographic precipitation in Madeira island

    NASA Astrophysics Data System (ADS)

    Couto, Flavio Tiago; Ducrocq, Véronique; Salgado, Rui; Costa, Maria João

    2016-03-01

    High-resolution simulations of high-precipitation events with the MESO-NH model are presented and used to verify that increasing the horizontal resolution in zones of complex orography, such as Madeira island, improves the simulation of the spatial distribution and total amount of precipitation. The simulations succeeded in reproducing the general structure of the cloudy systems over the ocean in the four periods of significant accumulated precipitation considered. The accumulated precipitation over Madeira, which occurred under four distinct synoptic situations, was better represented at 0.5 km horizontal resolution. Different spatial patterns of the rainfall distribution over Madeira have been identified.

  11. Nonlinear simulations of Jupiter's 5-micron hot spots

    NASA Technical Reports Server (NTRS)

    Showman, A. P.; Dowling, T. E.

    2000-01-01

    Large-scale nonlinear simulations of Jupiter's 5-micron hot spots produce long-lived coherent structures that cause subsidence in local regions, explaining the low cloudiness and the dryness measured by the Galileo probe inside a hot spot. Like observed hot spots, the simulated coherent structures are equatorially confined, have periodic spacing, propagate west relative to the flow, are generally confined to one hemisphere, and have an anticyclonic gyre on their equatorward side. The southern edge of the simulated hot spots develops vertical shear of up to 70 meters per second in the eastward wind, which can explain the results of the Galileo probe Doppler wind experiment.

  13. A cloudy planetary boundary layer oscillation arising from the coupling of turbulence with precipitation in climate simulations

    DOE PAGES

    Zheng, X.; Klein, S. A.; Ma, H. -Y.; ...

    2017-08-24

    The Community Atmosphere Model (CAM) adopts the Cloud Layers Unified By Binormals (CLUBB) scheme and an updated microphysics (MG2) scheme for a more unified treatment of cloud processes, making the interactions between parameterizations tighter and more explicit. In this study, a cloudy planetary boundary layer (PBL) oscillation related to the interaction between CLUBB and MG2 is identified in CAM, highlighting the need for consistency between coupled subgrid processes in climate model development. The oscillation occurs most often in the marine cumulus cloud regime, and only if the modeled PBL is strongly decoupled and precipitation evaporates below the cloud. Two aspects of the parameterized coupling assumptions between the CLUBB and MG2 schemes cause the oscillation: (1) a parameterized relationship between rain evaporation and CLUBB's subgrid spatial variance of moisture and heat that induces extra cooling in the lower PBL, and (2) rain evaporation that happens at too low an altitude because of the precipitation fraction parameterization in MG2. Either one of these two conditions can overly stabilize the PBL and reduce the upward moisture transport to the cloud layer, so that the PBL collapses. Global simulations show that turning off the evaporation-variance coupling and improving the precipitation fraction parameterization effectively reduces the cloudy PBL oscillation in marine cumulus clouds. By evaluating the causes of the oscillation in CAM, we have identified the PBL processes that should be examined in models exhibiting similar oscillations. This study may draw the attention of the modeling and observational communities to the issue of coupling between parameterized physical processes.

  14. Numerical Simulations of Supernova Remnant Evolution in a Cloudy Interstellar Medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slavin, Jonathan D.; Smith, Randall K.; Foster, Adam

    The mixed-morphology class of supernova remnants has centrally peaked X-ray emission along with a shell-like morphology in radio emission. White and Long proposed that these remnants are evolving in a cloudy medium wherein the clouds are evaporated via thermal conduction once overrun by the expanding shock. Their analytical model made detailed predictions regarding temperature, density, and emission profiles as well as shock evolution. We present numerical hydrodynamical models in 2D and 3D including thermal conduction, testing the White and Long model and presenting results for the evolution of, and emission from, remnants evolving in a cloudy medium. We find that, while certain general results of the White and Long model hold, such as the way the remnants expand and the flattening of the X-ray surface brightness distribution, there are substantial differences in detail. In particular, we find that the X-ray luminosity is dominated early on by emission from shocked cloud gas, leading to a bright peak, which then declines and flattens as evaporation becomes more important. In addition, the effects of thermal conduction on the intercloud gas, which are not included in the White and Long model, are important and lead to further flattening of the X-ray brightness profile as well as lower X-ray emission temperatures.

  15. Evaluation of the CORDEX-Africa multi-RCM hindcast: systematic model errors

    NASA Astrophysics Data System (ADS)

    Kim, J.; Waliser, Duane E.; Mattmann, Chris A.; Goodale, Cameron E.; Hart, Andrew F.; Zimdars, Paul A.; Crichton, Daniel J.; Jones, Colin; Nikulin, Grigory; Hewitson, Bruce; Jack, Chris; Lennard, Christopher; Favre, Alice

    2014-03-01

    Monthly-mean precipitation, mean (TAVG), maximum (TMAX) and minimum (TMIN) surface air temperatures, and cloudiness from the CORDEX-Africa regional climate model (RCM) hindcast experiment are evaluated for model skill and systematic biases. All RCMs simulate basic climatological features of these variables reasonably, but systematic biases also occur across these models. All RCMs show higher fidelity in simulating precipitation for the west part of Africa than for the east part, and for the tropics than for northern Sahara. Interannual variation in the wet season rainfall is better simulated for the western Sahel than for the Ethiopian Highlands. RCM skill is higher for TAVG and TMAX than for TMIN, and regionally, for the subtropics than for the tropics. RCM skill in simulating cloudiness is generally lower than for precipitation or temperatures. For all variables, multi-model ensemble (ENS) generally outperforms individual models included in ENS. An overarching conclusion in this study is that some model biases vary systematically for regions, variables, and metrics, posing difficulties in defining a single representative index to measure model fidelity, especially for constructing ENS. This is an important concern in climate change impact assessment studies because most assessment models are run for specific regions/sectors with forcing data derived from model outputs. Thus, model evaluation and ENS construction must be performed separately for regions, variables, and metrics as required by specific analysis and/or assessments. Evaluations using multiple reference datasets reveal that cross-examination, quality control, and uncertainty estimates of reference data are crucial in model evaluations.

  16. Simple process-led algorithms for simulating habitats (SPLASH v.1.0): robust indices of radiation, evapotranspiration and plant-available moisture

    NASA Astrophysics Data System (ADS)

    Davis, Tyler W.; Prentice, I. Colin; Stocker, Benjamin D.; Thomas, Rebecca T.; Whitley, Rhys J.; Wang, Han; Evans, Bradley J.; Gallego-Sala, Angela V.; Sykes, Martin T.; Cramer, Wolfgang

    2017-02-01

    Bioclimatic indices for use in studies of ecosystem function, species distribution, and vegetation dynamics under changing climate scenarios depend on estimates of surface fluxes and other quantities, such as radiation, evapotranspiration and soil moisture, for which direct observations are sparse. These quantities can be derived indirectly from meteorological variables, such as near-surface air temperature, precipitation and cloudiness. Here we present a consolidated set of simple process-led algorithms for simulating habitats (SPLASH) allowing robust approximations of key quantities at ecologically relevant timescales. We specify equations, derivations, simplifications, and assumptions for the estimation of daily and monthly quantities of top-of-the-atmosphere solar radiation, net surface radiation, photosynthetic photon flux density, evapotranspiration (potential, equilibrium, and actual), condensation, soil moisture, and runoff, based on analysis of their relationship to fundamental climatic drivers. The climatic drivers include a minimum of three meteorological inputs: precipitation, air temperature, and fraction of bright sunshine hours. Indices, such as the moisture index, the climatic water deficit, and the Priestley-Taylor coefficient, are also defined. The SPLASH code is transcribed in C++, FORTRAN, Python, and R. A total of 1 year of results are presented at the local and global scales to exemplify the spatiotemporal patterns of daily and monthly model outputs along with comparisons to other model results.
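    The indices named above reduce to simple ratios and differences of the annual water-balance terms. A minimal sketch, using definitions in common usage (consult the SPLASH description for the exact forms adopted there):

```python
# Bioclimatic indices from annual water-balance totals (mm/yr).
# These definitions follow common usage and are assumptions here,
# not necessarily the exact forms specified in SPLASH v1.0.

def moisture_index(precip, pet):
    """Annual precipitation over potential evapotranspiration."""
    return precip / pet

def climatic_water_deficit(pet, aet):
    """Unmet atmospheric demand: potential minus actual ET (mm/yr)."""
    return pet - aet

def priestley_taylor_coefficient(aet, eet):
    """Actual over equilibrium evapotranspiration."""
    return aet / eet

print(moisture_index(800.0, 1000.0))          # 0.8
print(climatic_water_deficit(1000.0, 650.0))  # 350.0
```

    In SPLASH itself the ET terms are built up from the daily radiation and soil-moisture calculations described in the abstract; the indices are then just post-processing of those accumulations.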

  17. Evidence for Large Decadal Variability in the Tropical Mean Radiative Energy Budget

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Wong, Takmeng; Allan, Richard; Slingo, Anthony; Kiehl, Jeffrey T.; Soden, Brian J.; Gordon, C. T.; Miller, Alvin J.; Yang, Shi-Keng; Randall, David R.

    2001-01-01

    It is widely assumed that variations in the radiative energy budget at large time and space scales are very small. We present new evidence, from a compilation of over two decades of accurate satellite data, that the top-of-atmosphere (TOA) tropical radiative energy budget is much more dynamic and variable than previously thought. We demonstrate that the radiation budget changes are caused by changes in tropical mean cloudiness. Several current climate model simulations fail to predict this large observed variation in the tropical energy budget. The missing variability in the models highlights the critical need to improve cloud modeling in the tropics to support improved prediction of tropical climate on interannual and decadal time scales. We believe that these data are the first rigorous demonstration of decadal-time-scale changes in the Earth's tropical cloudiness, and that they represent a new and necessary test of climate models.

  18. Mapping the Distribution of Cloud Forests Using MODIS Imagery

    NASA Astrophysics Data System (ADS)

    Douglas, M. W.; Mejia, J.; Murillo, J.; Orozco, R.

    2007-05-01

    Tropical cloud forests, those forests that are frequently immersed in clouds or otherwise very humid, are extremely difficult to map from the ground, and are not easily distinguished in satellite imagery from other forest types, but they have a very different flora and fauna than lowland rainforest. Cloud forests, although found in many parts of the tropics, have a very restricted vertical extent and thus are also restricted horizontally. As a result, they are subject to both human disturbance (coffee growing, for example) and the effects of possible climate change. Motivated by a desire to seek meteorological explanations for the distribution of cloud forests, we have begun to map cloudiness using MODIS Terra and Aqua visible imagery. This imagery, at ~1030 LT and 1330 LT, is an approximation for mid-day cloudiness. In tropical regions the amount of mid-day cloudiness strongly controls the shortwave radiation and thus the potential for evaporation (and aridity). We have mapped cloudiness using a simple algorithm that distinguishes between the cloud-free background brightness and the generally more reflective clouds to separate clouds from the underlying background. A major advantage of MODIS imagery over many other sources of satellite imagery is its high spatial resolution (~250 m). This, coupled with precisely navigated images, means that detailed maps of cloudiness can be produced. The cloudiness maps can then be related to the underlying topography to further refine the location of the cloud forests. An advantage of this technique is that we are mapping the potential cloud forest, based on cloudiness, rather than the actual cloud forest, which is commonly mapped using forest estimates from satellite and digital elevation data. We do not derive precipitation, only estimates of daytime cloudiness.
Although only a few years of MODIS imagery have been used in our studies, we will show that this is sufficient to describe the climatology of cloudiness with acceptable accuracy for its intended purposes. Even periods as short as one month are sufficient for depicting the location of most cloud forest environments. However, we are proceeding to distinguish different characteristics of cloud forests, depending on the overall frequency of cloudiness, the seasonality of cloudiness, and the interannual variability of cloudiness. These results should be useful to those seeking to describe relationships between the physical characteristics of the cloud forests and their biological environment.
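
    The simple thresholding algorithm described above can be sketched as follows: take the per-pixel minimum reflectance over a series of scenes as the clear-sky background, and flag a pixel cloudy when it is brighter than that background by some margin. The margin value here is an illustrative assumption, not the study's calibrated threshold.

```python
def cloudiness_frequency(scenes, margin=0.08):
    """scenes: list of 2-D lists (visible reflectance, 0..1), all the same
    shape. Returns a 2-D list giving the fraction of scenes in which each
    pixel was flagged cloudy."""
    ny, nx = len(scenes[0]), len(scenes[0][0])
    freq = [[0.0] * nx for _ in range(ny)]
    for i in range(ny):
        for j in range(nx):
            series = [scene[i][j] for scene in scenes]
            background = min(series)  # darkest view approximates cloud-free ground
            n_cloudy = sum(1 for r in series if r > background + margin)
            freq[i][j] = n_cloudy / len(series)
    return freq
```

    Relating the resulting frequency map to topography, as the abstract notes, is what localizes the potential cloud forest belt.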

  19. Climatic change by cloudiness linked to the spatial variability of sea surface temperatures

    NASA Technical Reports Server (NTRS)

    Otterman, J.

    1975-01-01

    An active role in modifying the Earth's climate is suggested for low cloudiness over the circumarctic oceans. Such cloudiness, linked to the spatial differences in ocean surface temperatures, was studied. The year-to-year temporal variations of ocean temperature patterns can be pronounced; therefore, the low cloudiness over this region should also show strong temporal variations, affecting the albedo of the Earth and therefore the climate. Photographs are included.

  20. Investigation of Turbulent Entrainment-Mixing Processes With a New Particle-Resolved Direct Numerical Simulation Model

    DOE PAGES

    Gao, Zheng; Liu, Yangang; Li, Xiaolin; ...

    2018-02-19

    Here, a new particle-resolved three-dimensional direct numerical simulation (DNS) model is developed that combines Lagrangian droplet tracking with the Eulerian field representation of turbulence near the Kolmogorov microscale. Six numerical experiments are performed to investigate the processes of entrainment of clear air and subsequent mixing with cloudy air, and their interactions with cloud microphysics. The experiments are designed to represent different combinations of three configurations of initial cloudy area and two turbulence modes (decaying and forced turbulence). Five existing measures of microphysical homogeneous mixing degree are examined, modified, and compared in terms of their ability, as a unifying measure, to represent the effect of various entrainment-mixing mechanisms on cloud microphysics. Also examined and compared are the conventional Damköhler number and transition scale number as dynamical measures of different mixing mechanisms. Relationships between the various microphysical measures and dynamical measures are investigated in search of a unified parameterization of entrainment-mixing processes. The results show that even with the same cloud water fraction, the thermodynamic and microphysical properties are different, especially for the decaying cases. Further analysis confirms that despite the detailed differences in cloud properties among the six simulation scenarios, the variety of turbulent entrainment-mixing mechanisms can be reasonably represented with power-law relationships between the microphysical homogeneous mixing degrees and the dynamical measures.
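
    The conventional Damköhler number named above compares the turbulent mixing time of an entrained parcel with the droplet evaporation time: Da >> 1 points toward inhomogeneous mixing, Da << 1 toward homogeneous mixing. The sketch below uses the standard textbook definitions, not this paper's code, and the input values are purely illustrative.

```python
def mixing_timescale(length_scale, dissipation_rate):
    """Eddy turnover time (s): tau_mix = (L^2 / epsilon)^(1/3), for an
    entrained-blob length scale L (m) and dissipation rate epsilon (m^2/s^3)."""
    return (length_scale ** 2 / dissipation_rate) ** (1.0 / 3.0)

def damkohler(length_scale, dissipation_rate, tau_evap):
    """Da = tau_mix / tau_evap (dimensionless)."""
    return mixing_timescale(length_scale, dissipation_rate) / tau_evap
```

    A 100 m blob in weak turbulence mixes far more slowly than droplets evaporate (Da >> 1), while a near-Kolmogorov-scale blob homogenizes before droplets respond (Da << 1).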


  2. The robustness of using near-UV observations to detect and study exoplanet magnetic fields

    NASA Astrophysics Data System (ADS)

    Turner, J.; Christie, D.; Arras, P.; Johnson, R.

    2015-10-01

    Studying the magnetic fields of exoplanets will allow for the investigation of their formation history, evolution, interior structure, rotation period, atmospheric dynamics, moons, and potential habitability. We previously observed the transits of 16 exoplanets as they crossed the face of their host star in the near-UV in an attempt to detect their magnetic fields (Turner et al. 2013; Pearson et al. 2014; Turner et al. in press). It was postulated that the magnetic fields of all our targets could be constrained if their near-UV light curves start earlier than their optical light curves (Vidotto et al. 2011). This effect can be explained by the presence of a bow shock in front of the planet formed by interactions between the stellar coronal material and the planet's magnetosphere. Furthermore, if the shocked material in the magnetosheath is optically thick, it will absorb starlight and cause an early ingress in the near-UV light curve. We do not observe an early ingress in any of our targets (see Figure 1 for an example light curve from our study), but we determine upper limits on their magnetic field strengths. All our magnetic field upper limits are well below the predicted magnetic field strengths for hot Jupiters (Reiners & Christensen 2010; Sanchez-Lavega 2004). The upper limits we derived assume that there is an absorbing species in the near-UV. Therefore, our upper limits cannot be trusted if there is no species to cause the absorption. In this study we simulate the atomic physics, chemistry, radiation transport, and dynamics of the plasma in the vicinity of a hot Jupiter using the widely used radiative transfer code CLOUDY (Ferland et al. 2013). Using CLOUDY, we have investigated whether an absorbing species exists in the near-UV that could cause an observable early ingress.
The number density of hydrogen in the bow shock was varied from 10^4 to 10^8 cm^-3, and the output spectrum was calculated (Figure 2) and compared to the input spectrum to mimic a transit-like event (Figure 3). We find that no species in the near-UV can cause such absorption under the conditions (T = 1×10^6 K, semi-major axis of 0.02 AU, solar input spectrum, solar metallicity) of a transiting hot Jupiter (Figure 3). Therefore, our upper limits cannot be trusted. We can eventually use CLOUDY to explore the escaping atmospheres of hot Jupiters. We can still use our data to constrain the atmospheric properties of the exoplanets.
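
    The comparison step described above can be sketched as dividing the flux transmitted through the modeled bow-shock material by the input stellar flux; wavelengths where the ratio dips below unity would mimic extra near-UV transit depth. The wavelength grid, band edges, and flux values here are illustrative assumptions, not the study's actual CLOUDY output.

```python
def excess_absorption(wavelength_nm, input_flux, output_flux,
                      band=(300.0, 400.0)):
    """Mean fractional absorption, 1 - F_out/F_in, inside the near-UV band."""
    depths = [1.0 - fo / fi
              for wl, fi, fo in zip(wavelength_nm, input_flux, output_flux)
              if band[0] <= wl <= band[1]]
    return sum(depths) / len(depths)
```

    A result near zero across the near-UV, as the abstract reports, means no species deepens the transit and the magnetic-field upper limits lose their footing.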

  3. EAULIQ: The Next Generation

    NASA Technical Reports Server (NTRS)

    Randall, David A.; Fowler, Laura D.

    1999-01-01

    This report summarizes the design of a new version of the stratiform cloud parameterization called Eauliq; the new version is called Eauliq NG. The key features of Eauliq NG are: (1) a prognostic fractional area covered by stratiform cloudiness, following the approach developed by M. Tiedtke for use in the ECMWF model; (2) separate prognostic thermodynamic variables for the clear and cloudy portions of each grid cell; (3) separate vertical velocities for the clear and cloudy portions of each grid cell, allowing the model to represent some aspects of observed mesoscale circulations; (4) cumulus entrainment from both the clear and cloudy portions of a grid cell, and cumulus detrainment into the cloudy portion only; and (5) the effects of the cumulus-induced subsidence in the cloudy portion of a grid cell on the cloud water and ice there. In this paper we present the mathematical framework of Eauliq NG; a discussion of cumulus effects; a new parameterization of lateral mass exchanges between clear and cloudy regions; and a theory to determine the mesoscale mass circulation, based on the hypothesis that the stratiform clouds remain neutrally buoyant through time and that the mesoscale circulations are the mechanism which makes this possible. An appendix also discusses some time-differencing methods.

  4. Effects of Implementing Subgrid-Scale Cloud-Radiation Interactions in a Regional Climate Model

    NASA Astrophysics Data System (ADS)

    Herwehe, J. A.; Alapaty, K.; Otte, T.; Nolte, C. G.

    2012-12-01

    Interactions between atmospheric radiation, clouds, and aerosols are the most important processes that determine the climate and its variability. In regional scale models, when used at relatively coarse spatial resolutions (e.g., larger than 1 km), convective cumulus clouds need to be parameterized as subgrid-scale clouds. Like many groups, our regional climate modeling group at the EPA uses the Weather Research & Forecasting model (WRF) as a regional climate model (RCM). One of the findings from our RCM studies is that the summertime convective systems simulated by the WRF model are highly energetic, leading to excessive surface precipitation. We also found that the WRF model does not consider the interactions between convective clouds and radiation, thereby omitting an important process that drives the climate. Thus, the subgrid-scale cloudiness associated with convective clouds (from shallow cumuli to thunderstorms) does not exist and radiation passes through the atmosphere nearly unimpeded, potentially leading to overly energetic convection. This also has implications for air quality modeling systems that are dependent upon cloud properties from the WRF model, as the failure to account for subgrid-scale cloudiness can lead to problems such as the underrepresentation of aqueous chemistry processes within clouds and the overprediction of ozone from overactive photolysis. In an effort to advance the climate science of the cloud-aerosol-radiation (CAR) interactions in RCM systems, as a first step we have focused on linking the cumulus clouds with the radiation processes. To this end, our research group has implemented into WRF's Kain-Fritsch (KF) cumulus parameterization a cloudiness formulation that is widely used in global earth system models (e.g., CESM/CAM5). 
Estimated grid-scale cloudiness and associated condensate are adjusted to account for the subgrid clouds and then passed to WRF's Rapid Radiative Transfer Model - Global (RRTMG) radiation schemes to affect the shortwave and longwave radiative processes. To evaluate the effects of implementing the subgrid-scale cloud-radiation interactions on WRF regional climate simulations, a three-year study period (1988-1990) was simulated over the CONUS using two-way nested domains with 108 km and 36 km horizontal grid spacing, without and with the cumulus feedbacks to radiation, and without and with some form of four-dimensional data assimilation (FDDA). Initial and lateral boundary conditions (as well as data for the FDDA, when enabled) were supplied from downscaled NCEP-NCAR Reanalysis II (R2) data sets. Evaluation of the simulation results will be presented comparing regional surface precipitation and temperature statistics with North American Regional Reanalysis (NARR) data and Climate Forecast System Reanalysis (CFSR) data, respectively, as well as comparison with available surface radiation (SURFRAD) and satellite (CERES) observations. This research supports improvements in the EPA's WRF-CMAQ modeling system, leading to better predictions of present and future air quality and climate interactions in order to protect human health and the environment.
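
    The adjustment step above, combining resolved cloudiness with the convective subgrid fraction before handing the total to the radiation scheme, can be sketched with a random-overlap combination. This is a common convention offered as an illustrative assumption, not the exact KF/RRTMG coupling.

```python
def combined_cloud_fraction(f_grid, f_subgrid):
    """Total cloud fraction under random overlap of a resolved fraction
    f_grid and a convective subgrid fraction f_subgrid (both 0..1):
    total = 1 - (1 - f_grid) * (1 - f_subgrid)."""
    return 1.0 - (1.0 - f_grid) * (1.0 - f_subgrid)
```

    With no subgrid clouds the resolved value passes through unchanged; any nonzero convective fraction increases the cloudiness the radiation scheme sees, which is the missing interaction the abstract identifies.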

  5. ATTENUATION OF VISIBLE SUNLIGHT BY LIMITED VISIBILITY AND CLOUDINESS

    EPA Science Inventory

    Variability in the irradiance measurements arises from systematic changes in the solar zenith angle (SZA), cloudiness and changing visibility. Measurements in the wavelength band centered on 612 nm serve as a reference for the initial characterization of the effects of cloudy ...

  6. Analysis of the Diurnal Cycle of Precipitation and its Relation to Cloud Radiative Forcing Using TRMM Products

    NASA Technical Reports Server (NTRS)

    Randall, David A.; Fowler, Laura D.; Lin, Xin

    1998-01-01

    In order to improve our understanding of the interactions between clouds, radiation, and the hydrological cycle simulated in the Colorado State University General Circulation Model (CSU GCM), we focused our research on the analysis of the diurnal cycle of precipitation, top-of-the-atmosphere and surface radiation budgets, and cloudiness using 10-year-long Atmospheric Model Intercomparison Project (AMIP) simulations. Comparisons of the simulated diurnal cycle were made against the diurnal cycle of Earth Radiation Budget Experiment (ERBE) radiation budget and International Satellite Cloud Climatology Project (ISCCP) cloud products. This report summarizes our major findings over the Amazon Basin.

  7. Establishing the common patterns of future tropospheric ozone under diverse climate change scenarios

    NASA Astrophysics Data System (ADS)

    Jimenez-Guerrero, Pedro; Gómez-Navarro, Juan J.; Jerez, Sonia; Lorente-Plazas, Raquel; Baro, Rocio; Montávez, Juan P.

    2013-04-01

    The impacts of climate change on air quality may affect long-term air quality planning. However, the policies aimed at improving air quality in the EU directives have not accounted for variations in the climate. Climate change alone influences future air quality through modifications of gas-phase chemistry, transport, removal, and natural emissions. As such, the aim of this work is to check whether the projected changes in gas-phase air pollution over Europe depend on the scenario driving the regional simulation. For this purpose, two full-transient regional climate change-air quality projections for the first half of the XXI century (1991-2050) have been carried out with the MM5+CHIMERE system, including the A2 and B2 SRES scenarios. Experiments span the periods 1971-2000, as a reference, and 2071-2100, as future enhanced greenhouse gas and aerosol scenarios (SRES A2 and B2). The atmospheric simulations have a horizontal resolution of 25 km and 23 vertical layers up to 100 mb, and were driven by ECHO-G global climate model outputs. The analysis focuses on the connection between meteorological and air quality variables. Our simulations suggest that the modes of variability for tropospheric ozone and its main precursors hardly change under different SRES scenarios. The effect of changing scenarios has to be sought in the intensity of the changing signal, rather than in the spatial structure of the variation patterns, since the correlation between the spatial patterns of variability in the A2 and B2 simulations is r > 0.75 for all gas-phase pollutants included in this study. In both cases, full-transient simulations indicate enhanced chemical activity under future scenarios. The causes of tropospheric ozone variations have to be sought in a multiplicity of climate factors, such as increased temperature, a different distribution of precipitation patterns across Europe, increased photolysis of primary and secondary pollutants due to lower cloudiness, etc.
Nonetheless, according to the results of this work, future ozone is conditioned by the dependence of biogenic emissions on the climatological patterns of variability. In this sense, ozone over Europe is mainly driven by the warming-induced increase in biogenic emitting activity (vegetation is kept invariable in the simulations, but estimations of these emissions strongly depend on shortwave radiation and temperature, which are substantially modified in climatic simulations). Moreover, one of the most important drivers of the ozone increase is the decrease in cloudiness (related to stronger solar radiation), mostly over southern Europe, in the first half of the XXI century. However, given the large uncertainty in isoprene sensitivity to climate change and the large uncertainties associated with the cloudiness projections, these results should be considered carefully.

  8. NCAR CCM2 simulation of the modern Antarctic climate

    NASA Technical Reports Server (NTRS)

    Tzeng, Ren-Yow; Bromwich, David H.; Parish, Thomas R.; Chen, Biao

    1994-01-01

    The National Center for Atmospheric Research (NCAR) community climate model version 2 (CCM2) simulations of the circumpolar trough, surface air temperature, the polar vortex, cloudiness, winds, and atmospheric moisture and energy budgets are examined to validate the model's representation of the present-day Antarctic climate. The results show that the CCM2 simulates well many important climate features over Antarctica, such as the location and intensity of the circumpolar trough, the coreless winter over the plateau, the intensity and horizontal distribution of the surface inversion, the speed and streamline pattern of the katabatic winds, the double jet stream feature over the southern Indian and Pacific oceans, and the arid climate over the continent. However, there are also some serious errors in the model. Some are due to old problems, but some are caused by the new parameterizations in the model. The model errors over high southern latitudes can be summarized as follows: the circumpolar trough, the polar vortex, and the westerlies in midlatitudes are too strong; the semiannual cycle of the circumpolar trough is distorted compared to the observations; the low centers of the circumpolar trough and the troughs in the middle and upper troposphere are shifted eastward by 15 deg - 40 deg longitude; the surface temperatures are too cold over the plateau in summer and over the coastline in winter; the polar tropopause continues to have a cold bias; and the cloudiness is too high over the continent. These biases are induced by two major factors: (1) the cloud optical properties in tropical and middle latitudes, which cause the eastward shift of troughs and surface low centers and the error in the semiannual cycle, and (2) the cold bias of the surface air temperature, which is attributed to the overestimation of cloudiness over the continent, especially during summer, and the uniform 2-m-thick sea ice.
The constant thickness of sea ice suppresses the energy flux from the ocean to the atmosphere and hence reduces the air temperature near the coast during winter. Finally, although the simulated Antarctic climate still suffers from these biases, the overall performance of the CCM2 is much better than that of the CCM1-T42. Therefore, the CCM2 is good enough to be used for climate change studies, especially over Antarctica.

  9. Do rising temperatures always increase forest productivity? Interacting effects of temperature, precipitation, cloudiness and soil texture on tree species growth and competition

    Treesearch

    Eric J. Gustafson; Brian R. Miranda; Arjan M.G. De Bruijn; Brian R. Sturtevant; Mark E. Kubiske

    2017-01-01

    Forest landscape models (FLM) are increasingly used to project the effects of climate change on forested landscapes, yet most use phenomenological approaches with untested assumptions about future forest dynamics. We used a FLM that relies on first principles to mechanistically simulate growth (LANDIS-II with PnET-Succession) to systematically explore how landscapes...

  10. On the existence of tropical anvil clouds

    NASA Astrophysics Data System (ADS)

    Seeley, J.; Jeevanjee, N.; Langhans, W.; Romps, D.

    2017-12-01

    In the deep tropics, extensive anvil clouds produce a peak in cloud cover below the tropopause. The dominant paradigm for cloud cover attributes this anvil peak to a layer of enhanced mass convergence in the clear-sky upper-troposphere, which is presumed to force frequent detrainment of convective anvils. However, cloud cover also depends on the lifetime of cloudy air after it detrains, which raises the possibility that anvil clouds may be the signature of slow cloud decay rather than enhanced detrainment. Here we measure the cloud decay timescale in cloud-resolving simulations, and find that cloudy updrafts that detrain in the upper troposphere take much longer to dissipate than their shallower counterparts. We show that cloud lifetimes are long in the upper troposphere because the saturation specific humidity becomes orders of magnitude smaller than the typical condensed water loading of cloudy updrafts. This causes evaporative cloud decay to act extremely slowly, thereby prolonging cloud lifetimes in the upper troposphere. As a consequence, extensive anvil clouds still occur in a convecting atmosphere that is forced to have no preferential clear-sky convergence layer. On the other hand, when cloud lifetimes are fixed at a characteristic lower-tropospheric value, extensive anvil clouds do not form. Our results support a revised understanding of tropical anvil clouds, which attributes their existence to the microphysics of slow cloud decay rather than a peak in clear-sky convergence.
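
    The mechanism argued above, saturation specific humidity falling quasi-exponentially with temperature (Clausius-Clapeyron) until it drops far below a typical condensed-water loading near the cold tropopause, can be sketched numerically. The Magnus constants are a standard approximation, and the 1 g/kg loading is an illustrative assumption, not a value from the paper.

```python
import math

def qsat(temp_k, pressure_pa):
    """Saturation specific humidity (kg/kg) over liquid water, via a
    Magnus-type saturation vapor pressure formula."""
    tc = temp_k - 273.15
    es = 610.94 * math.exp(17.625 * tc / (tc + 243.04))  # sat. vapor pressure, Pa
    return 0.622 * es / (pressure_pa - 0.378 * es)
```

    In the lower troposphere (about 285 K, 900 hPa) qsat is near 1e-2 kg/kg, well above a 1e-3 kg/kg loading, so evaporation erases cloud quickly; near the tropopause (about 210 K, 200 hPa) qsat is a few times 1e-5, orders of magnitude below the loading, so evaporative decay stalls and anvils persist.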

  11. Technical report series on global modeling and data assimilation. Volume 3: An efficient thermal infrared radiation parameterization for use in general circulation models

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Chou, Ming-Dah

    1994-01-01

    A detailed description of a parameterization for thermal infrared radiative transfer designed specifically for use in global climate models is presented. The parameterization includes the effects of the main absorbers of terrestrial radiation: water vapor, carbon dioxide, and ozone. While being computationally efficient, the scheme computes the clear-sky fluxes and cooling rates very accurately from the Earth's surface to 0.01 mb. This combination of accuracy and speed makes the parameterization suitable for both tropospheric and middle atmospheric modeling applications. Since no transmittances are precomputed, the atmospheric layers and the vertical distribution of the absorbers may be freely specified. The scheme can also account for any vertical distribution of fractional cloudiness with arbitrary optical thickness. These features make the parameterization very flexible and extremely well suited for use in climate modeling studies. In addition, the numerics and the FORTRAN implementation have been carefully designed to conserve both memory and computer time. This code should be particularly attractive to those contemplating long-term climate simulations, wishing to model the middle atmosphere, or planning to use a large number of levels in the vertical.

  12. 21cm Absorption Line Zeeman Observations And Modeling Of Physical Conditions In M16

    NASA Astrophysics Data System (ADS)

    Kiuchi, Furea; Brogan, C.; Troland, T.

    2011-01-01

    We present detailed 21 cm HI absorption line observations of M16 using the Very Large Array. The M16 "pillars of creation" are classic examples of the interaction of the ISM with radiation from young, hot stars. Magnetic fields can affect these interactions; the 21 cm Zeeman effect reveals magnetic field strengths in the photodissociation regions associated with the pillars. The present results yield a 3-sigma upper limit on the line-of-sight magnetic field of about 300 microgauss. This limit is consistent with a total field strength of 500 microgauss, required in the molecular gas if magnetic energies and turbulent energies in the pillars are in equipartition. Most likely, magnetic fields do not play a dominant role in the dynamics of the M16 pillars. Another goal of this study is to determine the distribution of cold HI in the M16 region and to model the physical conditions in the neutral gas in the pillars. We used the spectral synthesis code Cloudy 08.00 for this purpose. We adopted the results of a published Cloudy HII region model and extended this model into the neutral gas to derive the physical conditions therein.

  13. Observations of Orion in all four 18 cm OH Thermal Absorption Lines

    NASA Astrophysics Data System (ADS)

    Moore, Amber M.; Momjian, Emmanuel; Troland, Thomas; Sarma, Anuj; Greisen, Eric

    2018-01-01

    We present results obtained with Karl G. Jansky Very Large Array (VLA) D-configuration observations of the 18 cm OH absorption lines in the Orion Veil, a sheet of material 2-4 pc in front of the Trapezium stars. The goals of these observations were to (a) measure the magnetic field through the Zeeman effect using the 18 cm OH mainlines at 1665 and 1667 MHz and compare the results with those obtained with the pre-upgrade VLA, and (b) observe all four 18 cm OH lines (the two mainlines and the two satellite lines at 1612 and 1720 MHz) to infer physical conditions in the absorbing regions. For the first goal, we found that the more recent measurements are comparable to the earlier published results. To achieve the second goal, we plan to use the Cloudy spectral synthesis code to model physical conditions based upon observations of all four 18 cm OH lines. We also anticipate using Cloudy to assess the viability of a model previously applied to the M17 PDR, in which the magnetic field of the Veil is in hydrostatic equilibrium with the radiation pressure of stellar UV from the Trapezium.

  14. HS 1603+3820 and its Warm Absorber

    NASA Astrophysics Data System (ADS)

    Nikołajuk, M.; Różańska, A.; Czerny, B.; Dobrzycki, A.

    2009-07-01

    We use the photoionization codes CLOUDY and TITAN to obtain physical conditions in the absorbing medium close to the nucleus of the distant quasar (z = 2.54) HS 1603+3820. We found that the total column density of this Warm Absorber is 2 × 10^22 cm^-2. Due to the softness of the quasar's spectrum, the modelling also allowed us to determine uniquely the volume hydrogen density of this warm gas (n = 10^10 cm^-3), which, combined with the other quasar parameters, leads to a determination of the distance of the Warm Absorber from the central source of ~1.5 × 10^16 cm.
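
    The distance argument above can be sketched from the standard definition of the ionization parameter for photoionized gas, which couples ionizing luminosity, density, and distance as xi = L / (n * r^2), so r = sqrt(L / (n * xi)). The luminosity and xi values below are illustrative assumptions (not taken from the paper), chosen only to show how a measured density near 10^10 cm^-3 places an absorber at r of order 10^16 cm for a luminous quasar.

```python
import math

def absorber_distance_cm(lum_erg_s, density_cm3, xi):
    """Distance r = sqrt(L / (n * xi)) in cm, for ionizing luminosity L
    (erg/s), hydrogen density n (cm^-3), and ionization parameter xi
    (erg cm/s)."""
    return math.sqrt(lum_erg_s / (density_cm3 * xi))
```

    Because r scales as n^(-1/2) at fixed L and xi, pinning down the density is what turns the photoionization model into a distance determination.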

  15. MODIS Collection 6 Clear Sky Restoral (CSR): Filtering Cloud Mask 'Not Clear' Pixels

    NASA Technical Reports Server (NTRS)

    Meyer, Kerry G.; Platnick, Steven Edward; Wind, Galina; Riedi, Jerome

    2014-01-01

    Correctly identifying cloudy pixels appropriate for the MOD06 cloud optical and microphysical property retrievals is accomplished in large part using results from the MOD35 1 km cloud mask tests (note there are also two 250 m subpixel cloud mask tests that can convert the 1 km cloudy designations to clear sky). However, because MOD35 is by design clear-sky conservative (i.e., it identifies "not clear" pixels), certain situations exist in which pixels identified by MOD35 as "cloudy" are nevertheless likely to be poor retrieval candidates. For instance, near the edge of clouds or within broken cloud fields, a given 1 km MODIS field of view (FOV) may in fact be only partially cloudy. This can be problematic for the MOD06 retrievals because in these cases the assumptions of a completely overcast, homogeneous cloudy FOV and one-dimensional plane-parallel radiative transfer no longer hold, and subsequent retrievals will be of low confidence. Furthermore, some pixels may be identified by MOD35 as "cloudy" for reasons other than the presence of clouds, such as scenes with thick smoke or lofted dust, and should therefore not be retrieved as clouds. With such situations in mind, a Clear Sky Restoral (CSR) algorithm was introduced in C5 that attempts to identify pixels expected to be poor retrieval candidates. Table 1 provides SDS locations for CSR and partly cloudy (PCL) pixels.

  16. Quantifying errors in surface ozone predictions associated with clouds over the CONUS: a WRF-Chem modeling study using satellite cloud retrievals

    NASA Astrophysics Data System (ADS)

    Ryu, Young-Hee; Hodzic, Alma; Barre, Jerome; Descombes, Gael; Minnis, Patrick

    2018-05-01

    Clouds play a key role in radiation and hence O3 photochemistry by modulating photolysis rates and light-dependent emissions of biogenic volatile organic compounds (BVOCs). It is not well known, however, how much error in O3 predictions can be directly attributed to error in cloud predictions. This study applies the Weather Research and Forecasting with Chemistry (WRF-Chem) model at 12 km horizontal resolution with the Morrison microphysics and Grell 3-D cumulus parameterization to quantify uncertainties in summertime surface O3 predictions associated with cloudiness over the contiguous United States (CONUS). All model simulations are driven by reanalysis of atmospheric data and reinitialized every 2 days. In sensitivity simulations, cloud fields used for photochemistry are corrected based on satellite cloud retrievals. The results show that WRF-Chem predicts about 55 % of clouds in the right locations and generally underpredicts cloud optical depths. These errors in cloud predictions can lead to up to 60 ppb of overestimation in hourly surface O3 concentrations on some days. The average difference in summertime surface O3 concentrations derived from the modeled clouds and satellite clouds ranges from 1 to 5 ppb for maximum daily 8 h average O3 (MDA8 O3) over the CONUS. This represents up to ~40 % of the total MDA8 O3 bias under cloudy conditions in the tested model version. Surface O3 concentrations are sensitive to cloud errors mainly through the calculation of photolysis rates (for ~80 %), and to a lesser extent through light-dependent BVOC emissions. The sensitivity of surface O3 concentrations to satellite-based cloud corrections is about 2 times larger in VOC-limited than NOx-limited regimes. Our results suggest that the benefits of accurate predictions of cloudiness would be significant in VOC-limited regions, which are typical of urban areas.
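
    The photolysis pathway quantified above can be sketched with the kind of energy-transmission factor commonly used to scale clear-sky photolysis rates below a cloud (a Chang et al.-style fit found in several photolysis schemes). This generic formula is an illustrative assumption, not WRF-Chem's exact implementation; g = 0.86 is a typical cloud asymmetry factor.

```python
import math

def cloud_transmission(tau, g=0.86):
    """Fractional energy transmission through a cloud layer of optical
    depth tau with scattering asymmetry factor g."""
    return (5.0 - math.exp(-tau)) / (4.0 + 3.0 * tau * (1.0 - g))
```

    A cloud of optical depth 20 cuts below-cloud transmission to roughly 40 %, which is how an underpredicted optical depth translates into overpredicted photolysis and hence overpredicted surface O3.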

  17. Observed and modeled patterns of covariability between low-level cloudiness and the structure of the trade-wind layer

    DOE PAGES

    Nuijens, Louise; Medeiros, Brian; Sandu, Irina; ...

    2015-11-06

We present patterns of covariability between low-level cloudiness and the trade-wind boundary layer structure using long-term measurements at a site representative of dynamical regimes with moderate subsidence or weak ascent. We compare these with ECMWF’s Integrated Forecast System and 10 CMIP5 models. By using single-time-step output at a single location, we find that models can produce a fairly realistic trade-wind layer structure in long-term means, but with unrealistic variability at shorter time scales. The unrealistic variability in modeled cloudiness near the lifting condensation level (LCL) is due to stronger-than-observed relationships with mixed-layer relative humidity (RH) and temperature stratification at the mixed-layer top. Those relationships are weak in observations, or even of opposite sign, which can be explained by a negative feedback of convection on cloudiness. Cloudiness near cumulus tops at the trade-wind inversion instead varies more markedly in observations on monthly time scales, whereby larger cloudiness relates to larger surface winds and stronger trade-wind inversions. However, these parameters appear to be a prerequisite, rather than strong controlling factors on cloudiness, because they do not explain submonthly variations in cloudiness. Models underestimate the strength of these relationships and diverge in particular in their responses to large-scale vertical motion. No model stands out by reproducing the observed behavior in all respects. These findings suggest that climate models do not realistically represent the physical processes that underlie the coupling between trade-wind clouds and their environments in present-day climate, which is relevant for how we interpret modeled cloud feedbacks.

  18. Optimized fractional cloudiness determination from five ground-based remote sensing techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boers, R.; de Haij, M. J.; Wauben, W.M.F.

    2010-12-23

A one-year record of fractional cloudiness at 10 minute intervals was generated for the Cabauw Experimental Site for Atmospheric Research [CESAR] (51°58’N, 4°55’E) using an integrated assessment of five different observational methods. The five methods are based on active as well as passive systems and use either a hemispheric or a column remote sensing technique. The one-year instrumental cloudiness data were compared against a 30 year climatology of Observer data in the vicinity of CESAR [1971-2000]. In the intermediate 2 - 6 octa range, most instruments, but especially the column methods, report a lower frequency of occurrence of cloudiness than the absolute minimum values from the 30 year Observer climatology. At night, the Observer records fewer clouds in the 1, 2 octa range than during the day, while the instruments registered more clouds. During daytime the Observer also records much more 7 octa cloudiness than the instruments. One column method combining a radar with a lidar outstrips all other techniques in recording cloudiness, even up to heights in excess of 9 km. This is mostly due to the high sensitivity of the radar used in the technique. A reference algorithm was designed to derive a continuous and optimized record of fractional cloudiness. Outputs from the individual instruments were weighted according to the cloud base height reported at the observation time; the larger the height, the lower the weight. The algorithm was able to provide fractional cloudiness observations every 10 minutes for 98% of the total period of 12 months [15 May 2008 - 14 May 2009].
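The height-dependent weighting in the reference algorithm above can be sketched as a weighted average over the instruments. The abstract states only that weight decreases with reported cloud base height; the inverse-height form and the 1000 m scale below are illustrative assumptions, not the algorithm's actual weighting function:

```python
def fractional_cloudiness(observations):
    """Combine cloud fractions from several instruments into one value.

    observations: list of (cloud_fraction, cloud_base_height_m) tuples,
    one per instrument reporting at this 10-minute time step.
    Weights decrease with reported cloud base height, as in the
    reference algorithm described above (inverse-height form assumed).
    """
    scale_m = 1000.0  # hypothetical e-folding height for the weights
    weights = [1.0 / (1.0 + h / scale_m) for _, h in observations]
    total = sum(weights)
    return sum(w * f for w, (f, _) in zip(weights, observations)) / total

# Example: three instruments; the higher cloud bases get lower weight
obs = [(0.75, 500.0), (0.50, 2000.0), (0.25, 8000.0)]
```

With these assumed weights the low-base instrument dominates, and the three reports above combine to a fractional cloudiness of 0.625.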

  19. Observed and modeled patterns of covariability between low-level cloudiness and the structure of the trade-wind layer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuijens, Louise; Medeiros, Brian; Sandu, Irina

We present patterns of covariability between low-level cloudiness and the trade-wind boundary layer structure using long-term measurements at a site representative of dynamical regimes with moderate subsidence or weak ascent. We compare these with ECMWF’s Integrated Forecast System and 10 CMIP5 models. By using single-time-step output at a single location, we find that models can produce a fairly realistic trade-wind layer structure in long-term means, but with unrealistic variability at shorter time scales. The unrealistic variability in modeled cloudiness near the lifting condensation level (LCL) is due to stronger-than-observed relationships with mixed-layer relative humidity (RH) and temperature stratification at the mixed-layer top. Those relationships are weak in observations, or even of opposite sign, which can be explained by a negative feedback of convection on cloudiness. Cloudiness near cumulus tops at the trade-wind inversion instead varies more markedly in observations on monthly time scales, whereby larger cloudiness relates to larger surface winds and stronger trade-wind inversions. However, these parameters appear to be a prerequisite, rather than strong controlling factors on cloudiness, because they do not explain submonthly variations in cloudiness. Models underestimate the strength of these relationships and diverge in particular in their responses to large-scale vertical motion. No model stands out by reproducing the observed behavior in all respects. These findings suggest that climate models do not realistically represent the physical processes that underlie the coupling between trade-wind clouds and their environments in present-day climate, which is relevant for how we interpret modeled cloud feedbacks.

  20. Properties of Highly Rotationally Excited H2 in Photodissociation Regions

    NASA Astrophysics Data System (ADS)

    Cummings, Sally Jane; Wan, Yier; Stancil, Phillip C.; Yang, Benhui H.; Zhang, Ziwei

    2018-06-01

H2 is the dominant molecular species in the vast majority of interstellar environments, and it plays a crucial role as a radiative coolant. In photodissociation regions (PDRs), it is one of the primary emitters in the near- to mid-infrared, through lines originating from highly excited rotational levels. However, collisional data for rotational levels j>10 are sparse, particularly for H2-H2 collisions. Utilizing new calculations of para-H2 and ortho-H2 collisional rate coefficients with H2 for j as high as 30, we investigate the effects of the new results in standard PDR models with the spectral simulation package Cloudy. We also perform Cloudy models of the Orion Bar and use Radex to explore rotational line ratio diagnostics. The resulting dataset of H2 collisional data should find wide application to other molecular environments. This work was supported by Hubble Space Telescope grant HST-AR-13899.001-A and NASA grants NNX15AI61G and NNX16AF09G.

  1. Solar energy distribution over Egypt using cloudiness from Meteosat photos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosalam Shaltout, M.A.; Hassen, A.H.

    1990-01-01

In Egypt, there are 10 ground stations measuring global solar radiation and five stations measuring diffuse solar radiation. Every day at noon, the Meteorological Authority in Cairo receives three photographs of cloudiness over Egypt from the Meteosat satellite: one in the visible and two in the infra-red bands (10.5-12.5 {mu}m and 5.7-7.1 {mu}m). The monthly average cloudiness for 24 sites over Egypt was measured and calculated from Meteosat observations during the period 1985-1986. Correlation analysis between the cloudiness observed by Meteosat and the global solar radiation measured at the ground stations is carried out. It is found that the correlation coefficients are about 0.90 for the simple linear regression, and increase for the second- and third-degree regressions. Also, the correlation coefficients for cloudiness with the diffuse solar radiation are about 0.80 for the simple linear regression, and increase for the second- and third-degree regressions. Models and empirical relations for estimating the global and diffuse solar radiation from Meteosat cloudiness data over Egypt are derived and tested. Seasonal maps of the global and diffuse radiation over Egypt are also produced.

  2. A study of the 3D radiative transfer effect in cloudy atmospheres

    NASA Astrophysics Data System (ADS)

    Okata, M.; Teruyuki, N.; Suzuki, K.

    2015-12-01

Evaluating the effect of clouds on the Earth's radiation budget is a significant problem, given the large uncertainties in cloud microphysics and optical properties. In this situation, we still need further investigation of 3D cloud radiative transfer problems using not only models but also satellite observational data. For this purpose, we have developed a 3D Monte Carlo radiative transfer code implemented with various functions compatible with the OpenCLASTR R-Star radiation code for radiance and flux computation, i.e. forward and backward tracing routines, a non-linear k-distribution parameterization (Sekiguchi and Nakajima, 2008) for broadband solar flux calculation, the DM method for flux and the TMS method for upward radiance (Nakajima and Tanaka, 1998). We also developed a Minimum cloud Information Deviation Profiling Method (MIDPM) for constructing a 3D cloud field from MODIS/AQUA and CPR/CloudSat data. We then selected a best-matched radar reflectivity factor profile from the library for each off-nadir MODIS pixel where a CPR profile is not available, by minimizing the deviation between the library MODIS parameters and those at the pixel. In this study, we have used three cloud microphysical parameters as key parameters for the MIDPM, i.e. effective particle radius, cloud optical thickness and cloud-top temperature, and estimated the 3D cloud radiation budget. We examined the discrepancies between satellite-observed and model-simulated radiances and the patterns of the three cloud microphysical parameters, in order to study the effects of cloud optical and microphysical properties on the radiation budget of cloud-laden atmospheres.
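The minimum-deviation matching at the core of the MIDPM can be sketched as a nearest-neighbour search over the three key parameters. The normalised Euclidean metric and the scale values below are assumptions, since the abstract does not specify the deviation measure:

```python
import math

def best_match_profile(pixel, library):
    """Pick the library radar-reflectivity profile whose MODIS parameters
    are closest to those of an off-nadir MODIS pixel (MIDPM-style sketch).

    pixel:   (effective_radius_um, optical_thickness, cloud_top_temp_K)
    library: list of (params, profile) pairs built from matched
             MODIS/CPR pixels along the CloudSat ground track.
    """
    scales = (10.0, 20.0, 30.0)  # assumed typical scales: um, -, K

    def deviation(params):
        # Normalised Euclidean distance in the 3-parameter space
        return math.sqrt(sum(((a - b) / s) ** 2
                             for a, b, s in zip(pixel, params, scales)))

    return min(library, key=lambda entry: deviation(entry[0]))[1]

# Tiny illustrative library: two matched parameter sets and their profiles
library = [((12.0, 8.0, 260.0), "profile_A"),
           ((25.0, 30.0, 230.0), "profile_B")]
```

For an off-nadir pixel with parameters (13.0, 10.0, 258.0), the search returns "profile_A", whose parameters deviate least from the pixel's.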

  3. Success of sky-polarimetric Viking navigation: revealing the chance Viking sailors could reach Greenland from Norway

    NASA Astrophysics Data System (ADS)

    Száz, Dénes; Horváth, Gábor

    2018-04-01

According to a famous hypothesis, Viking sailors could navigate along the latitude between Norway and Greenland by means of sky polarization in cloudy weather, using a sun compass and sunstone crystals. Using data measured in earlier atmospheric optical and psychophysical experiments, here we determine the success rate of this sky-polarimetric Viking navigation. Simulating 1000 voyages between Norway and Greenland with varying cloudiness at summer solstice and spring equinox, we revealed the chance with which Viking sailors could reach Greenland under the varying weather conditions of a 3-week-long journey as a function of the navigation periodicity Δt if they analysed sky polarization with calcite, cordierite or tourmaline sunstones. Examples of voyage routes are also presented. Our results show that sky-polarimetric navigation is surprisingly successful on both the spring equinox and the summer solstice, even under cloudy conditions, if the navigator determined the north direction periodically at least once every 3 h, independently of the type of sunstone used for the analysis of sky polarization. This explains why the Vikings could rule the Atlantic Ocean for 300 years and could reach North America without a magnetic compass. Our findings suggest that it is not only the navigation periodicity in itself that is important for higher navigation success rates, but also the distribution of the times at which the navigation procedure is carried out, which should be as symmetrical as possible with respect to the time of real noon.
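A toy Monte Carlo in the spirit of the simulated voyages can illustrate how success rate depends on navigation periodicity. Every number here (fix probability under clouds, tolerable time without a fix) is an illustrative assumption, not a value from the study:

```python
import random

def voyage_success_rate(dt_hours, n_voyages=1000, seed=1):
    """Toy sketch: a voyage lasts three weeks, the navigator attempts a
    sky-polarimetric fix every dt_hours, and each attempt succeeds with
    an assumed fixed probability.  A voyage fails if the time since the
    last successful fix exceeds an assumed tolerable limit.
    """
    rng = random.Random(seed)
    hours = 21 * 24          # three-week journey
    p_fix = 0.8              # assumed chance a fix works under clouds
    limit_hours = 12         # assumed tolerable gap without a fix
    successes = 0
    for _ in range(n_voyages):
        missed = 0
        ok = True
        for _t in range(0, hours, dt_hours):
            if rng.random() < p_fix:
                missed = 0
            else:
                missed += 1
                if missed * dt_hours > limit_hours:
                    ok = False   # heading error assumed to grow too large
                    break
        successes += ok
    return successes / n_voyages
```

Under these assumptions, shrinking dt_hours raises the success rate sharply, mirroring the finding that navigating at least once every 3 h was the key to a successful crossing.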

  4. Success of sky-polarimetric Viking navigation: revealing the chance Viking sailors could reach Greenland from Norway.

    PubMed

    Száz, Dénes; Horváth, Gábor

    2018-04-01

According to a famous hypothesis, Viking sailors could navigate along the latitude between Norway and Greenland by means of sky polarization in cloudy weather, using a sun compass and sunstone crystals. Using data measured in earlier atmospheric optical and psychophysical experiments, here we determine the success rate of this sky-polarimetric Viking navigation. Simulating 1000 voyages between Norway and Greenland with varying cloudiness at summer solstice and spring equinox, we revealed the chance with which Viking sailors could reach Greenland under the varying weather conditions of a 3-week-long journey as a function of the navigation periodicity Δt if they analysed sky polarization with calcite, cordierite or tourmaline sunstones. Examples of voyage routes are also presented. Our results show that sky-polarimetric navigation is surprisingly successful on both the spring equinox and the summer solstice, even under cloudy conditions, if the navigator determined the north direction periodically at least once every 3 h, independently of the type of sunstone used for the analysis of sky polarization. This explains why the Vikings could rule the Atlantic Ocean for 300 years and could reach North America without a magnetic compass. Our findings suggest that it is not only the navigation periodicity in itself that is important for higher navigation success rates, but also the distribution of the times at which the navigation procedure is carried out, which should be as symmetrical as possible with respect to the time of real noon.

  5. Success of sky-polarimetric Viking navigation: revealing the chance Viking sailors could reach Greenland from Norway

    PubMed Central

    Száz, Dénes; Horváth, Gábor

    2018-01-01

According to a famous hypothesis, Viking sailors could navigate along the latitude between Norway and Greenland by means of sky polarization in cloudy weather, using a sun compass and sunstone crystals. Using data measured in earlier atmospheric optical and psychophysical experiments, here we determine the success rate of this sky-polarimetric Viking navigation. Simulating 1000 voyages between Norway and Greenland with varying cloudiness at summer solstice and spring equinox, we revealed the chance with which Viking sailors could reach Greenland under the varying weather conditions of a 3-week-long journey as a function of the navigation periodicity Δt if they analysed sky polarization with calcite, cordierite or tourmaline sunstones. Examples of voyage routes are also presented. Our results show that sky-polarimetric navigation is surprisingly successful on both the spring equinox and the summer solstice, even under cloudy conditions, if the navigator determined the north direction periodically at least once every 3 h, independently of the type of sunstone used for the analysis of sky polarization. This explains why the Vikings could rule the Atlantic Ocean for 300 years and could reach North America without a magnetic compass. Our findings suggest that it is not only the navigation periodicity in itself that is important for higher navigation success rates, but also the distribution of the times at which the navigation procedure is carried out, which should be as symmetrical as possible with respect to the time of real noon. PMID:29765673

  6. CIELO-A GIS integrated model for climatic and water balance simulation in islands environments

    NASA Astrophysics Data System (ADS)

    Azevedo, E. B.; Pereira, L. S.

    2003-04-01

The model CIELO (acronym for "Clima Insular à Escala Local") is a physically based model that simulates climatic variables on an island using data from a single synoptic reference meteorological station. The reference station "knows" its position in the orographic and dynamic regime context. The computational domain is a GIS raster grid parameterised with a digital elevation model (DEM). The grid is oriented following the direction of air-mass circulation through a specific algorithm named the rotational terrain model (RTM). The model consists of two main sub-models. The first, the advective component, assumes the Foehn effect to reproduce the dynamic and thermodynamic processes occurring when an air mass moves over the island's orographic obstacle. This makes it possible to simulate air temperature, air humidity, cloudiness and precipitation as influenced by the orography along the air displacement. The second concerns the radiative component as affected by clouds of orographic origin and by the shadows produced by the relief. The initial-state parameters are computed starting from the reference meteorological station and following the DEM transect down to sea level on the windward side. Then, starting from sea level, the model computes the local-scale meteorological parameters according to the direction of the air displacement, which is adjusted with the RTM. Air pressure, temperature and humidity are calculated directly for each cell in the computational grid, while several algorithms are used to compute cloudiness, net radiation, evapotranspiration, and precipitation. The model presented in this paper has been calibrated and validated using data from several meteorological stations and a larger number of rainfall stations located at various elevations in the Azores Islands.
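The Foehn mechanism that the advective sub-model relies on can be illustrated with textbook lapse rates. The moist rate and the geometry below are illustrative standard values, not CIELO's actual parameterisation:

```python
def foehn_lee_temperature(t0_c, lcl_m, crest_m):
    """Minimal sketch of the Foehn effect for an air mass crossing a ridge.

    Air at sea level with temperature t0_c (deg C) cools dry-adiabatically
    up to the lifting condensation level lcl_m, moist-adiabatically from
    there to the crest at crest_m (latent heat release slows the cooling),
    then warms dry-adiabatically back down to sea level on the lee side.
    Returns the lee-side sea-level temperature in deg C.
    """
    dry = 9.8e-3    # dry adiabatic lapse rate, deg C per metre
    moist = 6.0e-3  # assumed typical moist adiabatic lapse rate
    t_crest = t0_c - dry * lcl_m - moist * (crest_m - lcl_m)
    return t_crest + dry * crest_m   # descent on the lee side

# Example: 15 deg C air, condensation at 800 m, 2000 m crest.
# The lee side ends up warmer by (dry - moist) * (crest_m - lcl_m).
```

For the example values this gives a lee-side temperature of about 19.6 deg C, a warming of roughly 4.6 deg C relative to the windward side, which is the warm, dry Foehn signature the sub-model reproduces.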

  7. The Continuous Intercomparison of Radiation Codes (CIRC): Phase I Cases

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Mlawer, Eli; Delamere, Jennifer; Shippert, Timothy; Turner, David D.; Miller, Mark A.; Minnis, Patrick; Clough, Shepard; Barker, Howard; Ellingson, Robert

    2007-01-01

CIRC aspires to be the successor to ICRCCM (Intercomparison of Radiation Codes in Climate Models). It is envisioned as an evolving and regularly updated reference source for GCM-type radiative transfer (RT) code evaluation, with the principal goal of contributing to the improvement of RT parameterizations. CIRC is jointly endorsed by DOE's Atmospheric Radiation Measurement (ARM) program and the GEWEX Radiation Panel (GRP). CIRC's goal is to provide test cases for which GCM RT algorithms should be performing at their best, i.e., well-characterized clear-sky and homogeneous, overcast cloudy cases. What distinguishes CIRC from previous intercomparisons is that its pool of cases is based on observed datasets. The bulk of the atmospheric and surface input, as well as the radiative fluxes, comes from ARM observations as documented in the Broadband Heating Rate Profile (BBHRP) product. BBHRP also provides reference calculations from AER's RRTM RT algorithms that can be used to select the optimal set of cases and to provide a first-order estimate of our ability to achieve radiative flux closure given the limitations in our knowledge of the atmospheric state.

  8. Nanopaleomagnetism of Meteoritic Fe-Ni: the Potential for Time-Resolved Remanence Records within the Cloudy Zone

    NASA Astrophysics Data System (ADS)

    Harrison, R. J.; Bryson, J. F.; Kasama, T.; Church, N. S.; Herrero Albillos, J.; Kronast, F.; Ghidini, M.; Redfern, S. A.; van der Laan, G.; Tyliszczak, T.

    2013-12-01

Paleomagnetic signals recorded by meteorites provide compelling evidence that the liquid cores of differentiated asteroids generated magnetic dynamo fields. Here we argue that magnetic nanostructures unique to meteoritic Fe-Ni metal are capable of carrying a time-resolved record of asteroid dynamo activity, a prospect that could revolutionise our understanding of the thermochemical conditions of differentiated bodies in the early solar system. Using a combination of high-resolution magnetic imaging techniques (including electron holography, magnetic force microscopy, X-ray photoemission electron microscopy and scanning transmission X-ray microscopy) we reveal the origins of the dramatic changes in magnetic properties that are associated with the transition from kamacite - tetrataenite rim - cloudy zone - plessite, typical of Fe-Ni intergrowths. The cloudy zone is composed of nanoscale islands of tetrataenite (FeNi) coherently intergrown with a hitherto unobserved soft magnetic phase (Fe3Ni). The tetrataenite island diameter decreases with increasing lateral distance from the tetrataenite rim. Exchange coupling between the hard tetrataenite islands and the soft matrix phase leads to an exchange spring effect that lowers the tetrataenite switching field and causes a systematic variation in microcoercivity throughout the cloudy zone. The cloudy zone displays a complex interlocking magnetic domain pattern caused by uniaxial single domain tetrataenite islands with easy axes distributed along all three of the possible <100> crystallographic orientations. The coarse and intermediate cloudy zones contain a random distribution of all three easy axes. The fine cloudy zone, on the other hand, contains one dominant easy axis direction. This easy axis distribution suggests that strong interaction fields (either magnetic or stress) were present in this region at the time of tetrataenite formation, which likely originated from the neighbouring plessite. 
The easy axis distribution in the coarse and intermediate cloudy zone indicates a lack of interaction fields present at the time of formation, implying that deviations from randomness could be used to detect the presence of an external (e.g. dynamo) field. Zoned metallic grains within chondritic meteorites originating from the top ~5-10% of a differentiated asteroid may have formed their cloudy zones while the core was generating a dynamo field. In this case, as the cloudy zone formed continuously over a period of 10-100 Ma it had the potential to encode sequential information regarding the dynamo field as the spinodal microstructure developed laterally. Thus the local magnetic structure as a function of position throughout the cloudy zone could relate to the time dependence of an asteroid dynamo field. The experimental and analysis methods presented in this study could, in principle, be used to measure the relative strength (proportion of dominant easy axis) and direction (direction of dominant easy axis) of an asteroid dynamo field over ~100 Ma.

  9. Development of the Microwave Radiative Transfer Program for Cloudy Atmospheres: Applications to DMSP SSM/T Channels.

    DTIC Science & Technology

    1979-12-30

[Only a garbled scan excerpt is available for this record. The recoverable content is a table of cloudy-sky sounding cases (Green Bay, Wisconsin, 0253 Z; Monterrey, Mexico, 0429 Z; Dodge City; all 23 Nov 79, with radiosonde comparisons) followed by fragments of the program's Fortran source.]

  10. The assessment of UV resources over Northern Eurasia

    NASA Astrophysics Data System (ADS)

    Chubarova, Natalia; Zhdanova, Yekaterina

    2013-05-01

The spatial and temporal distribution of UV resources over Northern Eurasia was assessed using RT modeling (the 8-stream DISORT RT code) on a 1×1 degree grid at monthly resolution. For this purpose, a special dataset of the main input geophysical parameters (total ozone content, aerosol characteristics, surface UV albedo, and UV cloud modification factor) was developed. To define the UV resources, both erythemally weighted and vitamin D irradiances were used. In order to better quantify the vitamin D irradiance threshold, we accounted for a body exposure fraction S as a function of surface effective temperature. The UV resources are defined using several classes and subclasses: UV deficiency, UV optimum, and UV excess. They were evaluated for clear and typical cloudy conditions for different skin types. We show that for typical cloudy conditions in winter (January) there are only a few regions in Europe, in the south of Spain (southward of 43°N), with conditions of UV optimum for people with skin type 2, and no such conditions for people with skin type 4. In summer (July) UV optimum for skin type 2 is observed northward of 63°N, with a boundary biased towards higher latitudes in the east, while for skin type 4 these conditions are observed over most of the territory of Northern Eurasia.

  11. Modeling SOFIA/FORCAST spectra of the classical nova V5668 Sgr with 3D pyCloudy

    NASA Astrophysics Data System (ADS)

    Calvén, Emilia; Helton, L. Andrew; Sankrit, Ravi

    2017-06-01

We present our first results modelling Nova V5668 Sgr using the pseudo-3D photoionization code pyCloudy (Morisset 2013). V5668 Sgr is a classical nova of the FeII class (Williams et al. 2015; Seach 2015) showing signs of a bipolar flow (Banerjee et al. 2015). We construct a grid of models, which use hour-glass morphologies and a range of C, N, O and Ne abundances, to fit a suite of spectroscopic data in the near- and mid-IR obtained between 82 and 556 days after outburst. The spectra were obtained using the FORCAST mid-IR instrument onboard the NASA Stratospheric Observatory for Infrared Astronomy (SOFIA) and the 1.2 m near-IR telescope of the Mount Abu Infrared Observatory. Additional photometric data from FORCAST, the Stony Brook/SMARTS Atlas of (mostly) Southern Novae (Walter et al. 2012) and the American Association of Variable Star Observers (AAVSO) were used to supplement the spectral data and obtain the SED of the nova at different times during its evolution. The work presented here is the initial step towards developing a large database of 1D and 3D models that may be used to derive the elemental abundances and dust properties of classical novae.

  12. Results of a joint NOAA/NASA sounder simulation study

    NASA Technical Reports Server (NTRS)

    Phillips, N.; Susskind, Joel; Mcmillin, L.

    1988-01-01

This paper presents the results of a joint NOAA/NASA sounder simulation study in which the accuracies of atmospheric temperature profiles and surface skin temperature measurements retrieved from two sounders were compared: (1) the currently used IR temperature sounder HIRS2 (High-resolution Infrared Radiation Sounder 2); and (2) the recently proposed high-spectral-resolution IR sounder AMTS (Advanced Moisture and Temperature Sounder). Simulations were conducted for both clear and partially cloudy conditions. Data were analyzed at NASA using a physical inversion technique and at NOAA using a statistical technique. Results show significant improvement of AMTS over HIRS2 for both clear and cloudy conditions. The improvements are indicated by both methods of data analysis, but the physical retrievals outperform the statistical retrievals.

  13. Aerosol indirect effects - general circulation model intercomparison and evaluation with satellite data

    NASA Astrophysics Data System (ADS)

    Quaas, J.; Ming, Y.; Menon, S.; Takemura, T.; Wang, M.; Penner, J. E.; Gettelman, A.; Lohmann, U.; Bellouin, N.; Boucher, O.; Sayer, A. M.; Thomas, G. E.; McComiskey, A.; Feingold, G.; Hoose, C.; Kristjánsson, J. E.; Liu, X.; Balkanski, Y.; Donner, L. J.; Ginoux, P. A.; Stier, P.; Grandey, B.; Feichter, J.; Sednev, I.; Bauer, S. E.; Koch, D.; Grainger, R. G.; Kirkevåg, A.; Iversen, T.; Seland, Ø.; Easter, R.; Ghan, S. J.; Rasch, P. J.; Morrison, H.; Lamarque, J.-F.; Iacono, M. J.; Kinne, S.; Schulz, M.

    2009-11-01

Aerosol indirect effects continue to constitute one of the most important uncertainties for anthropogenic climate perturbations. Within the international AEROCOM initiative, the representation of aerosol-cloud-radiation interactions in ten different general circulation models (GCMs) is evaluated using three satellite datasets. The focus is on stratiform liquid water clouds, since most GCMs do not include ice nucleation effects, and none of the models explicitly parameterises aerosol effects on convective clouds. We compute statistical relationships between aerosol optical depth (τa) and various cloud and radiation quantities in a manner that is consistent between the models and the satellite data. It is found that the model-simulated influence of aerosols on cloud droplet number concentration (Nd) compares relatively well to the satellite data, at least over the ocean. The relationship between τa and liquid water path is simulated much too strongly by the models. This suggests that the implementation of the second aerosol indirect effect mainly in terms of an autoconversion parameterisation has to be revisited in the GCMs. A positive relationship between total cloud fraction (fcld) and τa as found in the satellite data is simulated by the majority of the models, albeit less strongly than in the satellite data in most of them. In a discussion of the hypotheses proposed in the literature to explain the satellite-derived strong fcld-τa relationship, our results indicate that none can be identified as a unique explanation. Relationships similar to the ones found in satellite data between τa and cloud top temperature or outgoing long-wave radiation (OLR) are simulated by only a few GCMs. The GCMs that simulate a negative OLR-τa relationship show a strong positive correlation between τa and fcld. 
The short-wave total aerosol radiative forcing as simulated by the GCMs is strongly influenced by the simulated anthropogenic fraction of τa, and parameterisation assumptions such as a lower bound on Nd. Nevertheless, the strengths of the statistical relationships are good predictors for the aerosol forcings in the models. An estimate of the total short-wave aerosol forcing inferred from the combination of these predictors for the modelled forcings with the satellite-derived statistical relationships yields a global annual mean value of -1.5±0.5 Wm-2. In an alternative approach, the radiative flux perturbation due to anthropogenic aerosols can be broken down into a component over the cloud-free portion of the globe (approximately the aerosol direct effect) and a component over the cloudy portion of the globe (approximately the aerosol indirect effect). An estimate obtained by scaling these simulated clear- and cloudy-sky forcings with estimates of anthropogenic τa and satellite-retrieved Nd-τa regression slopes, respectively, yields a global, annual-mean aerosol direct effect estimate of -0.4±0.2 Wm-2 and a cloudy-sky (aerosol indirect effect) estimate of -0.7±0.5 Wm-2, with a total estimate of -1.2±0.4 Wm-2.

  14. Aerosol indirect effects ? general circulation model intercomparison and evaluation with satellite data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quaas, Johannes; Ming, Yi; Menon, Surabi

    2010-03-12

Aerosol indirect effects continue to constitute one of the most important uncertainties for anthropogenic climate perturbations. Within the international AEROCOM initiative, the representation of aerosol-cloud-radiation interactions in ten different general circulation models (GCMs) is evaluated using three satellite datasets. The focus is on stratiform liquid water clouds, since most GCMs do not include ice nucleation effects, and none of the models explicitly parameterises aerosol effects on convective clouds. We compute statistical relationships between aerosol optical depth ({tau}{sub a}) and various cloud and radiation quantities in a manner that is consistent between the models and the satellite data. It is found that the model-simulated influence of aerosols on cloud droplet number concentration (N{sub d}) compares relatively well to the satellite data, at least over the ocean. The relationship between {tau}{sub a} and liquid water path is simulated much too strongly by the models. This suggests that the implementation of the second aerosol indirect effect mainly in terms of an autoconversion parameterisation has to be revisited in the GCMs. A positive relationship between total cloud fraction (f{sub cld}) and {tau}{sub a} as found in the satellite data is simulated by the majority of the models, albeit less strongly than in the satellite data in most of them. In a discussion of the hypotheses proposed in the literature to explain the satellite-derived strong f{sub cld} - {tau}{sub a} relationship, our results indicate that none can be identified as a unique explanation. Relationships similar to the ones found in satellite data between {tau}{sub a} and cloud top temperature or outgoing long-wave radiation (OLR) are simulated by only a few GCMs. 
The GCMs that simulate a negative OLR - {tau}{sub a} relationship show a strong positive correlation between {tau}{sub a} and f{sub cld} The short-wave total aerosol radiative forcing as simulated by the GCMs is strongly influenced by the simulated anthropogenic fraction of {tau}{sub a}, and parameterization assumptions such as a lower bound on N{sub d}. Nevertheless, the strengths of the statistical relationships are good predictors for the aerosol forcings in the models. An estimate of the total short-wave aerosol forcing inferred from the combination of these predictors for the modelled forcings with the satellite-derived statistical relationships yields a global annual mean value of -1.5 {+-} 0.5 Wm{sup -2}. In an alternative approach, the radiative flux perturbation due to anthropogenic aerosols can be broken down into a component over the cloud-free portion of the globe (approximately the aerosol direct effect) and a component over the cloudy portion of the globe (approximately the aerosol indirect effect). An estimate obtained by scaling these simulated clear- and cloudy-sky forcings with estimates of anthropogenic {tau}{sub a} and satellite-retrieved Nd - {tau}{sub a} regression slopes, respectively, yields a global, annual-mean aerosol direct effect estimate of -0.4 {+-} 0.2 Wm{sup -2} and a cloudy-sky (aerosol indirect effect) estimate of -0.7 {+-} 0.5 Wm{sup -2}, with a total estimate of -1.2 {+-} 0.4 Wm{sup -2}.« less
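    The statistical relationships described above, such as the Nd-τa regression slopes, are essentially least-squares slopes computed in log-log space. A minimal sketch with synthetic data (the 0.8 exponent and all numbers are illustrative, not values from the study):

```python
import numpy as np

def log_log_slope(tau_a, n_d):
    """Least-squares slope of ln(Nd) versus ln(tau_a)."""
    x = np.log(np.asarray(tau_a, dtype=float))
    y = np.log(np.asarray(n_d, dtype=float))
    # slope = cov(x, y) / var(x)
    return float(np.cov(x, y)[0, 1] / np.var(x, ddof=1))

# Synthetic droplet number concentrations with Nd proportional to tau_a**0.8
rng = np.random.default_rng(0)
tau = rng.uniform(0.05, 0.5, 200)                        # aerosol optical depth
nd = 100.0 * tau**0.8 * np.exp(rng.normal(0.0, 0.05, 200))
slope = log_log_slope(tau, nd)                           # recovers roughly 0.8
```

    Computing the same slope from model output and from satellite retrievals, over the same samples, is what makes the intercomparison "consistent between the models and the satellite data."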

  15. H-, He-like recombination spectra - II. l-changing collisions for He Rydberg states

    NASA Astrophysics Data System (ADS)

    Guzmán, F.; Badnell, N. R.; Williams, R. J. R.; van Hoof, P. A. M.; Chatzikos, M.; Ferland, G. J.

    2017-01-01

    Cosmological models can be constrained by determining primordial abundances. Accurate predictions of the He I spectrum are needed to determine the primordial helium abundance to a precision of <1 per cent in order to constrain big bang nucleosynthesis models. Theoretical line emissivities at least this accurate are needed if this precision is to be achieved. In the first paper of this series, which focused on H I, we showed that differences in l-changing collisional rate coefficients predicted by three different theories can translate into 10 per cent changes in predictions for H I spectra. Here, we consider the more complicated case of He atoms, where low-l subshells are not energy degenerate. A criterion for deciding when the energy separation between l subshells is small enough to apply energy-degenerate collisional theories is given. Moreover, for certain conditions, the Bethe approximation originally proposed by Pengelly & Seaton is not sufficiently accurate. We introduce a simple modification of this theory which leads to rate coefficients which agree well with those obtained from pure quantal calculations using the approach of Vrinceanu et al. We show that the l-changing rate coefficients from the different theoretical approaches lead to differences of ˜10 per cent in He I emissivities in simulations of H II regions using the spectral code CLOUDY.

  16. Acclimation of Plant Populations to Shade: Photosynthesis, Respiration, and Carbon Use Efficiency

    NASA Technical Reports Server (NTRS)

    Frantz, Jonathan M.; Bugbee, Bruce

    2005-01-01

    Cloudy days cause an abrupt reduction in daily photosynthetic photon flux (PPF), but we have a poor understanding of how plants acclimate to this change. We used a unique 10-chamber, steady-state, gas-exchange system to continuously measure daily photosynthesis and night respiration of populations of a starch accumulator [tomato (Lycopersicon esculentum Mill. cv. Micro-Tina)] and a sucrose accumulator [lettuce (Lactuca sativa L. cv. Grand Rapids)] over 42 days. All measurements were done at elevated CO2 (1200 µmol mol-1) to avoid any CO2 limitations and included both shoots and roots. We integrated photosynthesis and respiration measurements separately to determine daily net carbon gain and carbon use efficiency (CUE) as the ratio of daily net C gain to total day-time C fixed over the 42-day period. After 16 to 20 days of growth in constant PPF, plants in some chambers were subjected to an abrupt PPF reduction to simulate shade or a series of cloudy days. The immediate effect and the long-term acclimation rate were assessed from canopy quantum yield and carbon use efficiency. The effect of shade on carbon use efficiency and acclimation was much slower than predicted by widely used growth models. It took 12 days for tomato populations to recover their original CUE, and lettuce CUE never completely acclimated. Tomatoes, the starch accumulator, acclimated to low light more rapidly than lettuce, the sucrose accumulator. Plant growth models should be modified to include the photosynthesis/respiration imbalance and resulting inefficiency of carbon gain associated with changing PPF conditions on cloudy days.
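    The CUE definition above reduces to simple arithmetic on the integrated fluxes; a minimal sketch with hypothetical daily totals (not data from the study):

```python
def carbon_use_efficiency(daytime_c_fixed, night_respiration):
    """CUE as defined in the abstract: daily net C gain (daytime fixation
    minus night respiration) divided by total daytime C fixed."""
    return (daytime_c_fixed - night_respiration) / daytime_c_fixed

# Hypothetical daily totals in mol C per m^2
cue = carbon_use_efficiency(0.50, 0.15)   # 0.70
```

    A shade-induced photosynthesis/respiration imbalance shows up directly in this ratio: if fixation drops faster than respiration after a PPF reduction, CUE falls until respiration re-equilibrates.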

  17. Observed Spectral Invariant Behavior of Zenith Radiance in the Transition Zone Between Cloud-Free and Cloudy Regions

    NASA Technical Reports Server (NTRS)

    Marshak, A.; Knyazikhin, Y.; Chiu, C.; Wiscombe, W.

    2010-01-01

    The Atmospheric Radiation Measurement Program's (ARM) new Shortwave Spectrometer (SWS) looks straight up and measures zenith radiance at 418 wavelengths between 350 and 2200 nm. Because of its 1-sec sampling resolution, the SWS provides a unique capability to study the transition zone between cloudy and clear sky areas. A surprising spectral invariant behavior is found between ratios of zenith radiance spectra during the transition from cloudy to cloud-free atmosphere. This behavior suggests that the spectral signature of the transition zone is a linear mixture between the two extremes (definitely cloudy and definitely clear). The weighting function of the linear mixture is found to be a wavelength-independent characteristic of the transition zone. It is shown that the transition zone spectrum is fully determined by this function and zenith radiance spectra of clear and cloudy regions. This new finding may help us to better understand and quantify such physical phenomena as humidification of aerosols in the relatively moist cloud environment and evaporation and activation of cloud droplets.
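    The linear-mixture finding can be written as I_trans(λ) = w·I_cloudy(λ) + (1−w)·I_clear(λ) with a wavelength-independent weight w. A minimal sketch that recovers w by least squares from synthetic spectra (all radiance shapes below are illustrative, not SWS data):

```python
import numpy as np

def mixture_weight(i_transition, i_clear, i_cloudy):
    """Least-squares estimate of the wavelength-independent weight w in
    I_trans = w * I_cloudy + (1 - w) * I_clear, fitted across all channels."""
    d = np.asarray(i_cloudy) - np.asarray(i_clear)
    r = np.asarray(i_transition) - np.asarray(i_clear)
    return float(np.dot(d, r) / np.dot(d, d))

# Synthetic spectra on a 418-channel grid spanning 350-2200 nm
wl = np.linspace(350.0, 2200.0, 418)
clear = 0.10 + 0.00005 * (2200.0 - wl)   # illustrative clear-sky shape
cloudy = 0.80 - 0.0001 * (wl - 350.0)    # illustrative cloudy-sky shape
trans = 0.3 * cloudy + 0.7 * clear       # a pixel 30% of the way to cloudy
w = mixture_weight(trans, clear, cloudy)
```

    If the transition-zone spectrum really is a linear mixture, the residual of this one-parameter fit is small at every wavelength, which is the spectrally invariant behavior reported.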

  18. The profile algorithm for microwave delay estimation from water vapor radiometer data

    NASA Technical Reports Server (NTRS)

    Robinson, Steven E.

    1988-01-01

    A new algorithm has been developed for the estimation of tropospheric microwave path delays from water vapor radiometer (WVR) data which does not require site- and weather-dependent empirical parameters to produce accuracy better than 0.3 cm of delay. Instead of taking the conventional linear approach, the new algorithm first uses the observables with an emission model to determine an approximate form of the vertical water vapor distribution, which is then explicitly integrated to estimate the wet path delay in a second step. The intrinsic accuracy of this algorithm, excluding uncertainties caused by the radiometers and the emission model, has been examined for two-channel WVR data using path delays and corresponding simulated observables computed from archived radiosonde data. It is found that annual rms errors for a wide range of sites average 0.18 cm in the absence of clouds, 0.22 cm in cloudy weather, and 0.19 cm overall. In clear weather, the new algorithm's accuracy is comparable to the best that can be obtained from conventional linear algorithms, while in cloudy weather it offers a 35 percent improvement.
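    The "integrate the retrieved profile" step can be sketched with the common Smith-Weintraub wet-refractivity approximation (the emission-model inversion that produces the profile is not reproduced here, and the profile below is hypothetical):

```python
import numpy as np

K_WET = 3.73e5  # wet refractivity constant, K^2/hPa (common approximation)

def wet_delay_cm(heights_m, e_hpa, t_k):
    """Wet path delay from a vertical profile: integrate wet refractivity
    N_wet = 3.73e5 * e / T^2 over height; delay = 1e-6 * integral(N_wet dz),
    returned in cm."""
    z = np.asarray(heights_m, dtype=float)
    n_wet = K_WET * np.asarray(e_hpa) / np.asarray(t_k) ** 2
    # trapezoidal integration over the profile
    integral = float(np.sum(0.5 * (n_wet[1:] + n_wet[:-1]) * np.diff(z)))
    return 100.0 * 1e-6 * integral

# Hypothetical moist profile: exponentially decaying vapor pressure
z = np.linspace(0.0, 10000.0, 101)
e = 12.0 * np.exp(-z / 2000.0)   # vapor pressure, hPa
t = 288.0 - 0.0065 * z           # temperature, K (standard lapse rate)
delay = wet_delay_cm(z, e, t)    # of order 10 cm
```

    The point of the profile algorithm is that this integral is evaluated over a physically plausible vapor distribution rather than mapped linearly from brightness temperatures with empirical coefficients.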

  19. Trade-Wind Cloudiness and Climate

    NASA Technical Reports Server (NTRS)

    Randall, David A.

    1997-01-01

    Closed Mesoscale Cellular Convection (MCC) consists of mesoscale cloud patches separated by narrow clear regions. Strong radiative cooling occurs at the cloud top. A dry two-dimensional Bousinesq model is used to study the effects of cloud-top cooling on convection. Wide updrafts and narrow downdrafts are used to indicate the asymmetric circulations associated with the mesoscale cloud patches. Based on the numerical results, a conceptual model was constructed to suggest a mechanism for the formation of closed MCC over cool ocean surfaces. A new method to estimate the radioative and evaporative cooling in the entrainment layer of a stratocumulus-topped boundary layer has been developed. The method was applied to a set of Large-Eddy Simulation (LES) results and to a set of tethered-balloon data obtained during FIRE. We developed a statocumulus-capped marine mixed layer model which includes a parameterization of drizzle based on the use of a predicted Cloud Condensation Nuclei (CCN) number concentration. We have developed, implemented, and tested a very elaborate new stratiform cloudiness parameterization for use in GCMs. Finally, we have developed a new, mechanistic parameterization of the effects of cloud-top cooling on the entrainment rate.

  20. Applying an economical scale-aware PDF-based turbulence closure model in NOAA NCEP GCMs.

    NASA Astrophysics Data System (ADS)

    Belochitski, A.; Krueger, S. K.; Moorthi, S.; Bogenschutz, P.; Cheng, A.

    2017-12-01

    A novel unified representation of sub-grid scale (SGS) turbulence, cloudiness, and shallow convection is being implemented into the NOAA NCEP Global Forecasting System (GFS) general circulation model. The approach, known as Simplified High Order Closure (SHOC), is based on predicting a joint PDF of SGS thermodynamic variables and vertical velocity, and using it to diagnose turbulent diffusion coefficients, SGS fluxes, condensation, and cloudiness. Unlike other similar methods, comparatively few new prognostic variables need to be introduced, making the technique computationally efficient: in the base version of SHOC only the SGS turbulent kinetic energy (TKE), and in the developmental version the SGS TKE plus the variances of total water and moist static energy (MSE). SHOC is now incorporated into a version of GFS that will become part of the NOAA Next Generation Global Prediction System, built around NOAA GFDL's FV3 dynamical core, the NOAA Environmental Modeling System (NEMS) coupled modeling infrastructure, and a set of novel physical parameterizations. Turbulent diffusion coefficients computed by SHOC are now used in place of those produced by the boundary layer turbulence and shallow convection parameterizations. The large-scale microphysics scheme is no longer used to calculate cloud fraction or large-scale condensation/deposition; instead, SHOC provides these quantities, and the radiative transfer parameterization uses the cloudiness computed by SHOC. An outstanding problem with the implementation of SHOC in the NCEP global models is excessively large high-level tropical cloudiness. Comparison of the moments of the SGS PDF diagnosed by SHOC with the moments calculated in a GigaLES simulation of a tropical deep convection case (GATE) shows that SHOC diagnoses PDFs of total cloud water and MSE that are too narrow in the areas of deep convective detrainment.
    A subsequent sensitivity study of SHOC's diagnosed cloud fraction (CF) to the higher-order input moments of the SGS PDF demonstrated that CF is improved if SHOC is provided with correct variances of total water and MSE. Consequently, SHOC was modified to include two new prognostic equations for the variances of total water and MSE, and coupled with the Chikira-Sugiyama parameterization of deep convection to include the effects of detrainment on the prognostic variances.
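    The PDF-based cloud fraction diagnosis can be illustrated with the simplest case, a single Gaussian PDF of total water: cloud fraction is the probability that total water exceeds saturation. This is a stripped-down analogue of SHOC's joint-PDF machinery (not SHOC's actual PDF family), and it shows why a too-narrow PDF yields too little cloud when the mean is subsaturated:

```python
import math

def gaussian_cloud_fraction(qt_mean, qt_sigma, q_sat):
    """Cloud fraction as P(q_t > q_sat) under an assumed Gaussian SGS PDF
    of total water q_t with mean qt_mean and standard deviation qt_sigma."""
    s = (qt_mean - q_sat) / (qt_sigma * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(s))

# Same subsaturated mean state (kg/kg), two different PDF widths:
cf_narrow = gaussian_cloud_fraction(8.0e-3, 0.2e-3, 9.0e-3)  # nearly zero
cf_wide = gaussian_cloud_fraction(8.0e-3, 1.0e-3, 9.0e-3)    # about 0.16
```

    With the mean one standard deviation below saturation the wide PDF still produces ~16% cloud cover, while the narrow PDF produces essentially none; this is the sensitivity that motivated the prognostic variance equations.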

  1. A Python Calculator for Supernova Remnant Evolution

    NASA Astrophysics Data System (ADS)

    Leahy, D. A.; Williams, J. E.

    2017-05-01

    A freely available Python code for modeling supernova remnant (SNR) evolution has been created. This software is intended for two purposes: to understand SNR evolution and to use in modeling observations of SNR for obtaining good estimates of SNR properties. It includes all phases for the standard path of evolution for spherically symmetric SNRs. In addition, alternate evolutionary models are available, including evolution in a cloudy ISM, the fractional energy-loss model, and evolution in a hot low-density ISM. The graphical interface takes in various parameters and produces outputs such as shock radius and velocity versus time, as well as SNR surface brightness profile and spectrum. Some interesting properties of SNR evolution are demonstrated using the program.
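    One of the standard phases such a calculator covers, the adiabatic Sedov-Taylor stage, has a closed-form radius-time relation that can be sketched directly. This is textbook blast-wave scaling, not code from the paper's package:

```python
PC_CM = 3.0857e18   # cm per parsec
YR_S = 3.156e7      # seconds per year
M_H = 1.6726e-24    # hydrogen mass, g

def sedov_radius_pc(e_erg, n_cm3, t_yr, mu=1.4):
    """Shock radius in the Sedov-Taylor phase, R = 1.15 (E t^2 / rho)^(1/5),
    with ambient density rho = mu * m_H * n."""
    rho = mu * M_H * n_cm3
    r_cm = 1.15 * (e_erg * (t_yr * YR_S) ** 2 / rho) ** 0.2
    return r_cm / PC_CM

def sedov_velocity_kms(e_erg, n_cm3, t_yr, mu=1.4):
    """Shock velocity dR/dt = (2/5) R / t in the Sedov phase, km/s."""
    r_cm = sedov_radius_pc(e_erg, n_cm3, t_yr, mu) * PC_CM
    return 0.4 * r_cm / (t_yr * YR_S) / 1e5

# Canonical SNR: 1e51 erg into a uniform n = 1 cm^-3 medium, at 10 kyr
r = sedov_radius_pc(1e51, 1.0, 1e4)     # roughly 12 pc
v = sedov_velocity_kms(1e51, 1.0, 1e4)  # several hundred km/s
```

    The full calculator joins this phase onto the free-expansion, radiative, and alternate (cloudy-ISM, fractional energy-loss, hot low-density) tracks, which is what makes it useful for fitting observed remnants.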

  2. Atmospheric correction of short-wave hyperspectral imagery using a fast, full-scattering 1DVar retrieval scheme

    NASA Astrophysics Data System (ADS)

    Thelen, J.-C.; Havemann, S.; Taylor, J. P.

    2012-06-01

    Here, we present a new prototype algorithm for the simultaneous retrieval of atmospheric profiles (temperature, humidity, ozone and aerosol) and surface reflectance from hyperspectral radiance measurements obtained from air- or space-borne hyperspectral imagers such as the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) or Hyperion on board Earth Observing-1. The new scheme consists of a fast radiative transfer code, based on empirical orthogonal functions (EOFs), in conjunction with a 1D-Var retrieval scheme. The inclusion of an 'exact' scattering code based on spherical harmonics allows for an accurate treatment of Rayleigh scattering and of scattering by aerosols, water droplets and ice crystals, thus making it possible to also retrieve cloud and aerosol optical properties, although here we concentrate on non-cloudy scenes. We successfully tested this new approach using two hyperspectral images taken by AVIRIS, a whiskbroom imaging spectrometer operated by the NASA Jet Propulsion Laboratory.

  3. Numerical simulation of impurity transport in Lake Baikal during the summer period

    NASA Astrophysics Data System (ADS)

    Tsydenov, Bair O.

    2017-11-01

    The distributions of impurities obtained as a result of numerical modeling on the Srednyaya arm (Selenga River mouth)-Cape Golyi cross-section of Lake Baikal, Siberia, Russia, are presented. Data on air temperature, relative humidity, atmospheric pressure, and cloudiness from the Babushkin meteorological station from 01.06.2016 to 30.06.2016 are used as the weather conditions in the mathematical model. The results of the simulation show that the impurities dissolved in water reach the bottom of the Selenga shallow basin of Lake Baikal. As heat accumulation increases and the river waters warm up, the maximum concentrations of suspended substances tend to remain in the upper layers of the lake.

  4. On the potential influence of ice nuclei on surface-forced marine stratocumulus cloud dynamics

    NASA Astrophysics Data System (ADS)

    Harrington, Jerry Y.; Olsson, Peter Q.

    2001-11-01

    The mixed phase cloudy boundary layer that occurs during off-ice flow in the marine Arctic was simulated in an environment with a strong surface heat flux (nearly 800 W m-2). A two-dimensional, eddy-resolving model coupled to a detailed cloud microphysical model was used to study both liquid phase and mixed phase stratocumulus clouds and boundary layer (BL) dynamics in this environment. Since ice precipitation may be important to BL dynamics, and ice nuclei (IN) concentrations modulate ice precipitation rates, the role of IN in cloud and BL development was explored. The results of several simulations illustrate how mixed phase microphysical processes affect the evolution of the cloudy BL in this environment. In agreement with past studies, BLs with mixed phase clouds had weaker convection, shallower BL depths, and smaller cloud fractions than BLs with clouds restricted to the liquid phase only. It is shown that the weaker BL convection is due to strong ice precipitation. Ice precipitation reduces convective strength directly by stabilizing downdrafts and more indirectly by sensibly heating the BL and inhibiting vertical mixing of momentum thereby reducing surface heat fluxes by as much as 80 W m-2. This feedback between precipitation and surface fluxes was found to have a significant impact on cloud/BL morphology, producing oscillations in convective strength and cloud fraction that did not occur if surface fluxes were fixed at constant values. Increases in IN concentrations in mixed phase clouds caused a more rapid Bergeron-Findeisen process leading to larger precipitation fluxes, reduced convection and lower cloud fraction. When IN were removed from the BL through precipitation, fewer crystals were nucleated at later simulation times leading to progressively weaker precipitation rates, greater cloud fraction, and stronger convective BL eddies.

  5. A Library of ATMO Forward Model Transmission Spectra for Hot Jupiter Exoplanets

    NASA Technical Reports Server (NTRS)

    Goyal, Jayesh M.; Mayne, Nathan; Sing, David K.; Drummond, Benjamin; Tremblin, Pascal; Amundsen, David S.; Evans, Thomas; Carter, Aarynn L.; Spake, Jessica; Baraffe, Isabelle

    2017-01-01

    We present a grid of forward model transmission spectra, adopting an isothermal temperature-pressure profile, alongside corresponding equilibrium chemical abundances for 117 observationally significant hot exoplanets (equilibrium temperatures of 547-2710 K). This model grid has been developed using a 1D radiative-convective-chemical equilibrium model termed ATMO, with up-to-date high-temperature opacities. We present an interpretation of observations of 10 exoplanets, including best-fitting parameters and χ2 maps. In agreement with previous works, we find a continuum from clear to hazy/cloudy atmospheres for this sample of hot Jupiters. The data for all the 10 planets are consistent with subsolar to solar C/O ratio, 0.005 to 10 times solar metallicity and water rather than methane-dominated infrared spectra. We then explore the range of simulated atmospheric spectra for different exoplanets, based on characteristics such as temperature, metallicity, C/O ratio, haziness and cloudiness. We find a transition value for the metallicity between 10 and 50 times solar, which leads to substantial changes in the transmission spectra. We also find a transition value of C/O ratio, from water to carbon species dominated infrared spectra, as found by previous works, revealing a temperature dependence of this transition point ranging from approximately 0.56 to approximately 1-1.3 for equilibrium temperatures from approximately 900 to approximately 2600 K. We highlight the potential of the spectral features of HCN and C2H2 to constrain the metallicities and C/O ratios of planets, using James Webb Space Telescope (JWST) observations. Finally, our entire grid (approximately 460 000 simulations) is publicly available and can be used directly with the JWST simulator PandExo for planning observations.

  6. A library of ATMO forward model transmission spectra for hot Jupiter exoplanets

    NASA Astrophysics Data System (ADS)

    Goyal, Jayesh M.; Mayne, Nathan; Sing, David K.; Drummond, Benjamin; Tremblin, Pascal; Amundsen, David S.; Evans, Thomas; Carter, Aarynn L.; Spake, Jessica; Baraffe, Isabelle; Nikolov, Nikolay; Manners, James; Chabrier, Gilles; Hebrard, Eric

    2018-03-01

    We present a grid of forward model transmission spectra, adopting an isothermal temperature-pressure profile, alongside corresponding equilibrium chemical abundances for 117 observationally significant hot exoplanets (equilibrium temperatures of 547-2710 K). This model grid has been developed using a 1D radiative-convective-chemical equilibrium model termed ATMO, with up-to-date high-temperature opacities. We present an interpretation of observations of 10 exoplanets, including best-fitting parameters and χ2 maps. In agreement with previous works, we find a continuum from clear to hazy/cloudy atmospheres for this sample of hot Jupiters. The data for all the 10 planets are consistent with subsolar to solar C/O ratio, 0.005 to 10 times solar metallicity and water rather than methane-dominated infrared spectra. We then explore the range of simulated atmospheric spectra for different exoplanets, based on characteristics such as temperature, metallicity, C/O ratio, haziness and cloudiness. We find a transition value for the metallicity between 10 and 50 times solar, which leads to substantial changes in the transmission spectra. We also find a transition value of C/O ratio, from water to carbon species dominated infrared spectra, as found by previous works, revealing a temperature dependence of this transition point ranging from ˜0.56 to ˜1-1.3 for equilibrium temperatures from ˜900 to ˜2600 K. We highlight the potential of the spectral features of HCN and C2H2 to constrain the metallicities and C/O ratios of planets, using James Webb Space Telescope (JWST) observations. Finally, our entire grid (˜460 000 simulations) is publicly available and can be used directly with the JWST simulator PandExo for planning observations.
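    The sensitivity of transmission spectra to temperature, gravity and mean molecular weight enters through the pressure scale height; a standard order-of-magnitude sketch, with hypothetical planet parameters not drawn from the grid (ATMO's actual radiative transfer is of course far more detailed):

```python
K_B = 1.380649e-23   # Boltzmann constant, J/K
AMU = 1.66054e-27    # atomic mass unit, kg
R_JUP = 7.1492e7     # Jupiter radius, m
R_SUN = 6.957e8      # Solar radius, m

def scale_height_km(t_eq, mu_amu, g_ms2):
    """Isothermal pressure scale height H = k_B T / (mu g), in km."""
    return K_B * t_eq / (mu_amu * AMU * g_ms2) / 1e3

def feature_amplitude_ppm(t_eq, mu_amu, g_ms2, r_p_m, r_star_m, n_h=1.0):
    """Approximate transmission-feature size: the change in transit depth
    from n_h scale heights of extra apparent radius, ~ 2 n_h R_p H / R_star^2."""
    h_m = scale_height_km(t_eq, mu_amu, g_ms2) * 1e3
    return 2.0 * n_h * r_p_m * h_m / r_star_m ** 2 * 1e6

# Hypothetical hot Jupiter: T_eq = 1500 K, H2/He atmosphere (mu ~ 2.3),
# g = 10 m/s^2, Jupiter radius around a Sun-like star
amp = feature_amplitude_ppm(1500.0, 2.3, 10.0, R_JUP, R_SUN)  # ~150 ppm
```

    Raising the metallicity raises the mean molecular weight and shrinks H, which is one reason the spectra change so substantially across the grid's 10-50x solar transition.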

  7. The need for enhanced initial moisture information in simulations of a complex summertime precipitation event

    NASA Technical Reports Server (NTRS)

    Waight, Kenneth T., III; Zack, John W.; Karyampudi, V. Mohan

    1989-01-01

    Initial simulations of the June 28, 1986 Cooperative Huntsville Meteorological Experiment case illustrate the need for mesoscale moisture information in a summertime situation in which deep convection is organized by weak large-scale forcing. A methodology is presented for enhancing the initial moisture field from a combination of IR satellite imagery, surface-based cloud observations, and manually digitized radar data. The Mesoscale Atmospheric Simulation Model is used to simulate the events of June 28-29. This procedure ensures that areas known to have precipitation at the time of initialization will be nearly saturated on the grid scale, which should decrease the time needed by the model to produce the observed convection associated with Bonnie (a relatively weak hurricane that had moved onshore two days before). This method also results in an initial distribution of model cloudiness (transmissivity) that is very similar to that of the IR satellite image.

  8. On the Analysis of the Climatology of Cloudiness of the Arabian Peninsula

    NASA Astrophysics Data System (ADS)

    Yousef, L. A.; Temimi, M.

    2015-12-01

    This study aims to determine the climatology of cloudiness over the Arabian Peninsula. The determined climatology will assist solar energy resource assessment in the region. The seasonality of cloudiness and its spatial variability will also help guide several cloud seeding operational experiments in the region. Cloud properties from the International Satellite Cloud Climatology Project (ISCCP) database covering the time period from 1983 through 2009 are analyzed. Time series of low, medium, high, and total cloud amounts are investigated, in addition to cloud optical depth and total column water vapor. Initial results show significant decreasing trends in the total and middle cloud amounts, both annually and seasonally, at a 95% confidence interval. The relationship between cloud amounts and climate oscillations known to affect the region is explored. Climate indices exhibiting significant correlations with the total cloud amounts include the Pacific Decadal Oscillation (PDO) index. The study also includes a focus on the United Arab Emirates (UAE), comparing the inferred cloudiness data to in situ rainfall measurements taken from rain gauges across the UAE. To assess the impact of cloudiness on solar power resources in the country, time series of cloud amounts and Direct Normal Irradiance (DNI), obtained from the UAE Solar Atlas, are compared.

  9. Global Aerosol Direct Radiative Effect From CALIOP and C3M

    NASA Technical Reports Server (NTRS)

    Winker, Dave; Kato, Seiji; Tackett, Jason

    2015-01-01

    Aerosols are responsible for the largest uncertainties in current estimates of climate forcing. These uncertainties are due in part to the limited abilities of passive sensors to retrieve aerosols in cloudy skies. We use a dataset which merges CALIOP observations together with other A-train observations to estimate aerosol radiative effects in cloudy skies as well as in cloud-free skies. The results can be used to quantify the reduction of aerosol radiative effects in cloudy skies relative to clear skies and to reduce current uncertainties in aerosol radiative effects.

  11. Different insulin concentrations in resuspended vs. unsuspended NPH insulin: Practical aspects of subcutaneous injection in patients with diabetes.

    PubMed

    Lucidi, P; Porcellati, F; Marinelli Andreoli, A; Candeloro, P; Cioli, P; Bolli, G B; Fanelli, C G

    2017-06-06

    This study measured the insulin concentration (Ins[C]) of NPH insulin in vials and cartridges from different companies after either resuspension (R+) or not (R-; in the clear/cloudy phases of unsuspended NPH). Measurements included Ins[C] in NPH(R+) and in the clear/cloudy phases of NPH(R-), the time needed to resuspend NPH, and the time for NPH(R+) to separate again into clear/cloudy parts. Relative to vials of NPH(R+) (assumed to be 100%), Ins[C] in the clear phase of NPH(R-) was <1%, but 230±41% and 234±54% in the cloudy phases of Novo Nordisk and Eli Lilly NPH, respectively. Likewise, in pen cartridges, Ins[C] in the clear phase of NPH(R-) was <1%, but 182±33%, 204±22% and 229±62% in the cloudy phases of Novo, Lilly and Sanofi NPH. The time needed to resuspend NPH (spent in tipping) in vials was brief with both Novo (5±1 s) and Lilly NPH (6±1 s), but longer with all pen cartridges (50±8 s, 40±6 s and 30±4 s for Novo, Lilly and Sanofi, respectively; P=0.022). The time required for 50% separation into cloudy and clear parts of NPH was longer with Novo (60±7 min) vs. Lilly (18±3 min) in vials (P=0.021), and was affected by temperature but not by the different diameters of the vials. With pen cartridges, separation into clear and cloudy parts was significantly faster than in vials (P<0.01). Ins[C] in NPH preparations varies depending on whether or not they are resuspended. Thus, subcutaneous injection of the same number of units of NPH in patients with diabetes may deliver different amounts of insulin depending on prior resuspension. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
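    The practical consequence can be made concrete with a dose calculation: at the cloudy-phase concentration measured for unsuspended vials (roughly 230% of the fully resuspended preparation), a 10-unit injection drawn from that phase would deliver about 23 units. A minimal sketch (function name illustrative):

```python
def delivered_units(dialed_units, relative_concentration_pct):
    """Actual insulin delivered when the injected phase has a concentration
    different from the fully resuspended (100%) preparation."""
    return dialed_units * relative_concentration_pct / 100.0

# Using the abstract's measured extremes for unsuspended NPH in vials:
from_clear = delivered_units(10, 1)     # clear phase (<1%): ~0.1 U
from_cloudy = delivered_units(10, 230)  # cloudy vial phase (~230%): ~23 U
```

    The same dialed dose can thus differ by more than two-hundred-fold between the two unsuspended phases, which is why resuspension before injection matters clinically.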

  12. The effects of aircraft on climate and pollution. Part II: 20-year impacts of exhaust from all commercial aircraft worldwide treated individually at the subgrid scale.

    PubMed

    Jacobson, M Z; Wilkerson, J T; Naiman, A D; Lele, S K

    2013-01-01

    This study examines the 20-year impacts of emissions from all commercial aircraft flights worldwide on climate, cloudiness, and atmospheric composition. Aircraft emissions from each individual flight worldwide were modeled to evolve from the subgrid to the grid scale with the global model described and evaluated in Part I of this study. Simulations with and without aircraft emissions were run for 20 years. Aircraft emissions were found to be responsible for ~6% of Arctic surface global warming to date, ~1.3% of total surface global warming, and ~4% of global upper-tropospheric warming. Arctic warming due to aircraft slightly decreased Arctic sea ice area. Longer simulations should result in more warming due to the further increase in CO2. Aircraft increased atmospheric stability below cruise altitude and decreased it above cruise altitude. The increase in stability decreased cumulus convection in favor of increased stratiform cloudiness. Aircraft increased total cloud fraction on average. Aircraft increased surface and upper-tropospheric ozone by ~0.4% and ~2.5%, respectively, and surface and upper-tropospheric peroxyacetyl nitrate (PAN) by ~0.1% and ~5%, respectively. Aircraft emissions increased tropospheric OH, decreasing column CO and CH4 by ~1.7% and ~0.9%, respectively. Aircraft emissions increased human mortality worldwide by ~620 (~240 to 4770) deaths per year, with half due to ozone and the rest to particulate matter less than 2.5 micrometers in diameter (PM2.5).

  13. A Modeling Approach to Quantify the Effects of Stomatal Behavior and Mesophyll Conductance on Leaf Water Use Efficiency

    PubMed Central

    Moualeu-Ngangue, Dany P.; Chen, Tsu-Wei; Stützel, Hartmut

    2016-01-01

    Water use efficiency (WUE) is considered as a determinant of yield under stress and a component of crop drought resistance. Stomatal behavior regulates both transpiration rate and net assimilation and has been suggested to be crucial for improving crop WUE. In this work, a dynamic model was used to examine the impact of dynamic properties of stomata on WUE. The model includes sub-models of stomatal conductance dynamics, solute accumulation in the mesophyll, mesophyll water content, and water flow to the mesophyll. Using the instantaneous value of stomatal conductance, photosynthesis, and transpiration rate were simulated using a biochemical model and Penman-Monteith equation, respectively. The model was parameterized for a cucumber leaf and model outputs were evaluated using climatic data. Our simulations revealed that WUE was higher on a cloudy than a sunny day. Fast stomatal reaction to light decreased WUE during the period of increasing light (e.g., in the morning) by up to 10.2% and increased WUE during the period of decreasing light (afternoon) by up to 6.25%. Sensitivity of daily WUE to stomatal parameters and mesophyll conductance to CO2 was tested for sunny and cloudy days. Increasing mesophyll conductance to CO2 was more likely to increase WUE for all climatic conditions (up to 5.5% on the sunny day) than modifications of stomatal reaction speed to light and maximum stomatal conductance. PMID:27379150
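    Daily WUE of the kind reported here is a ratio of integrated fluxes, not an average of instantaneous ratios; a minimal sketch with illustrative diurnal samples (not values from the cucumber-leaf simulations):

```python
def daily_wue(assimilation, transpiration):
    """Daily water use efficiency: integrated net CO2 assimilation divided
    by integrated transpiration.  This differs from averaging the
    instantaneous ratio A/E, which over-weights low-flux periods."""
    return sum(assimilation) / sum(transpiration)

# Illustrative samples over a day (umol CO2 m-2 s-1 and mmol H2O m-2 s-1)
a_series = [2.0, 8.0, 12.0, 9.0, 3.0]
e_series = [0.5, 1.5, 2.5, 2.0, 0.8]
wue_day = daily_wue(a_series, e_series)
```

    Because both fluxes respond to light with different lags, the timing of stomatal opening relative to the light signal shifts this ratio, which is how stomatal reaction speed can cut morning WUE while raising afternoon WUE.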

  14. Investigation of scene identification algorithms for radiation budget measurements

    NASA Technical Reports Server (NTRS)

    Diekmann, F. J.

    1986-01-01

    The computation of the Earth radiation budget from satellite measurements requires identification of the scene in order to select spectral factors and bidirectional models. A scene identification procedure is developed for AVHRR SW and LW data by using two radiative transfer models. The AVHRR GAC pixels are then matched to corresponding ERBE pixels, and the results are sorted into scene identification probability matrices. These scene intercomparisons show that there is generally a tendency for the ERBE results to underestimate cloudiness over ocean at high cloud amounts relative to the AVHRR results, e.g., mostly cloudy instead of overcast, or partly cloudy instead of mostly cloudy. Reasons for this are explained. Preliminary estimates of the errors in exitances due to scene misidentification demonstrate a high dependency on the probability matrices. While the longwave error can generally be neglected, the shortwave deviations reach maximum values of more than 12% of the respective exitances.
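    A scene identification probability matrix of the kind described can be built by cross-tabulating the paired classifications and normalizing each row; a minimal sketch with toy data (scene classes and counts are illustrative, not the study's statistics):

```python
import numpy as np

SCENES = ["clear", "partly cloudy", "mostly cloudy", "overcast"]

def scene_probability_matrix(ids_a, ids_b, n_scenes=4):
    """Row-normalized matrix P[i, j]: probability that a pixel identified
    as scene i by one instrument is identified as scene j by the other."""
    counts = np.zeros((n_scenes, n_scenes))
    for a, b in zip(ids_a, ids_b):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

# Toy paired classifications in which the second instrument tends to
# underestimate cloudiness at high cloud amounts
avhrr = [3, 3, 3, 2, 2, 1, 0, 0]
erbe = [3, 2, 2, 1, 2, 1, 0, 0]
p = scene_probability_matrix(avhrr, erbe)   # e.g. p[3, 2] = 2/3
```

    Off-diagonal mass in the high-cloud rows is exactly the "mostly cloudy instead of overcast" tendency, and propagating these probabilities through the spectral factors gives the exitance error estimates.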

  15. Imaging spectropolarimetry of cloudy skies

    NASA Astrophysics Data System (ADS)

    Pust, Nathan; Shaw, Joseph A.

    2006-05-01

    The polarization state of atmospheric radiance varies with cloudiness and cloud type. We have developed a dual-field-of-view imaging spectro-polarimeter for measuring atmospheric polarization in five spectral bands from 450 to 700 nm. This instrument improves the acquisition time of past full-sky digital camera designs to 400 ms using liquid crystal variable retarders (LCVRs). The system can be used to measure polarization with either fisheye or telephoto optics, allowing studies of all-sky and target polarization. We present and describe measurements of sky polarization under clear and variably cloudy sky conditions. In clear skies, we observe a slight upward trend of the degree of polarization with wavelength, in agreement with previous observations. The presence of clouds generally reduces the degree of polarization of both the cloudy sky and the surrounding clear sky. The polarization measured from a cloud often reflects only the Rayleigh scattering between the instrument and the cloud, but some of our recent data show partially polarized cloud scattering.
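    The quantities such an instrument reports derive from the Stokes parameters; a minimal sketch of the standard definitions (the LCVR demodulation that produces I, Q, U from intensity frames is not shown, and the numbers are illustrative):

```python
import math

def degree_of_linear_polarization(i, q, u):
    """DoLP = sqrt(Q^2 + U^2) / I from the Stokes parameters."""
    return math.sqrt(q * q + u * u) / i

def angle_of_polarization_deg(q, u):
    """AoP = 0.5 * atan2(U, Q), in degrees."""
    return 0.5 * math.degrees(math.atan2(u, q))

# Illustrative Stokes values for one pixel
dolp = degree_of_linear_polarization(1.0, 0.3, 0.4)  # 0.5
aop = angle_of_polarization_deg(0.3, 0.4)            # ~26.6 degrees
```

    Mapping DoLP per spectral band across the sky dome is what reveals both the clear-sky wavelength trend and the depolarizing effect of clouds.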

  16. Thermal degradation of cloudy apple juice phenolic constituents.

    PubMed

    De Paepe, D; Valkenborg, D; Coudijzer, K; Noten, B; Servaes, K; De Loose, M; Voorspoels, S; Diels, L; Van Droogenbroeck, B

    2014-11-01

    Although conventional thermal processing is still the most commonly used preservation technique in cloudy apple juice production, detailed knowledge of phenolic compound degradation during thermal treatment is still limited. To evaluate the extent of thermal degradation as a function of time and temperature, apple juice samples were isothermally treated for 7,200 s over a temperature range of 80-145 °C. An untargeted metabolomics approach based on liquid chromatography-high resolution mass spectrometry was developed and applied with the aim of identifying the most heat-labile phenolic constituents in cloudy apple juice. Owing to the high-resolution mass spectrometer, the high degree of in-source fragmentation, the quality of the deconvolution and the custom-made database employed, a high degree of structural elucidation was achieved for the thermolabile phenolic constituents. Procyanidin subclass representatives were found to be the most heat-labile phenolic compounds of cloudy apple juice. Copyright © 2014. Published by Elsevier Ltd.

  17. Continuous estimation of evapotranspiration and gross primary productivity from an Unmanned Aerial System

    NASA Astrophysics Data System (ADS)

    Wang, S.; Bandini, F.; Jakobsen, J.; J Zarco-Tejada, P.; Liu, X.; Haugård Olesen, D.; Ibrom, A.; Bauer-Gottwein, P.; Garcia, M.

    2017-12-01

    Model prediction of evapotranspiration (ET) and gross primary productivity (GPP) using optical and thermal satellite imagery is biased towards clear-sky conditions. Unmanned Aerial Systems (UAS) can collect optical and thermal signals at very high spatial resolution (< 1 m) under both sunny and cloudy weather conditions. However, methods to obtain model outputs between image acquisitions are still needed. This study uses UAS-based optical and thermal observations to continuously estimate daily ET and GPP in a Danish willow forest for the entire 2016 growing season. A hexacopter equipped with multispectral and thermal infrared cameras and a real-time kinematic Global Navigation Satellite System receiver was used. The Normalized Difference Vegetation Index (NDVI) and the Temperature Vegetation Dryness Index (TVDI) were used as proxies for leaf area index and soil moisture conditions, respectively. To obtain continuous daily records between UAS acquisitions, UAS surface temperature was assimilated with an ensemble Kalman filter into a prognostic land surface model (Noilhan and Planton, 1989), which relies on the force-restore method, to simulate the continuous land surface temperature. NDVI was interpolated to daily time steps by the cubic spline method. Using these continuous datasets, a joint ET and GPP model, which combines the Priestley-Taylor Jet Propulsion Laboratory ET model (Fisher et al., 2008; Garcia et al., 2013) and the Light Use Efficiency GPP model (Potter et al., 1993), was applied. The simulated ET and GPP were compared with eddy covariance observations within the flux footprint. The simulated daily ET has a root mean square error (RMSE) of 14.41 W•m-2 and a correlation coefficient of 0.83; the simulated daily GPP has an RMSE of 1.56 g•C•m-2•d-1 and a correlation coefficient of 0.87. This study demonstrates the potential of UAS-based multispectral and thermal mapping to continuously estimate ET and GPP under both sunny and cloudy weather conditions.
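    The daily NDVI record described above can be sketched with an off-the-shelf cubic spline; the acquisition dates and NDVI values below are invented for illustration, not the study's data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical UAS acquisition days (day of year) and observed NDVI
doy_obs = np.array([120, 141, 160, 182, 203, 225])
ndvi_obs = np.array([0.35, 0.52, 0.68, 0.74, 0.70, 0.58])

# Cubic spline through the sparse acquisitions, evaluated daily
spline = CubicSpline(doy_obs, ndvi_obs)
doy_daily = np.arange(doy_obs[0], doy_obs[-1] + 1)
ndvi_daily = spline(doy_daily)
```

    The spline passes exactly through the observed values on acquisition days and fills the gaps with a smooth curve for the daily model runs.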

  18. Dynamics, thermodynamics, radiation, and cloudiness associated with cumulus-topped marine boundary layers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghate, Virendra P.; Miller, Mark

    The overall goal of this project was to improve the understanding of marine boundary layer clouds by using data collected at the Atmospheric Radiation Measurement (ARM) sites, so that they can be better represented in global climate models (GCMs). Marine boundary layer clouds are observed regularly over the tropical and subtropical oceans. They are an important element of the Earth’s climate system because they have a substantial impact on the radiation budget as well as on boundary layer moisture and energy transports. These clouds also affect large-scale precipitation features like the Inter Tropical Convergence Zone (ITCZ). Because these clouds occur at temporal and spatial scales much smaller than those relevant to GCMs, their effects and the associated processes need to be parameterized in GCM simulations aimed at predicting future climate and energy needs. Specifically, this project’s objectives were to (1) characterize the surface turbulent fluxes, boundary layer thermodynamics, radiation field, and cloudiness associated with cumulus-topped marine boundary layers; (2) explore the similarities and differences in cloudiness and boundary layer conditions observed in the tropical and trade-wind regions; and (3) understand these similarities and differences by using a simple bulk boundary layer model. In addition to working toward these three objectives, we also worked on understanding the role played by different forcing mechanisms in maintaining turbulence within cloud-topped boundary layers. We focused our research on stratocumulus clouds during the first phase of the project, and on cumulus clouds during the rest of the project. Below is a brief description of manuscripts published in peer-reviewed journals that describe results from our analyses.

  19. An examination of the effects of explicit cloud water in the UCLA GCM

    NASA Technical Reports Server (NTRS)

    Ose, Tomoaki

    1993-01-01

    The effect of explicit cloud water on the climate simulated by the University of California, Los Angeles GCM is investigated by adding the mixing ratios of cloud ice and cloud liquid water to the prognostic variables of the model. The detrained cloud ice and cloud liquid water are obtained from the microphysical calculation in the Arakawa-Schubert (1974) cumulus scheme. The results are compared with observations of cloudiness, planetary albedo, OLR, and the dependence of cloud water content on temperature.

  20. Cloud cover classification through simultaneous ground-based measurements of solar and infrared radiation

    NASA Astrophysics Data System (ADS)

    Orsini, Antonio; Tomasi, Claudio; Calzolari, Francescopiero; Nardino, Marianna; Cacciari, Alessandra; Georgiadis, Teodoro

    2002-04-01

    Simultaneous measurements of downwelling short-wave solar irradiance and incoming total radiation flux were performed at the Reeves Névé glacier station (1200 m MSL) in Antarctica on 41 days from late November 1994 to early January 1995, employing the upward sensors of an albedometer and a pyrradiometer. The downwelling short-wave radiation measurements were analysed following the Duchon and O'Malley [J. Appl. Meteorol. 38 (1999) 132] procedure for classifying clouds, using the 50-min running mean values of standard deviation and the ratio of scaled observed to scaled clear-sky irradiance. Comparing these measurements with the Duchon and O'Malley rectangular boundaries and the local human observations of clouds collected on 17 days of the campaign, we found that the Duchon and O'Malley classification method obtained a success rate of 93% for cirrus and only 25% for cumulus. New decision criteria were established for some polar cloud classes providing success rates of 94% for cirrus, 67% for cirrostratus and altostratus, and 33% for cumulus and altocumulus. The ratios of the downwelling short-wave irradiance measured for cloudy-sky conditions to that calculated for clear-sky conditions were analysed in terms of the Kasten and Czeplak [Sol. Energy 24 (1980) 177] formula together with simultaneous human observations of cloudiness, to determine the empirical relationship curves providing reliable estimates of cloudiness for each of the three above-mentioned cloud classes. Using these cloudiness estimates, the downwelling long-wave radiation measurements (obtained as differences between the downward fluxes of total and short-wave radiation) were examined to evaluate the downwelling long-wave radiation flux normalised to totally overcast sky conditions.
Calculations of the long-wave radiation flux were performed with the MODTRAN 3.7 code [Kneizys, F.X., Abreu, L.W., Anderson, G.P., Chetwynd, J.H., Shettle, E.P., Berk, A., Bernstein, L.S., Robertson, D.C., Acharya, P., Rothman, L.S., Selby, J.E.A., Gallery, W.O., Clough, S.A., 1996. In: Abreu, L.W., Anderson, G.P. (Eds.), The MODTRAN 2/3 Report and LOWTRAN 7 MODEL. Contract F19628-91-C.0132, Phillips Laboratory, Geophysics Directorate, PL/GPOS, Hanscom AFB, MA, 261 pp.] for both clear-sky and cloudy-sky conditions, considering various cloud types characterised by different cloud base altitudes and vertical thicknesses. From these evaluations, best-fit curves of the downwelling long-wave radiation flux were defined as a function of the cloud base height for the three polar cloud classes. Using these relationship curves, average estimates of the cloud base height were obtained from the three corresponding sub-sets of long-wave radiation measurements. The relative frequency histograms of the cloud base height defined by examining these three sub-sets were found to present median values of 4.7, 1.7 and 3.6 km for cirrus, cirrostratus/altostratus and cumulus/altocumulus, respectively, while median values of 6.5, 1.8 and 2.9 km were correspondingly determined by analysing only the measurements taken together with simultaneous cloud observations.
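    A classification of the Duchon-O'Malley type can be sketched as a rectangular-boundary test on the running mean and running standard deviation of the scaled-irradiance ratio. The window length, thresholds and class labels below are illustrative placeholders, not the boundaries derived in the paper:

```python
import numpy as np

def classify_sky(ratio, window=50, sigma_thresh=0.01, clear_lo=0.9, clear_hi=1.1):
    """Sketch of a Duchon-O'Malley-style sky classifier: a sample is
    'clear' when the running std of the scaled-irradiance ratio is small
    and the running mean sits near 1. Thresholds are illustrative only."""
    ratio = np.asarray(ratio, dtype=float)
    half = window // 2
    labels = []
    for i in range(len(ratio)):
        seg = ratio[max(0, i - half): i + half + 1]
        mean, std = seg.mean(), seg.std()
        if std < sigma_thresh and clear_lo <= mean <= clear_hi:
            labels.append("clear")
        elif std < sigma_thresh:
            labels.append("overcast-like")   # steady but attenuated signal
        else:
            labels.append("broken/cumulus")  # strongly fluctuating signal
    return labels
```

    The actual method draws class-specific rectangles in the (mean ratio, standard deviation) plane; extending this sketch to several cloud classes amounts to adding more such rectangles.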

  1. (Talk) Investigating The Star Formation Quenching Across Cosmic Time - A Methodology To Select Galaxies Just After The Quenching Of Star Formation

    NASA Astrophysics Data System (ADS)

    Citro, Annalisa; Pozzetti, Lucia; Quai, Salvatore; Moresco, Michele; Vallini, Livia; Cimatti, Andrea

    2017-06-01

    We propose a method for identifying galaxies in the short evolutionary phase in which they quench their star formation (SF). We rely on high- to low-ionization emission line ratios, which rapidly disappear after the SF halt owing to the softening of the UV ionizing radiation. In particular, we focus on [O III] 5007/Halpha and [Ne III] 3869/[O II] 3727, simulating their time evolution by means of the CLOUDY photoionization code. We find that these two emission line ratios are able to trace the quenching on very short time-scales (i.e. 10-80 Myr), depending on whether a sharp or a smoother SF quenching is assumed. We adopt the [N II] 6584/[O II] 3727 ratio as a metallicity diagnostic to mitigate the metallicity degeneracy that affects our method. Using a Sloan Digital Sky Survey galaxy sample, we identify 11 examples of extreme quenching candidates within the [O III] 5007/Halpha vs. [N II] 6584/[O II] 3727 plane, characterized by faint [Ne III] 3869, blue dust-corrected spectra and blue (u-r) colours, as expected if the quenching occurred in the recent past. Our results also suggest that the observed fractions of quenching candidates can be used to constrain the quenching mechanism at work and its time-scales.

  2. The cloud radiation impact from optics simulation and airborne observation

    NASA Astrophysics Data System (ADS)

    Melnikova, Irina; Kuznetsov, Anatoly; Gatebe, Charles

    2017-02-01

    The analytical approach of inverse asymptotic formulas of radiative transfer theory is used to solve inverse problems of cloud optics. The method has the advantage of not imposing strict constraints or a priori assumptions on the desired solution. Observations were made in extended stratus cloudiness above a homogeneous ocean surface. Data from NASA's Cloud Absorption Radiometer (CAR), acquired during two airborne experiments (SAFARI-2000 and ARCTAS-2008), were analyzed. The analytical method of inverse asymptotic formulas was used to retrieve cloud optical parameters (optical thickness, single scattering albedo and asymmetry parameter of the phase function) and ground albedo in all 8 spectral channels independently. The method is free from a priori restrictions, is not tied to assumed parameter values, and has been applied to data sets of different origins and observation geometries. Results obtained from different airborne, satellite and ground-based radiative experiments proved consistent and showed common features in the values of the cloud parameters and their spectral dependence (Vasiluev, Melnikova, 2004; Gatebe et al., 2014). The optical parameters retrieved here are used to calculate radiative divergence, reflected and transmitted irradiance, and heating rates in the cloudy atmosphere, in agreement with previous observational data.

  3. A Model and Satellite-Based Analysis of the Tropospheric Ozone Distribution in Clear Versus Convectively Cloudy Conditions

    NASA Technical Reports Server (NTRS)

    Strode, Sarah A.; Douglass, Anne R.; Ziemke, Jerald R.; Manyin, Michael; Nielsen, J. Eric; Oman, Luke D.

    2017-01-01

    Satellite observations of in-cloud ozone concentrations from the Ozone Monitoring Instrument and Microwave Limb Sounder instruments show substantial differences from background ozone concentrations. We develop a method for comparing a free-running chemistry-climate model (CCM) to in-cloud and background ozone observations using a simple criterion based on cloud fraction to separate cloudy and clear-sky days. We demonstrate that the CCM simulates key features of the in-cloud versus background ozone differences and of the geographic distribution of in-cloud ozone. Since the agreement is not dependent on matching the meteorological conditions of a specific day, this is a promising method for diagnosing how accurately CCMs represent the relationships between ozone and clouds, including the lower ozone concentrations shown by in-cloud satellite observations. Since clouds are associated with convection as well as changes in chemistry, we diagnose the tendency of tropical ozone at 400 hPa due to chemistry, convection and turbulence, and large-scale dynamics. While convection acts to reduce ozone concentrations at 400 hPa throughout much of the tropics, it has the opposite effect over highly polluted regions of South and East Asia.

  4. Bioactive compounds and quality parameters of natural cloudy lemon juices.

    PubMed

    Uçan, Filiz; Ağçam, Erdal; Akyildiz, Asiye

    2016-03-01

    In this study, bioactive compounds (phenolics and carotenoids) and quality parameters (color, browning index and hydroxymethylfurfural (HMF)) of natural cloudy lemon juice were investigated after pasteurization (90 °C/15 s) and during frozen storage of concentrated lemon juice (-25 °C/180 days). Fifteen phenolic compounds were determined in the lemon juice; the most abundant were hesperidin, eriocitrin, chlorogenic acid and neoeriocitrin. In general, phenolic compound concentrations of the lemon juice samples increased after the pasteurization treatment. Four carotenoid compounds (β-carotene, β-cryptoxanthin, lutein and zeaxanthin) were detected in natural cloudy lemon juice; lutein and β-cryptoxanthin were the most abundant. Color values of the lemon juices were not affected by processing or storage. HMF and the browning index of the lemon juices increased with concentration and storage. According to the results, storage at -25 °C was considered sufficient to keep natural cloudy lemon juice within acceptable quality limits.

  5. Minimizing quality changes of cloudy apple juice: The use of kiwifruit puree and high pressure homogenization.

    PubMed

    Yi, Junjie; Kebede, Biniam; Kristiani, Kristiani; Grauwet, Tara; Van Loey, Ann; Hendrickx, Marc

    2018-05-30

    Cloud loss, enzymatic browning, and flavor changes are important quality defects of cloudy fruit juices determining consumer acceptability. The development of clean label options to overcome such quality problems is currently of high interest. Therefore, this study investigated the effect of kiwifruit puree (clean label ingredient) and high pressure homogenization on quality changes of cloudy apple juice using a multivariate approach. The use of kiwifruit puree addition and high pressure homogenization resulted in a juice with improved uniformity and cloud stability by reducing particle size and increasing viscosity and yield stress (p < 0.01). Furthermore, kiwifruit puree addition reduced enzymatic browning (ΔE ∗  < 3), due to the increased ascorbic acid and contributed to a more saturated and bright yellow color, a better taste balance, and a more fruity aroma of juice. This work demonstrates that clean label options to control quality degradation of cloudy fruit juice might offer new opportunities. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Spectral Invariant Behavior of Zenith Radiance Around Cloud Edges Observed by ARM SWS

    NASA Technical Reports Server (NTRS)

    Marshak, A.; Knyazikhin, Y.; Chiu, J. C.; Wiscombe, W. J.

    2009-01-01

    The ARM Shortwave Spectrometer (SWS) measures zenith radiance at 418 wavelengths between 350 and 2170 nm. Because of its 1-sec sampling resolution, the SWS provides a unique capability to study the transition zone between cloudy and clear sky areas. A spectral invariant behavior is found between ratios of zenith radiance spectra during the transition from cloudy to cloud-free. This behavior suggests that the spectral signature of the transition zone is a linear mixture between the two extremes (definitely cloudy and definitely clear). The weighting function of the linear mixture is a wavelength-independent characteristic of the transition zone. It is shown that the transition zone spectrum is fully determined by this function and zenith radiance spectra of clear and cloudy regions. An important result of these discoveries is that high temporal resolution radiance measurements in the clear-to-cloud transition zone can be well approximated by lower temporal resolution measurements plus linear interpolation.
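    The linear-mixture result can be illustrated numerically: if a transition-zone spectrum is a mixture of clear and cloudy endmember spectra with a wavelength-independent weight, that weight is recoverable by least squares. The spectra below are synthetic, not SWS data:

```python
import numpy as np

# Synthetic endmember zenith-radiance spectra (arbitrary units)
clear = np.array([0.10, 0.14, 0.18, 0.12, 0.08])
cloudy = np.array([0.60, 0.62, 0.65, 0.58, 0.55])

# A transition-zone spectrum built as a linear mixture
w_true = 0.3
observed = w_true * cloudy + (1.0 - w_true) * clear

# Recover the wavelength-independent weight by least squares:
# observed - clear = w * (cloudy - clear)
d = cloudy - clear
w = np.dot(observed - clear, d) / np.dot(d, d)
print(round(w, 3))  # 0.3
```

    With real measurements the fit residual would quantify how well the wavelength-independence assumption holds at each point in the transition zone.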

  7. Recent changes in solar irradiance and infrared irradiance related with air temperature and cloudiness at the King Sejong Station, Antarctica

    NASA Astrophysics Data System (ADS)

    Jung, Y.; Kim, J.; Cho, H.; Lee, B.

    2006-12-01

    The polar regions play a critical role in the surface energy balance and the climate system of the Earth. An important question is what role the Antarctic atmosphere plays as a heat sink in the global climate. This study therefore presents the trends of global solar irradiance, infrared irradiance, air temperature and cloudiness measured at the King Sejong station, Antarctica, during 1996-2004, and examines their relationships and the variability of the surface energy balance. The annual averages of solar irradiance and cloudiness are 81.8 Wm-2 and 6.8 oktas, with trends of -0.24 Wm-2 yr-1 (-0.30 % yr-1) and +0.02 oktas yr-1 (+0.30 % yr-1), respectively. The change in solar irradiance is directly related to the change in cloudiness, and the decrease in solar irradiance represents radiative cooling at the surface. Monthly mean infrared irradiance, air temperature and specific humidity show decreases of -2.11 Wm-2 yr-1 (-0.75 % yr-1), -0.07 °C yr-1 (-5.15 % yr-1) and -0.044 g kg-1 yr-1 (-1.42 % yr-1), respectively. The annual average infrared irradiance is 279.9 Wm-2 and is correlated with air temperature, specific humidity and cloudiness. A multiple regression model for estimating the infrared irradiance from these components has been developed. The contributions of the components to the infrared irradiance changes are 52%, 19% and 10% for air temperature, specific humidity and cloudiness, respectively; among them, air temperature has the greatest influence. Despite the increase in cloudiness, the decrease in infrared irradiance is due to the decreases in air temperature and specific humidity, which have a cooling effect. The net radiation of the surface energy balance therefore shows radiative cooling of -11 to -24 Wm-2 during winter and radiative warming of +32 to +83 Wm-2 during summer, and the surface deficit or surplus is mostly balanced by turbulent fluxes of sensible and latent heat.
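    A multiple regression of infrared irradiance on air temperature, specific humidity and cloudiness, as described above, can be sketched with ordinary least squares. The data and coefficients below are synthetic, not the station's actual fit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical predictors: air temperature (°C), specific humidity (g/kg),
# cloudiness (oktas)
T = rng.uniform(-25, 2, n)
q = rng.uniform(0.5, 4.0, n)
N = rng.uniform(0, 8, n)
# Synthetic infrared irradiance (W m-2) with invented coefficients + noise
L = 280.0 + 2.0 * T + 6.0 * q + 3.0 * N + rng.normal(0, 2, n)

# Ordinary least squares: L = b0 + b1*T + b2*q + b3*N
X = np.column_stack([np.ones(n), T, q, N])
beta, *_ = np.linalg.lstsq(X, L, rcond=None)
```

    The relative contribution of each predictor (the 52%/19%/10% split reported above) would follow from such a fit by comparing each term's share of the explained variance.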

  8. Anomalous Decimeter Radio Noise from the Region of the Atmospheric Front: I. Characteristics of the Detected Radio Noise and Meteorological Parameters of the Frontal Cloudiness

    NASA Astrophysics Data System (ADS)

    Klimenko, V. V.; Mareev, E. A.

    2018-03-01

    An extraordinary experimental fact is presented and analyzed, namely, a rather intense broadband radio noise detected during the passage of an atmospheric front through the field of view of UHF antennas. Local atmospheric properties and possible sources of the extraordinary noise, including the thermal noise from cloudiness and extra-atmospheric sources, are considered. A conclusion is drawn about the presence of an additional nonthermal source of radio noise in the frontal cloudiness. According to the proposed hypothesis, this source consists of multiple electric microdischarges on hydrometeors in the convective cloud.

  9. Studies in the use of cloud type statistics in mission simulation

    NASA Technical Reports Server (NTRS)

    Fowler, M. G.; Willand, J. H.; Chang, D. T.; Cogan, J. L.

    1974-01-01

    A study to further improve NASA's global cloud statistics for mission simulation is reported. Regional homogeneity in cloud types was examined; most of the original region boundaries defined for cloud cover amount in previous studies were supported by the statistics on cloud types and the number of cloud layers. Conditionality in cloud statistics was also examined, with special emphasis on temporal and spatial dependencies and cloud type interdependence. Temporal conditionality was found up to 12 hours, and spatial conditionality up to 200 miles; the diurnal cycle in convective cloudiness was clearly evident. As expected, the joint occurrence of different cloud types reflected the dynamic processes that form the clouds. Other phases of the study improved the cloud type statistics for several regions and proposed a mission simulation scheme combining the 4-dimensional atmospheric model, sponsored by MSFC, with the global cloud model.

  10. Ensemble formulation of surface fluxes and improvement in evapotranspiration and cloud parameterizations in a GCM. [General Circulation Model

    NASA Technical Reports Server (NTRS)

    Sud, Y. C.; Smith, W. E.

    1984-01-01

    The influence of some modifications to the parameterizations of the current general circulation model (GCM) is investigated. The aim of the modifications was to eliminate strong occasional bursts of oscillations in planetary boundary layer (PBL) fluxes. Smoothly varying bulk aerodynamic friction and heat transport coefficients were found by ensemble averaging of the PBL fluxes in the current GCM. A comparison of simulations from the modified and unmodified models showed that the surface fluxes and cloudiness in the modified model were much more accurate, and the planetary albedo was also realistic. Weaknesses persisted in the model's positioning of the Inter-Tropical Convergence Zone (ITCZ) and in the temperature estimates for polar regions. A second simulation following reparameterization of the cloud data showed improved results, and these are described in detail.

  11. Effects of cloudiness on global and diffuse UV irradiance in a high-mountain area

    NASA Astrophysics Data System (ADS)

    Blumthaler, M.; Ambach, W.; Salzgeber, M.

    1994-03-01

    At the high-mountain station Jungfraujoch (3576 m a.s.l., Switzerland), measurements of the radiation fluxes were made during 16 periods of six to eight weeks by means of a Robertson-Berger sunburn meter (UVB data), an Eppley UVA radiometer and an Eppley pyranometer. Cloudiness, opacity and altitude of clouds were recorded at 30-minute intervals. A second set of instruments with shadow bands was employed for separate measurement of the diffuse radiation fluxes. The global and diffuse UVA and UVB radiation fluxes change less with cloudiness than the corresponding total radiation fluxes. When the sun is covered by clouds, the global UVA and UVB radiation fluxes are likewise affected less than the global total radiation flux. The roughly equal influence of cloudiness on the UVA and UVB radiation fluxes suggests that the reduction is governed more by scattering than by ozone. Also, the share of diffuse irradiance in global irradiance is considerably higher for UVA and UVB irradiance than for total irradiance: at 50° solar elevation and 0/10 cloudiness, the share is 39% for UVB irradiance, 34% for UVA irradiance and 11% for total irradiance. The increased aerosol turbidity after the eruptions of El Chichón and Pinatubo caused a significant increase in diffuse total irradiance but did not produce any significant changes in diffuse UVA and UVB irradiances.

  12. Non-conservative evolution in Algols: where is the matter?

    NASA Astrophysics Data System (ADS)

    Deschamps, R.; Braun, K.; Jorissen, A.; Siess, L.; Baes, M.; Camps, P.

    2015-05-01

    Context. There is indirect evidence of non-conservative evolutions in Algols. However, the systemic mass-loss rate is poorly constrained by observations and generally set as a free parameter in binary-star evolution simulations. Moreover, systemic mass loss may lead to observational signatures that still need to be found. Aims: Within the "hotspot" ejection mechanism, some of the material that is initially transferred from the companion star via an accretion stream is expelled from the system due to the radiative energy released on the gainer's surface by the impacting material. The objective of this paper is to retrieve observable quantities from this process and to compare them with observations. Methods: We investigate the impact of the outflowing gas and the possible presence of dust grains on the spectral energy distribution (SED). We used the 1D plasma code Cloudy and compared the results with the 3D Monte-Carlo radiative transfer code Skirt for dusty simulations. The circumbinary mass-distribution and binary parameters were computed with state-of-the-art binary calculations done with the Binstar evolution code. Results: The outflowing material reduces the continuum flux level of the stellar SED in the optical and UV. Because of the time-dependence of this effect, it may help to distinguish between different ejection mechanisms. If present, dust leads to observable infrared excesses, even with low dust-to-gas ratios, and traces the cold material at large distances from the star. By searching for this dust emission in the WISE catalogue, we found a small number of Algols showing infrared excesses, among which the two rather surprising objects SX Aur and CZ Vel. We find that some binary B[e] stars show the same strong Balmer continuum as we predict with our models. However, direct evidence of systemic mass loss is probably not observable in genuine Algols, since these systems no longer eject mass through the hotspot mechanism. 
Furthermore, owing to its high velocity, the outflowing material dissipates in a few hundred years. If hot enough, the hotspot may produce highly ionised species, such as Si IV, and observable characteristics that are typical of W Ser systems. Conclusions: If present, systemic mass loss leads to clear observational imprints. These signatures are not to be found in genuine Algols but in the closely related β Lyraes, W Serpentis stars, double periodic variables, symbiotic Algols, and binary B[e] stars. We emphasise the need for further observations of such objects where systemic mass loss is most likely to occur. Appendices are available in electronic form at http://www.aanda.org

  13. Multi-objective optimization for evaluation of simulation fidelity for precipitation, cloudiness and insolation in regional climate models

    NASA Astrophysics Data System (ADS)

    Lee, H.

    2016-12-01

    Precipitation is one of the most important climate variables that are taken into account in studying regional climate. Nevertheless, how precipitation will respond to a changing climate and even its mean state in the current climate are not well represented in regional climate models (RCMs). Hence, comprehensive and mathematically rigorous methodologies to evaluate precipitation and related variables in multiple RCMs are required. The main objective of the current study is to evaluate the joint variability of climate variables related to model performance in simulating precipitation and condense multiple evaluation metrics into a single summary score. We use multi-objective optimization, a mathematical process that provides a set of optimal tradeoff solutions based on a range of evaluation metrics, to characterize the joint representation of precipitation, cloudiness and insolation in RCMs participating in the North American Regional Climate Change Assessment Program (NARCCAP) and Coordinated Regional Climate Downscaling Experiment-North America (CORDEX-NA). We also leverage ground observations, NASA satellite data and the Regional Climate Model Evaluation System (RCMES). Overall, the quantitative comparison of joint probability density functions between the three variables indicates that performance of each model differs markedly between sub-regions and also shows strong seasonal dependence. Because of the large variability across the models, it is important to evaluate models systematically and make future projections using only models showing relatively good performance. Our results indicate that the optimized multi-model ensemble always shows better performance than the arithmetic ensemble mean and may guide reliable future projections.
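    The set of optimal tradeoff solutions mentioned above is the Pareto front of the per-model error metrics. A minimal dominance filter (with hypothetical scores, all metrics lower-is-better) looks like:

```python
def pareto_front(scores):
    """Return indices of non-dominated score tuples (lower is better in
    every metric). A point is dominated when another point is at least
    as good everywhere and strictly better somewhere."""
    front = []
    for i, s in enumerate(scores):
        dominated = any(
            all(o[k] <= s[k] for k in range(len(s))) and
            any(o[k] < s[k] for k in range(len(s)))
            for j, o in enumerate(scores) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Hypothetical (precipitation, cloudiness, insolation) errors per RCM
scores = [(0.2, 0.5, 0.3), (0.4, 0.4, 0.4), (0.1, 0.9, 0.2), (0.5, 0.6, 0.5)]
print(pareto_front(scores))  # [0, 1, 2]
```

    Models on the front represent the best available tradeoffs; condensing them into a single summary score, as the study does, requires an additional weighting or distance measure over the metrics.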

  14. Operational data fusion framework for building frequent Landsat-like imagery in a cloudy region

    USDA-ARS?s Scientific Manuscript database

    An operational data fusion framework is built to generate dense time-series Landsat-like images for a cloudy region by fusing Moderate Resolution Imaging Spectroradiometer (MODIS) data products and Landsat imagery. The Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) is integrated in ...

  15. Proceedings of the 14th International Conference on the Numerical Simulation of Plasmas

    NASA Astrophysics Data System (ADS)

    Partial Contents are as follows: Numerical Simulations of the Vlasov-Maxwell Equations by Coupled Particle-Finite Element Methods on Unstructured Meshes; Electromagnetic PIC Simulations Using Finite Elements on Unstructured Grids; Modelling Travelling Wave Output Structures with the Particle-in-Cell Code CONDOR; SST--A Single-Slice Particle Simulation Code; Graphical Display and Animation of Data Produced by Electromagnetic, Particle-in-Cell Codes; A Post-Processor for the PEST Code; Gray Scale Rendering of Beam Profile Data; A 2D Electromagnetic PIC Code for Distributed Memory Parallel Computers; 3-D Electromagnetic PIC Simulation on the NRL Connection Machine; Plasma PIC Simulations on MIMD Computers; Vlasov-Maxwell Algorithm for Electromagnetic Plasma Simulation on Distributed Architectures; MHD Boundary Layer Calculation Using the Vortex Method; and Eulerian Codes for Plasma Simulations.

  16. Validation of Nimbus-7 temperature-humidity infrared radiometer estimates of cloud type and amount

    NASA Technical Reports Server (NTRS)

    Stowe, L. L.

    1982-01-01

    Estimates of clear and low, middle and high cloud amount in fixed geographical regions approximately (160 km) squared are being made routinely from 11.5 micron radiance measurements of the Nimbus-7 Temperature-Humidity Infrared Radiometer (THIR). The purpose of validation is to determine the accuracy of the THIR cloud estimates. Validation requires that a comparison be made between the THIR estimates of cloudiness and the 'true' cloudiness. The validation results reported in this paper use human analysis of concurrent but independent satellite images with surface meteorological and radiosonde observations to approximate the 'true' cloudiness. Regression and error analyses are used to estimate the systematic and random errors of THIR derived clear amount.

  17. An energy balance climate model with cloud feedbacks

    NASA Technical Reports Server (NTRS)

    Roads, J. O.; Vallis, G. K.

    1984-01-01

    The present two-level global climate model, which is based on the atmosphere-surface energy balance, includes physically based parameterizations for the exchange of heat and moisture across latitude belts and between the surface and the atmosphere, precipitation and cloud formation, and solar and IR radiation. The model field predictions obtained encompass surface and atmospheric temperature, precipitation, relative humidity, and cloudiness. In the model integrations presented, it is noted that cloudiness is generally constant with changing temperature at low latitudes. High altitude cloudiness increases with temperature, although the cloud feedback effect on the radiation field remains small because of compensating effects on thermal and solar radiation. The net global feedback by the cloud field is negative, but small.

  18. Cloud-Top Entrainment in Stratocumulus Clouds

    NASA Astrophysics Data System (ADS)

    Mellado, Juan Pedro

    2017-01-01

    Cloud entrainment, the mixing between cloudy and clear air at the boundary of clouds, constitutes one paradigm for the relevance of small scales in the Earth system: By regulating cloud lifetimes, meter- and submeter-scale processes at cloud boundaries can influence planetary-scale properties. Understanding cloud entrainment is difficult given the complexity and diversity of the associated phenomena, which include turbulence entrainment within a stratified medium, convective instabilities driven by radiative and evaporative cooling, shear instabilities, and cloud microphysics. Obtaining accurate data at the required small scales is also challenging, for both simulations and measurements. During the past few decades, however, high-resolution simulations and measurements have greatly advanced our understanding of the main mechanisms controlling cloud entrainment. This article reviews some of these advances, focusing on stratocumulus clouds, and indicates remaining challenges.

  19. Properties of PSCs and Cirrus Determined from AVHRR Data

    NASA Technical Reports Server (NTRS)

    Hervig, Mark; Pagan, Kathy; Foschi, Patricia G.

    1999-01-01

    Polar stratospheric clouds (PSCs) and cirrus have been investigated using thermal emission measurements at 10.8 and 12 micrometers wavelength (channels 4 and 5) from the Advanced Very High Resolution Radiometer (AVHRR). The AVHRR signal was evaluated from a theoretical basis to understand the emission from clear and cloudy skies, and models were developed to simulate the AVHRR signal. Signal simulations revealed that nitric acid PSCs are invisible to AVHRR, while ice PSCs and cirrus are readily detectable. Methods were developed to retrieve cloud optical depths, average temperatures, average effective radii, and ice water paths from AVHRR channels 4 and 5. Properties of ice PSCs retrieved from AVHRR were compared to values derived from coincident radiosondes and from the Polar Ozone and Aerosol Measurement II instrument, showing good agreement.

  20. How Well Can Infrared Sounders Observe the Atmosphere and Surface Through Clouds?

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping

    2010-01-01

    Infrared sounders, such as the Atmospheric Infrared Sounder (AIRS), the Infrared Atmospheric Sounding Interferometer (IASI), and the Cross-track Infrared Sounder (CrIS), are at a disadvantage in observing the atmosphere and surface under opaque cloudy conditions. However, recent studies indicate that hyperspectral infrared sounders can detect cloud effective optical and microphysical properties and, to a certain degree, penetrate optically thin clouds to observe the atmosphere and surface. We have developed a retrieval scheme for atmospheric conditions with clouds present. This scheme can be used to analyze the retrieval accuracy of atmospheric and surface parameters under clear and cloudy conditions. In this paper, we present surface emissivity results derived from IASI global measurements under both clear and cloudy conditions. The accuracy of surface emissivity derived under cloudy conditions is estimated statistically by comparison with emissivities derived under clear-sky conditions. The retrieval error caused by clouds is shown as a function of cloud optical depth, which helps us understand how well infrared sounders can observe the atmosphere and surface through clouds.

  1. Radiative effects of biomass burning aerosols and cloudiness on seasonal carbon cycle in the Amazon region

    NASA Astrophysics Data System (ADS)

    Moreira, D. S.; Longo, K.; Freitas, S.; Mercado, L. M.; Miller, J. B.; Rosario, N. M. E. D.; Gatti, L.; Yamasoe, M. A.

    2017-12-01

    The Amazon region is characterized by high cloudiness, mainly from convective clouds, during most of the year owing to its abundant humidity and heat. During the austral winter, however, the inter-tropical convergence zone (ITCZ) moves northward from its climatological position, significantly reducing cloudiness and precipitation and facilitating vegetation fires. Consequently, during these dry months biomass burning aerosols produce relatively high values of aerosol optical depth (AOD) in Amazonia, typically exceeding 1.0 at the 550 nm wavelength. Both clouds and aerosols scatter solar radiation, reducing the direct irradiance and increasing the diffuse fraction that reaches the surface, which decreases near-surface temperature and increases the availability of photosynthetically active radiation (PAR). This, in turn, affects energy and CO2 fluxes within the vegetation canopy. We applied an atmospheric model fully coupled to a terrestrial carbon cycle model to assess the relative impact of biomass burning aerosols and clouds on CO2 fluxes in the Amazon region. Our results indicate that during most of the year gross primary productivity (GPP) is high, mainly because of high soil moisture and the large diffuse fraction of solar irradiation due to cloudiness. Heterotrophic and autotrophic respiration are likewise both high, increasing NEE (i.e., reducing the net land sink). During the dry season, by contrast, with a significant reduction of cloudiness, biomass burning aerosol is mainly responsible for the increase in the diffuse fraction of solar irradiation and in forest GPP. However, the low soil moisture during the dry season, especially in the eastern Amazon, reduces heterotrophic and autotrophic respiration and thus compensates for the reduced GPP compared to the wet season. Two different drivers, one anthropogenic (human-induced fires during the dry season) and one natural (cloudiness), therefore lead to a relatively stable value of NEE throughout the year in Amazonia.

  2. Impact of aerosols and clouds on decadal trends in all-sky solar radiation over the Netherlands (1966-2015)

    NASA Astrophysics Data System (ADS)

    Boers, Reinout; Brandsma, Theo; Pier Siebesma, A.

    2017-07-01

    A 50-year hourly data set of global shortwave radiation, cloudiness, and visibility over the Netherlands was used to quantify the contributions of aerosols and clouds to the trend in yearly-averaged all-sky radiation (1.81 ± 1.07 W m-2 decade-1). Yearly-averaged clear-sky and cloud-base radiation data show large year-to-year fluctuations, caused by yearly changes in the occurrence of clear and cloudy periods, and cannot be used for trend analysis. Therefore, proxy clear-sky and cloud-base radiations were computed. In a proxy analysis, hourly radiation data falling within a fractional cloudiness class are fitted by monotonically increasing functions of solar zenith angle and summed over all zenith angles occurring in a single year to produce an average. Stable trends can then be computed from the proxy radiation data. A functional expression is derived whereby the trend in proxy all-sky radiation is a linear combination of the trends in fractional cloudiness, proxy clear-sky radiation, and proxy cloud-base radiation. Trends (per decade) in fractional cloudiness, proxy clear-sky radiation, and proxy cloud-base radiation were, respectively, 0.0097 ± 0.0062, 2.78 ± 0.50, and 3.43 ± 1.17 W m-2. In this sum the three trends are weighted, respectively, by the difference between the mean cloud-base and clear-sky radiation, by the clear-sky fraction, and by the fractional cloudiness. Our analysis clearly demonstrates that all three components contribute significantly to the observed trend in all-sky radiation. Radiative transfer calculations using the aerosol optical thickness derived from visibility observations indicate that aerosol-radiation interaction (ARI) is a strong candidate to explain the upward trend in clear-sky radiation. Aerosol-cloud interaction (ACI) may have some impact on cloud-base radiation, but it is suggested that decadal changes in cloud thickness and synoptic-scale changes in cloud amount also play an important role.
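
    The decomposition described above is a weighted linear combination of the three component trends. A minimal sketch follows; the trends are those quoted in the abstract, but the mean-state weights (mean cloudiness and mean clear-sky/cloud-base radiation) are illustrative assumptions, since the abstract does not report them:

```python
# Sketch of the trend decomposition: the all-sky radiation trend is a
# linear combination of trends in fractional cloudiness (dN), proxy
# clear-sky radiation (dRcs), and proxy cloud-base radiation (dRcb).

def all_sky_trend(dN, dRcs, dRcb, N_mean, Rcs_mean, Rcb_mean):
    """Combine component trends into the all-sky radiation trend.

    dN     : trend in fractional cloudiness (per decade)
    dRcs   : trend in proxy clear-sky radiation (W m-2 per decade)
    dRcb   : trend in proxy cloud-base radiation (W m-2 per decade)
    N_mean : mean fractional cloudiness (weights dRcb; 1 - N_mean weights dRcs)
    Rcs_mean, Rcb_mean : mean clear-sky / cloud-base radiation, whose
                         difference (negative, since clouds dim) weights dN
    """
    return (Rcb_mean - Rcs_mean) * dN + (1.0 - N_mean) * dRcs + N_mean * dRcb

# Trends from the abstract; the mean-state weights below are hypothetical.
trend = all_sky_trend(dN=0.0097, dRcs=2.78, dRcb=3.43,
                      N_mean=0.65, Rcs_mean=160.0, Rcb_mean=80.0)
print(trend)
```

    With realistic weights the combination reproduces the observed all-sky trend; the point of the expression is that increasing cloudiness enters with a negative weight while brightening clear-sky and cloud-base radiation enter with positive ones.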

  3. Euler-Lagrange Simulations of Shock Wave-Particle Cloud Interaction

    NASA Astrophysics Data System (ADS)

    Koneru, Rahul; Rollin, Bertrand; Ouellet, Frederick; Park, Chanyoung; Balachandar, S.

    2017-11-01

    Numerical experiments of a shock interacting with evolving and fixed clouds of particles are performed. In these simulations we use an Eulerian-Lagrangian approach along with state-of-the-art point-particle force and heat transfer models. For validation, we use the Sandia Multiphase Shock Tube experiments and particle-resolved simulations. The particle curtain, upon interaction with the shock wave, is expected to experience Kelvin-Helmholtz (KH) and Richtmyer-Meshkov (RM) instabilities. In the simulations with an evolving particle cloud, the initial volume fraction profile matches that of the Sandia Multiphase Shock Tube experiments, and the shock Mach number is limited to M = 1.66. Measurements of particle dispersion are made at different initial volume fractions, and a detailed analysis of the influence of initial conditions on the evolution of the particle cloud is presented. The early-time behavior of the models is studied in the fixed-bed simulations at varying volume fractions and shock Mach numbers. The mean gas quantities are measured in the context of one-way and two-way coupled simulations. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, Contract No. DE-NA0002378.

  4. Effects of Cloudiness on the Daily and Annual Radiation Balance: Elaboration on the Shortwave and Longwave Radiation

    NASA Astrophysics Data System (ADS)

    Malek, E.

    2007-12-01

    Clouds are visible masses of condensed droplets and frozen water crystals in the atmosphere above the Earth. They change the energy balance at local, regional, and planetary scales and affect the climate through positive and negative feedbacks. To study these effects at the local scale, we set up a radiation station with two CM21 Kipp & Zonen pyranometers (one inverted) and two CG1 Kipp & Zonen pyrgeometers (one inverted) in a semi-arid mountainous valley in Logan, Utah, U.S.A. The pyranometers and pyrgeometers were ventilated using four CV2 Kipp & Zonen ventilation systems; ventilation prevents the accumulation of dew, frost, and snow, which would otherwise disturb the measurements. All sensors were installed about 3 m above ground that is covered with natural vegetation during the growing season (May - September). The incoming (Rsi) and outgoing (Rso) solar (shortwave) radiation and the incoming (Rli, atmospheric) and outgoing (Rlo, terrestrial) longwave radiation, along with the 2-m air temperature, humidity, and pressure, have been measured continuously since 1995. We also measured the 3-m wind speed and direction, the surface temperature (using an IR thermometer), and precipitation (using a heated rain gauge). These parameters were sampled every 2 seconds and averaged over 20 minutes. For this study we chose three days in 2005: 6 April (a partially cloudy day), 29 July (a cloudless day), and 29 November (an overcast day), along with a continuous study throughout the year 2005. We developed an algorithm for evaluating cloudless-sky incoming (atmospheric) longwave radiation, and equations for cloudless-sky incoming shortwave and atmospheric longwave radiation were applied to compare cloud-free estimates with the actual measurements. The difference between cloudless-sky and measured incoming shortwave (solar) radiation indicates how much less radiation was received because of cloudiness (if any), while the difference between measured and cloudless-sky incoming longwave (atmospheric) radiation shows the contribution of clouds (if any) to the radiation budget. The results indicate that on the partially cloudy day of 6 April 2005, cloudiness reduced the shortwave radiation received at the surface by 23.29 - 13.76 = 9.53 MJ m-2 d-1, while clouds contributed additional longwave radiation of 25.44 - 23.44 = 2.00 MJ m-2 d-1. On 29 November 2005, these values were 9.37 - 1.98 = 7.39 and 28.82 - 23.86 = 4.96 MJ m-2 d-1, respectively. On an annual basis, 2005 cloudiness caused a reduction of 7279 - 5800 = 1479 MJ m-2 y-1 in shortwave radiation, while the additional longwave radiation due to cloudiness amounted to 9976 - 9573 = 403 MJ m-2 y-1. The cloudiness in 2005 thus exerted a negative feedback on the climate in this valley.
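
    The bookkeeping above reduces to two simple differences. A minimal sketch, using only the daily and annual totals quoted in the abstract (MJ m-2 d-1 for single days, MJ m-2 y-1 for the annual values):

```python
# Cloud radiative effect at the surface as two differences between
# modeled cloudless-sky and measured radiation totals.

def sw_deficit(cloudless_sw, measured_sw):
    # Shortwave lost to cloudiness: clear-sky estimate minus measurement.
    return cloudless_sw - measured_sw

def lw_surplus(measured_lw, cloudless_lw):
    # Longwave gained from clouds: measurement minus clear-sky estimate.
    return measured_lw - cloudless_lw

print(sw_deficit(23.29, 13.76), lw_surplus(25.44, 23.44))  # 6 April 2005
print(sw_deficit(9.37, 1.98), lw_surplus(28.82, 23.86))    # 29 November 2005
print(sw_deficit(7279, 5800), lw_surplus(9976, 9573))      # annual, 2005
```

    The annual shortwave deficit (1479) far exceeds the longwave surplus (403), which is the basis for the abstract's conclusion of a net negative cloud feedback in this valley.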

  5. An assessment of multibody simulation tools for articulated spacecraft

    NASA Technical Reports Server (NTRS)

    Man, Guy K.; Sirlin, Samuel W.

    1989-01-01

    A survey of multibody simulation codes was conducted in the spring of 1988, to obtain an assessment of the state of the art in multibody simulation codes from the users of the codes. This survey covers the most often used articulated multibody simulation codes in the spacecraft and robotics community. There was no attempt to perform a complete survey of all available multibody codes in all disciplines. Furthermore, this is not an exhaustive evaluation of even robotics and spacecraft multibody simulation codes, as the survey was designed to capture feedback on issues most important to the users of simulation codes. We must keep in mind that the information received was limited and the technical background of the respondents varied greatly. Therefore, only the most often cited observations from the questionnaire are reported here. In this survey, it was found that no one code had both many users (reports) and no limitations. The first section is a report on multibody code applications. Following applications is a discussion of execution time, which is the most troublesome issue for flexible multibody codes. The representation of component flexible bodies, which affects both simulation setup time as well as execution time, is presented next. Following component data preparation, two sections address the accessibility or usability of a code, evaluated by considering its user interface design and examining the overall simulation integrated environment. A summary of user efforts at code verification is reported, before a tabular summary of the questionnaire responses. Finally, some conclusions are drawn.

  6. The neuron net method for processing the clear pixels and method of the analytical formulas for processing the cloudy pixels of POLDER instrument images

    NASA Astrophysics Data System (ADS)

    Melnikova, I.; Mukai, S.; Vasilyev, A.

    Remote measurements of reflected radiance from the POLDER instrument on board the ADEOS satellite are used to retrieve the optical thickness, single scattering albedo, and phase function parameter of cloudy and clear atmospheres. For clear sky, a perceptron neural network takes the multiangle radiances and the solar incidence angle as input and returns the surface albedo, optical thickness, single scattering albedo, and phase function parameter; the last two are optical averages over the atmospheric column. Solar radiances for training the network were computed with the MODTRAN-3 code, taking multiple scattering into account, with all of these parameters varied randomly on the basis of statistical models of plausible measurement variability. Results of processing one frame of remote observations, consisting of 150,000 pixels, are presented. The methodology allows operational determination of the optical characteristics of both cloudy and clear atmospheres; further interpretation of these results makes it possible to extract the total content of atmospheric aerosols and absorbing gases and to build models of the real cloudiness. For cloudy pixels, an analytical interpretation method based on asymptotic formulas of multiple scattering theory is applied to the observations of reflected radiance. Details of the methodology and error analysis were published and discussed earlier. Here we present results of processing data for pixels of size 6x6 km. In many earlier studies the optical thickness was evaluated under the assumption of conservative scattering, but when there is true absorption in clouds this can lead to large errors in the retrieved parameter. The simultaneous retrieval of two parameters at every wavelength independently is an advantage over those earlier studies.
    The analytical methodology is based on inverting the asymptotic formulas of transfer theory for optically thick stratus clouds. A horizontally infinite layer is assumed, with slight horizontal heterogeneity taken into account approximately. Formulas containing only the measured two-direction radiances and functions of the solar and viewing angles were derived earlier; six azimuth harmonics of the reflection function are retained. A simple approximation accounts for the heterogeneity of the cloud-top border: clouds projecting above the cloud-top plane increase the diffuse component of the incident flux, which matters for the calculation of radiative characteristics that depend on illumination conditions. The escape and reflection functions describe this dependence for the reflected radiance, and the local albedo of a semi-infinite medium does so for the irradiance; the functions of solar incidence angle are therefore replaced by modified versions. First, the optical thickness of every pixel is obtained with a simple formula assuming conservative scattering for all available viewing directions. Deviations among the resulting values can be taken as a measure of the cloud top's departure from a plane, and a special parameter accounting for the shadowing effect is derived. Then the single scattering albedo and optical thickness (allowing for true absorption) are obtained for pairs of viewing directions with equal optical thickness. Finally, the values obtained are averaged, and the relative error evaluated, over all viewing directions of every pixel. The procedure is repeated independently for all wavelengths and pixels.

  7. Apperception of Clouds in AIRS Data

    NASA Technical Reports Server (NTRS)

    Huang, Hung-Lung; Smith, William L.

    2005-01-01

    Our capacity to simulate the radiative characteristics of the Earth system has advanced greatly over the past decade. However, new space-based measurements show that idealized simulations might not adequately represent the complexity of nature. For example, AIRS simulated multi-layer cloud clearing research provides an excellent groundwork for early Atmospheric Infra-Red Sounder (AIRS) operational cloud clearing and atmospheric profile retrieval. However, it does not reflect the complicated reality of clouds over land and coastal areas. Thus far, operational AIRS/AMSU (Advanced Microwave Sounding Unit) cloud clearing is not only of low yield but also of unsatisfactory quality. This is not an argument for avoiding this challenging task, but rather a powerful argument for exploring other synergistic approaches, and for adapting these strategies toward improving both indirect and direct use of cloudy infrared sounding data. Ample evidence is shown in this paper that the indirect use of cloudy sounding data by way of cloud clearing is sub-optimal for data assimilation. Improvements are needed in quality control, retrieval yield, and overall cloud clearing retrieval performance. For example, cloud clearing over land, especially over the desert surface, has led to much degraded retrieval quality and often a very low yield of quality-controlled cloud cleared radiances. If these indirect cloud cleared radiances are instead to be directly assimilated into NWP models, great caution must be used. Our limited and preliminary cloud clearing results from AIRS/AMSU (with the use of MODIS data) and an AIRS/MODIS synergistic approach have, however, shown that higher spatial resolution multispectral imagery data can provide much needed quality control of the AIRS/AMSU cloud clearing retrieval. When AIRS and the Moderate Resolution Imaging Spectroradiometer (MODIS) are used synergistically, higher spatial resolution over difficult terrain (especially desert areas) can be achieved with much improved accuracy. Preliminary statistical analyses are presented of cloud cleared radiances derived from (1) operational AIRS/AMSU, (2) operational AIRS/AMSU plus the use of MODIS data as quality control, and (3) AIRS/MODIS synergistic single-channel, two-field-of-view cloud clearing.

  8. Code Samples Used for Complexity and Control

    NASA Astrophysics Data System (ADS)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    2015-11-01

    The following sections are included: * MathematicaⓇ Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents

  9. Computation of Solar Radiative Fluxes by 1D and 3D Methods Using Cloudy Atmospheres Inferred from A-train Satellite Data

    NASA Technical Reports Server (NTRS)

    Barker, Howard W.; Kato, Seiji; Wehr, T.

    2012-01-01

    The main point of this study was to use realistic representations of cloudy atmospheres to assess errors in solar flux estimates associated with 1D radiative transfer models. A scene construction algorithm, developed for the EarthCARE satellite mission, was applied to CloudSat, CALIPSO, and MODIS satellite data, thus producing 3D cloudy atmospheres measuring 60 km wide by 13,000 km long at 1 km grid-spacing. Broadband solar fluxes and radiances for each (1 km)2 column were then produced by a Monte Carlo photon transfer model run in both full 3D and independent column approximation mode (i.e., a 1D model).
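
    The contrast between full 3D transport and the independent column approximation (ICA) can be illustrated with a toy Beer-Lambert column "solver"; the solver, solar constant, and optical depths below are assumptions for illustration, not the study's Monte Carlo model:

```python
import math

def flux_1d(tau, mu0=1.0, s0=1361.0):
    # Direct-beam transmission through a single column (Beer-Lambert law).
    return s0 * math.exp(-tau / mu0)

def flux_ica(taus, mu0=1.0, s0=1361.0):
    # ICA: run the 1D solver on each column independently and average,
    # ignoring horizontal photon transport between columns.
    return sum(flux_1d(t, mu0, s0) for t in taus) / len(taus)

# A broken cloud field: half the columns clear, half optically thick.
taus = [0.0, 0.0, 5.0, 5.0]
print(flux_ica(taus))          # far larger than the homogeneous value below
print(flux_1d(sum(taus) / 4))  # plane-parallel flux at the mean optical depth
```

    Because transmission is convex in optical depth, averaging per-column fluxes (ICA) gives a very different domain mean than one column at the mean optical depth; 3D-versus-ICA differences add horizontal photon transport effects on top of this.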

  10. Comparison of solar system measured data for various sample rates. [conducted using Marshall Space Flight Center Solar House

    NASA Technical Reports Server (NTRS)

    Chiou, J., Sr.

    1977-01-01

    The results of solar house data for sample rates of 50, 100, 250, 300, and 600 seconds were compared. The data considered for summer days were the heat incident on the collectors, the heat used by the air conditioner generator, and the heat used by the auxiliary heater. For winter days, the heat incident, the heat collected, and the heat used by the heat exchanger were computed. These data were compared for different weather days such as clear days, partly cloudy days, cloudy days, and very cloudy days. Also, data for the integration of all these weather days were compared. The percentage differences for these data, using the 50-second sample rate as a base, are also presented.

  11. Ice clouds optical properties in the Far Infrared from the ECOWAR-COBRA Experiment

    NASA Astrophysics Data System (ADS)

    Rizzi, Rolando; Tosi, Ennio

    The ECOWAR-COBRA (Earth COoling by WAter vapouR emission - Campagna di Osservazioni della Banda Rotazionale del vapor d'Acqua) field campaign took place in Italy from 3 to 17 March 2007, with the main goal of studying the scarcely sensed atmospheric emission occurring beyond 17 microns. Instrumentation involved in the campaign included two different Fourier Transform Spectrometers (FTS): REFIR-PAD (at Testa Grigia Station, 3500 m a.s.l.) and FTIR-ABB (at Cervinia Station, 1990 m a.s.l.). In this work cloudy-sky data have been analyzed. A cloud properties retrieval methodology (RT-RET), based on high-spectral-resolution measurements in the atmospheric window (800-1000 cm-1), is applied to both FTS sensors. Cloud properties determined from the infrared retrievals are compared with those obtained from Raman lidar measurements taken by the BASIL lidar system operating at Cervinia station. Cloud microphysical and optical properties retrieved by RT-RET are used to perform forward simulations over the entire spectral interval of the FTS measurements. Results are compared to FTS data to test the ability of single-scattering ice crystal models to reproduce cloudy-sky radiances in the Far InfraRed (FIR) part of the spectrum. New methods to retrieve cloud optical and microphysical properties exploiting high-spectral-resolution FIR measurements are also investigated.

  12. Modeling the Blast Load Simulator Airblast Environment using First Principles Codes. Report 1, Blast Load Simulator Environment

    DTIC Science & Technology

    2016-11-01

    ERDC/GSL TR-16-31. Modeling the Blast Load Simulator Airblast Environment Using First Principles Codes. Report 1, Blast Load Simulator Environment. Gregory C. Bessette, James L. O'Daniel. The study evaluates several first principles codes (FPCs) for modeling airblast environments typical of those encountered in the BLS. The FPCs considered were

  13. 3D MODELING OF GJ1214b's ATMOSPHERE: FORMATION OF INHOMOGENEOUS HIGH CLOUDS AND OBSERVATIONAL IMPLICATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charnay, B.; Meadows, V.; Misra, A.

    2015-11-01

    The warm sub-Neptune GJ1214b has a featureless transit spectrum that may be due to the presence of high and thick clouds or haze. Here, we simulate the atmosphere of GJ1214b with a 3D General Circulation Model for cloudy hydrogen-dominated atmospheres, including cloud radiative effects. We show that the atmospheric circulation is strong enough to transport micrometric cloud particles to the upper atmosphere and generally leads to a minimum of cloud at the equator. By scattering stellar light, clouds increase the planetary albedo to 0.4–0.6 and cool the atmosphere below 1 mbar. However, the heating by ZnS clouds leads to the formation of a stratospheric thermal inversion above 10 mbar, with temperatures potentially high enough on the dayside to evaporate KCl clouds. We show that flat transit spectra consistent with Hubble Space Telescope observations are possible if cloud particle radii are around 0.5 μm, and that such clouds should be optically thin at wavelengths >3 μm. Using simulated cloudy atmospheres that fit the observed spectra we generate transit, emission, and reflection spectra and phase curves for GJ1214b. We show that a stratospheric thermal inversion would be readily accessible in near- and mid-infrared atmospheric spectral windows. We find that the amplitude of the thermal phase curves is strongly dependent on metallicity, but only slightly impacted by clouds. Our results suggest that primary and secondary eclipses and phase curves observed by the James Webb Space Telescope in the near- to mid-infrared should provide strong constraints on the nature of GJ1214b's atmosphere and clouds.

  14. Cirrus Horizontal Heterogeneity Effects on Cloud Optical Properties Retrieved from MODIS VNIR to TIR Channels as a Function of the Spatial Resolution

    NASA Astrophysics Data System (ADS)

    Fauchez, T.; Platnick, S. E.; Sourdeval, O.; Wang, C.; Meyer, K.; Cornet, C.; Szczap, F.

    2017-12-01

    Cirrus are an important part of the Earth's radiation budget, but an assessment of their role remains highly uncertain. Cirrus optical properties such as Cloud Optical Thickness (COT) and ice crystal effective particle size (Re) are often retrieved with a combination of Visible/Near-InfraRed (VNIR) and ShortWave-InfraRed (SWIR) reflectance channels. Alternatively, Thermal InfraRed (TIR) techniques, such as the Split Window Technique (SWT), have demonstrated better sensitivity to thin cirrus. However, current satellite operational products for both retrieval methods assume that cloudy pixels are horizontally homogeneous (the Plane Parallel and Homogeneous Approximation, PPHA) and independent (the Independent Pixel Approximation, IPA). The impact of these approximations on cirrus retrievals needs to be understood and, as far as possible, corrected. Horizontal heterogeneity effects can be more easily estimated and corrected in the TIR range because they are mainly dominated by the PPA bias, which primarily depends on the COT subpixel heterogeneity. For solar reflectance channels, in addition to the PPHA bias, the IPA can lead to significant retrieval errors if there is large photon transport between cloudy columns, in addition to brightening and shadowing effects that are more difficult to quantify. The effects of cirrus horizontal heterogeneity are studied here for COT and Re retrievals obtained using simulated MODIS reflectances at 0.86 and 2.11 μm and radiances at 8.5, 11.0 and 12.0 μm, for spatial resolutions ranging from 50 m to 10 km. For each spatial resolution, simulated TOA reflectances and radiances are combined for cloud optical property retrievals with a research-level optimal estimation retrieval method (OEM). The impact of horizontal heterogeneity on the retrieved products is assessed for different solar geometries and various combinations of the five channels.

  15. Aerosol Indirect effect on Stratocumulus Organization

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Heus, T.; Kollias, P.

    2015-12-01

    Large-eddy simulations are used to investigate the role of aerosol loading on organized stratocumulus. We prescribed the cloud droplet number concentration (Nc) and considered it a proxy for different aerosol loadings. While the presence of drizzle amplifies the mesoscale variability, as in Savic-Jovcic and Stevens (JAS, 2008), two noticeable findings are discussed here. First, the scale of the marine boundary layer circulation appears to be independent of aerosol loading, suggesting a major role for the turbulence. The precise role of the turbulence in stratocumulus organization is studied by modifying the large-scale fluctuations in the LES domain. Second, while it is commonly thought that the whole circulation needs to be represented for robust cloud development, we find that stratocumulus dynamics, including variables like w'w' and w'w'w', are remarkably robust even when the large scales are excluded by simply reducing the domain size. The only variable sensitive to the change of scale is the amount of cloudiness. Despite their smaller cloud thickness and inhomogeneous macroscopic structure at low Nc, individual drizzling clouds have sizes commensurate with the circulation scale. We observe an Nc threshold below which the stratocumulus is thin enough that a small decrease in Nc leads to a large change in cloud fraction. The simulated cloud albedo is more sensitive to in-cloud liquid water content than to the amount of cloudiness, since the former decreases at least three times faster than the latter due to drizzle. The main impact of drizzle evaporation is to keep the sub-cloud layer moist and, as a result, to extend the lifetime of the stratocumulus by a couple of hours.

  16. Environmental controls on the increasing GPP of terrestrial vegetation across northern Eurasia

    NASA Astrophysics Data System (ADS)

    Dass, P.; Rawlins, M. A.; Kimball, J. S.; Kim, Y.

    2016-01-01

    Terrestrial ecosystems of northern Eurasia are demonstrating an increasing gross primary productivity (GPP), yet few studies have provided definitive attribution for the changes. While prior studies point to increasing temperatures as the principal environmental control, influences from moisture and other factors are less clear. We assess how changes in temperature, precipitation, cloudiness, and forest fires individually contribute to changes in GPP derived from satellite data across northern Eurasia using a light-use-efficiency-based model, for the period 1982-2010. We find that annual satellite-derived GPP is most sensitive to the temperature, precipitation and cloudiness of summer, which is the peak of the growing season and also the period of the year when the GPP trend is largest. Considering the regional median, summer temperature explains as much as 37.7 % of the variation in annual GPP, while precipitation and cloudiness explain 20.7 % and 19.3 %, respectively. Warming over the period analysed, even without a sustained increase in precipitation, led to a significant positive impact on GPP over 61.7 % of the region. However, a significant negative impact on GPP was also found, for 2.4 % of the region, primarily the drier grasslands in the south-west of the study area. For this region, precipitation positively correlates with GPP, as does cloudiness. This shows that the south-western part of northern Eurasia is relatively more vulnerable to drought than other areas. While our results further advance the notion that air temperature is the dominant environmental control for recent GPP increases across northern Eurasia, the roles of precipitation and cloudiness cannot be ignored.

  17. Cloudy and starry milia-like cysts: how well do they distinguish seborrheic keratoses from malignant melanomas?

    PubMed

    Stricklin, S M; Stoecker, W V; Oliviero, M C; Rabinovitz, H S; Mahajan, S K

    2011-10-01

    Seborrheic keratoses are the most common skin lesions known to contain small white or yellow structures called milia-like cysts (MLCs). Varied appearances can sometimes make it difficult to differentiate benign lesions from malignant lesions such as melanoma, the deadliest form of skin cancer found in humans. The purpose of this study was to determine the statistical occurrence of MLCs in benign vs. malignant lesions. A medical student with 10 months' experience in examining approximately 1000 dermoscopy images and a dermoscopy-naïve observer analysed contact non-polarized dermoscopy images of 221 malignant melanomas and 175 seborrheic keratoses for the presence of MLCs. The observers found two different types of MLCs present: large ones described as cloudy and smaller ones described as starry. Starry MLCs were found to be prevalent in both seborrheic keratoses and melanomas. Cloudy MLCs, however, were found to have 99.1% specificity for seborrheic keratoses among this group of seborrheic keratoses and melanomas. Cloudy MLCs can be a useful tool for differentiating between seborrheic keratoses and melanomas. © 2010 The Authors. Journal of the European Academy of Dermatology and Venereology © 2010 European Academy of Dermatology and Venereology.

  18. Cloudy and starry milia-like cysts: how well do they distinguish seborrheic keratoses from malignant melanomas?

    PubMed Central

    Stricklin, S.M.; Stoecker, W.V.; Oliviero, M.C.; Rabinovitz, H.S.; Mahajan, S.K.

    2011-01-01

    Background Seborrheic keratoses are the most common skin lesions known to contain small white or yellow structures called milia-like cysts (MLCs). Varied appearances can sometimes make it difficult to differentiate benign lesions from malignant lesions such as melanoma, the deadliest form of skin cancer found in humans. Objective The purpose of this study was to determine the statistical occurrence of MLCs in benign vs. malignant lesions. Methods A medical student with 10 months' experience in examining approximately 1000 dermoscopy images and a dermoscopy-naïve observer analysed contact non-polarized dermoscopy images of 221 malignant melanomas and 175 seborrheic keratoses for the presence of MLCs. Results The observers found two different types of MLCs present: large ones described as cloudy and smaller ones described as starry. Starry MLCs were found to be prevalent in both seborrheic keratoses and melanomas. Cloudy MLCs, however, were found to have 99.1% specificity for seborrheic keratoses among this group of seborrheic keratoses and melanomas. Conclusion Cloudy MLCs can be a useful tool for differentiating between seborrheic keratoses and melanomas. Received: 18 June 2010; Accepted: 27 October 2010 PMID:21923811

  19. A Quality Control study of the distribution of NOAA MIRS Cloudy retrievals during Hurricane Sandy

    NASA Astrophysics Data System (ADS)

    Fletcher, S. J.

    2013-12-01

    Cloudy radiances present a difficult challenge to data assimilation (DA) systems, through both the radiative transfer system and the hydrometeors required to resolve the cloud and precipitation. In most DA systems the hydrometeors are not control variables, due to many limitations. The National Oceanic and Atmospheric Administration's (NOAA) Microwave Integrated Retrieval System (MIRS) produces products from the ATMS instrument on the Suomi NPP satellite even where the scene is affected by cloud and precipitation. The test case that we present here is the lifetime of Hurricane, and later Superstorm, Sandy in October 2012. As a quality control study, we shall compare the retrieved water vapor content during the lifetime of Sandy with the first guess and the analysis from the NOAA Gridpoint Statistical Interpolation (GSI) system. The assessment involves the gross error check against the first guess with different values for the observational error variance, to see whether the difference is within three standard deviations. We shall also compare against the final analysis at the relevant cycles to see whether the products retrieved through cloudy radiances are similar, given that the DA system does not yet assimilate cloudy radiances.

  20. Aerosol effect on the evolution of the thermodynamic properties of warm convective cloud fields

    PubMed Central

    Dagan, Guy; Koren, Ilan; Altaratz, Orit; Heiblum, Reuven H.

    2016-01-01

    Convective cloud formation and evolution strongly depend on environmental temperature and humidity profiles. The forming clouds change the profiles that created them by redistributing heat and moisture. Here we show that the evolution of the field’s thermodynamic properties depends heavily on the concentration of aerosol, liquid or solid particles suspended in the atmosphere. Under polluted conditions, rain formation is suppressed and the non-precipitating clouds act to warm the lower part of the cloudy layer (where there is net condensation) and cool and moisten the upper part of the cloudy layer (where there is net evaporation), thereby destabilizing the layer. Under clean conditions, precipitation causes net warming of the cloudy layer and net cooling of the sub-cloud layer (driven by rain evaporation), which together act to stabilize the atmosphere with time. Previous studies have examined different aspects of the effects of clouds on their environment. Here, we offer a complete analysis of the cloudy atmosphere, spanning the aerosol effect from instability-consumption to enhancement, below, inside and above warm clouds, showing the temporal evolution of the effects. We propose a direct measure for the magnitude and sign of the aerosol effect on thermodynamic instability. PMID:27929097

  1. Aerosol effect on the evolution of the thermodynamic properties of warm convective cloud fields.

    PubMed

    Dagan, Guy; Koren, Ilan; Altaratz, Orit; Heiblum, Reuven H

    2016-12-08

    Convective cloud formation and evolution strongly depend on environmental temperature and humidity profiles. The forming clouds change the profiles that created them by redistributing heat and moisture. Here we show that the evolution of the field's thermodynamic properties depends heavily on the concentration of aerosol, liquid or solid particles suspended in the atmosphere. Under polluted conditions, rain formation is suppressed and the non-precipitating clouds act to warm the lower part of the cloudy layer (where there is net condensation) and cool and moisten the upper part of the cloudy layer (where there is net evaporation), thereby destabilizing the layer. Under clean conditions, precipitation causes net warming of the cloudy layer and net cooling of the sub-cloud layer (driven by rain evaporation), which together act to stabilize the atmosphere with time. Previous studies have examined different aspects of the effects of clouds on their environment. Here, we offer a complete analysis of the cloudy atmosphere, spanning the aerosol effect from instability-consumption to enhancement, below, inside and above warm clouds, showing the temporal evolution of the effects. We propose a direct measure for the magnitude and sign of the aerosol effect on thermodynamic instability.

  2. A Neural Network Based Intelligent Predictive Sensor for Cloudiness, Solar Radiation and Air Temperature

    PubMed Central

    Ferreira, Pedro M.; Gomes, João M.; Martins, Igor A. C.; Ruano, António E.

    2012-01-01

    Accurate measurements of global solar radiation and atmospheric temperature, as well as the availability of predictions of their evolution over time, are important for different areas of application, such as agriculture, renewable energy and energy management, or thermal comfort in buildings. For this reason, an intelligent, lightweight and portable sensor was developed, using artificial neural network models as the time-series predictor mechanisms. These have been identified with the aid of a procedure based on a multi-objective genetic algorithm. As cloudiness is the most significant factor affecting the solar radiation reaching a particular location on the Earth's surface, it has a great impact on the performance of predictive solar radiation models for that location. This work also represents one step towards the improvement of such models by using ground-to-sky hemispherical colour digital images as a means to estimate cloudiness from the fractions of visible sky corresponding to clouds and to clear sky. The implementation of the predictive models in the prototype has been validated, and the system is able to function reliably, providing measurements and four-hour forecasts of cloudiness, solar radiation and air temperature. PMID:23202230
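    The record's predictors are neural networks identified by a multi-objective genetic algorithm; as a much simpler stand-in for the general idea of a one-step-ahead time-series predictor, here is a linear autoregressive model fit by least squares (the signal, lag order, and all names are illustrative, not the authors' models):

```python
import numpy as np

def fit_ar(series, order):
    """Fit a linear autoregressive model x[t] ~ sum_k a_k * x[t-order+k]
    by ordinary least squares over the whole series."""
    # Row i of X holds the `order` values preceding target y[i] = series[order + i]
    X = np.column_stack([series[k:len(series) - order + k] for k in range(order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Synthetic stand-in for a smooth, periodic measured quantity
t = np.arange(500)
series = np.sin(2 * np.pi * t / 50)

coeffs = fit_ar(series, order=4)
pred = series[-4:] @ coeffs                 # one-step-ahead forecast
truth = np.sin(2 * np.pi * 500 / 50)        # next true value
print(abs(pred - truth))                    # near zero for this noiseless signal
```

A multi-hour forecast like the sensor's is obtained by iterating such a predictor, feeding each forecast back in as the newest input.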

  3. Seasonal simulations of the planetary boundary layer and boundary-layer stratocumulus clouds with a general circulation model

    NASA Technical Reports Server (NTRS)

    Randall, D. A.; Abeles, J. A.; Corsetti, T. G.

    1985-01-01

    The formulation of the planetary boundary layer (PBL) and stratocumulus parametrizations in the UCLA general circulation model (GCM) are briefly summarized, and extensive new results are presented illustrating some aspects of the simulated seasonal changes of the global distributions of PBL depth, stratocumulus cloudiness, cloud-top entrainment instability, the cumulus mass flux, and related fields. Results from three experiments designed to reveal the sensitivity of the GCM results to aspects of the PBL and stratocumulus parametrizations are presented. The GCM results show that the layer cloud instability appears to limit the extent of the marine subtropical stratocumulus regimes, and that instability frequently occurs in association with cumulus convection over land. Cumulus convection acts as a very significant sink of PBL mass throughout the tropics and over the midlatitude continents in winter.

  4. Ultraviolet resources over Northern Eurasia.

    PubMed

    Chubarova, Natalia; Zhdanova, Yekaterina

    2013-10-05

    We propose a new climatology of UV resources over Northern Eurasia, which includes assessments of both the detrimental (erythema) and positive (vitamin D synthesis) effects of ultraviolet radiation on human health. The UV resources are defined using several classes and subclasses - UV deficiency, UV optimum, and UV excess - for 6 different skin types. To better quantify the vitamin D irradiance threshold, we accounted for an open body fraction S as a function of effective air temperature. The spatial and temporal distribution of UV resources was estimated by radiative transfer (RT) modeling (the 8-stream DISORT RT code) on a 1°×1° grid at monthly resolution. For this purpose, special datasets of the main input geophysical parameters (total ozone content, aerosol characteristics, surface UV albedo, UV cloud modification factor) were created over the territory of Northern Eurasia. New approaches were used to retrieve the aerosol parameters and the cloud modification factor in the UV spectral region. As a result, the UV resources were obtained for clear-sky and mean cloudy conditions for the different skin types. We show that the distribution of UV deficiency, UV optimum and UV excess is regulated by various geophysical parameters (mainly total ozone, cloudiness and open body fraction) and can deviate significantly from a latitudinal dependence. We also show that UV optimum conditions can be observed simultaneously for people with different skin types (for example, for skin types 4-5 at the same time in spring over Western Europe). These UV optimum conditions for different skin types occupy a much larger territory over Europe than over Asia. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Using Intel Xeon Phi to accelerate the WRF TEMF planetary boundary layer scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen

    2014-05-01

    The Weather Research and Forecasting (WRF) model is designed for numerical weather prediction and atmospheric research. The WRF software infrastructure consists of several components, such as dynamic solvers and physics schemes. Numerical models resolve the large-scale flow, while subgrid-scale parameterizations estimate small-scale properties (e.g., boundary layer turbulence and convection, clouds, radiation); these have a significant influence on the resolved scales due to the complex nonlinear nature of the atmosphere. For the cloudy planetary boundary layer (PBL), it is fundamental to parameterize vertical turbulent fluxes and subgrid-scale condensation in a realistic manner. A parameterization based on the Total Energy - Mass Flux (TEMF) scheme, which unifies the turbulence and moist convection components, produces better results than the other PBL schemes. For that reason, the TEMF scheme is the PBL scheme we optimized for the Intel Many Integrated Core (MIC) architecture, which ushers in a new era of supercomputing speed, performance, and compatibility, allowing developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our optimization results for the TEMF planetary boundary layer scheme. The optimizations performed were quite generic in nature: they included vectorization of the code to utilize the vector units inside each CPU, and improved memory access obtained by scalarizing some of the intermediate arrays. The results show that the optimizations improved MIC performance by a factor of 14.8. Furthermore, they increased CPU performance by a factor of 2.6 compared to the original multi-threaded code on a quad-core Intel Xeon E5-2603 running at 1.8 GHz. Compared to the optimized code running on a single CPU socket, the optimized MIC code is 6.2x faster.

  6. Fast Simulators for Satellite Cloud Optical Centroid Pressure Retrievals, 1. Evaluation of OMI Cloud Retrievals

    NASA Technical Reports Server (NTRS)

    Joiner, J.; Vasilkov, A. P.; Gupta, Pawan; Bhartia, P. K.; Veefkind, Pepijn; Sneep, Maarten; deHaan, Johan; Polonsky, Igor; Spurr, Robert

    2011-01-01

    We have developed a relatively simple scheme for simulating retrieved cloud optical centroid pressures (OCP) from satellite solar backscatter observations. We have compared simulator results with those from more detailed retrieval simulators that more fully account for the complex radiative transfer in a cloudy atmosphere. We used this fast simulator to conduct a comprehensive evaluation of cloud OCPs from the two OMI algorithms using collocated data from CloudSat and Aqua MODIS, a unique situation afforded by the A-train formation of satellites. We find that both OMI algorithms perform reasonably well and that the two algorithms agree better with each other than either does with the collocated CloudSat data. This indicates that patchy snow/ice, cloud 3D, and aerosol effects not simulated with the CloudSat data are affecting both algorithms similarly. We note that the collocation with CloudSat occurs mainly on the East side of OMI's swath. Therefore, we are not able to address cross-track biases in OMI cloud OCP retrievals. Our fast simulator may also be used to simulate cloud OCP from output generated by general circulation models (GCM) with appropriate account of cloud overlap. We have implemented such a scheme and plan to compare OMI data with GCM output in the near future.

  7. Carbon monoxide column retrieval for clear-sky and cloudy atmospheres: a full-mission data set from SCIAMACHY 2.3 µm reflectance measurements

    NASA Astrophysics Data System (ADS)

    Borsdorff, Tobias; aan de Brugh, Joost; Hu, Haili; Nédélec, Philippe; Aben, Ilse; Landgraf, Jochen

    2017-05-01

    We discuss the retrieval of carbon monoxide (CO) vertical column densities from clear-sky and cloud-contaminated 2311-2338 nm reflectance spectra measured by the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) from January 2003 until the end of the mission in April 2012. These data were processed with the Shortwave Infrared CO Retrieval algorithm (SICOR) that we developed for the operational data processing of the Tropospheric Monitoring Instrument (TROPOMI) that will be launched on ESA's Sentinel-5 Precursor (S5P) mission. This study complements previous work that was limited to clear-sky observations over land. Over the oceans, CO is estimated from cloudy-sky measurements only, which is an important addition to the SCIAMACHY clear-sky CO data set as shown by NDACC and TCCON measurements at coastal sites. For Ny-Ålesund, Lauder, Mauna Loa and Reunion, a validation of SCIAMACHY clear-sky retrievals is not meaningful because of the high retrieval noise and the few collocations at these sites. The situation improves significantly when considering cloudy-sky observations, where we find a low mean bias b = ±6.0 ppb and a strong correlation between the validation and the SCIAMACHY results, with a mean Pearson correlation coefficient r = 0.7. Also for land observations, cloudy-sky CO retrievals present an interesting complement to the clear-sky data set. For example, at the cities of Tehran and Beijing the agreement of SCIAMACHY clear-sky CO observations with MOZAIC/IAGOS airborne measurements is poor, with mean biases of b = 171.2 ppb and 57.9 ppb, because of local CO pollution, which cannot be captured by SCIAMACHY. For cloudy-sky retrievals, the validation improves significantly: here the retrieved column is mainly sensitive to CO above the cloud and so is not affected by the strong local surface emissions. Adjusting the MOZAIC/IAGOS measurements to the vertical sensitivity of the retrieval, the mean biases add up to b = 52.3 ppb and 5.0 ppb for Tehran and Beijing. At the less urbanised region around the airport of Windhoek, local CO pollution is less prominent, and so MOZAIC/IAGOS measurements agree well with SCIAMACHY clear-sky retrievals, with a mean bias of b = 15.5 ppb, which can be improved even further with cloudy SCIAMACHY observations, to a mean bias of b = 0.2 ppb. Overall, the cloudy-sky CO retrievals from SCIAMACHY short-wave infrared measurements present a major extension of the clear-sky-only data set, which more than triples the amount of data and adds unique observations over the oceans. Moreover, the study represents the first application of the S5P algorithm for operational CO data processing to cloudy observations prior to the launch of the S5P mission.

  8. A comparison between implicit and hybrid methods for the calculation of steady and unsteady inlet flows

    NASA Technical Reports Server (NTRS)

    Coakley, T. J.; Hsieh, T.

    1985-01-01

    Numerical simulation of steady and unsteady transonic diffuser flows using two different computer codes are discussed and compared with experimental data. The codes solve the Reynolds-averaged, compressible, Navier-Stokes equations using various turbulence models. One of the codes has been applied extensively to diffuser flows and uses the hybrid method of MacCormack. This code is relatively inefficient numerically. The second code, which was developed more recently, is fully implicit and is relatively efficient numerically. Simulations of steady flows using the implicit code are shown to be in good agreement with simulations using the hybrid code. Both simulations are in good agreement with experimental results. Simulations of unsteady flows using the two codes are in good qualitative agreement with each other, although the quantitative agreement is not as good as in the steady flow cases. The implicit code is shown to be eight times faster than the hybrid code for unsteady flow calculations and up to 32 times faster for steady flow calculations. Results of calculations using alternative turbulence models are also discussed.

  9. Validation of the Electromagnetic Code FACETS for Numerical Simulation of Radar Target Images

    DTIC Science & Technology

    2009-12-01

    Validation of the electromagnetic code FACETS for numerical simulation of radar target images. S. Wong, DRDC Ottawa. …for simulating radar images of a target is obtained, through direct simulation-to-measurement comparisons. A 3-dimensional computer-aided design

  10. Properties of dust in circumstellar gas around Wolf-Rayet stars

    NASA Astrophysics Data System (ADS)

    Jiménez-Hernández, P.; Arthur, S. J.; Toalá, J. A.

    2017-11-01

    Using archival photometric observations from Herschel (70 μm, 100 μm, 160 μm and 250 μm), Spitzer (24 μm) and WISE (22 μm and 12 μm), we obtained infrared SEDs of the nebulae around the Wolf-Rayet stars WR 124, WR 16 and WR 7. We used the photoionization code Cloudy to construct models of the nebulae, taking into account the spectrum of the central star and varying the density and distance of the photoionized shell as well as the size distribution and chemical composition of the dust grains mixed with the gas, and we compared the resulting SEDs with the observations in order to study the properties of the dust in these objects. We discuss whether the dust properties depend on the spectral type of the central star and the age of the nebulae.
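    A Cloudy model of this kind is driven by a short plain-text input deck. The sketch below is purely illustrative of the command style (incident continuum, gas density, shell radius, grain mixture, saved output); every numerical value is a placeholder, not a fitted parameter from this work, and the exact commands accepted depend on the Cloudy version:

```
# illustrative Cloudy input deck (all values are placeholders)
blackbody 40000           # approximate central-star continuum temperature [K]
luminosity 38             # log of total luminosity [erg/s]
hden 2                    # log hydrogen density [cm^-3]
radius 17.5               # log inner radius of the photoionized shell [cm]
grains ism                # ISM grain mixture; size distribution can be varied
save continuum "sed.con"  # predicted SED, for comparison with the photometry
```

Fitting then amounts to varying parameters such as the density, radius, and grain prescription until the predicted SED matches the Herschel/Spitzer/WISE photometry.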

  11. Water ice clouds on Mars: a study of partial cloudiness with a global climate model and MARCI data

    NASA Astrophysics Data System (ADS)

    Pottier, Alizée; Montmessin, Franck; Forget, François; Wolff, Mike; Navarro, Thomas; Millour, Ehouarn; Madeleine, Jean-Baptiste; Spiga, Aymeric; Bertrand, Tanguy

    2015-04-01

    There is a large reservoir of water ice on Mars in the polar caps that sublimates in summer and releases water vapor. Water is then advected by the atmospheric circulation, which evolves seasonally. This vapor forms clouds and frost, and can also be adsorbed in the soil. Water ice clouds play a key part in the Martian climate, and there is a need to better understand their distribution and radiative effect. The tool used in this study is the global climate model (GCM) of the Laboratoire de Météorologie Dynamique. It is made up of a dynamical core that computes fluid dynamics, and a physical part that gathers a number of parametrised processes. It includes tracers and the condensation and sublimation of water in the atmosphere and on the ground, allowing a study of the complete water cycle. To improve the representation of water ice clouds in the model, a new parametrisation of partial cloudiness has been implemented and will be presented. Indeed, model cells are hundreds of kilometers wide, and it is quite unrealistic to suppose that cloud coverage is always uniform within them. Furthermore, the model had been quite unstable since the implementation of the radiative effect of clouds, and partial cloudiness has the effect of reducing this instability. In practice, a subgrid temperature distribution is assumed, and the temperature computed by the model is interpreted as its mean. The subgrid-scale temperature distribution is simple, and its width is a free parameter. Using this distribution, the fraction of each grid cell below the water vapor condensation temperature is interpreted as the fraction of the cell in which clouds form (the cloud fraction). From these fractions at each height, a total partial cloudiness (the clouds as seen from orbit) is deduced. The radiative transfer is computed twice, once for the clear area and once for the cloudy one. Observing the water cycle with this new parametrisation, some differences are seen with respect to standard runs. These changes mainly affect the aphelion cloud belt and the polar hoods. The partial cloudiness is compared to higher-resolution (1° × 1°) runs in which cloudiness diagnostics are performed. MARCI cloud-opacity data are also used to verify the predicted water ice cloud distribution and patchiness. The aim is to understand the causes of patchiness and to validate the choice of subgrid-scale temperature distribution. Seasonal variations and recurring patterns near major topographical features are found.
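    The cloud-fraction step described above can be sketched in a few lines. The abstract leaves the shape of the subgrid temperature distribution unspecified; a Gaussian is assumed here purely for illustration, and all numbers are hypothetical:

```python
import math

def cloud_fraction(t_mean, t_cond, sigma):
    """Fraction of a grid cell whose subgrid temperature lies below the water
    vapour condensation temperature t_cond, assuming the subgrid temperature
    is Gaussian with mean t_mean (the model's cell temperature) and width sigma.
    This is the Gaussian CDF evaluated at t_cond."""
    return 0.5 * (1.0 + math.erf((t_cond - t_mean) / (sigma * math.sqrt(2.0))))

# Cell-mean temperature exactly at the condensation point: half the cell is cloudy.
print(cloud_fraction(t_mean=190.0, t_cond=190.0, sigma=2.0))  # 0.5

# Warmer cell: only the cold tail of the distribution condenses.
f = cloud_fraction(t_mean=195.0, t_cond=190.0, sigma=2.0)
```

The radiative transfer is then computed separately for the clear and cloudy portions and combined with weights (1 - f) and f; the width sigma is the free parameter the comparison with high-resolution runs and MARCI data is meant to constrain.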

  12. Characterizing Transiting Planets with JWST Spectra: Simulations and Retrievals

    NASA Technical Reports Server (NTRS)

    Greene, Tom; Line, Michael; Fortney, Jonathan

    2015-01-01

    There are now well over a thousand confirmed exoplanets, ranging from hot to cold and from large to small worlds. JWST spectra will provide much more detailed information on the molecular constituents, chemical compositions, and thermal properties of the atmospheres of transiting planets than is now known. We explore this by modeling clear, cloudy, and high mean molecular weight atmospheres of typical hot Jupiter, warm Neptune, warm sub-Neptune, and cool super-Earth planets and then simulating their JWST transmission and emission spectra. These simulations were performed for several JWST instrument modes over 1-11 microns and incorporate realistic signal and noise components. We then performed state-of-the-art retrievals to determine how well temperatures and abundances (CO, CO2, H2O, NH3) will be constrained, and over what pressures, for these different planet types. Using these results, we appraise which instrument modes will be most useful for determining which properties of the different planets, and we assess how well we can constrain their compositions, C/O ratios, and temperature profiles.

  13. Waves in a Cloudy Vortex

    DTIC Science & Technology

    2007-02-01

    Waves in a Cloudy Vortex. DAVID A. SCHECTER, Department of Atmospheric Science, Colorado State University, Fort Collins, Colorado; MICHAEL T. MONTGOMERY. …waves account for precessing tilts and elliptical (triangular, square, etc.) deformations of the vortex core. If the Rossby number of the cyclone exceeds unity, its baroclinic VR waves can efficiently ex…

  14. Decision-Based Design of a Low Vision Aid

    DTIC Science & Technology

    1999-12-05

    Subject-table fragments: ML, 32, Aniridia, Cloudy, No Loss; TT, 35, Glaucoma, Cloudy, No Loss; MM, 23, Stargardt Disease, Clear, Loss; AP, 40, Congenital Retinal Malformations… The record also mentions a brief clinical background evaluation and a post-experiment questionnaire; the study was further expanded in scope to gather data for the VRD… Protocol outline fragments: introduction and clinical evaluation; 3) experiments with red, blue and green images; 4) experiments with white images; 5) post-experiment…

  15. Local effects of partly cloudy skies on solar and emitted radiations

    NASA Technical Reports Server (NTRS)

    Whitney, D. A.; Venable, D. D.

    1981-01-01

    Solar radiation measurements are made on a routine basis. Global solar, atmospheric emitted, downwelled diffuse solar, and direct solar radiation measurement systems are fully operational with the first two in continuous operation. Fractional cloud cover measurements are made from GOES imagery or from ground based whole sky photographs. Normalized global solar irradiance values for partly cloudy skies were correlated to fractional cloud cover.

  16. Effect of enzymatic mash treatment and storage on phenolic composition, antioxidant activity, and turbidity of cloudy apple juice.

    PubMed

    Oszmiański, Jan; Wojdylo, Aneta; Kolniak, Joanna

    2009-08-12

    The effects of different commercial enzymatic mash treatments on the yield, turbidity, color, polyphenolic content and sediment procyanidin content of cloudy apple juice were studied. Addition of pectolytic enzymes to the mash treatment had a positive effect on the production of cloudy apple juices, improving polyphenolic contents, especially procyanidins, and juice yields (from 68.3% in control samples to 77% after Pectinex Yield Mash). In summary of the effect of enzymatic mash treatment, polyphenol contents in cloudy apple juices increased significantly after Pectinex Yield Mash, Pectinex Smash XXL, and Pectinex XXL maceration were applied, but no effect was observed after use of Pectinex Ultra SP-L and Panzym XXL, compared to the control samples. The content of polymeric procyanidins represented 50-70% of total polyphenols, but in the present study polymeric procyanidins were significantly lower in juices than in fruits and were also affected by enzymatic treatment (Pectinex AFP L-4 and Panzym Yield Mash) compared to the control samples. The enzymatic treatment decreased the procyanidin content in most sediments, with the exception of Pectinex Smash XXL and Pectinex AFP L-4. Generally, in samples treated with pectinase, the radical scavenging activity of cloudy apple juices was increased compared to the untreated reference samples. The highest radical scavenging activity was associated with the Pectinex Yield Mash, Pectinex Smash XXL, and Pectinex XXL enzymes, and the lowest activity with Pectinex Ultra SP-L and Pectinex AFP L-4. However, cloudy apple juices produced with enzymatic mash treatment showed unstable turbidity and low viscosity. These results must be ascribed to the much greater hydrolysis of pectin, which is responsible for viscosity, by the enzymatic preparations. During 6 months of storage at 4 °C, only small changes in the analyzed parameters of the apple juices were observed.

  17. Methods for Cloud Cover Estimation

    NASA Technical Reports Server (NTRS)

    Glackin, D. L.; Huning, J. R.; Smith, J. H.; Logan, T. L.

    1984-01-01

    Several methods for cloud cover estimation are described, relevant to assessing the performance of a ground-based network of solar observatories. The methods rely on ground and satellite data sources and provide meteorological or climatological information. One means of acquiring long-term observations of solar oscillations is the establishment of a ground-based network of solar observatories. Criteria for station site selection are gross cloudiness, accurate transparency information, and seeing. Alternative methods for computing the network's duty cycle are discussed. The duty cycle, or alternatively a time history of solar visibility from the network, can then be input to a model to determine the effect of the duty cycle on derived solar seismology parameters. Cloudiness from space is studied to examine various means by which the duty cycle might be computed. Cloudiness, and to some extent transparency, can potentially be estimated from satellite data.

  18. ANNarchy: a code generation approach to neural simulations on parallel hardware

    PubMed Central

    Vitay, Julien; Dinkelbach, Helge Ü.; Hamker, Fred H.

    2015-01-01

    Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is often difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which allows users to easily define and simulate rate-coded and spiking networks, as well as combinations of both. The Python interface has been designed to be close to the PyNN interface, while neuron and synapse models can be specified using an equation-oriented mathematical description similar to that of the Brian neural simulator. This information is used to generate C++ code that efficiently performs the simulation on the chosen parallel hardware (multi-core system or graphics processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions. PMID:26283957
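    The code-generation idea, turning an equation-oriented model description into compilable source, can be illustrated with a toy generator. This is a schematic sketch only, not ANNarchy's actual API or templates:

```python
def generate_euler_update(var, tau, input_term, dt):
    """Emit a C-style explicit-Euler update line for the rate-coded ODE
    tau * d(var)/dt = -var + input_term."""
    return f"{var}[i] += ({dt}/{tau}) * (-{var}[i] + {input_term}[i]);"

line = generate_euler_update("r", 10.0, "exc", 0.1)
print(line)  # r[i] += (0.1/10.0) * (-r[i] + exc[i]);
```

    A real generator additionally resolves dependencies between variables, selects among numerical methods, and emits the surrounding loop and memory-management code for the target hardware.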

  19. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation, and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth 5 as the outermost code, a (7, 1/2) convolutional code as an inner code, and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques, and even greater coding gain is provided by a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder; the burst error correction capability is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
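    The innermost (n, n-16) CRC can be illustrated with the CRC-16/CCITT variant (generator polynomial 0x1021, initial value 0xFFFF) commonly associated with the CCSDS recommendation; a self-contained bitwise sketch:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with polynomial x^16 + x^12 + x^5 + 1 (0x1021),
    initial value 0xFFFF, no reflection, no final XOR."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Standard check value for this CRC variant:
print(hex(crc16_ccitt(b"123456789")))  # 0x29b1
```

    The 16 parity bits are appended to the n-16 information bits; the receiver recomputes the CRC over the full frame to detect residual errors that escape the RS and convolutional decoders.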

  20. Aerosol indirect effects -- general circulation model intercomparison and evaluation with satellite data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quaas, Johannes; Ming, Yi; Menon, Surabi

    2009-04-10

    Aerosol indirect effects continue to constitute one of the most important uncertainties for anthropogenic climate perturbations. Within the international AEROCOM initiative, the representation of aerosol-cloud-radiation interactions in ten different general circulation models (GCMs) is evaluated using three satellite datasets. The focus is on stratiform liquid water clouds, since most GCMs do not include ice nucleation effects and none of the models explicitly parameterizes aerosol effects on convective clouds. We compute statistical relationships between aerosol optical depth (Ta) and various cloud and radiation quantities in a manner that is consistent between the models and the satellite data. It is found that the model-simulated influence of aerosols on cloud droplet number concentration (Nd) compares relatively well to the satellite data, at least over the ocean. The relationship between Ta and liquid water path is simulated much too strongly by the models; it is shown that this is partly related to the representation of the second aerosol indirect effect in terms of autoconversion. A positive relationship between total cloud fraction (fcld) and Ta, as found in the satellite data, is simulated by the majority of the models, albeit less strongly than in the satellite data in most of them. In a discussion of the hypotheses proposed in the literature to explain the satellite-derived strong fcld - Ta relationship, our results indicate that none can be identified as the unique explanation. Relationships similar to the ones found in the satellite data between Ta and cloud top temperature or outgoing long-wave radiation (OLR) are simulated by only a few GCMs.
The GCMs that simulate a negative OLR - Ta relationship show a strong positive correlation between Ta and fcld. The short-wave total aerosol radiative forcing as simulated by the GCMs is strongly influenced by the simulated anthropogenic fraction of Ta and by parameterisation assumptions such as a lower bound on Nd. Nevertheless, the strengths of the statistical relationships are good predictors for the aerosol forcings in the models. An estimate of the total short-wave aerosol forcing, inferred from the combination of these predictors for the modelled forcings with the satellite-derived statistical relationships, yields a global annual mean value of -1.5 ± 0.5 Wm-2. An alternative estimate, obtained by scaling the simulated clear- and cloudy-sky forcings with estimates of anthropogenic Ta and satellite-retrieved Nd - Ta regression slopes, respectively, yields a global annual mean clear-sky (aerosol direct effect) estimate of -0.4 ± 0.2 Wm-2 and a cloudy-sky (aerosol indirect effect) estimate of -0.7 ± 0.5 Wm-2, with a total estimate of -1.2 ± 0.4 Wm-2.

  1. Development of an atmospheric infrared radiation model with high clouds for target detection

    NASA Astrophysics Data System (ADS)

    Bellisario, Christophe; Malherbe, Claire; Schweitzer, Caroline; Stein, Karin

    2016-10-01

    In the field of target detection, the simulation of the camera FOV (field of view) background is a significant issue, and the presence of heterogeneous clouds can have a strong impact on a target detection algorithm. To address this issue, we present the construction of the CERAMIC package (Cloudy Environment for RAdiance and MIcrophysics Computation), which combines cloud microphysical computation and 3D radiance computation to produce a 3D atmospheric infrared radiance field in the presence of clouds. The input of CERAMIC starts with an observer with a spatial position and a defined FOV (by means of a zenith angle and an azimuth angle). We introduce a 3D cloud generator, provided by the French LaMP, for a statistical and simplified-physics approach. The cloud generator is driven by atmospheric profiles including a heterogeneity factor for 3D fluctuations. CERAMIC also includes a cloud database from the French CNRM for a physical approach. We present some statistics developed on the spatial and temporal evolution of the clouds. Molecular optical properties are provided by the model MATISSE (Modélisation Avancée de la Terre pour l'Imagerie et la Simulation des Scènes et de leur Environnement). The 3D radiance is computed with the model LUCI (for LUminance de CIrrus). It takes into account 3D microphysics with a resolution of 5 cm-1 over a SWIR bandwidth. In order to keep the computation time low, most of the radiance contributions are calculated with analytical expressions. The multiple scattering phenomena are more difficult to model; here, a discrete ordinate method with a correlated-k approach is used to compute the average radiance, and a 3D fluctuation model (based on a behavioral model) taking microphysics variations into account is added. Finally, the following quantities are calculated: transmission, thermal radiance, single scattering radiance, radiance observed through the cloud, and multiple scattering radiance.
Spatial images are produced, with a dimension of 10 km x 10 km and a resolution of 0.1 km, with each contribution to the radiance separated. We present first results for typical scenarios. A 1D comparison with the MATISSE model, with each calculated radiance separated, is made in order to validate the outputs. The 3D performance of the code is shown by comparing LUCI to the SHDOM model, a reference code which uses the Spherical Harmonic Discrete Ordinate Method for 3D atmospheric radiative transfer. The results obtained by the different codes show strong agreement, and the sources of the small differences are discussed. A significant gain in computation time is observed for LUCI relative to SHDOM. We finally conclude on various scenarios for case analysis.

  2. Impacts of Diffuse Radiation on Light Use Efficiency across Terrestrial Ecosystems Based on Eddy Covariance Observation in China

    PubMed Central

    Huang, Kun; Wang, Shaoqiang; Zhou, Lei; Wang, Huimin; Zhang, Junhui; Yan, Junhua; Zhao, Liang; Wang, Yanfen; Shi, Peili

    2014-01-01

    Ecosystem light use efficiency (LUE) is a key factor in production models for gross primary production (GPP) prediction. Previous studies revealed that ecosystem LUE can be significantly enhanced by an increase in diffuse radiation. Given the large spatial heterogeneity and increasing annual diffuse radiation in China, eddy covariance flux data at 6 sites across different ecosystems from 2003 to 2007 were used to investigate the impacts of diffuse radiation, indicated by the cloudiness index (CI), on ecosystem LUE in grassland and forest ecosystems. Our results showed that the ecosystem LUE at the six sites was significantly correlated with the cloudiness variation (0.24≤R2≤0.85), especially at the Changbaishan temperate forest ecosystem (R2 = 0.85). Meanwhile, CI values between 0.8 and 1.0 appeared more frequently in the two subtropical forest ecosystems (Qianyanzhou and Dinghushan) and were much larger than those in the temperate ecosystems. In addition, cloudiness thresholds favorable for enhancing ecosystem carbon sequestration existed at each of the three forest sites. Our research confirmed that the ecosystem LUE at the six sites in China responded positively to diffuse radiation, and that the cloudiness index could be used as an environmental regulator for LUE modeling in regional GPP prediction. PMID:25393629
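    The kind of analysis behind these correlations can be sketched as a least-squares regression of LUE (here taken simply as GPP divided by incident PAR) against a cloudiness index. The numbers below are synthetic illustrations, not the flux-tower data:

```python
def linregress_r2(x, y):
    """Least-squares slope, intercept, and R^2 for paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    return slope, my - slope * mx, sxy * sxy / (sxx * syy)

# Synthetic example: LUE rises with the cloudiness index (CI)
ci  = [0.1, 0.3, 0.5, 0.7, 0.9]
gpp = [4.0, 5.2, 6.1, 7.0, 7.9]  # gC m-2 d-1 (made-up values)
par = [10.0] * 5                 # MJ m-2 d-1 (made-up values)
lue = [g / p for g, p in zip(gpp, par)]
slope, intercept, r2 = linregress_r2(ci, lue)
print(round(slope, 3), round(r2, 3))
```

    The R^2 of this regression plays the same role as the 0.24-0.85 range quoted above for the six sites.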

  3. Impacts of diffuse radiation on light use efficiency across terrestrial ecosystems based on Eddy covariance observation in China.

    PubMed

    Huang, Kun; Wang, Shaoqiang; Zhou, Lei; Wang, Huimin; Zhang, Junhui; Yan, Junhua; Zhao, Liang; Wang, Yanfen; Shi, Peili

    2014-01-01

    Ecosystem light use efficiency (LUE) is a key factor in production models for gross primary production (GPP) prediction. Previous studies revealed that ecosystem LUE can be significantly enhanced by an increase in diffuse radiation. Given the large spatial heterogeneity and increasing annual diffuse radiation in China, eddy covariance flux data at 6 sites across different ecosystems from 2003 to 2007 were used to investigate the impacts of diffuse radiation, indicated by the cloudiness index (CI), on ecosystem LUE in grassland and forest ecosystems. Our results showed that the ecosystem LUE at the six sites was significantly correlated with the cloudiness variation (0.24 ≤ R(2) ≤ 0.85), especially at the Changbaishan temperate forest ecosystem (R(2) = 0.85). Meanwhile, CI values between 0.8 and 1.0 appeared more frequently in the two subtropical forest ecosystems (Qianyanzhou and Dinghushan) and were much larger than those in the temperate ecosystems. In addition, cloudiness thresholds favorable for enhancing ecosystem carbon sequestration existed at each of the three forest sites. Our research confirmed that the ecosystem LUE at the six sites in China responded positively to diffuse radiation, and that the cloudiness index could be used as an environmental regulator for LUE modeling in regional GPP prediction.

  4. Program Code Generator for Cardiac Electrophysiology Simulation with Automatic PDE Boundary Condition Handling

    PubMed Central

    Punzalan, Florencio Rusty; Kunieda, Yoshitoshi; Amano, Akira

    2015-01-01

    Clinical and experimental studies involving human hearts can have certain limitations. Methods such as computer simulations can be an important alternative or supplemental tool. Physiological simulation at the tissue or organ level typically involves the handling of partial differential equations (PDEs). Boundary conditions and distributed parameters, such as those used in pharmacokinetics simulation, add to the complexity of the PDE solution. These factors can tailor PDE solutions and their corresponding program code to specific problems. Boundary condition and parameter changes in the customized code are usually error-prone and time-consuming. We propose a general approach for handling PDEs and boundary conditions in computational models using a replacement scheme for discretization. This study is an extension of a program generator that we introduced in a previous publication. The program generator can generate code for multi-cell simulations of cardiac electrophysiology. Improvements to the system allow it to handle simultaneous equations in the biological function model as well as implicit PDE numerical schemes. The replacement scheme involves substituting all partial differential terms with numerical solution equations. Once the model and boundary equations are discretized with the numerical solution scheme, instances of the equations are generated to undergo dependency analysis. The result of the dependency analysis is then used to generate the program code. The resulting program code is in the Java or C programming language. To validate the automatic handling of boundary conditions in the program code generator, we generated simulation code using the FHN, Luo-Rudy 1, and Hund-Rudy cell models and ran cell-to-cell coupling and action potential propagation simulations. One of the simulations is based on a published experiment, and the simulation results are compared with the experimental data.
We conclude that the proposed program code generator can be used to generate code for physiological simulations and provides a tool for studying cardiac electrophysiology. PMID:26356082
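    The replacement scheme can be illustrated on the simplest case, a 1D diffusion equation in which the second-derivative term is replaced by its finite-difference stencil and the no-flux boundary condition is handled by substituting the boundary value for the missing neighbor. This is a minimal hand-written sketch, not the generator's actual output:

```python
def step_diffusion(u, d, dt, dx):
    """One explicit-Euler step of du/dt = d * d2u/dx2 with no-flux boundaries.
    The PDE term is replaced by (u[i-1] - 2u[i] + u[i+1]) / dx^2; at each end
    the missing neighbor is replaced by the boundary value itself (first-order
    zero-flux condition)."""
    r = d * dt / dx ** 2
    n = len(u)
    new = u[:]
    for i in range(n):
        left = u[i - 1] if i > 0 else u[0]
        right = u[i + 1] if i < n - 1 else u[n - 1]
        new[i] = u[i] + r * (left - 2 * u[i] + right)
    return new

u = [0.0, 0.0, 1.0, 0.0, 0.0]
for _ in range(100):
    u = step_diffusion(u, d=1.0, dt=0.1, dx=1.0)
# No-flux boundaries conserve the total amount:
print(round(sum(u), 6))  # 1.0
```

    In the generator, the same substitution is performed symbolically on every PDE term and boundary equation before dependency analysis.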

  5. Tristan code and its application

    NASA Astrophysics Data System (ADS)

    Nishikawa, K.-I.

    Since TRISTAN, the 3-D electromagnetic particle code, was introduced in 1990, it has been used for many applications, including simulations of the global solar wind-magnetosphere interaction. The most essential ingredients of this code have been published in the ISSS-4 book. In this abstract we describe some of the issues with, and an application of, this code for the study of the global solar wind-magnetosphere interaction, including a substorm study. The basic code (tristan.f) for the global simulation and a local simulation of reconnection with a Harris model (issrec2.f) are available at http://www.physics.rutger.edu/~kenichi. For beginners, the code (issrec2.f), with its simpler boundary conditions, is a suitable starting point for running simulations. The future of global particle simulations for a geospace general circulation model (GGCM) with predictive capability (for the Space Weather Program) is discussed.
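    The core of an electromagnetic particle code such as TRISTAN is the particle push. The standard Boris scheme splits the electric acceleration into two half kicks around an energy-conserving magnetic rotation; a minimal non-relativistic sketch (unit charge-to-mass ratio assumed, not taken from the TRISTAN source):

```python
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def boris_push(v, e, b, dt, qm=1.0):
    """One Boris step: half electric kick, magnetic rotation, half kick."""
    v_minus = [vi + 0.5 * qm * dt * ei for vi, ei in zip(v, e)]
    t = [0.5 * qm * dt * bi for bi in b]
    t2 = sum(ti * ti for ti in t)
    s = [2.0 * ti / (1.0 + t2) for ti in t]
    v_prime = [vm + c for vm, c in zip(v_minus, cross(v_minus, t))]
    v_plus = [vm + c for vm, c in zip(v_minus, cross(v_prime, s))]
    return [vp + 0.5 * qm * dt * ei for vp, ei in zip(v_plus, e)]

# With E = 0, the Boris rotation conserves kinetic energy exactly:
v = [1.0, 0.0, 0.5]
for _ in range(1000):
    v = boris_push(v, e=[0.0, 0.0, 0.0], b=[0.0, 0.0, 1.0], dt=0.1)
print(round(sum(vi * vi for vi in v), 9))  # 1.25
```

    A full PIC code couples this pusher to charge deposition on the grid and a field solve each time step; the sketch shows only the particle update.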

  6. Climate Simulations from Super-parameterized and Conventional General Circulation Models with a Third-order Turbulence Closure

    NASA Astrophysics Data System (ADS)

    Xu, Kuan-Man; Cheng, Anning

    2014-05-01

    A high-resolution cloud-resolving model (CRM) embedded in a general circulation model (GCM) is an attractive alternative for climate modeling because it replaces all traditional cloud parameterizations and explicitly simulates cloud physical processes in each grid column of the GCM. Such an approach is called the "Multiscale Modeling Framework" (MMF). MMF still needs to parameterize the subgrid-scale (SGS) processes associated with clouds and large turbulent eddies, because circulations associated with the planetary boundary layer (PBL) and in-cloud turbulence are unresolved by CRMs with horizontal grid sizes on the order of a few kilometers. A third-order turbulence closure (IPHOC) has been implemented in the CRM component of the super-parameterized Community Atmosphere Model (SPCAM). IPHOC is used to predict (or diagnose) fractional cloudiness and the variability of temperature and water vapor at scales that are not resolved on the CRM's grid. This model has produced promising results, especially for the low-level cloud climatology and its seasonal and diurnal variations (Cheng and Xu 2011, 2013a, b; Xu and Cheng 2013a, b). Because of the enormous computational cost of SPCAM-IPHOC, about 400 times that of the conventional CAM, we decided to bypass the CRM and implement IPHOC directly in CAM version 5 (CAM5). IPHOC replaces the PBL/stratocumulus, shallow convection, and cloud macrophysics parameterizations in CAM5. Since there are large discrepancies in the spatial and temporal scales between the CRM and CAM5, the IPHOC used in CAM5 has to be modified from that used in SPCAM. In particular, we diagnose all second- and third-order moments except for the fluxes. These prognostic and diagnostic moments are used to select a double-Gaussian probability density function to describe the SGS variability. We also incorporate a diagnostic PBL height parameterization to represent the strong inversion above the PBL.
The goal of this study is to compare the simulated climatology from these three models (CAM5, CAM5-IPHOC and SPCAM-IPHOC), with emphasis on low-level clouds and precipitation. Detailed comparisons of scatter diagrams among the monthly-mean low-level cloudiness, PBL height, surface relative humidity and lower tropospheric stability (LTS) reveal the relative strengths and weaknesses of the three models for five coastal low-cloud regions. Observations from CloudSat and CALIPSO and the ECMWF Interim reanalysis are used as ground truth for the comparisons. We found that the standard CAM5 underestimates cloudiness and produces small cloud fractions at low PBL heights, which contradicts the observations. CAM5-IPHOC tends to overestimate low clouds, but its ranges of LTS and PBL height variations are the most realistic. SPCAM-IPHOC produces the most realistic results, relatively consistent from one region to another. Further comparisons with other atmospheric environmental variables will help reveal the causes of the model deficiencies, so that the SPCAM-IPHOC results can provide guidance to the other two models.
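    The role of the double-Gaussian PDF can be sketched as follows: given a two-Gaussian mixture for a conserved moisture variable, the diagnosed cloud fraction is the probability mass above saturation. The mixture parameters below are illustrative and do not reflect IPHOC's actual selection rules:

```python
import math

def gauss_tail(mu, sigma, threshold):
    """P(X > threshold) for X ~ N(mu, sigma)."""
    return 0.5 * math.erfc((threshold - mu) / (sigma * math.sqrt(2.0)))

def cloud_fraction(w1, mu1, s1, mu2, s2, q_sat):
    """Cloud fraction from a double-Gaussian PDF of total water:
    weighted probability that total water exceeds saturation."""
    return (w1 * gauss_tail(mu1, s1, q_sat)
            + (1.0 - w1) * gauss_tail(mu2, s2, q_sat))

# Symmetric mixture centred on saturation -> cloud fraction 0.5:
print(round(cloud_fraction(0.5, 9.0, 1.0, 11.0, 1.0, 10.0), 6))  # 0.5
```

    In the closure itself, the weights, means, and widths of the two Gaussians are selected from the predicted and diagnosed moments rather than chosen by hand.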

  7. Numerical simulation of the world ocean circulation

    NASA Technical Reports Server (NTRS)

    Takano, K.; Mintz, Y.; Han, Y. J.

    1973-01-01

    A multi-level model, based on the primitive equations, is developed for simulating the temperature and velocity fields produced in the world ocean by differential heating and surface wind stress. The model ocean has constant depth and free slip at the lower boundary, and neglects momentum advection, so that there is no energy exchange between the barotropic and baroclinic components of the motion, although the former influences the latter through temperature advection. The ocean model was designed to be coupled to the UCLA atmospheric general circulation model for the study of the dynamics of climate and climate changes. Here, however, the model is tested by prescribing the observed seasonally varying surface wind stress, the incident solar radiation, the surface air temperature and humidity, cloudiness, and the surface wind speed, which, together with the predicted ocean surface temperature, determine the surface fluxes of radiant energy, sensible heat, and latent heat.

  8. Estimation of sea surface temperature from remote measurements in the 11-13 micron window region

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Conrath, B. J.; Kunde, V. G.

    1972-01-01

    The Nimbus-4 IRIS data were examined in the spectral region 775 to 1250/cm (8-13 microns) for information useful in determining the sea surface temperature. The high spectral resolution IRIS data were degraded to low resolution by averaging, to simulate a multi-channel radiometer in the window region. These simulated data show that within the region 775-975/cm (12.9-10.25 microns) the brightness temperatures are linearly related to the absorption parameters. Such a linear relationship is observed over cloudy as well as clear regions and over a wide range of latitudes. From this linear relationship it is feasible to correct for the atmospheric attenuation and obtain the sea surface temperature, accurate to within 1 K, in a cloud-free field of view. Information about the cloud cover is taken from the TV pictures and BUV albedo measurements on board the Nimbus-4 satellite.
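    The linear relationship between brightness temperature and absorption underlies what later became known as the split-window technique, in which the difference between two window channels corrects for water-vapor attenuation. A schematic example with hypothetical coefficients (a real retrieval fits a and b to data):

```python
def split_window_sst(t11, t12, a=2.0, b=0.5):
    """Schematic split-window correction: SST ~= T11 + a*(T11 - T12) + b.
    Coefficients a and b are illustrative placeholders, not fitted values."""
    return t11 + a * (t11 - t12) + b

# Brightness temperatures in kelvin; a moister atmosphere widens T11 - T12
# and so receives a larger correction:
print(split_window_sst(290.0, 288.0))  # 294.5
```

    The two channels here stand in for the low-resolution averages formed from the IRIS spectra in the 775-975/cm window.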

  9. Estimation of Asian Dust Aerosol Effect on Cloud Radiation Forcing Using Fu-Liou Radiative Model and CERES Measurements

    NASA Technical Reports Server (NTRS)

    Su, Jing; Huang, Jianping; Fu, Qiang; Minnis, Patrick; Ge, Jinming; Bi, Jianrong

    2008-01-01

    The impact of Asian dust on cloud radiative forcing during 2003-2006 is studied by using Clouds and the Earth's Radiant Energy System (CERES) data and the Fu-Liou radiative transfer model. Analysis of the satellite data shows that the dust aerosol significantly reduced the cloud cooling effect at the top of the atmosphere (TOA). In dust-contaminated cloudy regions, the 4-year mean values of the instantaneous shortwave, longwave and net cloud radiative forcing are -138.9, 69.1, and -69.7 Wm(sup -2), which are 57.0, 74.2, and 46.3%, respectively, of the corresponding values in more pristine cloudy regions. The satellite-retrieved cloud properties are significantly different in the dusty regions and can influence the radiative forcing indirectly. The contributions to the cloud radiative forcing by the dust direct, indirect and semi-direct effects are estimated using combined satellite observations and Fu-Liou model simulations. The 4-year mean of the combined indirect and semi-direct shortwave radiative forcing (SWRF) is 82.2 Wm(sup -2), which is 78.4% of the total dust effect. The direct effect is only 22.7 Wm(sup -2), which is 21.6% of the total effect. Because both the first and second indirect effects enhance cloud cooling, the aerosol-induced cloud warming is mainly the result of the semi-direct effect of dust.
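    The quoted percentages are simple ratios of the forcing components, and the partitioning of the total dust shortwave effect can be checked directly from the numbers in the abstract:

```python
def percent(part, whole):
    return 100.0 * part / whole

# Partitioning of the total dust shortwave effect (values from the abstract):
indirect_semi = 82.2  # Wm-2, combined indirect + semi-direct SWRF
direct = 22.7         # Wm-2, direct effect
total = indirect_semi + direct
print(round(percent(indirect_semi, total), 1))  # 78.4
print(round(percent(direct, total), 1))         # 21.6
```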

  10. Auto Code Generation for Simulink-Based Attitude Determination Control System

    NASA Technical Reports Server (NTRS)

    MolinaFraticelli, Jose Carlos

    2012-01-01

    This paper details the work done to automatically generate C code from a Simulink-based Attitude Determination Control System (ADCS) for use on target platforms. NASA Marshall engineers have developed an ADCS Simulink simulation to be used as a component of the flight software of a satellite. The generated code can be used for carrying out hardware-in-the-loop testing of components for a satellite in a convenient manner, with easily tunable parameters. Due to the nature of embedded hardware components such as microcontrollers, this simulation code cannot be used directly on the target platform as-is and must first be converted into C code; this process is known as auto code generation. In order to generate C code from this simulation, it must be modified to follow specific standards set in place by the auto code generation process. Some of these modifications include changing certain simulation models into their atomic representations, which can bring new complications into the simulation: the execution order of these models can change based on the modifications, so great care must be taken to maintain a working simulation that can also be used for auto code generation. After modifying the ADCS simulation for the auto code generation process, it is shown that the difference between the output data of the former and that of the latter is within acceptable bounds. Thus the process is a success, since all the output requirements are met. Based on these results, it can be argued that the generated C code can be used effectively by any desired platform as long as it follows the specific memory requirements established in the Simulink model.

  11. Exoplanet modelling with the Met Office Unified Model

    NASA Astrophysics Data System (ADS)

    Boutle, Ian; Lines, Stefan; Mayne, Nathan; Lee, Graham; Helling, Christiane; Drummond, Ben; Manners, James; Goyal, Jayesh; Lambert, Hugo; Acreman, David; Earnshaw, Paul; Amundsen, David; Baraffe, Isabelle

    2017-04-01

    This talk will present an overview of work being done to adapt the Unified Model, one of the most sophisticated weather and climate models of this planet, into a flexible planet simulator for use in the study of any exoplanet. We will focus on two current projects. Clouds in hot Jupiter atmospheres: recent HST observations have revealed a continuum in atmospheric composition from cloudy to clear skies. The presence of clouds is inferred from a grey opacity in the near-IR that mutes key absorption features in the transmission spectra. Unlike the L-T brown dwarf sequence, this transition does not correlate well with equilibrium temperature, suggesting that a cloud formation scheme more comprehensive than one simply considering the condensation temperature needed for homogeneous cloud growth is required. In our work, we conduct 3D simulations of cloud nucleation, growth, advection, evaporation and gravitational settling in the atmospheres of HD209458b and HD189733b using the kinetic and mixed-grain cloud formation code DIHRT, coupled to the Unified Model. We explore cloud composition, vertical structure and particle sizes, as well as highlighting the importance of the strong atmospheric dynamics seen in tidally locked hot Jupiters for the evolution and distribution of the cloud. Climate of Proxima B: we present results of simulations of the climate of the newly discovered planet Proxima Centauri B, examining the responses of both an 'Earth-like' atmosphere and a simplified nitrogen and trace carbon dioxide atmosphere to the radiation likely received. Overall, our results are in agreement with previous studies in suggesting that Proxima Centauri B may well have surface temperatures conducive to the presence of liquid water. Moreover, we have expanded the parameter regime over which the planet may support liquid water to higher values of eccentricity and lower incident fluxes, guided by observational constraints.
This increased parameter space arises because of the low sensitivity of the planet to changes in stellar flux, a consequence of the stellar spectrum and orbital configuration. Finally, we have produced high resolution planetary emission and reflectance spectra, and highlight signatures of gases vital to the evolution of life on Earth (oxygen, ozone and carbon dioxide).

  12. Assessment and Application of the ROSE Code for Reactor Outage Thermal-Hydraulic and Safety Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, Thomas K.S.; Ko, F.-K.; Dai, L.-C

    The currently available tools, such as RELAP5, RETRAN, and others, cannot easily and correctly perform the task of analyzing the system behavior during plant outages. Therefore, a medium-sized program aimed at reactor outage simulation and evaluation, such as midloop operation (MLO) with loss of residual heat removal (RHR), has been developed. Important thermal-hydraulic processes involved during MLO with loss of RHR can be properly simulated by the newly developed reactor outage simulation and evaluation (ROSE) code. The two-region approach with a modified two-fluid model has been adopted as the theoretical basis of the ROSE code. To verify the analytical model, in a first step, posttest calculations against integral midloop experiments with loss of RHR have been performed. The excellent simulation capability of the ROSE code against the Institute of Nuclear Energy Research Integral System Test Facility test data is demonstrated. To further mature the ROSE code in simulating a full-sized pressurized water reactor, assessment against the WGOTHIC code and the Maanshan momentary-loss-of-RHR event has been undertaken. The successfully assessed ROSE code is then applied to evaluate the abnormal operation procedure (AOP) with loss of RHR during MLO (AOP 537.4) for the Maanshan plant. The ROSE code has also been successfully transplanted into the Maanshan training simulator to support operator training. How the simulator was upgraded by the ROSE code for MLO will be presented in the future.

  13. Comparison of DAC and MONACO DSMC Codes with Flat Plate Simulation

    NASA Technical Reports Server (NTRS)

    Padilla, Jose F.

    2010-01-01

    Various implementations of the direct simulation Monte Carlo (DSMC) method exist in academia, government and industry. By comparing implementations, deficiencies and merits of each can be discovered. This document reports comparisons between DSMC Analysis Code (DAC) and MONACO. DAC is NASA's standard DSMC production code and MONACO is a research DSMC code developed in academia. These codes have various differences; in particular, they employ distinct computational grid definitions. In this study, DAC and MONACO are compared by having each simulate a blunted flat plate wind tunnel test, using an identical volume mesh. Simulation expense and DSMC metrics are compared. In addition, flow results are compared with available laboratory data. Overall, this study revealed that both codes, excluding grid adaptation, performed similarly. For parallel processing, DAC was generally more efficient. As expected, code accuracy was mainly dependent on physical models employed.

  14. Cloudy with a Chance of Sarcasm or Sunny with High Expectations: Using Best Practice Language to Strengthen Positive Behavior Intervention and Support Efforts

    ERIC Educational Resources Information Center

    Holloman, Hal; Yates, Peggy H.

    2013-01-01

    What's the forecast in your classroom? Are you forecasting cloudy with a chance of sarcasm or sunny with high expectations? A teacher's Language of Practice holds the key to creating a climate of mutual respect in our schools. This article will explore the power and promise of "teacher language," and how it can be used to…

  15. Diurnal variability of regional cloud and clear-sky radiative parameters derived from GOES data. I - Analysis method. II - November 1978 cloud distributions. III - November 1978 radiative parameters

    NASA Technical Reports Server (NTRS)

    Minnis, P.; Harrison, E. F.

    1984-01-01

    Cloud cover is one of the most important variables affecting the earth radiation budget (ERB) and, ultimately, the global climate. The present investigation is concerned with several aspects of the effects of extended cloudiness, taking into account hourly visible and infrared data from the Geostationary Operational Environmental Satellite (GOES). A methodology called the hybrid bispectral threshold method is developed to extract from GOES data regional cloud amounts at three levels in the atmosphere, effective cloud-top temperatures, clear-sky temperatures, and cloud and clear-sky visible reflectance characteristics. The diurnal variations in low, middle, high, and total cloudiness determined with this methodology are examined for November 1978. The bulk, broadband radiative properties of the resulting cloud and clear-sky data are estimated to determine the possible effect of the diurnal variability of regional cloudiness on the interpretation of ERB measurements.
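    A bispectral threshold classification can be sketched as follows: a pixel markedly colder than the clear-sky temperature, or markedly brighter in the visible, is flagged cloudy, and its infrared temperature then assigns it to a level. The thresholds and values below are illustrative, not those of the paper:

```python
def classify_pixel(t_ir, vis_refl, t_clear, refl_clear,
                   dt_cloud=6.0, drefl_cloud=0.15,
                   t_low=280.0, t_mid=250.0):
    """Toy bispectral (IR + visible) classification into clear/low/middle/high.
    All thresholds are hypothetical placeholders."""
    cloudy = (t_clear - t_ir > dt_cloud) or (vis_refl - refl_clear > drefl_cloud)
    if not cloudy:
        return "clear"
    if t_ir > t_low:       # warm cloud top -> low cloud
        return "low"
    if t_ir > t_mid:       # intermediate cloud top -> middle cloud
        return "middle"
    return "high"          # cold cloud top -> high cloud

print(classify_pixel(292.0, 0.12, 295.0, 0.10))  # clear
print(classify_pixel(284.0, 0.40, 295.0, 0.10))  # low
print(classify_pixel(235.0, 0.55, 295.0, 0.10))  # high
```

    The "hybrid" aspect of the actual method lies in combining such visible and infrared tests with retrieved clear-sky fields rather than fixed constants.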

  16. Giardia cyst destruction: effectiveness of six small-quantity water disinfection methods.

    PubMed

    Jarroll, E L; Bingham, A K; Meyer, E A

    1980-01-01

    None of the available chemical methods for disinfecting drinking water has ever been tested for its ability to destroy Giardia cysts. We tested the ability of six such methods to act against Giardia, using excystation as the criterion of viability. Two water qualities (cloudy and clear) and two temperatures (3 and 20 degrees C) were tested. At 20 degrees C, using cloudy and clear water, all of the methods proved completely effective. However, at 3 degrees C, in cloudy water one method ("saturated" iodine) was less than completely effective, and in clear water four methods (bleach, Globaline, tincture of iodine and "saturated" iodine) failed to destroy all of the cysts. The failure of these methods appears to be related to either an insufficient halogen residual or contact time. This study underlines the importance of considering water temperature when employing halogen disinfection methods.

  17. Invar alloys: information from the study of iron meteorites.

    NASA Astrophysics Data System (ADS)

    Goldstein, J. I.; Williams, D. B.; Zhang, J.; Clarke, R.

    The iron meteorites were slowly cooled (<10^8 years) in their asteroidal parent bodies and are useful as indicators of the phase transformations which occur in Fe-Ni alloys. In the invar composition range, the iron meteorites contain a cloudy zone structure composed of an ordered tetrataenite phase and a surrounding honeycomb phase of either gamma or alpha phase. This structure is the result of a spinodal reaction below 350°C. The Santa Catharina iron meteorite has the typical invar composition of 36 wt% Ni and its structure is entirely cloudy zone, although some of the honeycomb phase has been oxidized by terrestrial corrosion. Invar alloys would contain such a cloudy zone structure if more time were available for cooling. A higher-temperature spinodal in the Fe-Ni phase diagram may be operative in invar alloys but has not been observed in the structure of the iron meteorites.

  18. Effect of Radiative Cooling on Cloud-SST Relationship within the Tropical Pacific Region

    NASA Technical Reports Server (NTRS)

    Sui, Chung-Hsiung; Ho, Chang-Hoi; Chou, Ming-Dah; Lau, Ka-Ming; Li, Xiao-Fan; Einaudi, Franco (Technical Monitor)

    2000-01-01

    A recent analysis found a negative correlation between the area-mean cloud amount and the corresponding mean Sea Surface Temperature (SST) within the cloudy areas. The SST-cloud relation becomes more evident when the SST contrast between the warm pool and the surrounding cold pool (DSST) in the tropical Pacific is stronger than normal. The above feature is related to the finding that the strength of subsidence over the cold pool is limited by radiative cooling because of its small variability. As a result, the area of radiatively-driven subsidence must expand in response to enhanced low-boundary forcing due to SST warming or enhanced basin-scale DSST. This leads to more cloud-free regions and fewer cloudy regions. The increased ratio of cloud-free areas to cloudy areas leads to more high-SST areas (>29.5°C) due to enhanced solar radiation.

  19. North American west coast summer low cloudiness: Broadscale variability associated with sea surface temperature

    NASA Astrophysics Data System (ADS)

    Schwartz, Rachel E.; Gershunov, Alexander; Iacobellis, Sam F.; Cayan, Daniel R.

    2014-05-01

    Six decades of observations at 20 coastal airports, from Alaska to southern California, reveal coherent interannual to interdecadal variation of coastal low cloudiness (CLC) from summer to summer over this broad region. The leading mode of CLC variability represents coherent variation, accounting for nearly 40% of the total CLC variance spanning 1950-2012. This leading mode and the majority of individual airports exhibit decreased low cloudiness from the earlier to the later part of the record. Exploring climatic controls on CLC, we identify North Pacific Sea Surface Temperature anomalies, largely in the form of the Pacific Decadal Oscillation (PDO), as correlated with, and evidently helping to organize, the coherent patterns of summer coastal cloud variability. Links from the PDO to summer CLC appear a few months in advance of the summer. These associations hold up consistently at interannual and interdecadal frequencies.

  20. Cloudy Earth

    NASA Image and Video Library

    2015-05-08

    Decades of satellite observations and astronaut photographs show that clouds dominate space-based views of Earth. One study based on nearly a decade of satellite data estimated that about 67 percent of Earth’s surface is typically covered by clouds. This is especially the case over the oceans, where other research shows less than 10 percent of the sky is completely clear of clouds at any one time. Over land, 30 percent of skies are completely cloud free. Earth’s cloudy nature is unmistakable in this global cloud fraction map, based on data collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite. While MODIS collects enough data to make a new global map of cloudiness every day, this version of the map shows an average of all of the satellite’s cloud observations between July 2002 and April 2015. Colors range from dark blue (no clouds) to light blue (some clouds) to white (frequent clouds).

  1. The Impact of Amazonian Deforestation on Dry-Season Rainfall

    NASA Technical Reports Server (NTRS)

    Negri, Andrew J.; Adler, Robert F.; Xu, Li-Ming; Surratt, Jason; Starr, David OC. (Technical Monitor)

    2002-01-01

    Many modeling studies have concluded that widespread deforestation of Amazonia would lead to decreased rainfall. We analyze geosynchronous infrared satellite data with respect to percent cloudiness, and analyze rain estimates from microwave sensors aboard the Tropical Rainfall Measuring Mission satellite. We conclude that in the dry season, when the effects of the surface are not overwhelmed by synoptic-scale weather disturbances, deep convective cloudiness and rainfall occurrence both increase over the deforested and non-forested (savanna) regions. This is in response to a local circulation initiated by the differential heating of the region's varying forestation. Analysis of the diurnal cycle of cloudiness reveals a shift toward afternoon hours in the deforested and savanna regions, compared to the forested regions. Analysis of 14 years of Special Sensor Microwave/Imager data revealed that only in August did rainfall amounts increase over the deforested region.

  2. Harmful and favourable ultraviolet conditions for human health over Northern Eurasia

    NASA Astrophysics Data System (ADS)

    Chubarova, Nataly; Zhdanova, Ekaterina

    2014-05-01

    We provide an analysis of the spatial and temporal distribution of ultraviolet (UV) radiation over Northern Eurasia, taking into account both its detrimental (erythema and eye-damage effects) and favourable (vitamin D synthesis) influence on human health. The UV effects on six different skin types are considered in order to cover the variety of skin types of European and Asian inhabitants. To better quantify the vitamin D irradiance threshold, we accounted for the open body fraction S as a function of effective air temperature. The spatial and temporal distribution of UV resources was estimated by radiative transfer (RT) modeling (8-stream DISORT RT code) on a 1×1 degree grid with monthly resolution. For this purpose, special datasets of the main input geophysical parameters (total ozone content, aerosol characteristics, surface UV albedo, UV cloud modification factor) have been created over the territory of Northern Eurasia, which can be of separate interest for multidisciplinary scientific applications over the PEEX domain. New approaches were used to retrieve aerosol and cloud transmittance from different satellite and re-analysis datasets for calculating the solar UV irradiance at ground level. Using model simulations and some experimental data, we provide an altitude parameterization for different types of biologically active irradiance in mountainous areas, taking into account not only the effects of molecular scattering but also the altitude dependence of aerosol parameters and surface albedo. Based on the new classification of UV resources (Chubarova, Zhdanova, 2013), we show that the distribution of harmful (UV deficiency and UV excess) and favourable UV conditions is regulated by various geophysical parameters (mainly total ozone, cloudiness, and open body fraction) and can significantly deviate from a latitudinal dependence. 
An interactive tool providing simulations of biologically active irradiance and its attribution to the different classes of UV resources is demonstrated. Reference: Chubarova, N., Zhdanova, Y. Ultraviolet resources over Northern Eurasia. Journal of Photochemistry and Photobiology B: Biology, 127, 2013, pp. 38-51.

  3. The galactic luminous supersoft X-ray source RXJ0925.7-4758 / MR Vel

    NASA Astrophysics Data System (ADS)

    Prodhani, Nandita; Baruah, Monmoyuri

    2018-02-01

    A steady-state model has been considered to explain the observed properties of the LSSS RXJ0925.7-4758 / MR Vel. The steady-state models consist of a C-O core surrounded by a hydrogen-rich envelope of solar abundances. At the bottom of the envelope, hydrogen is burned at the same rate as the star accretes it. Using the most recent proton-capture reaction rates and β-decay rates, the cyclic reactions have been studied. In the present work, an effort has been made to explain the observed characteristics of the source RXJ0925.7-4758 / MR Vel with the above-mentioned model. The calculated values of the luminosity (8.56 × 10^{37} erg s^{-1}) and effective temperature (94.19 eV) tally well with the observed ones. The photoionisation code CLOUDY has been used to explain the observed absorption edges in the spectrum of RXJ0925.7-4758 / MR Vel.

  4. SPITZER IRAC COLOR DIAGNOSTICS FOR EXTENDED EMISSION IN STAR-FORMING REGIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ybarra, Jason E.; Tapia, Mauricio; Román-Zúñiga, Carlos G.

    2014-10-20

    The infrared data from the Spitzer Space Telescope are an invaluable tool for identifying physical processes in star formation. In this study, we calculate the Infrared Array Camera (IRAC) color space of UV fluorescent H{sub 2} and polycyclic aromatic hydrocarbon (PAH) emission in photodissociation regions (PDRs) using the Cloudy code with PAH opacities from Draine and Li. We create a set of color diagnostics that can be applied to study the structure of PDRs and to distinguish between FUV-excited and shock-excited H{sub 2} emission. To test this method, we apply these diagnostics to Spitzer IRAC data of NGC 2316. Our analysis of the structure of the PDR is consistent with previous studies of the region. In addition to UV-excited emission, we identify shocked gas that may be part of an outflow originating from the cluster.

  5. Scattering in infrared radiative transfer: A comparison between the spectrally averaging model JURASSIC and the line-by-line model KOPRA

    NASA Astrophysics Data System (ADS)

    Griessbach, Sabine; Hoffmann, Lars; Höpfner, Michael; Riese, Martin; Spang, Reinhold

    2013-09-01

    The viability of a spectrally averaging model to perform radiative transfer calculations in the infrared including scattering by atmospheric particles is examined for the application of infrared limb remote sensing measurements. Here we focus on the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) aboard the European Space Agency's Envisat. Various spectra for clear air and cloudy conditions were simulated with a spectrally averaging radiative transfer model and a line-by-line radiative transfer model for three atmospheric window regions (825-830, 946-951, 1224-1228 cm-1) and compared to each other. The results are rated in terms of the MIPAS noise equivalent spectral radiance (NESR). The clear air simulations generally agree within one NESR. The cloud simulations neglecting the scattering source term agree within two NESR. The differences between the cloud simulations including the scattering source term are generally below three and always below four NESR. We conclude that the spectrally averaging approach is well suited for fast and accurate infrared radiative transfer simulations including scattering by clouds. We found that the main source for the differences between the cloud simulations of both models is the cloud edge sampling. Furthermore we reasoned that this model comparison for clouds is also valid for atmospheric aerosol in general.

  6. The Particle Accelerator Simulation Code PyORBIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorlov, Timofey V; Holmes, Jeffrey A; Cousineau, Sarah M

    2015-01-01

    The particle accelerator simulation code PyORBIT is presented. The structure, implementation, history, parallel and simulation capabilities, and future development of the code are discussed. The PyORBIT code is a new implementation and extension of algorithms of the original ORBIT code that was developed for the Spallation Neutron Source accelerator at the Oak Ridge National Laboratory. The PyORBIT code has a two-level structure. The upper level uses the Python programming language to control the flow of intensive calculations performed by the lower-level code implemented in the C++ language. The parallel capabilities are based on MPI communications. PyORBIT is an open-source code accessible to the public through the Google Open Source Projects Hosting service.
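
    The two-level structure described above can be sketched in plain Python. This is an illustrative pattern, not PyORBIT's actual API: the class and method names below are hypothetical, and in PyORBIT the per-particle arithmetic inside `track()` would live in compiled C++ for speed, with the Python layer only sequencing the work.

```python
# Illustrative sketch of an upper-level Python driver sequencing "nodes"
# over a particle bunch (names are hypothetical, not PyORBIT's API).

class Bunch:
    """Minimal particle container: (x, xp) coordinate pairs."""
    def __init__(self, particles):
        self.particles = [list(p) for p in particles]

class DriftNode:
    """Propagate x by xp * length, as in a field-free drift."""
    def __init__(self, length):
        self.length = length
    def track(self, bunch):
        for p in bunch.particles:
            p[0] += p[1] * self.length  # this loop is what C++ would do

class Lattice:
    """Upper-level driver: apply each node to the bunch in sequence."""
    def __init__(self, nodes):
        self.nodes = nodes
    def track(self, bunch):
        for node in self.nodes:
            node.track(bunch)

bunch = Bunch([(0.0, 1.0e-3), (1.0e-3, -0.5e-3)])
Lattice([DriftNode(2.0), DriftNode(3.0)]).track(bunch)
print(bunch.particles)  # each x advanced by xp * 5.0 total drift length
```

    The design point is that the driver loop stays in Python, so users can rearrange lattices interactively, while only the inner per-particle loop needs a fast compiled implementation.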

  7. Combustor Simulation

    NASA Technical Reports Server (NTRS)

    Norris, Andrew

    2003-01-01

    The goal was to perform a 3D simulation of the GE90 combustor as part of a full turbofan engine simulation. The requirements of high fidelity and fast turn-around time call for a massively parallel code. The National Combustion Code (NCC) was chosen for this task, as it supports up to 999 processors and includes state-of-the-art combustion models. Also required is the ability to take inlet conditions from the compressor code and provide exit conditions to the turbine code.

  8. Constraining UV Continuum Slopes of Active Galactic Nuclei with CLOUDY Models of Broad-line Region Extreme-ultraviolet Emission Lines

    NASA Astrophysics Data System (ADS)

    Moloney, Joshua; Shull, J. Michael

    2014-10-01

    Understanding the composition and structure of the broad-line region (BLR) of active galactic nuclei (AGNs) is important for answering many outstanding questions in supermassive black hole evolution, galaxy evolution, and ionization of the intergalactic medium. We used single-epoch UV spectra from the Cosmic Origins Spectrograph (COS) on the Hubble Space Telescope to measure EUV emission-line fluxes from four individual AGNs with 0.49 <= z <= 0.64, two AGNs with 0.32 <= z <= 0.40, and a composite of 159 AGNs. With the CLOUDY photoionization code, we calculated emission-line fluxes from BLR clouds with a range of density, hydrogen ionizing flux, and incident continuum spectral indices. The photoionization grids were fit to the observations using single-component and locally optimally emitting cloud (LOC) models. The LOC models provide good fits to the measured fluxes, while the single-component models do not. The UV spectral indices preferred by our LOC models are consistent with those measured from COS spectra. EUV emission lines such as N IV λ765, O II λ833, and O III λ834 originate primarily from gas with electron temperatures between 37,000 K and 55,000 K. This gas is found in BLR clouds with high hydrogen densities (n_H >= 10^12 cm^-3) and hydrogen ionizing photon fluxes (Φ_H >= 10^22 cm^-2 s^-1). Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS5-26555.
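
    A photoionization grid of the kind used above is typically built by looping over log hydrogen density and log ionizing photon flux and writing one input deck per grid point. The sketch below generates such decks; the command spellings (`hden`, `phi(h)`, `table agn`) follow common CLOUDY usage, but the grid ranges and the stopping column density are illustrative choices, not the parameters of this study, and should be checked against the current Hazy documentation before use.

```python
# Sketch of building a CLOUDY photoionization grid over (n_H, Phi_H).
# Ranges and stopping criterion are illustrative assumptions.

def make_deck(log_hden, log_phi):
    """Return a minimal CLOUDY input deck for one (n_H, Phi_H) grid point."""
    return "\n".join([
        "table agn",               # built-in AGN ionizing continuum shape
        f"hden {log_hden}",        # log10 hydrogen density [cm^-3]
        f"phi(h) {log_phi}",       # log10 H-ionizing photon flux [cm^-2 s^-1]
        "stop column density 23",  # truncate the cloud at N_H = 10^23 cm^-2
        "iterate to convergence",
    ])

decks = {(nh, phi): make_deck(nh, phi)
         for nh in range(8, 15)      # n_H from 10^8 to 10^14 cm^-3
         for phi in range(17, 25)}   # Phi_H from 10^17 to 10^24 cm^-2 s^-1
print(len(decks))  # 56 grid points
```

    Each deck would be written to its own file and run through CLOUDY; the predicted line fluxes are then interpolated or integrated (as in the LOC approach) and compared with the observed fluxes.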

  9. Impact of spatial resolution on cirrus infrared satellite retrievals in the presence of cloud heterogeneity

    NASA Astrophysics Data System (ADS)

    Fauchez, T.; Platnick, S. E.; Meyer, K.; Zhang, Z.; Cornet, C.; Szczap, F.; Dubuisson, P.

    2015-12-01

    Cirrus clouds are an important part of the Earth radiation budget, but an accurate assessment of their role remains highly uncertain. Cirrus optical properties such as Cloud Optical Thickness (COT) and ice crystal effective particle size are often retrieved with a combination of Visible/Near-InfraRed (VNIR) and ShortWave-InfraRed (SWIR) reflectance channels. Alternatively, Thermal InfraRed (TIR) techniques, such as the Split Window Technique (SWT), have demonstrated better accuracy for effective radius retrievals in thin cirrus with small effective radii. However, current global operational algorithms for both retrieval methods assume that cloudy pixels are horizontally homogeneous (the Plane Parallel Approximation, PPA) and independent (the Independent Pixel Approximation, IPA). The impact of these approximations on ice cloud retrievals needs to be understood and, as far as possible, corrected. Horizontal heterogeneity effects in the TIR spectrum are dominated by the PPA bias, which depends primarily on the subpixel heterogeneity of COT; for solar reflectance channels, the IPA can additionally lead to significant retrieval errors due to photon horizontal transport between cloudy columns, as well as brightening and shadowing effects that are more difficult to quantify. Because TIR techniques are the more accurate option for thin cirrus, the TIR range is particularly relevant for characterizing these clouds as accurately as possible. Heterogeneity effects in the TIR are therefore evaluated as a function of spatial resolution in order to estimate the optimal spatial resolution for TIR retrieval applications. 
These investigations are performed using a cirrus 3D cloud generator (3DCloud), a 3D radiative transfer code (3DMCPOL), and two retrieval algorithms, namely the operational MODIS retrieval algorithm (MOD06) and a research-level SWT algorithm.

  10. An approach for coupled-code multiphysics core simulations from a common input

    DOE PAGES

    Schmidt, Rodney; Belcourt, Kenneth; Hooper, Russell; ...

    2014-12-10

    This study describes an approach for coupled-code multiphysics reactor core simulations that is being developed by the Virtual Environment for Reactor Applications (VERA) project in the Consortium for Advanced Simulation of Light-Water Reactors (CASL). In this approach a user creates a single problem description, called the “VERAIn” common input file, to define and setup the desired coupled-code reactor core simulation. A preprocessing step accepts the VERAIn file and generates a set of fully consistent input files for the different physics codes being coupled. The problem is then solved using a single-executable coupled-code simulation tool applicable to the problem, which is built using VERA infrastructure software tools and the set of physics codes required for the problem of interest. The approach is demonstrated by performing an eigenvalue and power distribution calculation of a typical three-dimensional 17 × 17 assembly with thermal–hydraulic and fuel temperature feedback. All neutronics aspects of the problem (cross-section calculation, neutron transport, power release) are solved using the Insilico code suite and are fully coupled to a thermal–hydraulic analysis calculated by the Cobra-TF (CTF) code. The single-executable coupled-code (Insilico-CTF) simulation tool is created using several VERA tools, including LIME (Lightweight Integrating Multiphysics Environment for coupling codes), DTK (Data Transfer Kit), Trilinos, and TriBITS. Parallel calculations are performed on the Titan supercomputer at Oak Ridge National Laboratory using 1156 cores, and a synopsis of the solution results and code performance is presented. Finally, ongoing development of this approach is also briefly described.
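
    The common-input idea above can be sketched as a small preprocessing function: one problem description is expanded into per-code inputs, and consistency (e.g. identical geometry for neutronics and thermal-hydraulics) holds by construction. The keys and values below are illustrative, not the actual VERAIn schema.

```python
# Sketch of a "common input" preprocessing step in the spirit of VERAIn.
# All field names are hypothetical, chosen only to illustrate the pattern.

common = {
    "assembly": "17x17",      # one geometry, stated once
    "power_MW": 17.7,
    "inlet_temp_K": 565.0,
    "boron_ppm": 1300.0,
}

def preprocess(vera_in):
    """Derive one consistent input set per physics code from the common input."""
    neutronics = {                        # e.g. for the Insilico suite
        "geometry": vera_in["assembly"],
        "boron_ppm": vera_in["boron_ppm"],
    }
    thermal_hydraulics = {                # e.g. for Cobra-TF (CTF)
        "geometry": vera_in["assembly"],  # same geometry: consistency by construction
        "inlet_temp_K": vera_in["inlet_temp_K"],
        "power_MW": vera_in["power_MW"],
    }
    return {"insilico": neutronics, "ctf": thermal_hydraulics}

inputs = preprocess(common)
assert inputs["insilico"]["geometry"] == inputs["ctf"]["geometry"]
```

    Because each physics code's input is derived from the single source rather than written by hand, a change to the problem description cannot leave the coupled codes describing different problems.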

  11. Investigation on the Capability of a Non Linear CFD Code to Simulate Wave Propagation

    DTIC Science & Technology

    2003-02-01

    de la Calzada, Pedro; Quintana, Pablo; Burgos, Manuel Antonio (ITP, S.A.)

    Simulation of unsteady aerodynamics with linear and nonlinear CFD codes is an ongoing activity within the turbomachinery industry.

  12. Software quality and process improvement in scientific simulation codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ambrosiano, J.; Webster, R.

    1997-11-01

    This report contains viewgraphs on the quest to develop better simulation code quality through process modeling and improvement. This study is based on the experience of the authors and interviews with ten subjects chosen from simulation code development teams at LANL. This study is descriptive rather than scientific.

  13. Reconciling Simulated and Observed Views of Clouds: MODIS, ISCCP, and the Limits of Instrument Simulators in Climate Models

    NASA Technical Reports Server (NTRS)

    Pincus, Robert; Platnick, Steven E.; Ackerman, Steve; Hemler, Richard; Hofmann, Patrick

    2011-01-01

    The properties of clouds that may be observed by satellite instruments, such as optical depth and cloud top pressure, are only loosely related to the way clouds are represented in models of the atmosphere. One way to bridge this gap is through "instrument simulators," diagnostic tools that map the model representation to synthetic observations so that differences between simulator output and observations can be interpreted unambiguously as model error. But simulators may themselves be restricted by limited information available from the host model or by internal assumptions. This work examines the extent to which instrument simulators are able to capture essential differences between MODIS and ISCCP, two similar but independent estimates of cloud properties. We show that the stark differences between MODIS and ISCCP observations of total cloudiness and the distribution of cloud optical thickness can be traced to different approaches to marginal pixels, which MODIS excludes and ISCCP treats as homogeneous. These pixels, which likely contain broken clouds, cover about 15% of the planet and contain almost all of the optically thinnest clouds observed by either instrument. Instrument simulators cannot reproduce these differences because the host model does not consider unresolved spatial scales and so cannot produce broken pixels. Nonetheless, MODIS and ISCCP observations are consistent for all but the optically thinnest clouds, and models can be robustly evaluated using instrument simulators by excluding ambiguous observations.

  14. Production code control system for hydrodynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slone, D.M.

    1997-08-18

    We describe how the Production Code Control System (PCCS), written in Perl, has been used to control and monitor the execution of a large hydrodynamics simulation code in a production environment. We have been able to integrate new, disparate, and often independent applications into the PCCS framework without the need to modify any of our existing application codes. Both users and code developers see a consistent interface to the simulation code and associated applications regardless of the physical platform, whether an MPP, SMP, server, or desktop workstation. We will also describe our use of Perl to develop a configuration management system for the simulation code, as well as a code usage database and report generator. We used Perl to write a backplane that allows us to plug in preprocessors, the hydrocode, postprocessors, visualization tools, persistent storage requests, and other codes. We need only teach PCCS a minimal amount about any new tool or code to essentially plug it in and make it usable to the hydrocode. PCCS has made it easier to link together disparate codes, since using Perl has removed the need to learn the idiosyncrasies of system or RPC programming. The text handling in Perl makes it easy to teach PCCS about new codes, or changes to existing codes.
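
    The backplane pattern described above can be sketched briefly. PCCS itself is Perl; the sketch below restates the pattern in Python, and every tool name is illustrative. The point it shows is that the backplane needs to know only a minimal amount about each tool: a name and a callable that transforms shared state.

```python
# Python restatement of the PCCS-style backplane pattern (illustrative only):
# stages are registered uniformly, then chained over a shared state.

class Backplane:
    def __init__(self):
        self.stages = []

    def register(self, name, tool):
        """Teach the backplane about a new tool: just a name and a callable."""
        self.stages.append((name, tool))

    def run(self, state):
        for name, tool in self.stages:
            state = tool(state)  # each stage transforms and passes on the state
        return state

bp = Backplane()
bp.register("preprocess",  lambda s: {**s, "mesh": "generated"})
bp.register("hydrocode",   lambda s: {**s, "result": "fields"})
bp.register("postprocess", lambda s: {**s, "report": "written"})
print(bp.run({"input": "deck"}))
```

    Because every stage presents the same interface, plugging in a visualization tool or a storage request is just another `register` call, with no changes to the existing stages.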

  15. Modeling the Dynamic Change of Air Quality and its Response to Emission Trends

    NASA Astrophysics Data System (ADS)

    Zhou, Wei

    This thesis focuses on evaluating atmospheric chemistry and transport models' capability in simulating the chemistry and dynamics of power plant plumes, evaluating their strengths and weaknesses in predicting air quality trends at regional scales, and exploring air quality trends in an urban area. First, the Community Multiscale Air Quality (CMAQ) model is applied to simulate the physical and chemical evolution of power plant plumes (PPPs) during the second Texas Air Quality Study (TexAQS) in 2006. SO2 and NOy were observed to be rapidly removed from PPPs on cloudy days but not on cloud-free days, indicating efficient aqueous processing of these compounds in clouds, while the model fails to capture the rapid loss of SO2 and NOy in some plumes on the cloudy day. Adjustments to cloud liquid water content (QC) and the default metal concentrations in the cloud module could explain some of the SO2 loss, while NOy in the model was insensitive to QC. Second, CMAQ is applied to simulate the ozone (O3) change after the NOx SIP Call and mobile emission controls in the eastern U.S. from 2002 to 2006. Observed downward changes in 8-hour O3 concentrations in the NOx SIP Call region were under-predicted by 26%--66%. The under-prediction in O3 improvements could be alleviated by 5%--31% by constraining NOx emissions in each year based on observed NOx concentrations, while temperature biases or uncertainties in chemical reactions had a minor impact on simulated O3 trends. Third, changes in ozone production in the Houston area are assessed with airborne measurements from TexAQS 2000 and 2006. Simultaneous declines in nitrogen oxides (NOx=NO+NO2) and highly reactive Volatile Organic Compounds (HRVOCs) were observed in the Houston Ship Channel (HSC). The reduction in HRVOCs led to a decline in total radical concentration of 20-50%. 
Rapid ozone production rates in the Houston area declined by 40-50% from 2000 to 2006, with the reductions in NOx and HRVOCs contributing about equally. Houston petrochemical and urban plumes largely remained in a strongly VOC-sensitive regime of ozone formation and maintained high Ozone Production Efficiency (OPE: 5-15).

  16. Simulation of spacecraft attitude dynamics using TREETOPS and model-specific computer Codes

    NASA Technical Reports Server (NTRS)

    Cochran, John E.; No, T. S.; Fitz-Coy, Norman G.

    1989-01-01

    The simulation of spacecraft attitude dynamics and control using the generic, multi-body code called TREETOPS and other codes written especially to simulate particular systems is discussed. Differences in the methods used to derive equations of motion--Kane's method for TREETOPS and the Lagrangian and Newton-Euler methods, respectively, for the other two codes--are considered. Simulation results from the TREETOPS code are compared with those from the other two codes for two example systems. One system is a chain of rigid bodies; the other consists of two rigid bodies attached to a flexible base body. Since the computer codes were developed independently, consistent results serve as a verification of the correctness of all the programs. Differences in the results are discussed. Results for the two-rigid-body, one-flexible-body system are useful also as information on multi-body, flexible, pointing payload dynamics.

  17. Western Pacific Basin: A Climatological Study

    DTIC Science & Technology

    2003-08-29

    most precipitation, while the lee sides of the mountains experience much less cloudiness and rainfall. In some valleys sheltered from the monsoon...One station, Ambon, on the south coast of a small, mountainous island south of Ceram, is sheltered from the northeast flow, and is less cloudy... and radiation cooling reduces the temperature. Wamena, in a sheltered basin, reports a January mean high of 73° F (23° C) and a mean low of 68° F

  18. ROVIBRATIONALLY RESOLVED DIRECT PHOTODISSOCIATION THROUGH THE LYMAN AND WERNER TRANSITIONS OF H{sub 2} FOR FUV/X-RAY-IRRADIATED ENVIRONMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gay, C. D.; Porter, R. L.; Stancil, P. C.

    Using ab initio potential curves and dipole transition moments, cross-section calculations were performed for the direct continuum photodissociation of H{sub 2} through the B{sup 1}{Sigma}{sup +}{sub u} <- X{sup 1}{Sigma}{sup +}{sub g} (Lyman) and C{sup 1}{Pi}{sub u} <- X{sup 1}{Sigma}{sup +}{sub g} (Werner) transitions. Partial cross-sections were obtained for wavelengths from 100 A to the dissociation threshold between the upper electronic state and each of the 301 bound rovibrational levels v''J'' within the ground electronic state. The resulting cross-sections are incorporated into three representative classes of interstellar gas models: diffuse clouds, photon-dominated regions, and X-ray-dominated regions (XDRs). The models, which used the CLOUDY plasma/molecular spectra simulation code, demonstrate that direct photodissociation is comparable to fluorescent dissociation (or spontaneous radiative dissociation, the Solomon process) as an H{sub 2} destruction mechanism in intense far-ultraviolet or X-ray-irradiated gas. In particular, changes in H{sub 2} rotational column densities are found to be as large as 20% in the XDR model with the inclusion of direct photodissociation. The photodestruction rate from some high-lying rovibrational levels can be enhanced by pumping from H Ly{beta} due to a wavelength coincidence with cross-section resonances resulting from quasi-bound levels of the upper electronic states. Given the relatively large size of the photodissociation data set, a strategy is described to create truncated, but reliable, cross-section data consistent with the wavelength resolving power of typical observations.

  19. Interannual variability in stratiform cloudiness and sea surface temperature

    NASA Technical Reports Server (NTRS)

    Norris, Joel R.; Leovy, Conway B.

    1994-01-01

    Marine stratiform cloudiness (MSC)(stratus, stratocumulus, and fog) is widespread over subtropical oceans west of the continents and over midlatitude oceans during summer, the season when MSC has maximum influence on surface downward radiation and is most influenced by boundary-layer processes. Long-term datasets of cloudiness and sea surface temperature (SST) from surface observations from 1952 to 1981 are used to examine interannual variations in MSC and SST. Linear correlations of anomalies in seasonal MSC amount with seasonal SST anomalies are negative and significant in midlatitude and eastern subtropical oceans, especially during summer. Significant negative correlations between SST and nimbostratus and nonprecipitating midlevel cloudiness are also observed at midlatitudes during summer, suggesting that summer storm tracks shift from year to year following year-to-year meridional shifts in the SST gradient. Over the 30-yr period, there are significant upward trends in MSC amount over the northern midlatitude oceans and a significant downward trend off the coast of California. The highest correlations and trends occur where gradients in MSC and SST are strongest. During summer, correlations between SST and MSC anomalies peak at zero lag in midlatitudes where warm advection prevails, but SST lags MSC in subtropical regions where cold advection predominates. This difference is attributed to a tendency for anomalies in latent heat flux to compensate for anomalies in surface downward radiation in warm advection regions but not in cold advection regions.

  20. Enhanced clear sky reflectance near clouds: What can be learned from it about aerosol properties?

    NASA Astrophysics Data System (ADS)

    Marshak, A.; Varnai, T.; Wen, G.; Chiu, J.

    2009-12-01

    Studies on aerosol direct and indirect effects require a precise separation of cloud-free and cloudy air. However, separation between cloud-free and cloudy areas from remotely-sensed measurements is ambiguous. The transition zone around clouds often stretches over tens of kilometers, covering regions that are neither precisely clear nor precisely cloudy. We study the transition zone between cloud-free and cloudy air using MODerate-resolution Imaging Spectroradiometer (MODIS) and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) measurements. Both instruments show enhanced clear-sky reflectance (MODIS) and clear-sky backscatter (CALIPSO) near clouds. Analyzing a large dataset of MODIS observations, we examine the effect of three-dimensional radiative interactions between clouds and cloud-free areas, also known as the cloud adjacency effect. The cloud adjacency effect is well observed in MODIS clear-sky data in the vicinity of clouds. Comparing with CALIPSO clear-sky backscatter measurements, we show that this effect may be responsible for a large portion of the enhanced clear-sky reflectance observed by MODIS. Finally, we describe a simple model that estimates the cloud-induced enhanced reflectances of cloud-free areas in the vicinity of clouds. The model assumes that the enhancement is due entirely to Rayleigh scattering and is therefore bigger at shorter wavelengths, thus creating a so-called apparent “bluing” of aerosols in remote sensing retrievals.

  1. Analyzing Multidecadal Trends in Cloudiness Over the Subtropical Andes Mountains of South America Using a Regional Climate Model.

    NASA Astrophysics Data System (ADS)

    Zaitchik, B. F.; Russell, A.; Gnanadesikan, A.

    2016-12-01

    Satellite-based products indicate that many parts of South America have been experiencing increases in outgoing longwave radiation (OLR) and corresponding decreases in cloudiness over the last few decades, with the strongest trends occurring in the subtropical Andes Mountains - an area that is highly vulnerable to climate change due to its reliance on glacial melt for dry-season runoff. Changes in cloudiness may be contributing to increases in atmospheric temperature, thereby raising the freezing level height (FLH) - a critical geophysical parameter. Yet these trends are only partially captured in reanalysis products, while AMIP climate models generally show no significant trend in OLR over this timeframe, making it difficult to determine the underlying drivers. Therefore, controlled numerical experiments with a regional climate model are performed in order to investigate drivers of the observed OLR and cloudiness trends. The Weather Research and Forecasting model (WRF) is used here because it offers several advantages over global models, including higher resolution - a critical asset in areas of complex topography - as well as flexible physics, parameterization, and data assimilation capabilities. It is likely that changes in the mean states and meridional gradients of SSTs in the Pacific and Atlantic oceans are driving regional trends in clouds. A series of lower boundary manipulations are performed with WRF to determine to what extent changes in SSTs influence regional OLR.

  2. Characterization of bubble core and cloudiness in Yb3+:Sr5(PO4)3F crystals using Micro-Raman spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Y; Roy, U N; Bai, L

    Ytterbium doped strontium fluoroapatite Yb{sup 3+}:Sr{sub 5}(PO{sub 4}){sub 3}F (Yb: S-FAP) crystals have been used in High Average Power Laser systems as gain medium. Growth-induced defects associated with the crystal often affect their performance. In order to improve the crystal quality and its optical applications, it is imperative to understand the nature of these defects. In this study, we utilize Micro-Raman spectroscopy to characterize two common growth-induced defects: bubble core and cloudiness. We find the bubble core consists of voids and microcrystals of Yb: S-FAP. These microcrystals have very different orientation from that of the pure crystal outside the bubble core. In contrast to a previous report, neither Sr{sub 3}(PO{sub 4}){sub 2} nor Yb{sub 2}O{sub 3} is observed in the bubble core regions. On the other hand, the cloudy regions are made up of the host materials blended with a structural deformation along with impurities which include CaCO{sub 3}, YbPO{sub 4}, SrHPO{sub 4} and Sr{sub 2}P{sub 2}O{sub 7}. The impurities are randomly distributed in the cloudy regions. This analysis is necessary for understanding and eliminating these growth defects in Yb:S-FAP crystals.

  3. Reduction of tropical cloudiness by soot

    PubMed

    Ackerman; Toon; Stevens; Heymsfield; Ramanathan; Welton

    2000-05-12

    Measurements and models show that enhanced aerosol concentrations can augment cloud albedo not only by increasing total droplet cross-sectional area, but also by reducing precipitation and thereby increasing cloud water content and cloud coverage. Aerosol pollution is expected to exert a net cooling influence on the global climate through these conventional mechanisms. Here, we demonstrate an opposite mechanism through which aerosols can reduce cloud cover and thus significantly offset aerosol-induced radiative cooling at the top of the atmosphere on a regional scale. In model simulations, the daytime clearing of trade cumulus is hastened and intensified by solar heating in dark haze (as found over much of the northern Indian Ocean during the northeast monsoon).

  4. The seasonal cycle of snow cover, sea ice and surface albedo

    NASA Technical Reports Server (NTRS)

    Robock, A.

    1980-01-01

    The paper examines satellite data used to construct mean snow cover maps for the Northern Hemisphere. The zonally averaged snow cover from these maps is used to calculate the seasonal cycle of zonally averaged surface albedo. The effects of meltwater on the surface, solar zenith angle, and cloudiness are parameterized and included in the calculations of snow and ice albedo. The data allow a calculation of surface albedo for any land or ocean 10 deg latitude band as a function of surface temperature, ice, and snow cover; the correct determination of the ice boundary is more important than the snow boundary for accurately simulating the ice and snow albedo feedback.

  5. A solar charge and discharge controller for wireless sensor nodes

    NASA Astrophysics Data System (ADS)

    Dang, Yibo; Shen, Shu

    2018-02-01

    Aiming at the energy supply problem that restricts the life of wireless sensor nodes, a solar energy charge and discharge controller suitable for wireless sensor nodes is designed in this paper. A microcontroller is used as the core of the solar charge and discharge controller. The software of the solar charge and discharge controller adopts the C language to realize the program of the main control module. Firstly, the functions of monitoring solar panel voltage and lithium battery voltage are simulated by Protel software, and the charge time is tested in cloudy and overcast outdoor environments. The results of the experiment show that our controller meets the power supply demand of wireless sensor nodes.

  6. Main steam line break accident simulation of APR1400 using the model of ATLAS facility

    NASA Astrophysics Data System (ADS)

    Ekariansyah, A. S.; Deswandri; Sunaryo, Geni R.

    2018-02-01

    A main steam line break simulation for APR1400 as an advanced design of PWR has been performed using the RELAP5 code. The simulation was conducted in a model of a thermal-hydraulic test facility called ATLAS, which represents a scaled-down facility of the APR1400 design. The main steam line break event is described in an open-access safety report document, in which initial conditions and assumptions for the analysis were utilized in performing the simulation and analysis of the selected parameters. The objective of this work was to conduct benchmark activities by comparing the simulation results of the CESEC-III code as a conservative approach code with the results of RELAP5 as a best-estimate code. Based on the simulation results, a general similarity in the behavior of selected parameters was observed between the two codes. However, the degree of accuracy still needs further research and analysis by comparing with another best-estimate code. Uncertainties arising from the ATLAS model should be minimized by taking into account much more specific data in developing the APR1400 model.

  7. Python Radiative Transfer Emission code (PyRaTE): non-LTE spectral lines simulations

    NASA Astrophysics Data System (ADS)

    Tritsis, A.; Yorke, H.; Tassis, K.

    2018-05-01

    We describe PyRaTE, a new, non-local thermodynamic equilibrium (non-LTE) line radiative transfer code developed specifically for post-processing astrochemical simulations. Population densities are estimated using the escape probability method. When computing the escape probability, the optical depth is calculated towards all directions with density, molecular abundance, temperature and velocity variations all taken into account. A very easy-to-use interface, capable of importing data from simulation outputs produced by all major astrophysical codes, is also developed. The code is written in PYTHON using an "embarrassingly parallel" strategy and can handle all geometries and projection angles. We benchmark the code by comparing our results with those from RADEX (van der Tak et al. 2007) and against analytical solutions, and present case studies using hydrochemical simulations. The code will be released for public use.
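The escape probability method mentioned above can be illustrated with the standard single-zone form beta(tau) = (1 - exp(-tau))/tau. The exact expression and the directional averaging used inside PyRaTE may differ, so treat this as a minimal sketch:

```python
import numpy as np

def beta_escape(tau):
    """Single-zone escape probability, beta = (1 - exp(-tau)) / tau.
    One common analytic form (homogeneous static medium); PyRaTE's
    actual expression, which averages over directions, may differ."""
    tau = np.atleast_1d(np.asarray(tau, dtype=float))
    small = tau < 1e-8                   # series limit: beta -> 1 - tau/2
    safe = np.where(small, 1.0, tau)     # avoid 0/0 at tau = 0
    return np.where(small, 1.0 - 0.5 * tau,
                    (1.0 - np.exp(-safe)) / safe)
```

Optically thin lines escape freely (beta approaches 1), while photons in optically thick lines are trapped (beta falls off as 1/tau), which is what couples the level populations to the local optical depth.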

  8. The Use of a Code-generating System for the Derivation of the Equations for Wind Turbine Dynamics

    NASA Astrophysics Data System (ADS)

    Ganander, Hans

    2003-10-01

    For many reasons the size of wind turbines on the rapidly growing wind energy market is increasing. Relations between aeroelastic properties of these new large turbines change. Modifications of turbine designs and control concepts are also influenced by growing size. All these trends require the development of computer codes for design and certification. Moreover, there is a strong desire for design optimization procedures, which require fast codes. General codes, e.g. finite element codes, normally allow such modifications and improvements of existing wind turbine models relatively easily. However, the calculation times of such codes are unfavourably long, certainly for optimization use. The use of an automatic code-generating system is an alternative that addresses both key issues: the code itself and design optimization. This technique can be used for rapid generation of codes for particular wind turbine simulation models. These ideas have been followed in the development of new versions of the wind turbine simulation code VIDYN. The equations of the simulation model were derived according to the Lagrange equation using Mathematica®, which was directed to output the results in Fortran code format. In this way the simulation code is automatically adapted to an actual turbine model, in terms of subroutines containing the equations of motion, definitions of parameters and degrees of freedom. Since the start in 1997, these methods, constituting a systematic way of working, have been used to develop specific efficient calculation codes. The experience with this technique has been very encouraging, inspiring the continued development of new versions of the simulation code as the need has arisen and as interest in design optimization has grown.
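The Mathematica-to-Fortran workflow described above can be mimicked with open-source tools. The sketch below uses SymPy rather than Mathematica and a simple pendulum rather than a turbine model, purely for illustration: it derives the equation of motion from a Lagrangian and emits Fortran source for the resulting expression.

```python
import sympy as sp

t = sp.symbols('t')
m, g, l = sp.symbols('m g l', positive=True)
th = sp.Function('theta')(t)

# Lagrangian of a simple pendulum (kinetic minus potential energy)
L = sp.Rational(1, 2) * m * l**2 * sp.diff(th, t)**2 + m * g * l * sp.cos(th)

# Euler-Lagrange equation: d/dt(dL/d(thdot)) - dL/dth = 0
eom = sp.simplify(sp.diff(sp.diff(L, sp.diff(th, t)), t) - sp.diff(L, th))

# Solve for the angular acceleration and emit Fortran source for it
thddot = sp.solve(sp.Eq(eom, 0), sp.diff(th, t, 2))[0]
print(sp.fcode(thddot, assign_to='thddot', source_format='free'))
```

As in the VIDYN workflow, changing the Lagrangian (i.e. the turbine model) regenerates the Fortran subroutine automatically, so the simulation code always matches the current set of degrees of freedom.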

  9. MOCCA code for star cluster simulation: comparison with optical observations using COCOA

    NASA Astrophysics Data System (ADS)

    Askar, Abbas; Giersz, Mirek; Pych, Wojciech; Olech, Arkadiusz; Hypki, Arkadiusz

    2016-02-01

    We introduce and present preliminary results from COCOA (Cluster simulatiOn Comparison with ObservAtions) code for a star cluster after 12 Gyr of evolution simulated using the MOCCA code. The COCOA code is being developed to quickly compare results of numerical simulations of star clusters with observational data. We use COCOA to obtain parameters of the projected cluster model. For comparison, a FITS file of the projected cluster was provided to observers so that they could use their observational methods and techniques to obtain cluster parameters. The results show that the similarity of cluster parameters obtained through numerical simulations and observations depends significantly on the quality of observational data and photometric accuracy.

  10. VERAIn

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, Srdjan

    2015-02-16

    CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete a LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA input into an XML file that is used as input to different VERA codes.

  11. Mean Line Pump Flow Model in Rocket Engine System Simulation

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.; Lavelle, Thomas M.

    2000-01-01

    A mean line pump flow modeling method has been developed to provide a fast capability for modeling turbopumps of rocket engines. Based on this method, a mean line pump flow code PUMPA has been written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The pump code can model axial flow inducers, mixed-flow and centrifugal pumps. The code can model multistage pumps in series. The code features rapid input setup and computer run time, and is an effective analysis and conceptual design tool. The map generation capability of the code provides the map information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities of the code permit parametric design space exploration of candidate pump configurations and provide pump performance data for engine system evaluation. The PUMPA code has been integrated with the Numerical Propulsion System Simulation (NPSS) code and an expander rocket engine system has been simulated. The mean line pump flow code runs as an integral part of the NPSS rocket engine system simulation and provides key pump performance information directly to the system model at all operating conditions.

  12. Cloudy-sky Longwave Downward Radiation Estimation by Combining MODIS and AIRS/AMSU Measurements

    NASA Astrophysics Data System (ADS)

    Wang, T.; Shi, J.

    2017-12-01

    Longwave downward radiation (LWDR) is, after solar radiation, the main energy source received by the earth's surface. Its importance in regulating air temperature and balancing surface energy is amplified under cloudy skies. Unfortunately, to date, most efforts to derive LWDR from space have addressed only clear-sky conditions, and the resulting spatio-temporal discontinuity makes space-based LWDR difficult to use in most models. Only a few studies have focused on LWDR estimation under cloudy-sky conditions, and their global applicability remains questionable. In this paper, an alternative strategy is proposed to derive high-resolution (1 km) cloudy-sky LWDR by fusing collocated satellite multi-sensor measurements. The results show that the newly developed method works well, deriving LWDR with RMSE < 27 W/m2 and bias < 10 W/m2 even under cloudy skies and at the 1 km scale. Compared to the CALIPSO-CloudSat-CERES-MODIS (CCCM) and CERES SSF products and to the MERRA, ERA-Interim and NCEP-CFSR products, the new approach demonstrates its superiority in the accuracy, temporal variation and spatial distribution pattern of LWDR. The comprehensive comparison analyses also reveal that, apart from the proposed product, the other four products (CERES, MERRA, ERA-Interim and NCEP-CFSR) differ considerably from each other in LWDR spatio-temporal distribution pattern and magnitude. The difference between these products can still be up to 60 W/m2 even at the monthly scale, implying large uncertainties in current LWDR estimates. Besides its higher accuracy, the proposed method, more importantly, provides unprecedented possibilities for jointly generating high-resolution global LWDR datasets by connecting NASA's Earth Observing System (EOS) mission (MODIS-AIRS/AMSU) and the Suomi National Polar-orbiting Partnership (NPP) mission (VIIRS-CrIS/ATMS). Meanwhile, the scheme proposed in this study also gives some clues for multi-sensor data fusion in the remote sensing community.

  13. Sea Ice, Clouds, Sunlight, and Albedo: The Umbrella Versus the Blanket

    NASA Astrophysics Data System (ADS)

    Perovich, D. K.

    2017-12-01

    The Arctic sea ice cover has undergone a major decline in recent years, with reductions in ice extent, ice thickness, and ice age. Understanding the feedbacks and forcing driving these changes is critical in improving predictions. The surface radiation budget plays a central role in summer ice melt and is governed by clouds and surface albedo. Clouds act as an umbrella reducing the downwelling shortwave, but also serve as a blanket increasing the downwelling longwave, with the surface albedo also determining the net balance. Using field observations from the SHEBA program, pairs of clear and cloudy days were selected for each month from May through September and the net radiation flux was calculated for different surface conditions and albedos. To explore the impact of albedo we calculated a break-even albedo, at which the net radiation under cloudy skies equals that under clear skies. For albedos larger than the break-even value the net radiation flux is smaller under clear skies than under cloudy skies. Break-even albedos ranged from 0.30 in September to 0.58 in July. For snow-covered or bare ice, clear skies always resulted in less radiative heat input. In contrast, leads always had, and ponds usually had, more radiative heat input under clear skies than cloudy skies. Snow-covered ice had a net radiation flux that was negative or near zero under clear skies, resulting in radiative cooling. We combined the albedo of individual ice types with the area of those ice types to calculate albedos averaged over a 50 km x 50 km area. The July case had the smallest areally averaged albedo of 0.50. This was less than the break-even albedo, so cloudy skies had a smaller net radiation flux than clear skies. For the cases from the other four months, the areally averaged albedo was greater than the break-even albedo. The areally averaged net radiation flux was negative under clear skies for the May and September cases.
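The break-even albedo follows from equating the net radiation under clear and cloudy skies, F = (1 - a)*SW_down + LW_net. A minimal sketch, with illustrative flux values (hypothetical numbers, not SHEBA measurements):

```python
def break_even_albedo(sw_clear, sw_cloudy, lwnet_clear, lwnet_cloudy):
    """Albedo at which net radiation is equal under clear and cloudy skies.
    Set (1 - a)*sw_clear + lwnet_clear = (1 - a)*sw_cloudy + lwnet_cloudy
    and solve for a. All fluxes in W/m^2."""
    return 1.0 - (lwnet_cloudy - lwnet_clear) / (sw_clear - sw_cloudy)

# illustrative summer-like fluxes (hypothetical): clouds cut shortwave by
# 120 W/m^2 (the umbrella) but add 60 W/m^2 of net longwave (the blanket)
a_star = break_even_albedo(sw_clear=300.0, sw_cloudy=180.0,
                           lwnet_clear=-70.0, lwnet_cloudy=-10.0)
```

Surfaces brighter than a_star absorb so little shortwave that the longwave "blanket" wins and cloudy skies deliver more net radiation, matching the abstract's contrast between snow-covered ice and dark leads or ponds.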

  14. Limb Correction of Infrared Imagery in Cloudy Regions for the Improved Interpretation of RGB Composites

    NASA Technical Reports Server (NTRS)

    Elmer, Nicholas J.; Berndt, Emily; Jedlovec, Gary J.

    2016-01-01

    Red-Green-Blue (RGB) composites (EUMETSAT User Services 2009) combine information from several channels into a single composite image. RGB composites contain the same information as the original channels, but present the information in a more efficient manner. However, RGB composites derived from infrared imagery of both polar-orbiting and geostationary sensors are adversely affected by the limb effect, which interferes with the qualitative interpretation of RGB composites at large viewing zenith angles. The limb effect, or limb-cooling, is a result of an increase in the optical path length of the absorbing atmosphere as viewing zenith angle increases (Goldberg et al. 2001; Joyce et al. 2001; Liu and Weng 2007). As a result, greater atmospheric absorption occurs at the limb, causing the sensor to observe anomalously cooler brightness temperatures. Figure 1 illustrates this effect. In general, limb-cooling results in a 4-11 K decrease in measured brightness temperature (Liu and Weng 2007) depending on the infrared band. For example, water vapor and ozone absorption channels display much larger limb-cooling than infrared window channels. Consequently, RGB composites created from infrared imagery not corrected for limb effects can only be reliably interpreted close to nadir, which reduces the spatial coverage of the available imagery. Elmer (2015) developed a reliable, operational limb correction technique for clear regions. However, many RGB composites are intended to be used and interpreted in cloudy regions, so a limb correction methodology valid for both clear and cloudy regions is needed. This paper presents a limb correction technique valid for both clear and cloudy regions, which is described in Section 2. Section 3 presents several RGB case studies demonstrating the improved functionality of limb-corrected RGBs in both clear and cloudy regions, and Section 4 summarizes and presents the key conclusions of this work.
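A toy version of a limb correction can be written as a brightness-temperature adjustment that grows with the secant of the viewing zenith angle, a first-order proxy for the optical path length increase described above. The coefficient and functional form here are assumptions for illustration, not the operational Elmer (2015) fit:

```python
import numpy as np

def limb_correct(tb, vza_deg, coeff=6.0):
    """Toy limb correction: add back the limb-cooling as a function of the
    optical-path increase, dT = coeff * (sec(vza) - 1) in kelvin.
    `coeff` is a hypothetical per-band constant (real corrections are
    fitted per channel and latitude); tb is brightness temperature in K."""
    sec = 1.0 / np.cos(np.radians(vza_deg))
    return tb + coeff * (sec - 1.0)
```

At nadir (vza = 0) the correction vanishes; at 60 degrees the path length doubles and, with this illustrative coefficient, 6 K of limb-cooling is restored before the channels are combined into an RGB.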

  15. Remote sensing of PM2.5 during cloudy and nighttime periods using ceilometer backscatter

    NASA Astrophysics Data System (ADS)

    Li, Siwei; Joseph, Everette; Min, Qilong; Yin, Bangsheng; Sakai, Ricardo; Payne, Megan K.

    2017-06-01

    Monitoring PM2.5 (particulate matter with aerodynamic diameter d ≤ 2.5 µm) mass concentration has recently become more important because of the negative impacts of fine particles on human health. However, monitoring PM2.5 during cloudy and nighttime periods is difficult since nearly all the passive instruments used for aerosol remote sensing are unable to measure aerosol optical depth (AOD) under either cloudy or nighttime conditions. In this study, an empirical model based on the regression between PM2.5 and the near-surface backscatter measured by ceilometers was developed and tested using 6 years of data (2006 to 2011) from the Howard University Beltsville Campus (HUBC) site. The empirical model can explain ˜ 56, ˜ 34 and ˜ 42 % of the variability in the hourly average PM2.5 during daytime clear, daytime cloudy and nighttime periods, respectively. Meteorological conditions and seasons were found to influence the relationship between PM2.5 mass concentration and the surface backscatter. Overall the model can explain ˜ 48 % of the variability in the hourly average PM2.5 at the HUBC site when considering the seasonal variation. The model also was tested using 4 years of data (2012 to 2015) from the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site, which is geographically and climatologically different from the HUBC site. The results show that the empirical model can explain ˜ 66 and ˜ 82 % of the variability in the daily average PM2.5 at the ARM SGP site and HUBC site, respectively. The findings of this study illustrate the strong need for ceilometer data in air quality monitoring under cloudy and nighttime conditions. Since ceilometers are used broadly over the world, they may provide an important supplemental source of information on aerosols to determine surface PM2.5 concentrations.
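The empirical model is, at its core, a regression of PM2.5 on near-surface ceilometer backscatter. A minimal stand-in is sketched below; the log-linear form and the R-squared diagnostic are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def fit_pm25_model(backscatter, pm25):
    """Least-squares fit of PM2.5 (ug/m^3) against log near-surface
    backscatter, a schematic stand-in for the paper's empirical regression.
    Returns (slope, intercept, r_squared), where r_squared is the fraction
    of PM2.5 variance the fit explains."""
    x = np.log(np.asarray(backscatter, dtype=float))
    y = np.asarray(pm25, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    yhat = slope * x + intercept
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot
```

In practice the paper fits separate relationships by season and sky condition (daytime clear, daytime cloudy, night), which is why the explained variance quoted in the abstract differs between regimes.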

  16. Optimizing UV Index determination from broadband irradiances

    NASA Astrophysics Data System (ADS)

    Tereszchuk, Keith A.; Rochon, Yves J.; McLinden, Chris A.; Vaillancourt, Paul A.

    2018-03-01

    A study was undertaken to improve upon the prognosticative capability of Environment and Climate Change Canada's (ECCC) UV Index forecast model. An aspect of that work, and the topic of this communication, was to investigate the use of the four UV broadband surface irradiance fields generated by ECCC's Global Environmental Multiscale (GEM) numerical prediction model to determine the UV Index. The basis of the investigation involves the creation of a suite of routines which employ high-spectral-resolution radiative transfer code developed to calculate UV Index fields from GEM forecasts. These routines employ a modified version of the Cloud-J v7.4 radiative transfer model, which integrates GEM output to produce high-spectral-resolution surface irradiance fields. The output generated using the high-resolution radiative transfer code served to verify and calibrate GEM broadband surface irradiances under clear-sky conditions and their use in providing the UV Index. A subsequent comparison of irradiances and UV Index under cloudy conditions was also performed. Linear correlation agreement of surface irradiances from the two models for each of the two higher UV bands covering 310.70-330.00 and 330.03-400.00 nm is typically greater than 95 % for clear-sky conditions, with associated root-mean-square relative errors of 6.4 and 4.0 %. However, underestimations of clear-sky GEM irradiances were found on the order of ˜ 30-50 % for the 294.12-310.70 nm band and by a factor of ˜ 30 for the 280.11-294.12 nm band. This underestimation can be significant for UV Index determination but would not impact weather forecasting. Corresponding empirical adjustments were applied to the broadband irradiances, now giving a correlation coefficient of unity. From these, a least-squares fitting was derived for the calculation of the UV Index. The resultant differences in UV indices from the high-spectral-resolution irradiances and the adjusted GEM broadband irradiances are typically within 0.2-0.3, with a root-mean-square relative error in the scatter of ˜ 6.6 % for clear-sky conditions. Similar results are reproduced under cloudy conditions with light to moderate clouds, with a relative error comparable to the clear-sky counterpart; under strong attenuation due to clouds, a substantial increase in the root-mean-square relative error of up to 35 % is observed due to differing cloud radiative transfer models.
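The final step, a least-squares fit mapping the four GEM broadband irradiances to the UV Index, can be sketched as follows. The purely linear form and the band layout are assumptions for illustration; the paper's fit follows its empirical band adjustments:

```python
import numpy as np

def fit_uvi_weights(band_irradiances, uvi_ref):
    """Fit linear weights mapping four broadband UV irradiances to the
    UV Index by least squares (schematic version of the paper's fit).
    band_irradiances: array of shape (n_samples, 4), W/m^2 per band;
    uvi_ref: reference UV Index from the high-resolution model, (n_samples,)."""
    A = np.asarray(band_irradiances, dtype=float)
    w, *_ = np.linalg.lstsq(A, np.asarray(uvi_ref, dtype=float), rcond=None)
    return w
```

Once fitted against the high-spectral-resolution reference, the weights let the forecast system compute the UV Index directly from the four broadband fields GEM already produces, avoiding a full radiative transfer calculation at forecast time.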

  17. Fast and Accurate Hybrid Stream PCRTMSOLAR Radiative Transfer Model for Reflected Solar Spectrum Simulation in the Cloudy Atmosphere

    NASA Technical Reports Server (NTRS)

    Yang, Qiguang; Liu, Xu; Wu, Wan; Kizer, Susan; Baize, Rosemary R.

    2016-01-01

    A hybrid stream PCRTM-SOLAR model has been proposed for fast and accurate radiative transfer simulation. It calculates the reflected solar (RS) radiances in a fast, coarse way and then, with the help of a pre-saved matrix, transforms the results to obtain the desired highly accurate RS spectrum. The methodology has been demonstrated with the hybrid stream discrete ordinate (HSDO) radiative transfer (RT) model. The HSDO method calculates the monochromatic radiances using a 4-stream discrete ordinate method, where only a small number of monochromatic radiances are simulated with both the 4-stream and a larger N-stream (N = 16) discrete ordinate RT algorithm. The accuracy of the obtained channel radiance is comparable to the result from the N-stream moderate resolution atmospheric transmission version 5 (MODTRAN5). The root-mean-square errors are usually less than 5x10(exp -4) mW/sq cm/sr/cm. The computational speed is three to four orders of magnitude faster than the medium-speed correlated-k option of MODTRAN5. This method is very efficient for simulating thousands of RS spectra under multi-layer cloud/aerosol and solar radiation conditions for climate change studies and numerical weather prediction applications.

  18. The impact of the diurnal cycle on the propagation of Madden-Julian Oscillation convection across the Maritime Continent

    DOE PAGES

    Hagos, Samson M.; Zhang, Chidong; Feng, Zhe; ...

    2016-09-19

    Influences of the diurnal cycle of convection on the propagation of the Madden-Julian Oscillation (MJO) across the Maritime Continent (MC) are examined using cloud-permitting regional model simulations and observations. A pair of ensembles of control (CONTROL) and no-diurnal cycle (NODC) simulations of the November 2011 MJO episode are performed. In the CONTROL simulations, the MJO signal is weakened as it propagates across the MC, with much of the convection stalling over the large islands of Sumatra and Borneo. In the NODC simulations, where the incoming shortwave radiation at the top of the atmosphere is maintained at its daily mean value, the MJO signal propagating across the MC is enhanced. Examination of the surface energy fluxes in the simulations indicates that in the presence of the diurnal cycle, surface downwelling shortwave radiation in CONTROL simulations is larger because clouds preferentially form in the afternoon. Furthermore, the diurnal co-variability of surface wind speed and skin temperature results in a larger sensible heat flux and a cooler land surface in CONTROL compared to NODC simulations. Here, an analysis of observations indicates that the modulation of the downwelling shortwave radiation at the surface by the diurnal cycle of cloudiness negatively projects on the MJO intraseasonal cycle and therefore disrupts the propagation of the MJO across the MC.

  19. An efficient routine for infrared radiative transfer in a cloudy atmosphere

    NASA Technical Reports Server (NTRS)

    Chou, M. D.; Kouvaris, L.

    1981-01-01

    A FORTRAN program that calculates the atmospheric cooling rate and infrared fluxes for partly cloudy atmospheres is documented. The IR fluxes in the water bands and the 9.6 and 15 micron bands are calculated at 15 levels ranging from 1.39 mb to the surface. The program is generalized to accept any arbitrary atmospheric temperature and humidity profiles and clouds as input and return the cooling rate and fluxes as output. Sample calculations for various atmospheric profiles and cloud situations are demonstrated.

  20. Diagnostics of Rainfall Anomalies in the Nordeste During the Global Weather Experiment

    NASA Technical Reports Server (NTRS)

    Sikdar, D. M.

    1984-01-01

    The relationship of the daily variability of large-scale pressure, cloudiness and upper level wind patterns over the Brazil-Atlantic sector during March/April 1979 to rainfall anomalies in northern Nordeste was investigated. The experiment divides the rainy season (March/April) of 1979 into wet and dry days, then composites bright cloudiness, sea level pressure, and upper level wind fields with respect to persistent rainfall episodes. Wet and dry anomalies are analyzed along with seasonal mean conditions.

  1. Solar energy microclimate as determined from satellite observations

    NASA Technical Reports Server (NTRS)

    Vonder Haar, T. H.; Ellis, J. S.

    1975-01-01

    A method is presented for determining solar insolation at the earth's surface using satellite broadband visible radiance and cloud imagery data, along with conventional in situ measurements. Conventional measurements are used both to tune satellite measurements and to develop empirical relationships between satellite observations and surface solar insolation. Cloudiness is the primary modulator of sunshine. The satellite measurements as applied in this method consider cloudiness both explicitly and implicitly in determining surface solar insolation at space scales smaller than the conventional pyranometer network.

  2. Gamma-Weighted Discrete Ordinate Two-Stream Approximation for Computation of Domain Averaged Solar Irradiance

    NASA Technical Reports Server (NTRS)

    Kato, S.; Smith, G. L.; Barker, H. W.

    2001-01-01

    An algorithm is developed for the gamma-weighted discrete ordinate two-stream approximation that computes profiles of domain-averaged shortwave irradiances for horizontally inhomogeneous cloudy atmospheres. The algorithm assumes that frequency distributions of cloud optical depth at unresolved scales can be represented by a gamma distribution though it neglects net horizontal transport of radiation. This algorithm is an alternative to the one used in earlier studies that adopted the adding method. At present, only overcast cloudy layers are permitted.
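
    The gamma-weighting idea can be illustrated in a few lines: average a plane-parallel two-stream solution over a gamma distribution of unresolved optical depth. The sketch below is not the authors' algorithm; it uses a simple conservative-scattering two-stream reflectance and made-up values for the mean optical depth, asymmetry parameter, and shape parameter ν.

```python
import numpy as np

# Average a plane-parallel two-stream solution over a gamma distribution of
# optical depth. The reflectance formula is the simple conservative-scattering
# (non-absorbing) two-stream form; all parameter values are illustrative.
def two_stream_reflectance(tau, g=0.85):
    """Plane-parallel reflectance of a non-absorbing layer."""
    x = 0.5 * (1.0 - g) * tau
    return x / (1.0 + x)

def gamma_weighted_reflectance(tau_mean, nu, g=0.85, n=4000):
    """Weight the plane-parallel solution by a gamma pdf with mean tau_mean
    and shape parameter nu (normalized numerically on a truncated grid)."""
    tau = np.linspace(1e-6, 10.0 * tau_mean, n)
    dtau = tau[1] - tau[0]
    w = tau ** (nu - 1.0) * np.exp(-nu * tau / tau_mean)  # gamma pdf shape
    w = w / (w.sum() * dtau)
    return float(np.sum(w * two_stream_reflectance(tau, g)) * dtau)

R_pp = two_stream_reflectance(10.0)              # homogeneous cloud layer
R_gw = gamma_weighted_reflectance(10.0, nu=2.0)  # inhomogeneous cloud layer
print(R_pp, R_gw)
```

    Because the reflectance is concave in optical depth, the gamma-weighted average is always lower than the plane-parallel value at the mean optical depth, which is exactly the plane-parallel albedo bias such schemes are designed to remove.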

  3. Handbook on the Climate of the USSR. Issue 14. Georgian SSR. Part 5. Cloud Conditions and Atmospheric Phenomena

    DTIC Science & Technology

    1993-03-11

    [Garbled OCR of the handbook's cloudiness tables (monthly cloudiness in balls, from-to ranges, by station: Gagra, Gudauta, Pitsunda, Duripshi, Zemo-Azhara); the tabular content is not recoverable from this record.]

  4. Three-dimensional Monte-Carlo simulation of gamma-ray scattering and production in the atmosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, D.J.

    1989-05-15

    Monte Carlo codes have been developed to simulate gamma-ray scattering and production in the atmosphere. The scattering code simulates interactions of low-energy gamma rays (20 to several hundred keV) from an astronomical point source in the atmosphere; a modified code also simulates scattering in a spacecraft. Four incident spectra, typical of gamma-ray bursts, solar flares, and the Crab pulsar, and 511 keV line radiation have been studied. These simulations are consistent with observations of solar flare radiation scattered from the atmosphere. The production code simulates the interactions of cosmic rays which produce high-energy (above 10 MeV) photons and electrons. It has been used to calculate gamma-ray and electron albedo intensities at Palestine, Texas, and at the equator; the results agree with observations in most respects. With minor modifications this code can be used to calculate intensities of other high-energy particles. Both codes are fully three-dimensional, incorporating a curved atmosphere; the production code also incorporates the variation with both zenith and azimuth of the incident cosmic-ray intensity due to geomagnetic effects. These effects are clearly reflected in the calculated albedo by intensity contrasts between the horizon and nadir, and between the east and west horizons.

  5. Development of MCNPX-ESUT computer code for simulation of neutron/gamma pulse height distribution

    NASA Astrophysics Data System (ADS)

    Abolfazl Hosseini, Seyed; Vosoughi, Naser; Zangian, Mehdi

    2015-05-01

    In this paper, the development of the MCNPX-ESUT (MCNPX-Energy Engineering of Sharif University of Technology) computer code for simulation of neutron/gamma pulse height distributions is reported. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry in mixed neutron/gamma fields, this type of detector was selected for simulation in the present study. The proposed simulation algorithm comprises four main steps. The first step is the modeling of neutron/gamma transport and interactions with the materials in the environment and the detector volume. In the second step, the number of scintillation photons produced by charged particles (electrons, alphas, protons, and carbon nuclei) in the scintillator material is calculated. In the third step, the transport of scintillation photons through the scintillator and light guide is simulated. Finally, the experimental energy resolution is applied in the last step of the simulation. Unlike similar computer codes such as SCINFUL, NRESP7, and PHRESP, the developed code is applicable to both neutron and gamma sources; hence, neutron/gamma discrimination in mixed fields may be performed using MCNPX-ESUT. Its main feature is that the neutron/gamma pulse height simulation may be performed without any post-processing. In the present study, the pulse height distributions due to monoenergetic neutron/gamma sources in an NE-213 detector are simulated using MCNPX-ESUT. The simulated neutron pulse height distributions are validated by comparison with experimental data (Gohil et al., Nuclear Instruments and Methods in Physics Research Section A, 664 (2012) 304-309) and with results from similar codes such as SCINFUL, NRESP7, and Geant4. The simulated gamma pulse height distribution for a 137Cs source is also compared with experimental data.
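
    The last two steps of such a pipeline, converting energy deposits into light output and folding in the detector resolution, can be sketched as follows. The energy deposits, light-output model, and resolution coefficients below are invented stand-ins for illustration; they are not the MCNPX-ESUT models or NE-213 parameters.

```python
import numpy as np

# Toy version of the final stages of a pulse-height simulation:
# energy deposits -> light output -> Gaussian energy resolution -> histogram.
# The deposits are synthetic stand-ins for a transport calculation, and the
# resolution coefficients are invented for illustration.
rng = np.random.default_rng(0)
edep_mev = rng.exponential(scale=0.5, size=100_000)   # fake energy deposits

def light_output(e):
    """Toy light output: linear in deposited energy (as for electrons)."""
    return e

def broaden(light, a=0.1, b=0.05, c=0.01):
    """Apply a Gaussian resolution FWHM(L) = sqrt(a^2 L^2 + b^2 L + c^2)."""
    fwhm = np.sqrt(a**2 * light**2 + b**2 * light + c**2)
    sigma = fwhm / 2.355                      # FWHM -> standard deviation
    return light + rng.normal(0.0, sigma)

pulse_height = broaden(light_output(edep_mev))
hist, edges = np.histogram(pulse_height, bins=200, range=(0.0, 5.0))
print(hist.sum())
```

    Hedged as it is, the sketch shows why no post-processing is needed once resolution is folded in: the histogram of broadened light output is already the simulated pulse-height distribution.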

  6. Object-oriented approach for gas turbine engine simulation

    NASA Technical Reports Server (NTRS)

    Curlett, Brian P.; Felder, James L.

    1995-01-01

    An object-oriented gas turbine engine simulation program was developed. This program is a prototype for a more complete, commercial-grade engine performance program now being proposed as part of the Numerical Propulsion System Simulator (NPSS). This report discusses architectural issues of this complex software system and the lessons learned from developing the prototype code. The prototype is a fully functional, general-purpose engine simulation program; however, only the component models necessary to model a transient compressor test rig have been written. The production system will be capable of steady-state and transient modeling of almost any turbine engine configuration. Chief among the architectural considerations for this code was the framework in which the various software modules interact. These modules include the equation solver, simulation code, data model, event handler, and user interface. Also documented in this report are the component-based design of the simulation module and the inter-component communication paradigm. Object class hierarchies for some of the code modules are given.
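
    The flavor of a component-based design with explicit inter-component communication can be sketched in miniature. The class and port names below are hypothetical, invented for illustration; they are not the NPSS or prototype API.

```python
# Miniature sketch of a component-based engine simulation: components expose
# named ports, and a connector wires one component's output port to another's
# input port. All class names, port names, and numbers are invented.
class Component:
    def __init__(self, name):
        self.name = name
        self.ports = {}            # port name -> value

    def run(self):                 # overridden by concrete components
        raise NotImplementedError

class Inlet(Component):
    def run(self):
        self.ports["flow_out"] = 100.0                  # kg/s, made-up value

class Compressor(Component):
    def run(self):
        self.ports["flow_out"] = self.ports["flow_in"] * 0.98  # toy bleed loss

def connect(src, src_port, dst, dst_port):
    """Copy a value across a connection between two components' ports."""
    dst.ports[dst_port] = src.ports[src_port]

inlet, comp = Inlet("inlet"), Compressor("compressor")
inlet.run()
connect(inlet, "flow_out", comp, "flow_in")
comp.run()
print(comp.ports["flow_out"])
```

    A real solver would iterate such a network to balance flows and pressures; the point here is only the port-and-connector communication pattern.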

  7. Two-dimensional implosion simulations with a kinetic particle code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sagert, Irina; Even, Wesley Paul; Strother, Terrance Timothy

    Here, we perform two-dimensional implosion simulations using a Monte Carlo kinetic particle code. The application of a kinetic transport code is motivated, in part, by the occurrence of nonequilibrium effects in inertial confinement fusion capsule implosions, which cannot be fully captured by hydrodynamic simulations. Kinetic methods, on the other hand, are able to describe both continuum and rarefied flows. We perform simple two-dimensional disk implosion simulations using one particle species and compare the results to simulations with the hydrodynamics code RAGE. The impact of the particle mean free path on the implosion is also explored. In a second study, we focus on the formation of fluid instabilities from induced perturbations. We find good agreement with hydrodynamic studies regarding the location of the shock and the implosion dynamics. Differences are found in the evolution of fluid instabilities, originating from the higher resolution of RAGE and statistical noise in the kinetic studies.

  8. Two-dimensional implosion simulations with a kinetic particle code

    DOE PAGES

    Sagert, Irina; Even, Wesley Paul; Strother, Terrance Timothy

    2017-05-17

    Here, we perform two-dimensional implosion simulations using a Monte Carlo kinetic particle code. The application of a kinetic transport code is motivated, in part, by the occurrence of nonequilibrium effects in inertial confinement fusion capsule implosions, which cannot be fully captured by hydrodynamic simulations. Kinetic methods, on the other hand, are able to describe both continuum and rarefied flows. We perform simple two-dimensional disk implosion simulations using one particle species and compare the results to simulations with the hydrodynamics code RAGE. The impact of the particle mean free path on the implosion is also explored. In a second study, we focus on the formation of fluid instabilities from induced perturbations. We find good agreement with hydrodynamic studies regarding the location of the shock and the implosion dynamics. Differences are found in the evolution of fluid instabilities, originating from the higher resolution of RAGE and statistical noise in the kinetic studies.

  9. Toward a first-principles integrated simulation of tokamak edge plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, C S; Klasky, Scott A; Cummings, Julian

    2008-01-01

    Performance of ITER is anticipated to be highly sensitive to the edge plasma condition. The edge pedestal in ITER needs to be predicted from an integrated simulation of the necessary first-principles, multiscale physics codes. The mission of the SciDAC Fusion Simulation Project (FSP) Prototype Center for Plasma Edge Simulation (CPES) is to deliver such a code integration framework by (1) building new kinetic codes XGC0 and XGC1, which can simulate the edge pedestal buildup; (2) using and improving the existing MHD codes ELITE, M3D-OMP, M3D-MPP, and NIMROD for study of large-scale edge instabilities called Edge Localized Modes (ELMs); and (3) integrating the codes into a framework using cutting-edge computer science technology. Collaborative effort among physics, computer science, and applied mathematics within CPES has created the first working version of the End-to-end Framework for Fusion Integrated Simulation (EFFIS), which can be used to study the pedestal-ELM cycles.

  10. OSIRIS - an object-oriented parallel 3D PIC code for modeling laser and particle beam-plasma interaction

    NASA Astrophysics Data System (ADS)

    Hemker, Roy

    1999-11-01

    Advances in computational speed now make it possible to perform full 3D PIC simulations of laser-plasma and beam-plasma interactions, but at the same time the increased complexity of these problems makes it necessary to apply modern approaches such as object-oriented programming to the development of simulation codes. We report here on our progress in developing an object-oriented parallel 3D PIC code using Fortran 90. In its current state the code contains algorithms for 1D, 2D, and 3D simulations in Cartesian coordinates and for 2D cylindrically symmetric geometry. For all of these algorithms the code allows a moving simulation window and arbitrary domain decomposition in any number of dimensions. Recent 3D simulation results on the propagation of intense laser and electron beams through plasmas will be presented.

  11. 5D Tempest simulations of kinetic edge turbulence

    NASA Astrophysics Data System (ADS)

    Xu, X. Q.; Xiong, Z.; Cohen, B. I.; Cohen, R. H.; Dorr, M. R.; Hittinger, J. A.; Kerbel, G. D.; Nevins, W. M.; Rognlien, T. D.; Umansky, M. V.; Qin, H.

    2006-10-01

    Results are presented from the development and application of TEMPEST, a nonlinear five-dimensional (3d2v) gyrokinetic continuum code. The simulation results and theoretical analysis include studies of H-mode edge plasma neoclassical transport and turbulence in real divertor geometry and its relationship to plasma flow generation with zero external momentum input, including the important orbit-squeezing effect due to the large electric-field flow shear in the edge. In order to extend the code to 5D, we have formulated a set of fully nonlinear electrostatic gyrokinetic equations and a fully nonlinear gyrokinetic Poisson's equation which is valid for both neoclassical and turbulence simulations. The 5D gyrokinetic code is built on the 4D version of the TEMPEST neoclassical code, extended to a fifth dimension in the binormal direction. The code is able to simulate either a full torus or a toroidal segment. Progress on performing 5D turbulence simulations will be reported.

  12. Quality improvement utilizing in-situ simulation for a dual-hospital pediatric code response team.

    PubMed

    Yager, Phoebe; Collins, Corey; Blais, Carlene; O'Connor, Kathy; Donovan, Patricia; Martinez, Maureen; Cummings, Brian; Hartnick, Christopher; Noviski, Natan

    2016-09-01

    Given the rarity of in-hospital pediatric emergency events, identification of gaps and inefficiencies in the code response can be difficult. In-situ, simulation-based medical education programs can identify unrecognized systems-based challenges. We hypothesized that developing an in-situ, simulation-based pediatric emergency response program would identify latent inefficiencies in a complex, dual-hospital pediatric code response system and allow rapid intervention testing to improve performance before implementation at an institutional level. Pediatric leadership from two hospitals with a shared pediatric code response team employed the Institute for Healthcare Improvement's (IHI) Breakthrough Model for Collaborative Improvement to design a program consisting of Plan-Do-Study-Act cycles occurring in a simulated environment. The objectives of the program were to 1) identify inefficiencies in our pediatric code response; 2) correlate them to current workflow; 3) employ an iterative process to test quality improvement interventions in a safe environment; and 4) measure performance before actual implementation at the institutional level. Twelve dual-hospital, in-situ, simulated pediatric emergencies occurred over one year. The initial simulated event allowed identification of inefficiencies including delayed provider response, delayed initiation of cardiopulmonary resuscitation (CPR), and delayed vascular access. These gaps were linked to process issues including unreliable code pager activation, slow elevator response, and lack of responder familiarity with the layout and contents of the code cart. From the first to the last simulation, with multiple simulated process improvements, code response time for secondary providers coming from the second hospital decreased from 29 to 7 min, time to CPR initiation decreased from 90 to 15 s, and time to vascular access decreased from 15 to 3 min. Some of these simulated process improvements were adopted into the institutional response, while others continue to be trended over time for evidence that the observed changes represent a true new state of control. Utilizing the IHI's Breakthrough Model, we developed a simulation-based program to 1) successfully identify gaps and inefficiencies in a complex, dual-hospital pediatric code response system and 2) provide an environment in which to safely test quality improvement interventions before institutional dissemination. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. Synergistic Use of MODIS and AIRS in a Variational Retrieval of Cloud Parameters.

    NASA Astrophysics Data System (ADS)

    Li, Jun; Menzel, W. Paul; Zhang, Wenjian; Sun, Fengying; Schmit, Timothy J.; Gurka, James J.; Weisz, Elisabeth

    2004-11-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) and the Atmospheric Infrared Sounder (AIRS) measurements from the Earth Observing System's (EOS's) Aqua satellite enable global monitoring of the distribution of clouds. MODIS is able to provide a cloud mask, surface and cloud types, cloud phase, cloud-top pressure (CTP), effective cloud amount (ECA), cloud particle size, and cloud optical thickness at high spatial resolution (1–5 km). The combined MODIS–AIRS system offers the opportunity for improved cloud products, better than from either system alone; this improvement is demonstrated in this paper with both simulated and real radiances. A one-dimensional variational (1DVAR) methodology is used to retrieve the CTP and ECA from AIRS longwave (650–790 cm⁻¹, or 15.38–12.65 μm) cloudy radiance measurements (hereinafter referred to as MODIS–AIRS 1DVAR). The MODIS–AIRS 1DVAR cloud properties show significant improvement over the MODIS-alone cloud properties and slight improvement over the AIRS-alone cloud properties in a simulation study, while MODIS–AIRS 1DVAR is much more computationally efficient than the AIRS-alone 1DVAR; comparisons with radiosonde observations show that CTPs improve by 10–40 hPa for MODIS–AIRS CTPs over those from MODIS alone. The 1DVAR approach is applied to process the AIRS longwave cloudy radiance measurements; results are compared with MODIS and Geostationary Operational Environmental Satellite sounder cloud products. Data from ground-based instrumentation at the Atmospheric Radiation Measurement Program Cloud and Radiation Test Bed in Oklahoma are used for validation; results show that MODIS–AIRS improves the MODIS CTP, especially for low-level clouds. The operational use of a high-spatial-resolution imager, along with information from a high-spectral-resolution sounder, will be possible with instruments planned for the next-generation geostationary operational instruments.
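
    The 1DVAR retrieval step itself is compact: an optimal-estimation update that weighs observed-minus-computed radiances against prior and noise covariances. The sketch below applies one such update to a linear toy forward model; the Jacobian, covariances, and two-element state (stand-ins for CTP and ECA) are invented, and the real MODIS/AIRS forward operators are far more complex.

```python
import numpy as np

# One optimal-estimation (1DVAR) update,
#   x_hat = xa + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - F(xa)),
# applied to a linear toy forward model y = K x. All matrices are invented.
K = np.array([[1.0, 0.5],
              [0.2, 1.5],
              [0.8, 0.3]])            # Jacobian: 3 channels x 2 state elements
Sa = np.diag([4.0, 4.0])              # prior (background) covariance
Se = np.diag([0.01, 0.01, 0.01])      # observation-noise covariance

x_true = np.array([2.0, 1.0])
xa = np.zeros(2)                      # prior state
y = K @ x_true                        # noise-free synthetic observation

Se_inv = np.linalg.inv(Se)
A = K.T @ Se_inv @ K + np.linalg.inv(Sa)
x_hat = xa + np.linalg.solve(A, K.T @ Se_inv @ (y - K @ xa))
print(x_hat)                          # close to x_true, pulled slightly to xa
```

    With noise-free observations and a weak prior the update nearly recovers the true state; the same algebra, iterated with a nonlinear forward model, is the Gauss-Newton form of 1DVAR.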


  14. COCOA code for creating mock observations of star cluster models

    NASA Astrophysics Data System (ADS)

    Askar, Abbas; Giersz, Mirek; Pych, Wojciech; Dalessandro, Emanuele

    2018-04-01

    We introduce and present results from the COCOA (Cluster simulatiOn Comparison with ObservAtions) code that has been developed to create idealized mock photometric observations using results from numerical simulations of star cluster evolution. COCOA is able to present the output of realistic numerical simulations of star clusters carried out using Monte Carlo or N-body codes in a way that is useful for direct comparison with photometric observations. In this paper, we describe the COCOA code and demonstrate its different applications by utilizing globular cluster (GC) models simulated with the MOCCA (MOnte Carlo Cluster simulAtor) code. COCOA is used to synthetically observe these different GC models with optical telescopes, perform point spread function photometry, and subsequently produce observed colour-magnitude diagrams. We also use COCOA to compare the results from synthetic observations of a cluster model that has the same age and metallicity as the Galactic GC NGC 2808 with observations of the same cluster carried out with a 2.2 m optical telescope. We find that COCOA can effectively simulate realistic observations and recover photometric data. COCOA has numerous scientific applications that may be helpful for both theoreticians and observers who work on star clusters. Plans for further improving and developing the code are also discussed in this paper.
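
    The photometric core of such mock observations, applying a distance modulus and noise to simulated magnitudes to build colour-magnitude pairs, can be sketched as follows. The star counts, magnitudes, 10 kpc distance, and noise levels are invented for illustration and are not COCOA's models.

```python
import numpy as np

# Toy mock photometry: take simulated absolute magnitudes, apply the distance
# modulus mu = 5 log10(d/pc) - 5, add Gaussian photometric noise, and stack
# colour-magnitude pairs for a CMD. All numbers are invented.
rng = np.random.default_rng(4)

n_stars = 5000
abs_v = rng.uniform(-2.0, 10.0, n_stars)                # fake absolute V mags
colour = 0.1 * abs_v + rng.normal(0.0, 0.05, n_stars)   # fake B-V sequence

d_pc = 10_000.0
mu = 5.0 * np.log10(d_pc) - 5.0                         # distance modulus = 15
app_v = abs_v + mu + rng.normal(0.0, 0.02, n_stars)     # apparent V + noise

cmd = np.column_stack([colour, app_v])                  # points for a CMD
print(mu, cmd.shape)
```

    A real pipeline would render these stars into an image, run PSF photometry, and only then build the diagram; the sketch captures just the magnitude bookkeeping.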

  15. Simulation of Weld Mechanical Behavior to Include Welding-Induced Residual Stress and Distortion: Coupling of SYSWELD and Abaqus Codes

    DTIC Science & Technology

    2015-11-01

    A method for transferring welding-induced residual stresses and distortions from weld simulations in the SYSWELD software code into structural Finite Element Analysis (FEA) simulations performed in the Abaqus FEA code is presented. The translation of the results between the two codes is accomplished using a newly developed Python script.

  16. Impact of elevated CO2 concentration on dynamics of leaf photosynthesis in Fagus sylvatica is modulated by sky conditions.

    PubMed

    Urban, Otmar; Klem, Karel; Holišová, Petra; Šigut, Ladislav; Šprtová, Mirka; Teslová-Navrátilová, Petra; Zitová, Martina; Špunda, Vladimír; Marek, Michal V; Grace, John

    2014-02-01

    It has been suggested that atmospheric CO2 concentration and the frequency of cloud cover will increase in the future. It remains unclear, however, how elevated CO2 influences photosynthesis under complex clear versus cloudy sky conditions. Accordingly, diurnal changes in photosynthetic responses among beech trees grown at ambient (AC) and doubled (EC) CO2 concentrations were studied under contrasting sky conditions. EC stimulated the daily sum of fixed CO2 and light use efficiency under clear sky, while both of these parameters were reduced under cloudy sky as compared with the AC treatment. The reduction in photosynthesis rate under cloudy sky was particularly associated with EC-stimulated, xanthophyll-dependent thermal dissipation of absorbed light energy. Under clear sky, a pronounced afternoon depression of the CO2 assimilation rate was found in sun-adapted leaves under EC compared with AC conditions. This was caused in particular by stomatal closure mediated by vapour pressure deficit. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.

  17. A comparative study between spiral-filter press and belt press implemented in a cloudy apple juice production process.

    PubMed

    De Paepe, Domien; Coudijzer, Katleen; Noten, Bart; Valkenborg, Dirk; Servaes, Kelly; De Loose, Marc; Diels, Ludo; Voorspoels, Stefan; Van Droogenbroeck, Bart

    2015-04-15

    In this study, the advantages and disadvantages of the innovative, low-oxygen spiral-filter press system were studied in comparison with the belt press, commonly applied in small and medium-size enterprises for the production of cloudy apple juice. On the basis of equivalent throughput, a higher juice yield could be achieved with the spiral-filter press. A more turbid juice with a higher content of suspended solids could also be produced. The avoidance of enzymatic browning during juice extraction led to an attractive yellowish juice with an elevated phenolic content. Moreover, it was found that juice produced with the spiral-filter press demonstrates a higher retention of phenolic compounds during the downstream processing steps and storage. The results demonstrate the advantage of the spiral-filter press in comparison with the belt press for the production of a high-quality cloudy apple juice rich in phenolic compounds, without the use of oxidation-inhibiting additives. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. TOGA COARE Satellite data summaries available on the World Wide Web

    NASA Technical Reports Server (NTRS)

    Chen, S. S.; Houze, R. A., Jr.; Mapes, B. E.; Brodzick, S. R.; Yutler, S. E.

    1995-01-01

    Satellite data summary images and analysis plots from the Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA COARE), which were initially prepared in the field at the Honiara Operations Center, are now available on the Internet via World Wide Web browsers such as Mosaic. These satellite data summaries consist of products derived from the Japanese Geosynchronous Meteorological Satellite IR data: a time-size series of the distribution of contiguous cold cloudiness areas, weekly percent high cloudiness (PHC) maps, and a five-month time-longitudinal diagram illustrating the zonal motion of large areas of cold cloudiness. The weekly PHC maps are overlaid with weekly mean 850-hPa wind calculated from the European Centre for Medium-Range Weather Forecasts (ECMWF) global analysis field and can be viewed as an animation loop. These satellite summaries provide an overview of spatial and temporal variabilities of the cloud population and a large-scale context for studies concerning specific processes of various components of TOGA COARE.
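
    A percent-high-cloudiness product of this kind reduces to a simple per-pixel statistic: the fraction of images in a week whose IR brightness temperature falls below a cold-cloud threshold. The sketch below uses random brightness temperatures and an illustrative 208 K threshold, not the actual TOGA COARE processing values.

```python
import numpy as np

# Percent high cloudiness (PHC) in toy form: per grid cell, the fraction of
# IR images in a week whose brightness temperature is below a cold threshold.
# Threshold and array sizes are illustrative choices only.
rng = np.random.default_rng(2)
bt = rng.uniform(190.0, 300.0, size=(56, 40, 60))   # (time, lat, lon) in K

THRESHOLD_K = 208.0
phc = 100.0 * (bt < THRESHOLD_K).mean(axis=0)       # percent per grid cell

print(phc.shape)
```

    Overlaying such a field with mean 850-hPa winds, as the summaries do, is then purely a plotting step.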

  19. Numerical simulation of experiments in the Giant Planet Facility

    NASA Technical Reports Server (NTRS)

    Green, M. J.; Davy, W. C.

    1979-01-01

    A series of existing computer codes is used to numerically simulate ablation experiments in the Giant Planet Facility. Of primary importance is the simulation of the low Mach number shock layer that envelops the test model. The RASLE shock-layer code, used in the Jupiter entry probe heat-shield design, is adapted to the experimental conditions. RASLE predictions for radiative and convective heat fluxes are in good agreement with calorimeter measurements. In simulating carbonaceous ablation experiments, the RASLE code is coupled directly with the CMA material response code. For the graphite models, predicted and measured recessions agree very well. Predicted recession for the carbon phenolic models is 50% higher than that measured. This is the first time the codes used for the Jupiter probe design have been compared with experiments.

  20. Convenient models of the atmosphere: optics and solar radiation

    NASA Astrophysics Data System (ADS)

    Alexander, Ginsburg; Victor, Frolkis; Irina, Melnikova; Sergey, Novikov; Dmitriy, Samulenkov; Maxim, Sapunov

    2017-11-01

    Simple optical models of the clear and cloudy atmosphere are proposed. Four aerosol-content scenarios are considered: a complete absence of aerosols, a low background concentration (500 cm-3), a high concentration (2000 cm-3), and a very high particle content (5000 cm-3). In the cloud scenario, an external-mixture model is assumed. Optical thickness and single-scattering albedo are calculated at 13 wavelengths in the shortwave range 0.28-0.90 µm, with molecular absorption bands simulated by a triangular function. The proposed optical parameters are compared with results of various measurements and retrievals (lidar measurements, sampling, and processed radiation measurements). For the cloudy atmosphere, single-layer and two-layer models are proposed; the cloud optical parameters obtained under the external-mixture assumption agree with values retrieved from airborne observations. Hemispherical fluxes of reflected and transmitted solar radiation and the radiative divergence are calculated with the Delta-Eddington approach, for surface albedo values of 0, 0.5, and 0.9 and for the spectral albedo of a sandy surface, at solar zenith angles of 0°, 30°, 40°, and 60°. The calculated values are compared with radiative airborne observations, and estimates of the local instantaneous radiative forcing of aerosols and clouds, together with heating rates, are presented for the considered models.
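
    The Delta-Eddington approach mentioned above rescales the optical properties to account for the strong forward-scattering peak of cloud and aerosol particles (Joseph, Wiscombe and Weinman, 1976). The standard similarity scaling with truncated fraction f = g² can be written out directly; the example values are typical of a water cloud and chosen only for illustration.

```python
# Delta-Eddington similarity scaling with forward-peak fraction f = g**2:
#   tau'   = (1 - omega * f) * tau
#   omega' = (1 - f) * omega / (1 - omega * f)
#   g'     = (g - f) / (1 - f) = g / (1 + g)
def delta_eddington_scale(tau, omega, g):
    """Return the scaled (tau', omega', g') for the two-stream solver."""
    f = g * g
    tau_s = (1.0 - omega * f) * tau
    omega_s = (1.0 - f) * omega / (1.0 - omega * f)
    g_s = g / (1.0 + g)
    return tau_s, omega_s, g_s

# Typical water-cloud values, chosen for illustration.
tau_s, omega_s, g_s = delta_eddington_scale(tau=10.0, omega=0.999, g=0.85)
print(tau_s, omega_s, g_s)
```

    The scaled layer is optically much thinner and less forward-scattering, which is what lets a simple two-stream solver reproduce fluxes for sharply peaked phase functions.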

  1. Factors driving mercury variability in the Arctic atmosphere and ocean over the past 30 years

    NASA Astrophysics Data System (ADS)

    Fisher, Jenny A.; Jacob, Daniel J.; Soerensen, Anne L.; Amos, Helen M.; Corbitt, Elizabeth S.; Streets, David G.; Wang, Qiaoqiao; Yantosca, Robert M.; Sunderland, Elsie M.

    2013-12-01

    Observations at Arctic sites (Alert and Zeppelin) show large interannual variability (IAV) in atmospheric mercury (Hg), implying a strong sensitivity of Hg to environmental factors and potentially to climate change. We use the GEOS-Chem global biogeochemical Hg model to interpret these observations and identify the principal drivers of spring and summer IAV in the Arctic atmosphere and surface ocean from 1979 to 2008. The model has moderate skill in simulating the observed atmospheric IAV at the two sites (r ≈ 0.4) and successfully reproduces a long-term shift at Alert in the timing of the spring minimum from May to April (r = 0.7). Principal component analysis indicates that much of the IAV in the model can be explained by a single climate mode with high temperatures, low sea ice fraction, low cloudiness, and shallow boundary layer. This mode drives decreased bromine-driven deposition in spring and increased ocean evasion in summer. In the Arctic surface ocean, we find that the IAV for modeled total Hg is dominated by the meltwater flux of Hg previously deposited to sea ice, which is largest in years with high solar radiation (clear skies) and cold spring air temperature. Climate change in the Arctic is projected to result in increased cloudiness and strong warming in spring, which may thus lead to decreased Hg inputs to the Arctic Ocean. The effect of climate change on Hg discharges from Arctic rivers remains a major source of uncertainty.

  2. Quantifying and Modelling the Effect of Cloud Shadows on the Surface Irradiance at Tropical and Midlatitude Forests

    NASA Astrophysics Data System (ADS)

    Kivalov, Sergey N.; Fitzjarrald, David R.

    2018-02-01

    Cloud shadows lead to alternating light and dark periods at the surface, with the most abrupt changes occurring in the presence of low-level forced cumulus clouds. We examine multiyear irradiance time series observed at a research tower in a midlatitude mixed deciduous forest (Harvard Forest, Massachusetts, USA: 42.53{°}N, 72.17{°}W) and one made at a similar tower in a tropical rain forest (Tapajós National Forest, Pará, Brazil: 2.86{°}S, 54.96{°}W). We link the durations of these periods statistically to conventional meteorological reports of sky type and cloud height at the two forests and present a method to synthesize the surface irradiance time series from sky-type information. Four classes of events describing distinct sequential irradiance changes at the transition from cloud shadow and direct sunlight are identified: sharp-to-sharp, slow-to-slow, sharp-to-slow, and slow-to-sharp. Lognormal and the Weibull statistical distributions distinguish among cloudy-sky types. Observers' qualitative reports of `scattered' and `broken' clouds are quantitatively distinguished by a threshold value of the ratio of mean clear to cloudy period durations. Generated synthetic time series based on these statistics adequately simulate the temporal "radiative forcing" linked to sky type. Our results offer a quantitative way to connect the conventional meteorological sky type to the time series of irradiance experienced at the surface.
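
    The synthesis procedure can be sketched as alternating sunlit and shadowed periods whose durations are drawn from the two fitted families. The distribution parameters and the two irradiance levels below are invented for illustration; they are not the fitted Harvard Forest or Tapajós values.

```python
import numpy as np

# Toy synthetic irradiance series: alternate sunlit periods with lognormal
# durations and cloud-shadow periods with Weibull durations, holding the
# irradiance at one of two made-up levels during each period.
rng = np.random.default_rng(3)

def synthetic_irradiance(n_periods=200, i_sun=900.0, i_shade=300.0):
    series = []
    for k in range(n_periods):
        if k % 2 == 0:   # sunlit period, lognormal duration (seconds)
            dur = int(rng.lognormal(mean=4.0, sigma=0.8)) + 1
            series.extend([i_sun] * dur)
        else:            # cloud-shadow period, Weibull duration (seconds)
            dur = int(60.0 * rng.weibull(1.5)) + 1
            series.extend([i_shade] * dur)
    return np.asarray(series)

irr = synthetic_irradiance()
print(irr.size, np.unique(irr))
```

    Replacing the flat levels with the paper's four transition classes (sharp/slow edges) would be a matter of ramping between the two values instead of switching instantaneously.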

  3. Sensitivity of aerosol loading and properties to cloudiness

    NASA Astrophysics Data System (ADS)

    Iversen, T.; Seland, O.; Kirkevag, A.; Kristjansson, J. E.

    2005-12-01

    Clouds influence aerosols in various ways. Sulfate is swiftly produced in liquid phase provided there is both sulfur dioxide and oxidants available. Nucleation and Aitken mode aerosol particles efficiently grow in size by collision and coagulation with cloud droplets. When precipitation is formed, aerosol and precursor gases may be quickly removed bay rainout. The dynamics associated with clouds in some cases may swiftly mix aerosols deeply into the troposphere. In some cases Aitken-mode particles may be formed in cloud droplets by splitting agglomerates of particulate matter such as black carbon In this presentation we will discuss how global cloudiness may influence the burden, residence time, and spatial distribution of sulfate, black carbon and particulate organic matter. A similar physico-chemical scheme for there compounds has been implemented in three generations of the NCAR community climate model (CCM3, CAM2 and CAM3). The scheme is documented in the literature and is a part of the Aerocom-intercomparison. There are many differences between these models. With respect to aerosols, a major difference is that CAM3 has a considerably higher global cloud volume and more then twice the amount of cloud water than CAM2 and CCM3. Atmospheric simulations have been made with prescribed ocean temperatures. It is slightly surprising to discover that certain aspects of the aerosols are not particularly sensitive to these differences in cloud availability. This sensitivity will be compared to sensitivities with respect to processing in deep convective clouds.

  4. POTENTIAL IMPACT OF TANK F FLUSH SOLUTION ON H-CANYON EVAPORATOR OPERATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kyser, E.; Fondeur, F.; Fink, S.

    2010-09-13

    Previous chemical analysis of a sample from the liquid heel found in Tank F of the High Activity Drain (HAD) system in the F/H laboratory revealed the presence of n-paraffin, tributyl phosphate (TBP), Modifier from the Modular Caustic-Side Solvent Extraction Unit (MCU) process, and a vinyl ester resin that is very similar to the protective lining on Tank F. Subsequent analyses detected the presence of a small amount of diisopropylnaphthalene (DIN), the major component of Ultima Gold™ AB liquid scintillation cocktail. Indications are that both vinyl ester resin and DIN are present in small amounts in the flush solution. The flush solution currently in the LR-56S trailer likely has an emulsion which is believed to contain a mixture of the reported organic species dominated by TBP. An acid treatment similar to that proposed to clear the HAD tank heel in the F/H laboratory was found to allow separation of an organic phase from the cloudy sample tested by SRNL. Mixing of that clear sample did re-introduce some cloudiness that did not immediately clear, but that cloudiness is attributed to the DIN in the matrix. An organic phase does quickly separate from the cloudy matrix, allowing separation by a box decanter in H-Canyon prior to transfer to the evaporator feed tank. This separation should proceed normally as long as the emulsion is broken up by acidification.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heng, Kevin, E-mail: kevin.heng@csh.unibe.ch

    We present a dimensionless index that quantifies the degree of cloudiness of the atmosphere of a transiting exoplanet. Our cloudiness index is based on measuring the transit radii associated with the line center and wing of the sodium or potassium line. In deriving this index, we revisited the algebraic formulae for inferring the isothermal pressure scale height from transit measurements. We demonstrate that the formulae of Lecavelier et al. and Benneke and Seager are identical: the former is inferring the temperature while assuming a value for the mean molecular mass and the latter is inferring the mean molecular mass while assuming a value for the temperature. More importantly, these formulae cannot be used to distinguish between cloudy and cloud-free atmospheres. We derive values of our cloudiness index for a small sample of seven hot Saturns/Jupiters taken from Sing et al. We show that WASP-17b, WASP-31b, and HAT-P-1b are nearly cloud-free at visible wavelengths. We find the tentative trend that more irradiated atmospheres tend to have fewer clouds consisting of sub-micron-sized particles. We also derive absolute sodium and/or potassium abundances ∼10² cm⁻³ for WASP-17b, WASP-31b, and HAT-P-1b (and upper limits for the other objects). Higher-resolution measurements of both the sodium and potassium lines, for a larger sample of exoplanetary atmospheres, are needed to confirm or refute this trend.
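    The scale-height algebra the paper revisits is compact enough to sketch. The Python fragment below is an illustrative sketch, not Heng's exact index definition: it computes the isothermal pressure scale height H = kT/(μ m_u g) and a toy cloudiness-style diagnostic comparing the measured line-center-to-wing transit-radius difference against the difference expected for a clear atmosphere. The function names, the assumed five-scale-height span, and all input numbers are hypothetical.

```python
# Illustrative sketch (not Heng's published index): infer the isothermal
# pressure scale height and compare it with the measured difference in
# transit radii between a line center and its wing.
K_B = 1.380649e-23      # Boltzmann constant, J/K
M_U = 1.66053907e-27    # atomic mass unit, kg

def scale_height(T, mu, g):
    """Isothermal pressure scale height H = kT / (mu * m_u * g), meters."""
    return K_B * T / (mu * M_U * g)

def cloudiness_diagnostic(dR_measured, H, dR_clear_in_H=5.0):
    """Ratio of the radius difference expected for a clear atmosphere
    (assumed here to span ~5 scale heights between line center and wing)
    to the measured difference.  Values well above 1 suggest clouds mute
    the line; values near 1 suggest a nearly cloud-free atmosphere."""
    return dR_clear_in_H * H / dR_measured

H = scale_height(T=1500.0, mu=2.3, g=10.0)   # hot-Jupiter-like inputs
print(f"H = {H / 1e3:.0f} km")
print(f"diagnostic = {cloudiness_diagnostic(dR_measured=3.0e6, H=H):.2f}")
```

A measured radius difference much smaller than a few scale heights is the cloud signature the paper exploits.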

  6. Computer Simulation of the VASIMR Engine

    NASA Technical Reports Server (NTRS)

    Garrison, David

    2005-01-01

    The goal of this project is to develop a magneto-hydrodynamic (MHD) computer code for simulation of the VASIMR engine. This code is designed to be easy to modify and use. We achieve this using the Cactus framework, a system originally developed for research in numerical relativity. Since its release, Cactus has become an extremely powerful and flexible open-source framework. The development of the code will be done in stages, starting with a basic fluid dynamics simulation and working towards a more complex MHD code. Once developed, this code can be used by students and researchers to further test and improve the VASIMR engine.

  7. Aerodynamic Analysis of the M33 Projectile Using the CFX Code

    DTIC Science & Technology

    2011-12-01

    The M33 projectile has been analyzed using the ANSYS CFX code, which is based on the numerical solution of the full Navier-Stokes equations. Simulation data were obtained using the CFX code, a commercial CFD program used to simulate fluid flow in a variety of applications such as gas turbines.

  8. Muon simulation codes MUSIC and MUSUN for underground physics

    NASA Astrophysics Data System (ADS)

    Kudryavtsev, V. A.

    2009-03-01

    The paper describes two Monte Carlo codes dedicated to muon simulations: MUSIC (MUon SImulation Code) and MUSUN (MUon Simulations UNderground). MUSIC is a package for muon transport through matter. It is particularly useful for propagating muons through large thicknesses of rock or water, for instance from the surface down to an underground or underwater laboratory. MUSUN is designed to use the results of muon transport through rock or water to generate muons in or around an underground laboratory, taking into account their energy spectrum and angular distribution.
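    The mean behaviour underlying the muon transport that MUSIC treats stochastically can be sketched with the standard energy-loss parameterization ⟨dE/dx⟩ = a + bE. A minimal Python sketch, with typical standard-rock values of a and b (illustrative round numbers, not MUSIC's internal tables):

```python
import math

# Continuous-slowing-down sketch of muon transport: <dE/dx> = a + b*E,
# where a covers ionization losses and b radiative losses.  MUSIC models
# the losses stochastically; this deterministic version reproduces only
# the mean range.
A = 2.0e-3   # GeV per g/cm^2, ionization loss (typical for rock)
B = 4.0e-6   # per g/cm^2, radiative loss coefficient

def mean_range(E0):
    """CSDA range (g/cm^2) of a muon of initial energy E0 (GeV):
    R = (1/b) * ln(1 + b*E0/a)."""
    return math.log(1.0 + B * E0 / A) / B

def min_energy(depth):
    """Minimum surface energy (GeV) to reach a slant depth (g/cm^2),
    the inverse of mean_range."""
    return (A / B) * math.expm1(B * depth)

print(f"1 TeV muon mean range: {mean_range(1000.0):.3g} g/cm^2")
```

For a 1 TeV muon this gives roughly 2.7 × 10⁵ g/cm², i.e. about 2.7 km water equivalent, in line with typical underground-laboratory depths.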

  9. Simulation Studies for Inspection of the Benchmark Test with PATRASH

    NASA Astrophysics Data System (ADS)

    Shimosaki, Y.; Igarashi, S.; Machida, S.; Shirakata, M.; Takayama, K.; Noda, F.; Shigaki, K.

    2002-12-01

    In order to delineate the halo-formation mechanisms in a typical FODO lattice, a 2-D simulation code, PATRASH (PArticle TRAcking in a Synchrotron for Halo analysis), has been developed. The electric field originating from the space charge is calculated by the hybrid tree code method. Benchmark tests utilizing the three simulation codes ACCSIM, PATRASH and SIMPSONS were carried out, and their results were confirmed to be in fair agreement with each other. The details of the PATRASH simulation are discussed with some examples.

  10. Validation: Codes to compare simulation data to various observations

    NASA Astrophysics Data System (ADS)

    Cohn, J. D.

    2017-02-01

    Validation provides codes to compare simulated data against several observations: simulated stellar masses and star formation rates, the simulated stellar mass function against the observed stellar mass function from PRIMUS or SDSS-GALEX in several redshift bins spanning 0.01-1.0, and the simulated B-band luminosity function against its observed counterpart. It also creates plots for various attributes, including stellar mass functions and the stellar-mass-to-halo-mass relation. These codes can plot model predictions (in some cases alongside observational data) to test other mock catalogs.
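    The core of such a comparison is straightforward to sketch: bin simulated stellar masses into a stellar mass function Φ(M) = dN / (d log₁₀M · V) that can be overplotted on an observed one. The mock masses and survey volume below are synthetic stand-ins, not output of the Validation package.

```python
import numpy as np

# Build a stellar mass function from a synthetic catalog of simulated
# stellar masses: number density per dex per comoving volume.
rng = np.random.default_rng(0)
log_mstar = rng.normal(10.0, 0.5, size=100_000)  # log10(M*/Msun), synthetic
volume = 1.0e6                                   # comoving volume, Mpc^3

bins = np.arange(8.0, 12.01, 0.25)               # 0.25 dex bins
counts, edges = np.histogram(log_mstar, bins=bins)
dlogm = np.diff(edges)
phi = counts / (dlogm * volume)                  # number / dex / Mpc^3

centers = 0.5 * (edges[:-1] + edges[1:])
for c, p in zip(centers, phi):
    if p > 0:
        print(f"log10 M* = {c:5.2f}   Phi = {p:.3e}")
```

An observed SMF tabulated on the same bins can then be compared bin by bin or plotted alongside.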

  11. NASA One-Dimensional Combustor Simulation--User Manual for S1D_ML

    NASA Technical Reports Server (NTRS)

    Stueber, Thomas J.; Paxson, Daniel E.

    2014-01-01

    The work presented in this paper promotes research leading to a closed-loop control system for actively suppressing thermo-acoustic instabilities. To serve as a model for such a closed-loop control system, a one-dimensional combustor simulation has been written using MATLAB software tools. This MATLAB-based process is similar to a precursor one-dimensional combustor simulation written as FORTRAN 77 source code. The previous simulation process required modifying the FORTRAN 77 source code, compiling, and linking whenever a new combustor simulation executable file was created. The MATLAB-based simulation requires no changes to the source code, recompiling, or linking. Furthermore, it can be run from script files within the MATLAB environment, or as a compiled copy of the executable file in the Command Prompt window without requiring a licensed copy of MATLAB. This report presents a general simulation overview, followed by details on how to set up and initiate a simulation. Finally, the post-processing section describes the two types of files created while running the simulation and includes results for a default simulation included with the source code.

  12. Accuracy of the hypothetical sky-polarimetric Viking navigation versus sky conditions: revealing solar elevations and cloudinesses favourable for this navigation method

    NASA Astrophysics Data System (ADS)

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Blahó, Miklós; Egri, Ádám; Szabó, Gyula; Horváth, Gábor

    2017-09-01

    According to Thorkild Ramskou's theory proposed in 1967, under overcast and foggy skies, Viking seafarers might have used skylight polarization analysed with special crystals called sunstones to determine the position of the invisible Sun. After finding the occluded Sun with sunstones, its elevation angle had to be measured and its shadow had to be projected onto the horizontal surface of a sun compass. According to Ramskou's theory, these sunstones might have been birefringent calcite or dichroic cordierite or tourmaline crystals working as polarizers. It has frequently been claimed that this method might have been suitable for navigation even in cloudy weather. This hypothesis has been accepted and frequently cited for decades without any experimental support. In this work, we determined the accuracy of this hypothetical sky-polarimetric Viking navigation for 1080 different sky situations characterized by solar elevation θ and cloudiness ρ, the sky polarization patterns of which were measured by full-sky imaging polarimetry. We used the earlier measured uncertainty functions of the navigation steps 1, 2 and 3 for calcite, cordierite and tourmaline sunstone crystals, respectively, and the newly measured uncertainty function of step 4 presented here. As a result, we revealed the meteorological conditions under which Vikings could have used this hypothetical navigation method. We determined the solar elevations at which the navigation uncertainties are minimal at summer solstice and spring equinox for all three sunstone types. On average, calcite sunstone ensures a more accurate sky-polarimetric navigation than tourmaline and cordierite. However, in some special cases (generally at 35° ≤ θ ≤ 40°, 1 okta ≤ ρ ≤ 6 oktas for summer solstice, and at 20° ≤ θ ≤ 25°, 0 okta ≤ ρ ≤ 4 oktas for spring equinox), the use of tourmaline and cordierite results in smaller navigation uncertainties than that of calcite. 
Generally, under clear or less cloudy skies, the sky-polarimetric navigation is more accurate, but at low solar elevations its accuracy remains relatively large even at high cloudiness. For a given ρ, the absolute value of averaged peak North uncertainties dramatically decreases with increasing θ until the sign (±) change of these uncertainties. For a given θ, this absolute value can either decrease or increase with increasing ρ. The most advantageous sky situations for this navigation method are at summer solstice when the solar elevation and cloudiness are 35° ≤ θ ≤ 40° and 2 oktas ≤ ρ ≤ 3 oktas.

  13. Visual Computing Environment Workshop

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles (Compiler)

    1998-01-01

    The Visual Computing Environment (VCE) is a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis.

  14. The UPSF code: a metaprogramming-based high-performance automatically parallelized plasma simulation framework

    NASA Astrophysics Data System (ADS)

    Gao, Xiatian; Wang, Xiaogang; Jiang, Binhao

    2017-10-01

    UPSF (Universal Plasma Simulation Framework) is a new plasma simulation code designed for maximum flexibility using cutting-edge techniques supported by the C++17 standard. Through metaprogramming, UPSF provides arbitrary-dimensional data structures and methods to support various kinds of plasma simulation models, such as Vlasov, particle-in-cell (PIC), fluid, and Fokker-Planck models, along with their variants and hybrid methods. A single code can thus be applied to systems of arbitrary dimension with no loss of performance. UPSF can also automatically parallelize the distributed data structures and accelerate matrix and tensor operations with BLAS. A three-dimensional particle-in-cell code has been developed based on UPSF. Two test cases, Landau damping and the Weibel instability for the electrostatic and electromagnetic situations respectively, are presented to demonstrate the validity and performance of the UPSF code.
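    UPSF achieves dimension-generic code via C++17 metaprogramming; as a language-neutral illustration of the same goal (an assumption about the idea, not UPSF's API), the NumPy sketch below writes one stencil routine that works unchanged on fields of any dimensionality.

```python
import numpy as np

# One Laplacian routine, written once, valid for fields of any dimension:
# the loop over axes replaces per-dimension specializations.
def laplacian(field, dx=1.0):
    """Second-order Laplacian of an N-dimensional periodic field."""
    result = np.zeros_like(field, dtype=float)
    for axis in range(field.ndim):
        result += (np.roll(field, +1, axis) - 2.0 * field
                   + np.roll(field, -1, axis)) / dx**2
    return result

# The same routine handles 1-D, 2-D, and 3-D grids unchanged.
for shape in [(16,), (16, 16), (8, 8, 8)]:
    f = np.zeros(shape)
    f[tuple(s // 2 for s in shape)] = 1.0        # point source
    print(f"{f.ndim}-D Laplacian sum: {laplacian(f).sum():.2e}")
```

In C++17 the same single-source property is obtained at compile time (e.g. with templates over the dimension), so there is no runtime dispatch cost.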

  15. Electron-beam-ion-source (EBIS) modeling progress at FAR-TECH, Inc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J. S., E-mail: kim@far-tech.com; Zhao, L., E-mail: kim@far-tech.com; Spencer, J. A., E-mail: kim@far-tech.com

    FAR-TECH, Inc. has been developing a numerical modeling tool for Electron-Beam-Ion-Sources (EBISs). The tool consists of two codes: the Particle-Beam-Gun-Simulation (PBGUNS) code, which simulates a steady-state electron beam, and the EBIS-Particle-In-Cell (EBIS-PIC) code, which simulates ion charge breeding with the electron beam. PBGUNS, a 2D (r,z) electron gun and ion source simulation code, has been extended for efficient modeling of EBISs; that work was presented previously. EBIS-PIC is a space-charge self-consistent PIC code written to simulate charge breeding in an axisymmetric 2D (r,z) device allowing for full three-dimensional ion dynamics. This 2D code has been successfully benchmarked against Test-EBIS measurements at Brookhaven National Laboratory. For long-timescale (tens of ms) ion charge breeding, the 2D EBIS-PIC simulations take a long computational time, making them less practical. Most of the EBIS charge breeding, however, may be modeled in 1D (r), as the axial dependence of the ion dynamics may be ignored in the trap. Where 1D approximations are valid, simulations of charge breeding in an EBIS over long time scales become possible using EBIS-PIC together with PBGUNS. Initial 1D results are presented, along with the significance of the magnetic field to ion dynamics, ion cooling due to collisions with neutral gas, and the role of Coulomb collisions.

  16. Convolutional coding results for the MVM '73 X-band telemetry experiment

    NASA Technical Reports Server (NTRS)

    Layland, J. W.

    1978-01-01

    Results of simulation of several short-constraint-length convolutional codes using a noisy symbol stream obtained via the turnaround ranging channels of the MVM'73 spacecraft are presented. First operational use of this coding technique is on the Voyager mission. The relative performance of these codes in this environment is as previously predicted from computer-based simulations.
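    As a sketch of what such a simulation involves, here is a minimal rate-1/2 convolutional encoder and hard-decision Viterbi decoder. The generators (octal 7 and 5) and constraint length K = 3 are chosen for brevity; they are not the MVM'73 codes, which had constraint lengths of about 7, but the trellis search is the same.

```python
# Rate-1/2, constraint-length-3 convolutional code (generators 7, 5 octal)
# with a hard-decision Viterbi decoder.
G = (0b111, 0b101)   # generator polynomials
K = 3                # constraint length

def encode(bits):
    state, out = 0, []
    for b in bits + [0] * (K - 1):               # zero-flush the register
        state = ((state << 1) | b) & ((1 << K) - 1)
        out += [bin(state & g).count("1") & 1 for g in G]
    return out

def viterbi(symbols):
    n_states = 1 << (K - 1)
    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)      # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(symbols), 2):
        rx = symbols[i:i + 2]
        nm, npth = [INF] * n_states, [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                full = (s << 1) | b              # register contents
                expect = [bin(full & g).count("1") & 1 for g in G]
                m = metric[s] + sum(r != e for r, e in zip(rx, expect))
                ns = full & (n_states - 1)       # next trellis state
                if m < nm[ns]:
                    nm[ns], npth[ns] = m, paths[s] + [b]
        metric, paths = nm, npth
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best][:-(K - 1)]                # drop the flush bits

msg = [1, 0, 1, 1, 0, 0, 1]
coded = encode(msg)
coded[3] ^= 1                                    # inject one channel error
print(viterbi(coded) == msg)                     # single error is corrected
```

The free distance of this small code is 5, so any single channel error is corrected; performance studies like the one reported repeat this over many noisy frames to estimate bit-error rates.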

  17. Implementation and evaluation of a simulation curriculum for paediatric residency programs including just-in-time in situ mock codes.

    PubMed

    Sam, Jonathan; Pierse, Michael; Al-Qahtani, Abdullah; Cheng, Adam

    2012-02-01

    To develop, implement and evaluate a simulation-based acute care curriculum in a paediatric residency program using an integrated and longitudinal approach. Curriculum framework consisting of three modular, year-specific courses and longitudinal just-in-time, in situ mock codes. Paediatric residency program at BC Children's Hospital, Vancouver, British Columbia. The three year-specific courses focused on the critical first 5 min, complex medical management and crisis resource management, respectively. The just-in-time in situ mock codes simulated the acute deterioration of an existing ward patient, prepared the actual multidisciplinary code team, and primed the surrounding crisis support systems. Each curriculum component was evaluated with surveys using a five-point Likert scale. A total of 40 resident surveys were completed after each of the modular courses, and an additional 28 surveys were completed for the overall simulation curriculum. The highest Likert scores were for hands-on skill stations, immersive simulation environment and crisis resource management teaching. Survey results also suggested that just-in-time mock codes were realistic, reinforced learning, and prepared ward teams for patient deterioration. A simulation-based acute care curriculum was successfully integrated into a paediatric residency program. It provides a model for integrating simulation-based learning into other training programs, as well as a model for any hospital that wishes to improve paediatric resuscitation outcomes using just-in-time in situ mock codes.

  18. Measurements of Atmospheric CO2 Column in Cloudy Weather Conditions using An IM-CW Lidar at 1.57 Micron

    NASA Technical Reports Server (NTRS)

    Lin, Bing; Obland, Michael; Harrison, F. Wallace; Nehrir, Amin; Browell, Edward; Campbell, Joel; Dobler, Jeremy; Meadows, Bryon; Fan, Tai-Fang; Kooi, Susan

    2015-01-01

    This study evaluates the capability of atmospheric CO2 column measurements under cloudy conditions using an airborne intensity-modulated continuous-wave integrated-path differential-absorption lidar operating in the 1.57-μm CO2 absorption band. The atmospheric CO2 column amounts from the aircraft to the tops of optically thick cumulus clouds and to the surface in the presence of optically thin clouds are retrieved from lidar data obtained during the summer 2011 and spring 2013 flight campaigns, respectively.
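    The core of an integrated-path differential-absorption (IPDA) retrieval can be written in a few lines: the two-way differential optical depth between an on-line and an off-line wavelength gives the column between the aircraft and the scattering surface (ground or cloud top). This is a hedged sketch of the measurement principle, not the instrument's retrieval code, and all numbers are illustrative.

```python
import math

# Toy IPDA retrieval: column number density from the ratio of the
# returned off-line and on-line powers and the differential absorption
# cross section.  Inputs are hypothetical.
def co2_column(p_on, p_off, delta_sigma):
    """Column number density (molecules/cm^2):
    N = ln(P_off / P_on) / (2 * delta_sigma),
    for a two-way path and cross-section difference delta_sigma (cm^2)."""
    return math.log(p_off / p_on) / (2.0 * delta_sigma)

N = co2_column(p_on=0.80, p_off=1.00, delta_sigma=1.0e-23)
print(f"CO2 column: {N:.3e} molecules/cm^2")
```

Retrieving the column to cloud tops rather than the surface simply shortens the path the same formula integrates over.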

  19. Visual Computing Environment

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Putt, Charles W.

    1997-01-01

    The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes parallel virtual machine (PVM) for distributed processing. 
Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on a cluster of heterogeneous workstations. A scripting facility allows users to dictate the sequence of events that make up the particular simulation.
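    The boundary-condition interchange described in objective (2) can be illustrated with a toy coupling loop (an assumption about the pattern, not VCE code): two independently relaxing 1-D "component codes" on overlapping subdomains hand each other interface values every outer iteration until the coupled steady state emerges.

```python
import numpy as np

# Alternating-Schwarz-style coupling of two 1-D Laplace "components"
# on overlapping subdomains of a shared grid; each outer iteration
# each component relaxes with boundary data taken from the other.
nx = 41
u = np.zeros(nx)
u[-1] = 1.0                          # global BCs: u(0)=0, u(1)=1
li, ri = 24, 16                      # overlapping interface indices

def relax(v, sweeps=50):
    """Interior Jacobi relaxation of u'' = 0 with fixed endpoints."""
    for _ in range(sweeps):
        v[1:-1] = 0.5 * (v[:-2] + v[2:])
    return v

for outer in range(200):
    u[:li + 1] = relax(u[:li + 1].copy())   # component A, BC from B
    u[ri:] = relax(u[ri:].copy())           # component B, BC from A

x = np.linspace(0.0, 1.0, nx)
print("max error vs exact u = x:", np.abs(u - x).max())
```

VCE generalizes exactly this pattern to 3-D codes with arbitrary grid matching, distributing the components over networked machines instead of a single loop.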

  20. Numerical experiments on the role of radiative processes in the development and maintenance of upper level clouds

    NASA Technical Reports Server (NTRS)

    Starr, D. O'C.; Cox, S. K.

    1981-01-01

    A time-dependent, two-dimensional Eulerian model is presented whose purpose is to obtain more realistic parameterizations of extended high level cloudiness, and the results of a numerical experiment using the model are reported. The model is anelastic and the Boussinesq assumption is invoked. Unresolved subgrid scale processes are parameterized as eddy diffusion processes. Two phases of water are incorporated and equilibrium between them is assumed. The effects of infrared radiative processes are parametrically represented. Two simulations were conducted with identical initial conditions; in one of them, the radiation term was never turned on. The mean values of perturbation potential temperature at each level in the domain are plotted versus height after 15, 30, and 60 minutes of simulated time. The influence of the radiative term is seen to impose a cooling trend, leading to an increased generation of ice water and an increased generation of turbulent kinetic energy in the cloud layer.

  1. Research Review, 1983

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The Global Modeling and Simulation Branch (GMSB) of the Laboratory for Atmospheric Sciences (GLAS) is engaged in general circulation modeling studies related to global atmospheric and oceanographic research. The research activities discussed are organized into two disciplines: Global Weather/Observing Systems and Climate/Ocean-Air Interactions. The Global Weather activities are grouped in four areas: (1) Analysis and Forecast Studies, (2) Satellite Observing Systems, (3) Analysis and Model Development, (4) Atmospheric Dynamics and Diagnostic Studies. The GLAS Analysis/Forecast/Retrieval System was applied to both FGGE and post FGGE periods. The resulting analyses have already been used in a large number of theoretical studies of atmospheric dynamics, forecast impact studies and development of new or improved algorithms for the utilization of satellite data. Ocean studies have focused on the analysis of long-term global sea surface temperature data, for use in the study of the response of the atmosphere to sea surface temperature anomalies. Climate research has concentrated on the simulation of global cloudiness, and on the sensitivities of the climate to sea surface temperature and ground wetness anomalies.

  2. Mock Code: A Code Blue Scenario Requested by and Developed for Registered Nurses

    PubMed Central

    Rideout, Janice; Pritchett-Kelly, Sherry; McDonald, Melissa; Mullins-Richards, Paula; Dubrowski, Adam

    2016-01-01

    The use of simulation in medical training is quickly becoming more common, with applications in emergency, surgical, and nursing education. Recently, registered nurses working in surgical inpatient units requested a mock code simulation to practice skills, improve knowledge, and build self-confidence in a safe and controlled environment. A simulation scenario using a high-fidelity mannequin was developed and will be discussed herein. PMID:28123919

  3. An Overview of the Greyscales Lethality Assessment Methodology

    DTIC Science & Technology

    2011-01-01

    The code is capable of being incorporated into a variety of simulations; it has already been integrated into the Weapon Systems Division MECA and DUEL missile engagement simulations.

  4. Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics

    DOE PAGES

    Laney, Daniel; Langer, Steven; Weber, Christopher; ...

    2014-01-01

    This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3-5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
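    The paper's central idea can be sketched in a few lines: compress a field lossily after each "time step" and judge the damage with a physics-based metric (here, relative change in total energy) rather than a signal-processing norm. The quantizer and the synthetic field below are stand-ins, not the paper's compressors or codes.

```python
import numpy as np

# Crude lossy "compressor": uniform quantization to 2**bits levels,
# judged by a physics-style metric (relative error in a quadratic
# "kinetic energy") instead of an L2 signal norm.
rng = np.random.default_rng(1)

def quantize(field, bits):
    lo, hi = field.min(), field.max()
    levels = 2 ** bits - 1
    q = np.round((field - lo) / (hi - lo) * levels)
    return lo + q / levels * (hi - lo)

velocity = rng.normal(0.0, 1.0, size=100_000)     # synthetic field
e0 = np.sum(velocity ** 2)
for bits in (16, 8, 6, 4):
    v = quantize(velocity, bits)
    err = abs(np.sum(v ** 2) - e0) / e0
    print(f"{bits:2d} bits (~{32 / bits:.1f}x): energy error {err:.2e}")
```

Real studies of this kind track several conserved or derived quantities per code; the point is that acceptability is defined by the physics, not by a generic reconstruction error.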

  5. A Radiation Chemistry Code Based on the Green's Function of the Diffusion Equation

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Wu, Honglu

    2014-01-01

    Stochastic radiation track structure codes are of great interest for space radiation studies and hadron therapy in medicine. These codes are used for many purposes, notably for microdosimetry and DNA damage studies. In the last two decades, they were also used with the Independent Reaction Times (IRT) method in the simulation of chemical reactions, to calculate the yield of various radiolytic species produced during the radiolysis of water and in chemical dosimeters. Recently, we have developed a Green's function based code to simulate reversible chemical reactions with an intermediate state, which yielded results in excellent agreement with those obtained by using the IRT method. This code was also used to simulate the interaction of particles with membrane receptors. We are in the process of including this program for use with the Monte-Carlo track structure code Relativistic Ion Tracks (RITRACKS). This recent addition should greatly expand the capabilities of RITRACKS, notably to simulate DNA damage by both the direct and indirect effects.
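    The free-diffusion Green's function that codes of this type build on, p(r,t) = (4πDt)^(−3/2) exp(−r²/4Dt), has the well-known consequence ⟨r²⟩ = 6Dt in three dimensions. A quick Monte Carlo sanity check (a generic sketch, not the paper's code) verifies that Brownian steps reproduce this moment:

```python
import numpy as np

# Monte Carlo check of the free-diffusion Green's function: after time
# t, the mean-square displacement of Brownian particles must be 6*D*t.
rng = np.random.default_rng(7)
D, dt, n_steps, n_part = 1.0e-9, 1.0e-12, 50, 20_000   # illustrative units

# Each step along each axis is Gaussian with variance 2*D*dt.
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_part, n_steps, 3))
r = steps.sum(axis=1)                       # positions after n_steps
msd = np.mean(np.sum(r ** 2, axis=1))

t = n_steps * dt
print(f"simulated <r^2> = {msd:.3e},  theory 6Dt = {6 * D * t:.3e}")
```

Reaction codes go further by sampling reaction times conditioned on this propagator (the IRT idea), but the propagator itself is the common building block.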

  6. Open-source framework for documentation of scientific software written on MATLAB-compatible programming languages

    NASA Astrophysics Data System (ADS)

    Konnik, Mikhail V.; Welsh, James

    2012-09-01

    Numerical simulators for adaptive optics systems have become an essential tool for the research and development of future advanced astronomical instruments. However, the growing code base of a numerical simulator makes it difficult to continue to support the code itself. Inadequate documentation of astronomical software for adaptive optics simulators can complicate development, since the documentation must contain up-to-date schemes and the mathematical descriptions implemented in the software. Although most modern programming environments like MATLAB or Octave have built-in documentation abilities, these are often insufficient for describing a typical adaptive optics simulator code. This paper describes a general cross-platform framework for the documentation of scientific software using open-source tools such as LaTeX, Mercurial, Doxygen, and Perl. Using a Perl script that translates the comments of MATLAB M-files into C-like comments, one can use Doxygen to generate and update the documentation for the scientific source code. The documentation generated by this framework contains the current code description with mathematical formulas, images, and bibliographical references. A detailed description of the framework components is presented, as well as guidelines for deploying the framework. Examples of code documentation for the scripts and functions of a MATLAB-based adaptive optics simulator are provided.
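    The comment-translation step the paper describes can be sketched compactly. The fragment below is a hedged Python stand-in for the Perl filter (a real filter handles many more cases): it turns leading `%` comment lines of an M-file into C-style `///` lines that a Doxygen input filter can consume.

```python
import re

# Minimal MATLAB-to-Doxygen comment filter: rewrite '%' comment lines
# as '///' so a C-like Doxygen pass can extract them.
def matlab_comments_to_doxygen(src):
    out = []
    for line in src.splitlines():
        m = re.match(r"^(\s*)%+(.*)$", line)
        if m:
            out.append(f"{m.group(1)}///{m.group(2)}")
        else:
            out.append(line)               # code lines pass through
    return "\n".join(out)

example = """\
% FOO  Compute the point-spread function.
%   psf = FOO(phase) returns the PSF of a phase screen.
function psf = foo(phase)
psf = abs(fft2(exp(1i*phase))).^2;
"""
print(matlab_comments_to_doxygen(example))
```

In a real deployment the filter is registered via Doxygen's input-filter mechanism so M-files are translated on the fly rather than rewritten on disk.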

  7. Study of premixing phase of steam explosion with JASMINE code in ALPHA program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriyama, Kiyofumi; Yamano, Norihiro; Maruyama, Yu

    The premixing phase of steam explosions has been studied in the ALPHA Program at the Japan Atomic Energy Research Institute (JAERI). An analytical model to simulate the premixing phase, JASMINE (JAERI Simulator for Multiphase Interaction and Explosion), has been developed based on a multi-dimensional multi-phase thermal hydraulics code, MISTRAL (by Fuji Research Institute Co.). The original code was extended to simulate the physics of the premixing phenomena. The first stage of code validation was performed by analyzing two mixing experiments with solid particles and water: the isothermal experiment by Gilbertson et al. (1992) and the hot-particle experiment by Angelini et al. (1993) (MAGICO). The code predicted the experiments reasonably well. The effectiveness of the TVD scheme employed in the code was also demonstrated.

  8. Neutronic calculation of fast reactors by the EUCLID/V1 integrated code

    NASA Astrophysics Data System (ADS)

    Koltashev, D. A.; Stakhanova, A. A.

    2017-01-01

    This article considers the neutronic calculation of a fast-neutron lead-cooled reactor, BREST-OD-300, by the EUCLID/V1 integrated code. The main goal of developing and applying integrated codes is nuclear power plant safety justification. EUCLID/V1 is an integrated code designed for coupled neutronic, thermomechanical and thermohydraulic fast reactor calculations under normal and abnormal operating conditions, and it is being developed in the Nuclear Safety Institute of the Russian Academy of Sciences. The integrated code has a modular structure and consists of three main modules: the thermohydraulic module HYDRA-IBRAE/LM/V1, the thermomechanical module BERKUT and the neutronic module DN3D. In addition, the integrated code includes databases of fuel, coolant and structural material properties. The neutronic module DN3D provides full-scale simulation of neutronic processes in fast reactors: heat source distributions, control rod movement, reactivity changes and other processes can be simulated. The neutron transport equation is solved in the multigroup diffusion approximation. This paper contains calculations implemented as part of EUCLID/V1 code validation: a transient simulation of the BREST-OD-300 reactor (fuel assembly floating, decompression of a passive feedback system channel) and cross-validation against MCU-FR code results. The calculations demonstrate the application of the EUCLID/V1 code to BREST-OD-300 simulation and safety justification.
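    The multigroup diffusion approximation that DN3D solves in 3-D reduces, in its simplest one-group 1-D form, to −Dφ'' + Σₐφ = S. A minimal finite-difference sketch (all cross-section values are illustrative round numbers, not BREST-OD-300 data):

```python
import numpy as np

# One-group, 1-D neutron diffusion on a slab with zero-flux boundaries:
# -D * phi'' + Sigma_a * phi = S, discretized with central differences.
L, n = 100.0, 201                  # slab width (cm), grid points
D_coef, sig_a = 1.2, 0.02          # diffusion coef (cm), absorption (1/cm)
S = 1.0                            # uniform source (n/cm^3/s)

h = L / (n - 1)
main = np.full(n, 2 * D_coef / h**2 + sig_a)
off = np.full(n - 1, -D_coef / h**2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
A[0, :] = 0.0;  A[0, 0] = 1.0      # phi(0) = 0
A[-1, :] = 0.0; A[-1, -1] = 1.0    # phi(L) = 0
b = np.full(n, S); b[0] = b[-1] = 0.0

phi = np.linalg.solve(A, b)
print(f"peak flux {phi.max():.2f}  (infinite-medium limit {S / sig_a:.1f})")
```

Away from the boundaries the flux approaches the infinite-medium value S/Σₐ over a diffusion length √(D/Σₐ); a multigroup code repeats this solve per energy group with inter-group scattering coupling the groups.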

  9. The next-generation ESL continuum gyrokinetic edge code

    NASA Astrophysics Data System (ADS)

    Cohen, R.; Dorr, M.; Hittinger, J.; Rognlien, T.; Colella, P.; Martin, D.

    2009-05-01

    The Edge Simulation Laboratory (ESL) project is developing continuum-based approaches to kinetic simulation of edge plasmas. A new code is being developed, based on a conservative formulation and fourth-order discretization of full-f gyrokinetic equations in parallel-velocity, magnetic-moment coordinates. The code exploits mapped multiblock grids to deal with the geometric complexities of the edge region, and utilizes a new flux limiter [P. Colella and M.D. Sekora, JCP 227, 7069 (2008)] to suppress unphysical oscillations about discontinuities while maintaining high-order accuracy elsewhere. The code is just becoming operational; we will report initial tests for neoclassical orbit calculations in closed-flux surface and limiter (closed plus open flux surfaces) geometry. It is anticipated that the algorithmic refinements in the new code will address the slow numerical instability that was observed in some long simulations with the existing TEMPEST code. We will also discuss the status and plans for physics enhancements to the new code.
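    The role a flux limiter plays in such a finite-volume scheme is easy to demonstrate in miniature. The sketch below (a generic minmod-limited advection scheme, not the Colella-Sekora limiter or the ESL code) is second order in smooth regions yet creates no new extrema at a discontinuity:

```python
import numpy as np

# MUSCL-type linear advection with minmod-limited slopes: second-order
# reconstruction whose limiter suppresses unphysical oscillations at
# discontinuities (the TVD property).
def minmod(a, b):
    return np.where(a * b > 0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect(u, c=0.5, steps=100):
    """Periodic advection at CFL number c with limited slopes."""
    for _ in range(steps):
        slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
        face = u + 0.5 * (1 - c) * slope     # upwind face value (v > 0)
        u = u - c * (face - np.roll(face, 1))
    return u

u0 = np.where(np.arange(200) < 100, 1.0, 0.0)   # step profile
u = advect(u0.copy())
print("min/max after advection:", u.min(), u.max())
```

With the limiter the solution stays inside [0, 1]; replacing `minmod` with an unlimited centered slope produces the over- and undershoots the ESL limiter is designed to suppress, while both remain conservative.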

  10. On the error statistics of Viterbi decoding and the performance of concatenated codes

    NASA Technical Reports Server (NTRS)

    Miller, R. L.; Deutsch, L. J.; Butman, S. A.

    1981-01-01

    Computer simulation results are presented on the performance of convolutional codes of constraint lengths 7 and 10 concatenated with the (255, 223) Reed-Solomon code (a proposed NASA standard). These results indicate that as much as 0.8 dB can be gained by concatenating this Reed-Solomon code with a (10, 1/3) convolutional code, instead of the (7, 1/2) code currently used by the DSN. A mathematical model of Viterbi decoder burst-error statistics is developed and is validated through additional computer simulations.

  11. Testability, Test Automation and Test Driven Development for the Trick Simulation Toolkit

    NASA Technical Reports Server (NTRS)

    Penn, John

    2014-01-01

    This paper describes the adoption of a Test Driven Development approach and a Continuous Integration System in the development of the Trick Simulation Toolkit, a generic simulation development environment for creating high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. It describes the approach, and the significant benefits seen, such as fast, thorough and clear test feedback every time code is checked into the code repository. It also describes an approach that encourages development of code that is testable and adaptable.
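    The test-first style the paper advocates can be shown with a toy example (an illustration of the practice, not Trick's actual test suite): a unit test written against a small simulation utility, cheap enough to run on every commit under continuous integration.

```python
# Test-driven sketch: the tests below pin down the behaviour of a small
# simulation utility and run on every check-in.
def interpolate(table, x):
    """Linear interpolation in a sorted (x, y) lookup table."""
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside table range")

def test_interpolate_midpoint():
    assert interpolate([(0.0, 0.0), (2.0, 4.0)], 1.0) == 2.0

def test_interpolate_endpoint():
    assert interpolate([(0.0, 1.0), (1.0, 3.0)], 1.0) == 3.0

test_interpolate_midpoint()
test_interpolate_endpoint()
print("all tests passed")
```

The value the paper reports comes less from any single test than from the feedback loop: the suite runs automatically, so a regression is flagged the moment the offending change reaches the repository.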

  12. Towards a supported common NEAMS software stack

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cormac Garvey

    2012-04-01

    The NEAMS IPSCs are developing multidimensional, multiphysics, multiscale simulation codes based on first principles that will be capable of predicting all aspects of current and future nuclear reactor systems. This new breed of simulation codes will include rigorous verification, validation, and uncertainty quantification checks to quantify the accuracy and quality of the simulation results. The resulting NEAMS IPSC simulation codes will be an invaluable tool in designing the next generation of nuclear reactors and will also contribute to a speedier acquisition of licenses from the NRC for new reactor designs. Due to the high resolution of the models, the complexity of the physics, and the added computational cost of quantifying the accuracy and quality of the results, the NEAMS IPSC codes will require large HPC resources to carry out the production simulation runs.

  13. Overview of High-Fidelity Modeling Activities in the Numerical Propulsion System Simulations (NPSS) Project

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    2002-01-01

    A high-fidelity simulation of a commercial turbofan engine has been created as part of the Numerical Propulsion System Simulation Project. The high-fidelity computer simulation utilizes computer models that were developed at NASA Glenn Research Center in cooperation with turbofan engine manufacturers. The average-passage (APNASA) Navier-Stokes based viscous flow computer code is used to simulate the 3D flow in the compressors and turbines of the advanced commercial turbofan engine. The 3D National Combustion Code (NCC) is used to simulate the flow and chemistry in the advanced aircraft combustor. The APNASA turbomachinery code and the NCC combustor code exchange boundary conditions at the interface planes at the combustor inlet and exit. This computer simulation technique can evaluate engine performance at steady operating conditions. The 3D flow models provide detailed knowledge of the airflow within the fan and compressor, the high and low pressure turbines, and the flow and chemistry within the combustor. The models simulate the performance of the engine at operating conditions that include sea level takeoff and the altitude cruise condition.
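    The interface coupling described above, in which two solvers exchange boundary conditions at the combustor inlet and exit planes, amounts to a fixed-point iteration on the interface state. A toy sketch (the linear "solvers" below are stand-ins, not the APNASA or NCC codes):

```python
# Toy fixed-point coupling of two codes across an interface: each
# solver maps its inlet state to an outlet state; iterate until the
# exchanged interface value stops changing.
def turbomachinery_step(inlet):       # stand-in for the turbomachinery code
    return 0.5 * inlet + 10.0

def combustor_step(inlet):            # stand-in for the combustor code
    return 0.8 * inlet + 1.0

p_interface = 1.0                     # initial guess for the interface state
for _ in range(100):
    p_new = combustor_step(turbomachinery_step(p_interface))
    if abs(p_new - p_interface) < 1e-10:
        break
    p_interface = p_new
print(round(p_interface, 6))  # converges to the fixed point 15.0
```

    Convergence here is guaranteed because the composite map contracts (slope 0.4); real coupled solvers typically need under-relaxation to achieve the same stability.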

  14. RAY-RAMSES: a code for ray tracing on the fly in N-body simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barreira, Alexandre; Llinares, Claudio; Bose, Sownak

    2016-05-01

    We present a ray tracing code to compute integrated cosmological observables on the fly in AMR N-body simulations. Unlike conventional ray tracing techniques, our code takes full advantage of the time and spatial resolution attained by the N-body simulation by computing the integrals along the line of sight on a cell-by-cell basis through the AMR simulation grid. Moreover, since it runs on the fly in the N-body run, our code can produce maps of the desired observables without storing large (or any) amounts of data for post-processing. We implemented our routines in the RAMSES N-body code and tested the implementation using an example weak lensing simulation. We analyse basic statistics of lensing convergence maps and find good agreement with semi-analytical methods. The ray tracing methodology presented here can be used in several cosmological analyses, such as Sunyaev-Zel'dovich and integrated Sachs-Wolfe effect studies, as well as modified gravity. Our code can also be used in cross-checks of the more conventional methods, which can be important in tests of theory systematics in preparation for upcoming large scale structure surveys.
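    Per ray, the on-the-fly approach reduces to accumulating a weighted line-of-sight integral cell by cell as the ray crosses the grid. A toy sketch of a lensing-convergence-style integral (schematic kernel and random density field; cosmological prefactors omitted):

```python
import numpy as np

# Toy on-the-fly line-of-sight integration: accumulate the integral
# cell by cell as a ray crosses the simulation grid, instead of
# storing snapshots for post-processing.
chi_source = 1000.0                      # comoving distance to the source (toy units)

def lensing_kernel(chi):
    # Schematic geometric weight chi * (chi_s - chi) / chi_s (prefactors dropped)
    return chi * (chi_source - chi) / chi_source

rng = np.random.default_rng(42)
n_cells = 1000
dchi = chi_source / n_cells              # path length through each cell
chi_centers = (np.arange(n_cells) + 0.5) * dchi
delta = rng.normal(0.0, 0.1, n_cells)    # density contrast in each crossed cell

# kappa ~ sum_i W(chi_i) * delta_i * dchi_i, accumulated along the ray
kappa = np.sum(lensing_kernel(chi_centers) * delta * dchi)
print(kappa)
```

    In an AMR grid the cell crossings `dchi` would vary with refinement level, which is exactly the resolution advantage the abstract describes over post-processed snapshots.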

  15. Scalability study of parallel spatial direct numerical simulation code on IBM SP1 parallel supercomputer

    NASA Technical Reports Server (NTRS)

    Hanebutte, Ulf R.; Joslin, Ronald D.; Zubair, Mohammad

    1994-01-01

    The implementation and the performance of a parallel spatial direct numerical simulation (PSDNS) code are reported for the IBM SP1 supercomputer. The spatially evolving disturbances that are associated with laminar-to-turbulent transition in three-dimensional boundary-layer flows are computed with the PSDNS code. By remapping the distributed data structure during the course of the calculation, optimized serial library routines can be utilized that substantially increase the computational performance. Although the remapping incurs a high communication penalty, the parallel efficiency of the code remains above 40% for all performed calculations. By using appropriate compile options and optimized library routines, the serial code achieves 52-56 Mflops on a single node of the SP1 (45% of theoretical peak performance). The actual performance of the PSDNS code on the SP1 is evaluated with a 'real world' simulation that consists of 1.7 million grid points. One time step of this simulation is calculated on eight nodes of the SP1 in the same time as required by a Cray Y/MP for the same simulation. The scalability information provides estimated computational costs that match the actual costs relative to changes in the number of grid points.
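    The parallel efficiency quoted above is simply the speedup divided by the node count; a one-line check with hypothetical timings:

```python
def parallel_efficiency(t_serial, t_parallel, n_nodes):
    """Parallel efficiency = speedup / nodes = (t_serial / t_parallel) / n_nodes."""
    return (t_serial / t_parallel) / n_nodes

# Hypothetical timings: 80 s serially and 20 s on 8 nodes gives a
# speedup of 4 and hence an efficiency of 50%.
print(parallel_efficiency(80.0, 20.0, 8))  # 0.5
```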

  16. Psychophysical study of the visual sun location in pictures of cloudy and twilight skies inspired by Viking navigation.

    PubMed

    Barta, András; Horváth, Gábor; Meyer-Rochow, Victor Benno

    2005-06-01

    In the late 1960s it was hypothesized that Vikings had been able to navigate the open seas, even when the sun was occluded by clouds or below the sea horizon, by using the angle of polarization of skylight. To detect the direction of skylight polarization, they were thought to have made use of birefringent crystals, called "sun-stones," and a large part of the scientific community still firmly believe that Vikings were capable of polarimetric navigation. However, there are some critics who treat the usefulness of skylight polarization for orientation under partly cloudy or twilight conditions with extreme skepticism. One of their counterarguments has been the assumption that solar positions or solar azimuth directions could be estimated quite accurately by the naked eye, even if the sun was behind clouds or below the sea horizon. Thus under partly cloudy or twilight conditions there might have been no serious need for a polarimetric method to determine the position of the sun. The aim of our study was to test quantitatively the validity of this qualitative counterargument. In our psychophysical laboratory experiments, test subjects were confronted with numerous 180 degrees field-of-view color photographs of partly cloudy skies with the sun occluded by clouds or of twilight skies with the sun below the horizon. The task of the subjects was to guess the position or the azimuth direction of the invisible sun with the naked eye. We calculated means and standard deviations of the estimated solar positions and azimuth angles to characterize the accuracy of the visual sun location. Our data do not support the common belief that the invisible sun can be located quite accurately from the celestial brightness and/or color patterns under cloudy or twilight conditions. 
Although our results underestimate the accuracy of visual sun location by experienced Viking navigators, the mentioned counterargument cannot be taken seriously as a valid criticism of the theory of the alleged polarimetric Viking navigation. Our results, however, do not bear on the polarimetric theory itself.
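    Computing means and standard deviations of estimated azimuth angles calls for circular statistics, since azimuths wrap at 360° (a 359° and a 1° guess should average to 0°, not 180°). A sketch of a standard formulation (not necessarily the authors' exact procedure):

```python
import math

def circular_mean_std(angles_deg):
    """Circular mean and standard deviation of angles in degrees --
    needed because azimuths wrap around at 360 degrees."""
    n = len(angles_deg)
    s = sum(math.sin(math.radians(a)) for a in angles_deg) / n
    c = sum(math.cos(math.radians(a)) for a in angles_deg) / n
    r = math.hypot(s, c)                               # mean resultant length
    mean = math.degrees(math.atan2(s, c)) % 360.0
    std = math.degrees(math.sqrt(-2.0 * math.log(r)))  # circular std dev
    return mean, std

# Estimates straddling north: the circular mean is ~0 deg, whereas a
# naive arithmetic mean of the same numbers would give 180 deg.
mean, std = circular_mean_std([350.0, 355.0, 5.0, 10.0])
print(round(mean) % 360, round(std, 1))  # 0 7.9
```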

  17. Psychophysical study of the visual sun location in pictures of cloudy and twilight skies inspired by Viking navigation

    NASA Astrophysics Data System (ADS)

    Barta, András; Horváth, Gábor; Benno Meyer-Rochow, Victor

    2005-06-01

    In the late 1960s it was hypothesized that Vikings had been able to navigate the open seas, even when the sun was occluded by clouds or below the sea horizon, by using the angle of polarization of skylight. To detect the direction of skylight polarization, they were thought to have made use of birefringent crystals, called "sunstones," and a large part of the scientific community still firmly believe that Vikings were capable of polarimetric navigation. However, there are some critics who treat the usefulness of skylight polarization for orientation under partly cloudy or twilight conditions with extreme skepticism. One of their counterarguments has been the assumption that solar positions or solar azimuth directions could be estimated quite accurately by the naked eye, even if the sun was behind clouds or below the sea horizon. Thus under partly cloudy or twilight conditions there might have been no serious need for a polarimetric method to determine the position of the sun. The aim of our study was to test quantitatively the validity of this qualitative counterargument. In our psychophysical laboratory experiments, test subjects were confronted with numerous 180° field-of-view color photographs of partly cloudy skies with the sun occluded by clouds or of twilight skies with the sun below the horizon. The task of the subjects was to guess the position or the azimuth direction of the invisible sun with the naked eye. We calculated means and standard deviations of the estimated solar positions and azimuth angles to characterize the accuracy of the visual sun location. Our data do not support the common belief that the invisible sun can be located quite accurately from the celestial brightness and/or color patterns under cloudy or twilight conditions. 
Although our results underestimate the accuracy of visual sun location by experienced Viking navigators, the mentioned counterargument cannot be taken seriously as a valid criticism of the theory of the alleged polarimetric Viking navigation. Our results, however, do not bear on the polarimetric theory itself.

  18. Test code for the assessment and improvement of Reynolds stress models

    NASA Technical Reports Server (NTRS)

    Rubesin, M. W.; Viegas, J. R.; Vandromme, D.; Minh, H. HA

    1987-01-01

    An existing two-dimensional, compressible flow, Navier-Stokes computer code, containing a full Reynolds stress turbulence model, was adapted for use as a test bed for assessing and improving turbulence models based on turbulence simulation experiments. To date, the results of using the code in comparison with simulated channel flow and flow over an oscillating flat plate have shown that the turbulence model used in the code needs improvement for these flows. It is also shown that direct simulations of turbulent flows over a range of Reynolds numbers are needed to guide subsequent improvement of turbulence models.

  19. Sensitivity of Downward Longwave Surface Radiation to Moisture and Cloud Changes in a High-elevation Region

    NASA Technical Reports Server (NTRS)

    Naud, Catherine M.; Chen, Yonghua; Rangwala, Imtiaz; Miller, James R.

    2013-01-01

    Several studies have suggested enhanced rates of warming in high-elevation regions since the latter half of the twentieth century. One of the potential reasons why enhanced rates of warming might occur at high elevations is the nonlinear relationship between downward longwave radiation (DLR) and specific humidity (q). Using ground-based observations at a high-elevation site in southwestern Colorado and coincident satellite-borne cloud retrievals, the sensitivity of DLR to changes in q and cloud properties is examined and quantified using a neural network method. It is also used to explore how the sensitivity of DLR to q (dDLR/dq) is affected by cloud properties. When binned by season, dDLR/dq is maximum in winter and minimum in summer for both clear and cloudy skies. However, the cloudy-sky sensitivities are smaller, primarily because (1) for both clear and cloudy skies dDLR/dq is proportional to 1/q, for q>0.5 g/kg, and (2) the seasonal values of q are on average larger in the cloudy-sky cases than in clear-sky cases. For a given value of q, dDLR/dq is slightly reduced in the presence of clouds and this reduction increases as q increases. In addition, DLR is found to be more sensitive to changes in cloud fraction when cloud fraction is large. In the limit of overcast skies, DLR sensitivity to optical thickness decreases as clouds become more opaque. These results are based on only one high-elevation site, so the conclusions here need to be tested at other high-elevation locations.
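    The quoted 1/q scaling of dDLR/dq is exactly what a logarithmic dependence of DLR on specific humidity would produce. A toy finite-difference check (illustrative constants, not the paper's neural-network fit):

```python
import math

def dlr_toy(q):
    """Toy clear-sky model: DLR (W/m^2) grows logarithmically with
    specific humidity q (g/kg). Constants are illustrative, not fitted."""
    return 250.0 + 40.0 * math.log(q)

def sensitivity(q, dq=1e-6):
    # Centered finite-difference estimate of dDLR/dq
    return (dlr_toy(q + dq) - dlr_toy(q - dq)) / (2.0 * dq)

# Under this assumption dDLR/dq scales as 1/q: doubling q halves it,
# consistent with the winter (dry) maximum of the sensitivity.
print(round(sensitivity(1.0) / sensitivity(2.0), 3))  # 2.0
```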

  20. Select strengths and biases of models in representing the Arctic winter boundary layer over sea ice: the Larcform 1 single column model intercomparison

    NASA Astrophysics Data System (ADS)

    Pithan, Felix; Ackerman, Andrew; Angevine, Wayne M.; Hartung, Kerstin; Ickes, Luisa; Kelley, Maxwell; Medeiros, Brian; Sandu, Irina; Steeneveld, Gert-Jan; Sterk, H. A. M.; Svensson, Gunilla; Vaillancourt, Paul A.; Zadra, Ayrton

    2016-09-01

    Weather and climate models struggle to represent lower tropospheric temperature and moisture profiles and surface fluxes in Arctic winter, partly because they lack or misrepresent physical processes that are specific to high latitudes. Observations have revealed two preferred states of the Arctic winter boundary layer. In the cloudy state, cloud liquid water limits surface radiative cooling, and temperature inversions are weak and elevated. In the radiatively clear state, strong surface radiative cooling leads to the build-up of surface-based temperature inversions. Many large-scale models lack the cloudy state, and some substantially underestimate inversion strength in the clear state. Here, the transformation from a moist to a cold dry air mass is modeled using an idealized Lagrangian perspective. The trajectory includes both boundary layer states, and the single-column experiment is the first Lagrangian Arctic air formation experiment (Larcform 1) organized within GEWEX GASS (Global atmospheric system studies). The intercomparison reproduces the typical biases of large-scale models: some models lack the cloudy state of the boundary layer due to the representation of mixed-phase microphysics or to the interaction between micro- and macrophysics. In some models, high emissivities of ice clouds or the lack of an insulating snow layer prevent the build-up of surface-based inversions in the radiatively clear state. Models substantially disagree on the amount of cloud liquid water in the cloudy state and on turbulent heat fluxes under clear skies. Observations of air mass transformations including both boundary layer states would allow for a tighter constraint of model behavior.

  1. Automatic Mosaicking of Satellite Imagery Considering the Clouds

    NASA Astrophysics Data System (ADS)

    Kang, Yifei; Pan, Li; Chen, Qi; Zhang, Tong; Zhang, Shasha; Liu, Zhang

    2016-06-01

    With the rapid development of high resolution remote sensing for earth observation technology, satellite imagery is widely used in the fields of resource investigation, environment protection, and agricultural research. Image mosaicking is an important part of satellite imagery production. However, the presence of clouds causes problems for automatic image mosaicking in two main respects: 1) image blurring may be introduced during the image dodging process, and 2) automatically generated seamlines may pass through cloudy areas. To address these problems, an automatic mosaicking method is proposed for cloudy satellite imagery in this paper. Firstly, modified Otsu thresholding and morphological processing are employed to extract cloudy areas and obtain the percentage of cloud cover. Then, the cloud detection results are used to optimize the dodging and mosaicking processes, so that the mosaic image is composed of clear-sky areas rather than cloudy areas and the clear-sky areas remain sharp and undistorted. The Chinese GF-1 wide-field-of-view orthoimages are employed as experimental data. The performance of the proposed approach is evaluated in four aspects: the accuracy of cloud detection, the sharpness of clear-sky areas, the placement of seamlines, and efficiency. The evaluation results demonstrate that the mosaic image obtained by our method has fewer clouds, better internal color consistency, and better visual clarity than that obtained by the traditional method. The time consumed by the proposed method for 17 scenes of GF-1 orthoimages is within 4 hours on a desktop computer. This efficiency can meet the general production requirements for massive satellite imagery.
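    Classic (unmodified) Otsu thresholding, the starting point of the cloud-extraction step above, picks the gray level that maximizes the between-class variance of the histogram. A self-contained sketch on a synthetic scene (morphological post-processing omitted; the paper uses a modified variant):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = hist[:t].sum() / total          # background (clear-sky) weight
        w1 = 1.0 - w0                        # foreground (cloud) weight
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:t] * np.arange(t)).sum() / (w0 * total)
        mu1 = (hist[t:] * np.arange(t, 256)).sum() / (w1 * total)
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic scene: dark ground (~60) with a bright cloudy patch (~200).
rng = np.random.default_rng(0)
img = rng.normal(60, 10, (64, 64))
img[16:32, 16:32] = rng.normal(200, 10, (16, 16))
img = np.clip(img, 0, 255)

t = otsu_threshold(img)
cloud_fraction = (img > t).mean()
print(t, round(cloud_fraction, 3))  # threshold between the two modes; ~6% cloud cover
```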

  2. A methodology to select galaxies just after the quenching of star formation

    NASA Astrophysics Data System (ADS)

    Citro, Annalisa; Pozzetti, Lucia; Quai, Salvatore; Moresco, Michele; Vallini, Livia; Cimatti, Andrea

    2017-08-01

    We propose a new methodology aimed at finding star-forming galaxies in the phase that immediately follows star-formation (SF) quenching, based on the use of high- to low-ionization emission line ratios. These ratios rapidly disappear after the SF halt, due to the softening of the UV ionizing radiation. We focus on [O III] λ5007/Hα and [Ne III] λ3869/[O II] λ3727, studying them with simulations obtained with the cloudy photoionization code. If a sharp quenching is assumed, we find that the two ratios are very sensitive tracers as they drop by a factor of ˜10 within ˜10 Myr from the interruption of the SF; instead, if a smoother and slower SF decline is assumed (i.e. an exponentially declining SF history with e-folding time τ = 200 Myr), they decrease by a factor of ˜2 within ˜80 Myr. We mitigate the ionization-metallicity degeneracy affecting our methodology using pairs of emission line ratios separately related to metallicity and ionization, adopting the [N II] λ6584/[O II] λ3727 ratio as metallicity diagnostic. Using a Sloan Digital Sky Survey galaxy sample, we identify 10 examples among the most extreme quenching candidates within the [O III] λ5007/Hα versus [N II] λ6584/[O II] λ3727 plane, characterized by low [O III] λ5007/Hα, faint [Ne III] λ3869, and by blue dust-corrected spectra and (u - r) colours, as expected if the SF quenching has occurred in the very recent past. Our results also suggest that the observed fractions of quenching candidates can be used to constrain the quenching mechanism at work and its time-scales.
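    As a point of arithmetic, a quantity that merely tracked the exponentially declining SFH would halve only after t = τ ln 2; the factor-of-2 drop within ˜80 Myr quoted above is therefore faster than the SFH itself declines, reflecting the line ratios' extra sensitivity to the softening ionizing spectrum. A minimal check:

```python
import math

# e-folding time of the assumed exponentially declining SF history, Myr
tau = 200.0
t_half = tau * math.log(2.0)   # time for exp(-t/tau) itself to halve
print(round(t_half, 1))        # 138.6 Myr, vs. ~80 Myr for the line ratios
```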

  3. Thermal Disk Winds in X-Ray Binaries: Realistic Heating and Cooling Rates Give Rise to Slow, but Massive, Outflows

    NASA Astrophysics Data System (ADS)

    Higginbottom, N.; Proga, D.; Knigge, C.; Long, K. S.

    2017-02-01

    A number of X-ray binaries exhibit clear evidence for the presence of disk winds in the high/soft state. A promising driving mechanism for these outflows is mass loss driven by the thermal expansion of X-ray heated material in the outer disk atmosphere. Higginbottom & Proga recently demonstrated that the properties of thermally driven winds depend critically on the shape of the thermal equilibrium curve, since this determines the thermal stability of the irradiated material. For a given spectral energy distribution, the thermal equilibrium curve depends on an exact balance between the various heating and cooling mechanisms at work. Most previous work on thermally driven disk winds relied on an analytical approximation to these rates. Here, we use the photoionization code cloudy to generate realistic heating and cooling rates, which we then use in a 2.5D hydrodynamic model computed in ZEUS to simulate thermal winds in a typical black hole X-ray binary. We find that these heating and cooling rates produce a significantly more complex thermal equilibrium curve, with dramatically different stability properties. The resulting flow, calculated in the optically thin limit, is qualitatively different from flows calculated using approximate analytical rates. Specifically, our thermal disk wind is much denser and slower, with a mass-loss rate that is a factor of two higher and characteristic velocities that are a factor of three lower. The low velocity of the flow (v_max ≃ 200 km s⁻¹) may be difficult to reconcile with observations. However, the high mass-loss rate (about 15 times the accretion rate) is promising, since it has the potential to destabilize the disk. Thermally driven disk winds may therefore provide a mechanism for state changes.
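    The thermal equilibrium curve referred to above is the locus where heating balances cooling. A toy sketch with a schematic Compton-like heating term and a bremsstrahlung-like cooling term (purely illustrative rates; the paper's point is precisely that realistic cloudy rates reshape this balance):

```python
import math

# Toy heating/cooling balance: Compton-like heating toward a Compton
# temperature T_C and a bremsstrahlung-like cooling ~ sqrt(T). Rates are
# purely illustrative -- realistic photoionization rates would add
# further equilibrium branches and change their stability.
T_C = 1.0e7   # toy Compton temperature, K

def net_heating(T):
    heating = (T_C - T) / T_C                 # vanishes as T -> T_C
    cooling = 0.5 * math.sqrt(T / T_C)
    return heating - cooling

# Bisect for the equilibrium temperature where heating equals cooling.
lo, hi = 1.0e4, 1.0e7                         # net heating > 0 at lo, < 0 at hi
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if net_heating(mid) > 0.0:
        lo = mid
    else:
        hi = mid
T_eq = 0.5 * (lo + hi)
print(f"{T_eq:.3e}")  # equilibrium temperature, ~6.1e6 K for these toy rates
```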

  4. Benchmark Simulations of the Thermal-Hydraulic Responses during EBR-II Inherent Safety Tests using SAM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Rui; Sumner, Tyler S.

    2016-04-17

    An advanced system analysis tool SAM is being developed for fast-running, improved-fidelity, and whole-plant transient analyses at Argonne National Laboratory under DOE-NE’s Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. As an important part of code development, companion validation activities are being conducted to ensure the performance and validity of the SAM code. This paper presents the benchmark simulations of two EBR-II tests, SHRT-45R and BOP-302R, whose data are available through the support of DOE-NE’s Advanced Reactor Technology (ART) program. The code predictions of major primary coolant system parameters are compared with the test results. Additionally, the SAS4A/SASSYS-1 code simulation results are also included for a code-to-code comparison.

  5. A CellML simulation compiler and code generator using ODE solving schemes

    PubMed Central

    2012-01-01

    Models written in description languages such as CellML are becoming a popular solution to the handling of complex cellular physiological models in biological function simulations. However, in order to fully simulate a model, boundary conditions and ordinary differential equation (ODE) solving schemes have to be combined with it. Though boundary conditions can be described in CellML, it is difficult to explicitly specify ODE solving schemes using existing tools. In this study, we define an ODE solving scheme description language based on XML and propose a code generation system for biological function simulations. In the proposed system, biological simulation programs using various ODE solving schemes can be easily generated. We designed a two-stage approach where the system generates the equation set associating the physiological model variable values at a certain time t with values at t + Δt in the first stage. The second stage generates the simulation code for the model. This approach enables the flexible construction of code generation modules that can support complex sets of formulas. We evaluate the relationship between models and their calculation accuracies by simulating complex biological models using various ODE solving schemes. Using the FHN model simulation, results showed good qualitative and quantitative correspondence with the theoretical predictions. Results for the Luo-Rudy 1991 model showed that only first-order precision was achieved. In addition, running the generated code in parallel on a GPU made it possible to speed up the calculation time by a factor of 50. The CellML Compiler source code is available for download at http://sourceforge.net/projects/cellmlcompiler. PMID:23083065
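    The (model, solver) pairing such a generator assembles can be illustrated with the FHN (FitzHugh-Nagumo) model and an explicit forward-Euler scheme. The parameter values below are common textbook choices, not necessarily those used in the paper:

```python
# FitzHugh-Nagumo (FHN) model advanced with a forward-Euler scheme --
# the kind of (model, solver) pairing the generator assembles:
#   dv/dt = v - v^3/3 - w + I
#   dw/dt = eps * (v + a - b*w)
a, b, eps, I = 0.7, 0.8, 0.08, 0.5   # common textbook parameter choices
dt, steps = 0.01, 100_000

v, w = -1.0, 1.0
v_min, v_max = v, v
for _ in range(steps):
    dv = v - v**3 / 3.0 - w + I
    dw = eps * (v + a - b * w)
    v, w = v + dt * dv, w + dt * dw
    v_min, v_max = min(v_min, v), max(v_max, v)

# With these parameters the model oscillates (spikes repeatedly),
# so v sweeps a wide range rather than settling to a fixed point.
print(round(v_min, 1), round(v_max, 1))
```

    Swapping the two update lines for a Runge-Kutta step is exactly the kind of substitution the scheme description language makes declarative.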

  6. A methodology for the rigorous verification of plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Riva, Fabio

    2016-10-01

    The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: the verification, which is a mathematical issue targeted to assess that the physical model is correctly solved, and the validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on the verification, which in turn is composed of the code verification, targeted to assess that a physical model is correctly implemented in a simulation code, and the solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for the code verification, based on the method of manufactured solutions, as well as a solution verification based on the Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate the plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
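    Solution verification via Richardson extrapolation estimates the observed order of accuracy from solutions on successively refined grids. A minimal sketch using a centered difference whose second-order convergence is known in advance:

```python
import math

def observed_order(f_h, f_h2, f_h4):
    """Richardson estimate of convergence order from three solutions on
    grids with spacings h, h/2, h/4: p = log2(|f_h - f_h2| / |f_h2 - f_h4|)."""
    return math.log2(abs(f_h - f_h2) / abs(f_h2 - f_h4))

def central_diff(h):
    # Second-order centered approximation of d/dx sin(x) at x = 1
    return (math.sin(1.0 + h) - math.sin(1.0 - h)) / (2.0 * h)

p = observed_order(central_diff(0.1), central_diff(0.05), central_diff(0.025))
print(round(p, 2))  # ~2, recovering the scheme's second-order accuracy
```

    A code-verification test compares this observed order against the theoretical order of the discretization (for manufactured solutions, against the analytically imposed one); a mismatch flags an implementation error.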

  7. Airborne observations and simulations of three-dimensional radiative interactions between Arctic boundary layer clouds and ice floes

    NASA Astrophysics Data System (ADS)

    Schäfer, M.; Bierwirth, E.; Ehrlich, A.; Jäkel, E.; Wendisch, M.

    2015-07-01

    Based on airborne spectral imaging observations, three-dimensional (3-D) radiative effects between Arctic boundary layer clouds and highly variable Arctic surfaces were identified and quantified. A method is presented to discriminate between sea ice and open water under cloudy conditions based on airborne nadir reflectivity γλ measurements in the visible spectral range. In cloudy cases the transition of γλ from open water to sea ice is not instantaneous but horizontally smoothed. In general, clouds reduce γλ above bright surfaces in the vicinity of open water, while γλ above open sea is enhanced. With the help of observations and 3-D radiative transfer simulations, this effect was quantified to extend from 0 to 2200 m from the sea ice edge (for a dark-ocean albedo of αwater = 0.042 and a sea-ice albedo of αice = 0.91 at 645 nm wavelength). The affected distance Δ L was found to depend on both cloud and sea ice properties. For a low-level cloud at 0-200 m altitude, as observed during the Arctic field campaign VERtical Distribution of Ice in Arctic clouds (VERDI) in 2012, an increase in the cloud optical thickness τ from 1 to 10 leads to a decrease in Δ L from 600 to 250 m. An increase in the cloud base altitude or cloud geometrical thickness results in an increase in Δ L; for a cloud at 500-1000 m altitude, Δ L = 2200 m for τ = 1 and 1250 m for τ = 10. To quantify the effect for different shapes and sizes of ice floes, radiative transfer simulations were performed with various albedo fields (infinitely long straight ice edge, circular ice floes, squares, realistic ice floe field). The simulations show that Δ L increases with increasing radius of the ice floe and reaches maximum values for ice floes with radii larger than 6 km (500-1000 m cloud altitude), which matches the results found for an infinitely long, straight ice edge. Furthermore, the influence of these 3-D radiative effects on the retrieved cloud optical properties was investigated. The enhanced brightness of a dark pixel next to an ice edge results in uncertainties of up to 90 and 30 % in retrievals of τ and the effective radius reff, respectively. With the help of Δ L, an estimate of the distance to the ice edge is given beyond which the retrieval uncertainties due to 3-D radiative effects are negligible.

  8. TWANG-PIC, a novel gyro-averaged one-dimensional particle-in-cell code for interpretation of gyrotron experiments

    NASA Astrophysics Data System (ADS)

    Braunmueller, F.; Tran, T. M.; Vuillemin, Q.; Alberti, S.; Genoud, J.; Hogge, J.-Ph.; Tran, M. Q.

    2015-06-01

    A new gyrotron simulation code for simulating the beam-wave interaction using a monomode time-dependent self-consistent model is presented. The new code TWANG-PIC is derived from the trajectory-based code TWANG by describing the electron motion in a gyro-averaged one-dimensional Particle-In-Cell (PIC) approach. In comparison to common PIC codes, it is distinguished by its computation speed, which makes its use in parameter scans and in experiment interpretation possible. A benchmark of the new code is presented, as well as a comparative study between the two codes. This study shows that the inclusion of a time dependence in the electron equations, as is the case in the PIC approach, is mandatory for simulating any kind of non-stationary oscillations in gyrotrons. Finally, the new code is compared with experimental results, and some implications of the violated model assumptions in the TWANG code are disclosed for a gyrotron experiment in which non-stationary regimes have been observed and for a critical case that is of interest in high power gyrotron development.

  9. TWANG-PIC, a novel gyro-averaged one-dimensional particle-in-cell code for interpretation of gyrotron experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braunmueller, F., E-mail: falk.braunmueller@epfl.ch; Tran, T. M.; Alberti, S.

    A new gyrotron simulation code for simulating the beam-wave interaction using a monomode time-dependent self-consistent model is presented. The new code TWANG-PIC is derived from the trajectory-based code TWANG by describing the electron motion in a gyro-averaged one-dimensional Particle-In-Cell (PIC) approach. In comparison to common PIC codes, it is distinguished by its computation speed, which makes its use in parameter scans and in experiment interpretation possible. A benchmark of the new code is presented, as well as a comparative study between the two codes. This study shows that the inclusion of a time dependence in the electron equations, as is the case in the PIC approach, is mandatory for simulating any kind of non-stationary oscillations in gyrotrons. Finally, the new code is compared with experimental results, and some implications of the violated model assumptions in the TWANG code are disclosed for a gyrotron experiment in which non-stationary regimes have been observed and for a critical case that is of interest in high power gyrotron development.

  10. On the fine structure of meteoritical taenite/tetrataenite and its interpretation

    NASA Astrophysics Data System (ADS)

    Albertsen, J. F.; Nielsen, H. P.; Buchwald, V. F.

    1983-04-01

    TEM, electron microprobe, and Moessbauer spectroscopy are used in investigating taenite fields from several meteorites. A delicate pattern of antiphase domains is revealed in the tetrataenite, as is the presence of low-Ni taenite at the antiphase boundaries in what was hitherto believed to be pure tetrataenite. The observations suggest that the 'cloudy taenite' (cloudy zone II) was formed by a magnetically induced spinodal decomposition of the metastable taenite during slow cooling below 400 C. It is thought likely that decomposition occurs when the Curie temperature of the alloy changes rapidly with composition, as it does in f.c.c. iron-nickel alloys containing approximately 28-43 percent Ni (wt pct). The large contribution to the Gibbs free energy from magnetic ordering leads to inflections in the Gibbs free energy curve, making the alloy unstable with regard to decomposition, in this case into a magnetically and atomically ordered Ni-rich alloy plus a magnetically and atomically disordered Ni-poor alloy. The model accounts well for the structure and composition of the two phases in the cloudy taenite.
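    Spinodal decomposition sets in wherever the free energy curve has negative curvature, so the inflection points bracket the unstable composition range. A toy regular-solution sketch of this criterion (illustrative numbers, not an Fe-Ni thermodynamic dataset):

```python
# Regular-solution toy free energy per site (in units of kT):
#   G(c) = OMEGA*c*(1-c) + c*ln(c) + (1-c)*ln(1-c)
# The alloy is spinodally unstable where
#   G''(c) = -2*OMEGA + 1/(c*(1-c)) < 0.
OMEGA = 2.5   # interaction parameter; values > 2 open a spinodal gap

def g2(c):
    """Second derivative of the toy free energy."""
    return -2.0 * OMEGA + 1.0 / (c * (1.0 - c))

# Scan compositions and report the spinodally unstable range.
unstable = [c / 1000 for c in range(1, 1000) if g2(c / 1000) < 0]
print(min(unstable), max(unstable))  # 0.277 0.723 for OMEGA = 2.5
```

    In the meteoritic case the magnetic ordering contribution plays the role of OMEGA, deepening the curvature inversion over the ~28-43 wt% Ni range quoted above.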

  11. Effect of mash maceration on the polyphenolic content and visual quality attributes of cloudy apple juice.

    PubMed

    Mihalev, Kiril; Schieber, Andreas; Mollov, Plamen; Carle, Reinhold

    2004-12-01

    The effects of enzymatic mash treatments on yield, turbidity, color, and polyphenolic content of cloudy apple juice were studied. Using HPLC-ESI-MS, cryptochlorogenic acid was identified in cv. Brettacher cloudy apple juice for the first time. Commercial pectolytic enzyme preparations with different levels of secondary protease activity were tested under both oxidative and nonoxidative conditions. Without the addition of ascorbic acid, oxidation substantially decreased chlorogenic acid, epicatechin, and procyanidin B2 contents due to enzymatic browning. The content of chlorogenic acid as the major polyphenolic compound was also influenced by the composition of pectolytic enzyme preparations because the presence of secondary protease activity resulted in a rise of chlorogenic acid. The latter effect was probably due to the inhibited protein-polyphenol interactions, which prevented binding of polyphenolic compounds to the matrix, thus increasing their antioxidative potential. The results obtained clearly demonstrate the advantage of the nonoxidative mash maceration for the production of cloud-stable apple juice with a high polyphenolic content, particularly in a premature processing campaign.

  12. The Tropical Western Hemisphere Warm Pool

    NASA Astrophysics Data System (ADS)

    Wang, Chunzai; Enfield, David B.

    The Western Hemisphere warm pool (WHWP) of water warmer than 28.5°C extends from the eastern North Pacific to the Gulf of Mexico and the Caribbean, and at its peak, overlaps with the tropical North Atlantic. It has a large seasonal cycle and its interannual fluctuations of area and intensity are significant. Surface heat fluxes warm the WHWP through the boreal spring to an annual maximum of SST and areal extent in the late summer/early fall, associated with eastern North Pacific and Atlantic hurricane activities and rainfall from northern South America to the southern tier of the United States. SST and area anomalies occur at high temperatures where small changes can have a large impact on tropical convection. Observations suggest that a positive ocean-atmosphere feedback operating through longwave radiation and associated cloudiness is responsible for the WHWP SST anomalies. Associated with an increase in SST anomalies is a decrease in atmospheric sea level pressure anomalies and an anomalous increase in atmospheric convection and cloudiness. The increase in convective activity and cloudiness results in less longwave radiation loss from the surface, which then reinforces SST anomalies.

  13. Cyclotron resonant scattering feature simulations. II. Description of the CRSF simulation process

    NASA Astrophysics Data System (ADS)

    Schwarm, F.-W.; Ballhausen, R.; Falkner, S.; Schönherr, G.; Pottschmidt, K.; Wolff, M. T.; Becker, P. A.; Fürst, F.; Marcu-Cheatham, D. M.; Hemphill, P. B.; Sokolova-Lapa, E.; Dauser, T.; Klochkov, D.; Ferrigno, C.; Wilms, J.

    2017-05-01

    Context. Cyclotron resonant scattering features (CRSFs) are formed by scattering of X-ray photons off quantized plasma electrons in the strong magnetic field (of the order of 10^12 G) close to the surface of an accreting X-ray pulsar. Due to the complex scattering cross-sections, the line profiles of CRSFs cannot be described by an analytic expression. Numerical methods, such as Monte Carlo (MC) simulations of the scattering processes, are required in order to predict precise line shapes for a given physical setup, which can be compared to observations to gain information about the underlying physics in these systems. Aims: A versatile simulation code is needed for the generation of synthetic cyclotron lines. It should also make the simulation of sophisticated geometries possible for the first time. Methods: The simulation utilizes the mean free path tables described in the first paper of this series for the fast interpolation of propagation lengths. The code is parallelized to make the very time-consuming simulations possible on convenient time scales. Furthermore, it can generate responses to monoenergetic photon injections, producing Green's functions, which can be used later to generate spectra for arbitrary continua. Results: We develop a new simulation code to generate synthetic cyclotron lines for complex scenarios, allowing for unprecedented physical interpretation of the observed data. An associated XSPEC model implementation is used to fit synthetic line profiles to NuSTAR data of Cep X-4. The code has been developed with the main goal of overcoming previous geometrical constraints in MC simulations of CRSFs. By applying this code also to more simple, classic geometries used in previous works, we furthermore address issues of code verification and cross-comparison of various models. The XSPEC model and the Green's function tables are available online (see link in footnote, page 1).
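
    The Green's-function step described above can be illustrated with a toy convolution: precomputed responses to monoenergetic injections are weighted by an arbitrary continuum and summed. A minimal sketch (the matrices are illustrative numbers, not CRSF physics):

```python
import numpy as np

def spectrum_from_greens(greens, continuum):
    """Combine monoenergetic responses (Green's functions) into a spectrum.

    greens[i, j]: photons emerging in output bin j per photon injected
                  at energy bin i.
    continuum[i]: photons injected by the continuum in energy bin i.
    """
    return continuum @ greens  # weighted sum of precomputed responses

# Toy numbers: 3 injection bins, 4 output bins (values are illustrative).
G = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.2, 0.8, 0.0, 0.0],    # some photons scattered downward
              [0.0, 0.3, 0.6, 0.1]])
cont = np.array([10.0, 5.0, 2.0])
spec = spectrum_from_greens(G, cont)
```

    Because the Green's functions are computed once, spectra for any number of continuum shapes come at the cost of a matrix product rather than a new MC run.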

  14. Video Monitoring a Simulation-Based Quality Improvement Program in Bihar, India.

    PubMed

    Dyer, Jessica; Spindler, Hilary; Christmas, Amelia; Shah, Malay Bharat; Morgan, Melissa; Cohen, Susanna R; Sterne, Jason; Mahapatra, Tanmay; Walker, Dilys

    2018-04-01

    Simulation-based training has become an accepted clinical training andragogy in high-resource settings with its use increasing in low-resource settings. Video recordings of simulated scenarios are commonly used by facilitators. Beyond using the videos during debrief sessions, researchers can also analyze the simulation videos to quantify technical and nontechnical skills during simulated scenarios over time. Little is known about the feasibility and use of large-scale systems to video record and analyze simulation and debriefing data for monitoring and evaluation in low-resource settings. This manuscript describes the process of designing and implementing a large-scale video monitoring system. Mentees and Mentors were consented and all simulations and debriefs conducted at 320 Primary Health Centers (PHCs) were video recorded. The system design, number of video recordings, and inter-rater reliability of the coded videos were assessed. The final dataset included a total of 11,278 videos. Overall, a total of 2,124 simulation videos were coded and 183 (12%) were blindly double-coded. For the double-coded sample, the average inter-rater reliability (IRR) scores were 80% for nontechnical skills, and 94% for clinical technical skills. Among 4,450 long debrief videos received, 216 were selected for coding and all were double-coded. Data quality of simulation videos was found to be very good in terms of recorded instances of "unable to see" and "unable to hear" in Phases 1 and 2. This study demonstrates that video monitoring systems can be effectively implemented at scale in resource limited settings. Further, video monitoring systems can play several vital roles within program implementation, including monitoring and evaluation, provision of actionable feedback to program implementers, and assurance of program fidelity.
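
    The reported inter-rater reliability scores are percent agreement over double-coded items; a minimal sketch with hypothetical coding data (names and values are illustrative, not the study's dataset):

```python
def percent_agreement(rater_a, rater_b):
    """Fraction of double-coded items on which two coders agree, in percent."""
    assert len(rater_a) == len(rater_b)
    agree = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * agree / len(rater_a)

# Toy double-coded sample: 1 = skill observed, 0 = not observed.
coder1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
coder2 = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]
score = percent_agreement(coder1, coder2)  # 80.0
```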

  15. Implementation and evaluation of a simulation curriculum for paediatric residency programs including just-in-time in situ mock codes

    PubMed Central

    Sam, Jonathan; Pierse, Michael; Al-Qahtani, Abdullah; Cheng, Adam

    2012-01-01

    OBJECTIVE: To develop, implement and evaluate a simulation-based acute care curriculum in a paediatric residency program using an integrated and longitudinal approach. DESIGN: Curriculum framework consisting of three modular, year-specific courses and longitudinal just-in-time, in situ mock codes. SETTING: Paediatric residency program at BC Children’s Hospital, Vancouver, British Columbia. INTERVENTIONS: The three year-specific courses focused on the critical first 5 min, complex medical management and crisis resource management, respectively. The just-in-time in situ mock codes simulated the acute deterioration of an existing ward patient, prepared the actual multidisciplinary code team, and primed the surrounding crisis support systems. Each curriculum component was evaluated with surveys using a five-point Likert scale. RESULTS: A total of 40 resident surveys were completed after each of the modular courses, and an additional 28 surveys were completed for the overall simulation curriculum. The highest Likert scores were for hands-on skill stations, immersive simulation environment and crisis resource management teaching. Survey results also suggested that just-in-time mock codes were realistic, reinforced learning, and prepared ward teams for patient deterioration. CONCLUSIONS: A simulation-based acute care curriculum was successfully integrated into a paediatric residency program. It provides a model for integrating simulation-based learning into other training programs, as well as a model for any hospital that wishes to improve paediatric resuscitation outcomes using just-in-time in situ mock codes. PMID:23372405

  16. Caius: Synthetic Observations Using a Robust End-to-End Radiative Transfer Pipeline

    NASA Astrophysics Data System (ADS)

    Simeon Barrow, Kirk Stuart; Wise, John H.; O'Shea, Brian; Norman, Michael L.; Xu, Hao

    2018-01-01

    We present synthetic observations for the first generations of galaxies in the Universe and make predictions for future deep field observations for redshifts greater than 6. Due to the strong impact of nebular emission lines and the relatively compact scale of HII regions, high resolution cosmological simulations and a robust suite of analysis tools are required to properly simulate spectra. We created a software pipeline consisting of FSPS, Yggdrasil, Hyperion, Cloudy and our own tools to generate synthetic IR observations from a fully three-dimensional arrangement of gas, dust, and stars. Our prescription allows us to include emission lines for a complete chemical network and tackle the effect of dust extinction and scattering in the various lines of sight. We provide spectra, 2-D binned photon imagery for both HST and JWST IR filters, luminosity relationships, and emission line strengths for a large sample of high redshift galaxies in the Renaissance Simulations (Xu et al. 2013). We also pay special attention to contributions from Population III stars and high-mass X-ray binaries and explore a direct-collapse black hole simulation (Aykutalp et al. 2014). Our resulting synthetic spectra show high variability between galactic halos with a strong dependence on stellar mass, viewing angle, metallicity, gas mass fraction, and formation history.

  17. Developing Present-day Proxy Cases Based on NARVAL Data for Investigating Low Level Cloud Responses to Future Climate Change.

    NASA Astrophysics Data System (ADS)

    Reilly, Stephanie

    2017-04-01

    The energy budget of the entire global climate is significantly influenced by the presence of boundary layer clouds. The main aim of the High Definition Clouds and Precipitation for Advancing Climate Prediction (HD(CP)2) project is to improve climate model predictions by means of process studies of clouds and precipitation. This study makes use of observed elevated moisture layers as a proxy of future changes in tropospheric humidity. The associated impact on radiative transfer triggers fast responses in boundary layer clouds, providing a framework for investigating this phenomenon. The investigation will be carried out using data gathered during the Next-generation Aircraft Remote-sensing for VALidation (NARVAL) South campaigns. Observational data will be combined with ECMWF reanalysis data to derive the large scale forcings for the Large Eddy Simulations (LES). Simulations will be generated for a range of elevated moisture layers, spanning a multi-dimensional phase space in depth, amplitude, elevation, and cloudiness. The NARVAL locations will function as anchor-points. The results of the large eddy simulations and the observations will be studied and compared in an attempt to determine how simulated boundary layer clouds react to changes in radiative transfer from the free troposphere. Preliminary LES results will be presented and discussed.

  18. Explicit simulation of ice particle habits in a Numerical Weather Prediction Model

    NASA Astrophysics Data System (ADS)

    Hashino, Tempei

    2007-05-01

    This study developed a scheme for explicit simulation of ice particle habits in Numerical Weather Prediction (NWP) models. The scheme is called the Spectral Ice Habit Prediction System (SHIPS), and the goal is to retain the growth history of ice particles in the Eulerian dynamics framework. It diagnoses characteristics of ice particles based on a series of particle property variables (PPVs) that reflect the history of microphysical processes and the transport between mass bins and air parcels in space. Therefore, the categorization of ice particles typically used in bulk microphysical parameterizations and traditional bin models is not necessary, so that errors that stem from the categorization can be avoided. SHIPS predicts polycrystals as well as hexagonal monocrystals based on empirically derived habit frequency and growth rate, and simulates the habit-dependent aggregation and riming processes by use of the stochastic collection equation with predicted PPVs. Idealized two-dimensional simulations were performed with SHIPS in an NWP model. The predicted spatial distribution of ice particle habits and types, and the evolution of particle size distributions, showed good quantitative agreement with observations. This comprehensive model of ice particle properties, distributions, and evolution in clouds can be used to better understand problems facing a wide range of research disciplines, including microphysics processes, radiative transfer in a cloudy atmosphere, data assimilation, and weather modification.

  19. Experimental benchmarking of a Monte Carlo dose simulation code for pediatric CT

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Samei, Ehsan; Yoshizumi, Terry; Colsher, James G.; Jones, Robert P.; Frush, Donald P.

    2007-03-01

    In recent years, there has been a desire to reduce CT radiation dose to children because of their susceptibility and prolonged risk for cancer induction. Concerns arise, however, as to the impact of dose reduction on image quality and thus potentially on diagnostic accuracy. To study the dose and image quality relationship, we are developing a simulation code to calculate organ dose in pediatric CT patients. To benchmark this code, a cylindrical phantom was built to represent a pediatric torso, which allows measurements of dose distributions from its center to its periphery. Dose distributions for axial CT scans were measured on a 64-slice multidetector CT (MDCT) scanner (GE Healthcare, Chalfont St. Giles, UK). The same measurements were simulated using a Monte Carlo code (PENELOPE, Universitat de Barcelona) with the applicable CT geometry including bowtie filter. The deviations between simulated and measured dose values were generally within 5%. To our knowledge, this work is one of the first attempts to compare measured radial dose distributions on a cylindrical phantom with Monte Carlo simulated results. It provides a simple and effective method for benchmarking organ dose simulation codes and demonstrates the potential of Monte Carlo simulation for investigating the relationship between dose and image quality for pediatric CT patients.
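
    The quoted benchmark criterion (simulated versus measured dose generally within 5%) amounts to a relative-deviation check at each measurement point; a sketch with hypothetical dose values (the numbers below are illustrative, not the paper's data):

```python
def percent_deviation(simulated, measured):
    """Relative deviation of a simulated dose from measurement, in percent."""
    return 100.0 * (simulated - measured) / measured

# Hypothetical radial dose points (mGy), phantom center to periphery.
measured  = [12.0, 12.8, 14.1, 16.5]
simulated = [12.3, 12.5, 14.6, 16.1]
devs = [percent_deviation(s, m) for s, m in zip(simulated, measured)]
all_within_5 = all(abs(d) <= 5.0 for d in devs)
```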

  20. Simulation of Combustion Systems with Realistic g-jitter

    NASA Technical Reports Server (NTRS)

    Mell, William E.; McGrattan, Kevin B.; Baum, Howard R.

    2003-01-01

    In this project a transient, fully three-dimensional computer simulation code was developed to simulate the effects of realistic g-jitter on a number of combustion systems. The simulation code is capable of simulating flame spread on a solid and nonpremixed or premixed gaseous combustion in nonturbulent flow with simple combustion models. Simple combustion models were used to preserve computational efficiency since this is meant to be an engineering code. Also, the use of sophisticated turbulence models was not pursued (a simple Smagorinsky type model can be implemented if deemed appropriate) because if flow velocities are large enough for turbulence to develop in a reduced gravity combustion scenario it is unlikely that g-jitter disturbances (in NASA's reduced gravity facilities) will play an important role in the flame dynamics. Acceleration disturbances of realistic orientation, magnitude, and time dependence can be easily included in the simulation. The simulation algorithm was based on techniques used in an existing large eddy simulation code which has successfully simulated fire dynamics in complex domains. A series of simulations with measured and predicted acceleration disturbances on the International Space Station (ISS) are presented. The results of this series of simulations suggested that a passive isolation system and appropriate scheduling of crew activity would provide a sufficiently "quiet" acceleration environment for spherical diffusion flames.

  1. Neutrons Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code

    NASA Astrophysics Data System (ADS)

    Faghihi, F.; Mehdizadeh, S.; Hadad, K.

    The neutron fluence rate of a low-intensity Pu-Be source is measured by Neutron Activation Analysis (NAA) of 197Au foils. In addition, the neutron fluence rate distribution versus energy is calculated using the MCNP-4B code based on the ENDF/B-V library. This combined theoretical and experimental effort is a first for the Iranian group and serves to establish confidence in the code for further research. In the theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The fast- and thermal-neutron fluence rates obtained by the NAA measurements and the MCNP calculations are compared.

  2. 3D Multispecies Nonlinear Perturbative Particle Simulation of Intense Nonneutral Particle Beams (Research supported by the Department of Energy and the Short Pulse Spallation Source Project and LANSCE Division of LANL.)

    NASA Astrophysics Data System (ADS)

    Qin, Hong; Davidson, Ronald C.; Lee, W. Wei-Li

    1999-11-01

    The Beam Equilibrium Stability and Transport (BEST) code, a 3D multispecies nonlinear perturbative particle simulation code, has been developed to study collective effects in intense charged particle beams described self-consistently by the Vlasov-Maxwell equations. A Darwin model is adopted for transverse electromagnetic effects. As a 3D multispecies perturbative particle simulation code, it provides several unique capabilities. Since the simulation particles are used to simulate only the perturbed distribution function and self-fields, the simulation noise is reduced significantly. The perturbative approach also enables the code to investigate different physics effects separately, as well as simultaneously. The code can be easily switched between linear and nonlinear operation, and used to study both linear stability properties and nonlinear beam dynamics. These features, combined with 3D and multispecies capabilities, provide an effective tool to investigate the electron-ion two-stream instability, periodically focused solutions in alternating focusing fields, and many other important problems in nonlinear beam dynamics and accelerator physics. Applications to the two-stream instability are presented.

  3. Three-dimensional simulation of triode-type MIG for 1 MW, 120 GHz gyrotron for ECRH applications

    NASA Astrophysics Data System (ADS)

    Singh, Udaybir; Kumar, Nitin; Kumar, Narendra; Kumar, Anil; Sinha, A. K.

    2012-01-01

    In this paper, the three-dimensional simulation of a triode-type magnetron injection gun (MIG) for a 120 GHz, 1 MW gyrotron is presented. The operating voltages of the modulating anode and the accelerating anode are 57 kV and 80 kV, respectively. The high-order TE22,6 mode is selected as the operating mode, and the electron beam is launched at the first radial maximum for fundamental beam-mode operation. The initial design is obtained by using the in-house developed code MIGSYN. The numerical simulation is performed by using the commercially available code CST-Particle Studio (PS). The simulated results of the MIG obtained by using CST-PS are validated against the simulation codes EGUN and TRAK. The design output parameters obtained by using these three codes are found to be in close agreement.

  4. A Novel Technique for Running the NASA Legacy Code LAPIN Synchronously With Simulations Developed Using Simulink

    NASA Technical Reports Server (NTRS)

    Vrnak, Daniel R.; Stueber, Thomas J.; Le, Dzu K.

    2012-01-01

    This report presents a method for running a dynamic legacy inlet simulation in concert with another dynamic simulation that uses a graphical interface. The legacy code, NASA's LArge Perturbation INlet (LAPIN) model, was coded using the FORTRAN 77 (The Portland Group, Lake Oswego, OR) programming language to run in a command shell similar to other applications that used the Microsoft Disk Operating System (MS-DOS) (Microsoft Corporation, Redmond, WA). Simulink (MathWorks, Natick, MA) is a dynamic simulation environment that runs on a modern graphical operating system. The product of this work has both simulations, LAPIN and Simulink, running synchronously on the same computer with periodic data exchanges. Implementing the method described in this paper avoided extensive changes to the legacy code and preserved its basic operating procedure. This paper presents a novel method that promotes inter-task data communication between the synchronously running processes.
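
    The synchronous, periodic data exchange described here can be sketched as a lock-step exchange loop: each side advances one step, publishes its output, and blocks until the partner's data arrives. The toy below uses threads and queues in place of the actual LAPIN/Simulink mechanism; all names and the feedback law are hypothetical:

```python
import queue
import threading

def partner_model(inbox, outbox):
    """Stand-in for the second simulation: applies a toy control law to
    every value received and sends the result back."""
    while True:
        value = inbox.get()
        if value is None:             # completion signal
            break
        outbox.put(-value)

def legacy_model(inbox, outbox, steps):
    """Stand-in for the legacy code: advances one step, then performs the
    periodic data exchange before continuing, keeping both codes in sync."""
    state = 0.0
    for _ in range(steps):
        state += 1.0                  # advance the legacy model one step
        outbox.put(state)             # publish output to the partner
        state += 0.1 * inbox.get()    # fold the partner's feedback back in
    outbox.put(None)                  # tell the partner to shut down
    return state

to_partner, to_legacy = queue.Queue(), queue.Queue()
worker = threading.Thread(target=partner_model, args=(to_partner, to_legacy))
worker.start()
final = legacy_model(to_legacy, to_partner, steps=5)
worker.join()
```

    The blocking `get` calls are what enforce synchrony: neither side can run ahead of the other by more than one exchange.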

  5. Code modernization and modularization of APEX and SWAT watershed simulation models

    USDA-ARS?s Scientific Manuscript database

    SWAT (Soil and Water Assessment Tool) and APEX (Agricultural Policy / Environmental eXtender) are respectively large and small watershed simulation models derived from EPIC (Environmental Policy Integrated Climate), a field-scale agroecology simulation model. All three models are coded in FORTRAN an...

  6. Verifying a computational method for predicting extreme ground motion

    USGS Publications Warehouse

    Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.

    2011-01-01

    In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.

  7. Three Dimensional Hybrid Simulations of Super-Alfvénic Laser Ablation Experiments in the Large Plasma Device

    NASA Astrophysics Data System (ADS)

    Clark, Stephen; Winske, Dan; Schaeffer, Derek; Everson, Erik; Bondarenko, Anton; Constantin, Carmen; Niemann, Christoph

    2014-10-01

    We present 3D hybrid simulations of laser produced expanding debris clouds propagating through a magnetized ambient plasma in the context of magnetized collisionless shocks. New results from the 3D code are compared to previously obtained simulation results using a 2D hybrid code. The 3D code is an extension of a previously developed 2D code developed at Los Alamos National Laboratory. It has been parallelized and ported to execute on a cluster environment. The new simulations are used to verify scaling relationships, such as shock onset time and coupling parameter (Rm/ρd), developed via 2D simulations. Previous 2D results focus primarily on laboratory shock formation relevant to experiments being performed on the Large Plasma Device, where the shock propagates across the magnetic field. The new 3D simulations show wave structure and dynamics oblique to the magnetic field that introduce new physics to be considered in future experiments.

  8. Dakota Uncertainty Quantification Methods Applied to the CFD code Nek5000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delchini, Marc-Olivier; Popov, Emilian L.; Pointer, William David

    This report presents the state of advancement of a Nuclear Energy Advanced Modeling and Simulation (NEAMS) project to characterize the uncertainty of the computational fluid dynamics (CFD) code Nek5000 using the Dakota package for flows encountered in the nuclear engineering industry. Nek5000 is a high-order spectral element CFD code developed at Argonne National Laboratory for high-resolution spectral-filtered large eddy simulations (LESs) and unsteady Reynolds-averaged Navier-Stokes (URANS) simulations.

  9. Simulation of Weld Mechanical Behavior to Include Welding-Induced Residual Stress and Distortion: Coupling of SYSWELD and Abaqus Codes

    DTIC Science & Technology

    2015-11-01

    Technical report by Charles R. Fisher, covering December 2013 to July 2015: Simulation of Weld Mechanical Behavior to Include Welding-Induced Residual Stress and Distortion: Coupling of SYSWELD and Abaqus Codes.

  10. Simulation of Weld Mechanical Behavior to Include Welding Induced Residual Stress and Distortion: Coupling of SYSWELD and Abaqus Codes

    DTIC Science & Technology

    2015-11-01

    Technical report by Charles R. Fisher, covering December 2013 to July 2015: Simulation of Weld Mechanical Behavior to Include Welding-Induced Residual Stress and Distortion: Coupling of SYSWELD and Abaqus Codes.

  11. Creating and Testing Simulation Software

    NASA Technical Reports Server (NTRS)

    Heinich, Christina M.

    2013-01-01

    The goal of this project is to learn about the software development process, specifically the process to test and fix components of the software. The paper will cover techniques of testing code and the benefits of using one style of testing over another. It will also discuss the overall software design and development lifecycle, and how code testing plays an integral role in it. Code almost always needs debugging, whether because of coding errors or faulty program design. Writing tests before or during program creation that cover all aspects of the code provides a relatively easy way to locate and fix errors, which in turn decreases the need to fix a program after it is released for common use. The backdrop for this paper is the Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI), a project whose goal is to simulate a launch using simulated models of the ground systems and the connections between them and the control room. The simulations will be used for training and to ensure that all possible outcomes and complications are prepared for before the actual launch day. The code being tested is the Programmable Logic Controller Interface (PLCIF) code, the component responsible for transferring information from the models to the model Programmable Logic Controllers (PLCs), basic computers that are used for very simple tasks.
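
    The test-alongside-development approach described above can be as simple as asserting each component's expected behavior as it is written. A generic sketch (the component, its values, and its name are hypothetical, not SCCS code):

```python
def scale_sensor_reading(raw, gain=2.0, offset=1.0):
    """Hypothetical model component: convert a raw PLC reading to
    engineering units via a linear calibration."""
    return raw * gain + offset

# Tests written alongside the component cover its expected behaviors:
assert scale_sensor_reading(10.0) == 21.0   # nominal reading
assert scale_sensor_reading(0.0) == 1.0     # zero input still carries offset
assert scale_sensor_reading(-1.0) == -1.0   # negative readings pass through
```

    When such assertions are kept under version control and run on every change, a regression in the component is caught at the moment it is introduced rather than after release.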

  12. High dynamic range coding imaging system

    NASA Astrophysics Data System (ADS)

    Wu, Renfan; Huang, Yifan; Hou, Guangqi

    2014-10-01

    We present a high dynamic range (HDR) imaging system design scheme based on the coded aperture technique. This scheme can help us obtain HDR images with extended depth of field. We adopt a sparse-coding algorithm to design the coded patterns. We then use the sensor unit to acquire coded images under different exposure settings. Guided by the multiple exposure parameters, a series of low dynamic range (LDR) coded images are reconstructed. Existing algorithms are then used to fuse those LDR images into an HDR image for display. We build an optical simulation model and generate simulated images to verify the proposed system.
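
    Merging differently exposed LDR frames into one HDR estimate is commonly done by exposure-normalized weighted averaging. The sketch below shows that generic multi-exposure fusion technique, not the paper's sparse-coding pipeline; the hat-function weights and test values are illustrative:

```python
import numpy as np

def fuse_exposures(images, exposure_times):
    """Merge LDR frames (pixel values in [0, 1]) into a radiance estimate.

    Each pixel is normalized by its exposure time and weighted by a hat
    function that downweights under- and over-exposed values.
    """
    acc = np.zeros_like(images[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # peak weight at mid-gray
        acc += w * img / t                 # exposure-normalized contribution
        wsum += w
    return acc / np.maximum(wsum, 1e-12)   # avoid division by zero

# Two toy 1x2 frames of the same scene at different exposure times.
short = np.array([[0.1, 0.4]])   # exposure time 1.0
long_ = np.array([[0.2, 0.8]])   # exposure time 2.0
hdr = fuse_exposures([short, long_], [1.0, 2.0])
```

    With a linear sensor model, both frames agree on the underlying radiance, so the fused result here recovers it exactly.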

  13. The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava

    2016-08-01

    This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structure in the code. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves which is shown to lead to major efficiency gains over unbalanced methods and a previously used simpler balancing method.
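
    Patch-based balancing with a space-filling curve orders the patches along the curve and cuts the ordered list into contiguous segments of roughly equal load, so each rank receives spatially compact work. A simplified Morton-order (Z-order) sketch, far simpler than PSC's actual scheme:

```python
def morton_index(x, y, bits=8):
    """Interleave the bits of (x, y) to get the Z-order (Morton) index."""
    idx = 0
    for b in range(bits):
        idx |= ((x >> b) & 1) << (2 * b)
        idx |= ((y >> b) & 1) << (2 * b + 1)
    return idx

def balance(patches, loads, nprocs):
    """Assign patches (list of (x, y)) to ranks by cutting the
    Morton-ordered list into segments of roughly equal total load."""
    order = sorted(range(len(patches)),
                   key=lambda i: morton_index(*patches[i]))
    target = sum(loads) / nprocs
    assignment, rank, acc = [0] * len(patches), 0, 0.0
    for i in order:
        if acc >= target * (rank + 1) and rank < nprocs - 1:
            rank += 1                # current rank has its share; move on
        assignment[i] = rank
        acc += loads[i]
    return assignment

# 2x2 grid of patches with unequal particle counts, split over 2 ranks.
patches = [(0, 0), (1, 0), (0, 1), (1, 1)]
loads = [10.0, 30.0, 30.0, 10.0]
assign = balance(patches, loads, 2)  # each rank ends up with load 40
```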

  14. Coupled Kinetic-MHD Simulations of Divertor Heat Load with ELM Perturbations

    NASA Astrophysics Data System (ADS)

    Cummings, Julian; Chang, C. S.; Park, Gunyoung; Sugiyama, Linda; Pankin, Alexei; Klasky, Scott; Podhorszki, Norbert; Docan, Ciprian; Parashar, Manish

    2010-11-01

    The effect of Type-I ELM activity on divertor plate heat load is a key component of the DOE OFES Joint Research Target milestones for this year. In this talk, we present simulations of kinetic edge physics, ELM activity, and the associated divertor heat loads in which we couple the discrete guiding-center neoclassical transport code XGC0 with the nonlinear extended MHD code M3D using the End-to-end Framework for Fusion Integrated Simulations, or EFFIS. In these coupled simulations, the kinetic code and the MHD code run concurrently on the same massively parallel platform and periodic data exchanges are performed using a memory-to-memory coupling technology provided by EFFIS. The M3D code models the fast ELM event and sends frequent updates of the magnetic field perturbations and electrostatic potential to XGC0, which in turn tracks particle dynamics under the influence of these perturbations and collects divertor particle and energy flux statistics. We describe here how EFFIS technologies facilitate these coupled simulations and discuss results for DIII-D, NSTX and Alcator C-Mod tokamak discharges.

  15. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  16. Umbra (core)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradley, Jon David; Oppel III, Fred J.; Hart, Brian E.

    Umbra is a flexible simulation framework for complex systems that can be used by itself for modeling, simulation, and analysis, or to create specific applications. It has been applied to many operations, primarily dealing with robotics and system of system simulations. This version, from 4.8 to 4.8.3b, incorporates bug fixes, refactored code, and new managed C++ wrapper code that can be used to bridge new applications written in C# to the C++ libraries. The new managed C++ wrapper code includes (project/directories) BasicSimulation, CSharpUmbraInterpreter, LogFileView, UmbraAboutBox, UmbraControls, UmbraMonitor and UmbraWrapper.

  17. GAPD: a GPU-accelerated atom-based polychromatic diffraction simulation code.

    PubMed

    E, J C; Wang, L; Chen, S; Zhang, Y Y; Luo, S N

    2018-03-01

    GAPD, a graphics-processing-unit (GPU)-accelerated atom-based polychromatic diffraction simulation code for direct, kinematics-based simulations of X-ray/electron diffraction of large-scale atomic systems with mono-/polychromatic beams and arbitrary plane detector geometries, is presented. This code implements GPU parallel computation via both real- and reciprocal-space decompositions. With GAPD, direct simulations are performed of the reciprocal lattice nodes of ultralarge systems (∼5 billion atoms) and of diffraction patterns of single-crystal and polycrystalline configurations with mono- and polychromatic X-ray beams (including synchrotron undulator sources); validation, benchmark and application cases are presented.
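
    The kinematic (single-scattering) intensity such a code evaluates is I(q) = |Σ_j f_j exp(i q·r_j)|², summed over all atoms. A minimal pure-Python version for orientation (illustrative only; GAPD's GPU decompositions, detector geometry, and form-factor handling are far more elaborate):

```python
import cmath

def kinematic_intensity(positions, form_factors, q):
    """I(q) = |sum_j f_j * exp(i q . r_j)|^2 over all atoms (kinematic approximation)."""
    amplitude = sum(
        f * cmath.exp(1j * sum(qc * rc for qc, rc in zip(q, r)))
        for f, r in zip(form_factors, positions)
    )
    return abs(amplitude) ** 2

# Toy 4x4x4 simple-cubic crystal with unit form factors:
n = 4
positions = [(x, y, z) for x in range(n) for y in range(n) for z in range(n)]
f = [1.0] * len(positions)

# At a reciprocal-lattice vector q = 2*pi*(1,0,0) all atoms scatter in phase,
# so the Bragg intensity is N^2 = 64^2:
print(round(kinematic_intensity(positions, f, (2 * cmath.pi, 0.0, 0.0))))  # 4096
```

    The O(N_atoms × N_q) cost of this direct sum is exactly what motivates the GPU parallelization described in the abstract.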

  18. Broadening of cloud droplet spectra through turbulent entrainment and eddy hopping

    NASA Astrophysics Data System (ADS)

    Abade, Gustavo; Grabowski, Wojciech; Pawlowska, Hanna

    2017-11-01

    This work discusses the effect of cloud turbulence and turbulent entrainment on the evolution of the cloud droplet-size spectrum. We simulate an ensemble of idealized turbulent cloud parcels that are subject to entrainment events, modeled as a random Poisson process. Entrainment events, subsequent turbulent mixing inside the parcel, supersaturation fluctuations, and the resulting stochastic droplet growth by condensation are simulated using a Monte Carlo scheme. Quantities characterizing the turbulence intensity, entrainment rate and the mean fraction of environmental air entrained in an event are specified as external parameters. Cloud microphysics is described by applying Lagrangian particles, the so-called superdroplets. They are either unactivated cloud condensation nuclei (CCN) or cloud droplets that form from activated CCN. The model accounts for the transport of environmental CCN into the cloud by the entraining eddies at the cloud edge. Turbulent mixing of the entrained dry air with cloudy air is described using a linear model. We show that turbulence plays an important role in aiding entrained CCN to activate, providing a source of small cloud droplets and thus broadening the droplet size distribution. Further simulation results will be reported at the meeting.
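
    The stochastic framework described above — entrainment events arriving as a Poisson process, each diluting the parcel with environmental air — can be sketched as follows (parameter names and values are illustrative stand-ins, not the paper's):

```python
import random

def entrain_parcel(q0=1.0, q_env=0.2, rate=1.0, frac=0.3, t_end=10.0, seed=42):
    """Monte Carlo sketch: a parcel scalar q (e.g., total water) is diluted by
    entrainment events whose inter-arrival times are exponential (a Poisson
    process with the given rate). Each event mixes in a fraction `frac` of
    environmental air: q <- (1 - frac) * q + frac * q_env."""
    rng = random.Random(seed)
    t, q, n_events = 0.0, q0, 0
    while True:
        t += rng.expovariate(rate)   # time to the next Poisson event
        if t > t_end:
            break
        q = (1.0 - frac) * q + frac * q_env
        n_events += 1
    return q, n_events

q_final, n = entrain_parcel()
print(n, round(q_final, 3))  # q relaxes from 1.0 toward q_env = 0.2
```

    Running an ensemble of such parcels with different random seeds is what produces the spread (and hence spectral broadening) discussed in the abstract.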

  19. Role of aerosols on the Indian Summer Monsoon variability, as simulated by state-of-the-art global climate models

    NASA Astrophysics Data System (ADS)

    Cagnazzo, Chiara; Biondi, Riccardo; D'Errico, Miriam; Cherchi, Annalisa; Fierli, Federico; Lau, William K. M.

    2016-04-01

    Recent observational and modeling analyses have explored the interaction between aerosols and the Indian summer monsoon precipitation on seasonal-to-interannual time scales. By using global-scale climate model simulations, we show that when increased aerosol loading is found on the Himalayan slopes in the premonsoon period (April-May), intensification of early monsoon rainfall over India and increased low-level westerly flow follow, in agreement with the elevated-heat-pump (EHP) mechanism. The increase in rainfall during the early monsoon season has a cooling effect on the land surface that may also be amplified through solar dimming (SD) by more cloudiness and aerosol loading, with a subsequent reduction in monsoon rainfall over India. We extend this analysis to a subset of CMIP5 climate model simulations. Our results suggest that 1) absorbing aerosols, by influencing the seasonal variability of the Indian summer monsoon with the discussed time lag, may act as a source of predictability for the Indian summer monsoon and 2) if the EHP and SD effects are also operating in a number of state-of-the-art climate models, their inclusion could potentially improve seasonal forecasts.

  20. Acceleration of tropical cyclogenesis by self-aggregation feedbacks

    NASA Astrophysics Data System (ADS)

    Muller, Caroline J.; Romps, David M.

    2018-03-01

    Idealized simulations of tropical moist convection have revealed that clouds can spontaneously clump together in a process called self-aggregation. This results in a state where a moist cloudy region with intense deep convection is surrounded by extremely dry subsiding air devoid of deep convection. Because of the idealized settings of the simulations where it was discovered, the relevance of self-aggregation to the real world is still debated. Here, we show that self-aggregation feedbacks play a leading-order role in the spontaneous genesis of tropical cyclones in cloud-resolving simulations. Those feedbacks accelerate the cyclogenesis process by a factor of 2, and the feedbacks contributing to cyclone formation show qualitative and quantitative agreement with the self-aggregation process. Once the cyclone is formed, wind-induced surface heat exchange (WISHE) effects dominate, although we find that self-aggregation feedbacks have a small but nonnegligible contribution to the maintenance of the mature cyclone. Our results suggest that self-aggregation, and the framework developed for its study, can help shed more light on the physical processes leading to cyclogenesis and cyclone intensification. In particular, our results point to the importance of longwave radiative cooling outside the cyclone.

  1. Simple Models of the Spatial Distribution of Cloud Radiative Properties for Remote Sensing Studies

    NASA Technical Reports Server (NTRS)

    2004-01-01

    This project aimed to assess the degree to which estimates of three-dimensional cloud structure can be inferred from a time series of profiles obtained at a point. The work was motivated by the desire to understand the extent to which high-frequency profiles of the atmosphere (e.g. ARM data streams) can be used to assess the magnitude of non-plane-parallel transfer of radiation in the atmosphere. We accomplished this by performing an observing system simulation using a large-eddy simulation and a Monte Carlo radiative transfer model. We define the 3D effect as the part of the radiative transfer that isn't captured by one-dimensional radiative transfer calculations. We assess the magnitude of the 3D effect in small cumulus clouds by using a fine-scale cloud model to simulate many hours of cloudiness over a continental site. We then use a Monte Carlo radiative transfer model to compute the broadband shortwave fluxes at the surface twice, once using the complete three-dimensional radiative transfer (F^3D) and once using the independent column approximation (F^ICA); the difference between them is the 3D effect.
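
    Once both flux fields are in hand, the diagnostic reduces to a per-column difference; a minimal sketch (the flux values below are invented for illustration):

```python
def mean_3d_effect(f_3d, f_ica):
    """Domain-mean 3D effect: the part of the surface shortwave flux (W m^-2)
    not captured by the independent column approximation, mean(F_3D - F_ICA)."""
    diffs = [a - b for a, b in zip(f_3d, f_ica)]
    return sum(diffs) / len(diffs)

# Three illustrative columns:
print(round(mean_3d_effect([412.0, 398.5, 405.2], [410.0, 400.0, 401.2]), 2))  # 1.5
```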

  2. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code.

    PubMed

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
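
    The core dry-run idea — one process reproducing the workload it would own as rank r of N, with communication stubbed out — can be sketched like this (round-robin neuron distribution as in NEST; everything else here is an illustrative simplification):

```python
def dry_run_local_neurons(n_neurons, n_ranks, rank):
    """Dry-run sketch: without any MPI communication, a single process works out
    which neuron IDs it would own as `rank` of `n_ranks` under a round-robin
    distribution, so memory usage and setup time can be measured in isolation."""
    return [gid for gid in range(n_neurons) if gid % n_ranks == rank]

# Rank 2 of 4 virtual processes on a 10-neuron toy network owns IDs 2 and 6:
print(dry_run_local_neurons(10, 4, 2))  # [2, 6]
```

    Because each rank's data structures depend only on its slice of the partition, measurements taken this way can closely track a genuine parallel run, which is what the abstract reports.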

  3. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    PubMed Central

    Kunkel, Susanne; Schenck, Wolfram

    2017-01-01

    NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling. PMID:28701946

  4. Exploring the Lived Experiences of Participants in Simulation-Based Learning Activities

    ERIC Educational Resources Information Center

    Beard, Rachael

    2013-01-01

    There is currently a small body of research on the experiences of participants, both facilitators and learners, during simulated mock codes (cardiac arrest) in the healthcare setting. This study was based on a practitioner's concerns that mock codes are facilitated differently among educators, mock codes are not aligned with andragogy theory of…

  5. Smoothed Particle Hydrodynamic Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-10-05

    This code is a highly modular framework for developing smoothed particle hydrodynamic (SPH) simulations running on parallel platforms. The compartmentalization of the code allows for rapid development of new SPH applications and modifications of existing algorithms. The compartmentalization also allows changes in one part of the code used by many applications to instantly be made available to all applications.

  6. Enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding for four-level holographic data storage systems

    NASA Astrophysics Data System (ADS)

    Kong, Gyuyeol; Choi, Sooyong

    2017-09-01

    An enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding is proposed for four-level holographic data storage systems. While previous four-ary modulation codes focus on preventing maximum two-dimensional intersymbol interference patterns, the proposed four-ary modulation code aims at maximizing the coding gain for better bit error rate performance. To achieve significant coding gains from the four-ary modulation codes, we design a new 2/3 four-ary modulation code that enlarges the free distance on the trellis, found through extensive simulation. The free distance of the proposed four-ary modulation code is extended from 1.21 to 2.04 compared with that of the conventional four-ary modulation code. Simulation results show that the proposed four-ary modulation code achieves a gain of more than 1 dB over the conventional four-ary modulation code.

  7. Development of a dynamic coupled hydro-geomechanical code and its application to induced seismicity

    NASA Astrophysics Data System (ADS)

    Miah, Md Mamun

    This research describes the importance of hydro-geomechanical coupling in the geologic subsurface environment that arises from fluid injection at geothermal plants, large-scale geological CO2 sequestration for climate mitigation, enhanced oil recovery, and hydraulic fracturing during well construction in the oil and gas industries. A sequential computational code is developed to capture the multiphysics interaction behavior by linking the flow simulation code TOUGH2 and the geomechanics modeling code PyLith. The numerical formulation of each code is discussed to demonstrate their modeling capabilities. The computational framework involves sequential coupling and solution of two sub-problems: fluid flow through fractured and porous media, and reservoir geomechanics. For each time step of the flow calculation, the pressure field is passed to the geomechanics code to compute the effective stress field and fault slip. A simplified permeability model is implemented in the code that accounts for the permeability of porous and saturated rocks subject to confining stresses. The accuracy of the TOUGH-PyLith coupled simulator is tested by simulating Terzaghi's 1D consolidation problem. The modeling capability of coupled poroelasticity is validated by benchmarking it against Mandel's problem. The code is used to simulate both quasi-static and dynamic earthquake nucleation and slip distribution on a fault from the combined effect of far-field tectonic loading and fluid injection by using an appropriate fault constitutive friction model. Results from the quasi-static induced earthquake simulations show a delayed response in earthquake nucleation. This is attributed to the increased total stress in the domain and to not accounting for pressure on the fault. However, this issue is resolved in the final chapter in simulating a single-event earthquake dynamic rupture. Simulation results show that fluid pressure has a positive effect on slip nucleation and subsequent crack propagation.
This is confirmed by running a sensitivity analysis that shows an increase in injection well distance results in delayed slip nucleation and rupture propagation on the fault.
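
    The sequential coupling and the effective-stress update described above can be sketched as a simple loop (a toy one-way coupling; the solver callables, the relaxation "flow solver", and the Biot coefficient default are illustrative stand-ins for TOUGH2 and PyLith, not their interfaces):

```python
def effective_stress(total_stress, pore_pressure, biot_alpha=1.0):
    """Terzaghi/Biot effective stress (compression positive): sigma' = sigma - alpha * p."""
    return total_stress - biot_alpha * pore_pressure

def sequential_coupling(flow_step, mech_step, p0, n_steps, dt):
    """One-way sequential coupling: each flow step hands its updated pore
    pressure to the mechanics solver, which recomputes effective stress
    (and, in the real code, fault slip)."""
    p, history = p0, []
    for _ in range(n_steps):
        p = flow_step(p, dt)            # stand-in for a TOUGH2 flow step
        history.append(mech_step(p))    # stand-in for a PyLith mechanics step
    return history

# Toy example: pressure (MPa) relaxes toward 5 under a constant 20 MPa total stress,
# so the effective stress on a fault drops step by step:
flow = lambda p, dt: p + dt * (5.0 - p)        # simple relaxation "flow solver"
mech = lambda p: effective_stress(20.0, p)     # mechanics reduces to sigma' = 20 - p
print([round(s, 2) for s in sequential_coupling(flow, mech, p0=1.0, n_steps=3, dt=0.5)])
# [17.0, 16.0, 15.5]
```

    The falling effective stress in the toy output mirrors the mechanism in the abstract: injection-driven pressure reduces fault strength and promotes slip nucleation.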

  8. Accuracy of the hypothetical sky-polarimetric Viking navigation versus sky conditions: revealing solar elevations and cloudinesses favourable for this navigation method.

    PubMed

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Blahó, Miklós; Egri, Ádám; Szabó, Gyula; Horváth, Gábor

    2017-09-01

    According to Thorkild Ramskou's theory proposed in 1967, under overcast and foggy skies, Viking seafarers might have used skylight polarization analysed with special crystals called sunstones to determine the position of the invisible Sun. After finding the occluded Sun with sunstones, its elevation angle had to be measured and its shadow had to be projected onto the horizontal surface of a sun compass. According to Ramskou's theory, these sunstones might have been birefringent calcite or dichroic cordierite or tourmaline crystals working as polarizers. It has frequently been claimed that this method might have been suitable for navigation even in cloudy weather. This hypothesis has been accepted and frequently cited for decades without any experimental support. In this work, we determined the accuracy of this hypothetical sky-polarimetric Viking navigation for 1080 different sky situations characterized by solar elevation θ and cloudiness ρ , the sky polarization patterns of which were measured by full-sky imaging polarimetry. We used the earlier measured uncertainty functions of the navigation steps 1, 2 and 3 for calcite, cordierite and tourmaline sunstone crystals, respectively, and the newly measured uncertainty function of step 4 presented here. As a result, we revealed the meteorological conditions under which Vikings could have used this hypothetical navigation method. We determined the solar elevations at which the navigation uncertainties are minimal at summer solstice and spring equinox for all three sunstone types. On average, calcite sunstone ensures a more accurate sky-polarimetric navigation than tourmaline and cordierite. However, in some special cases (generally at 35° ≤  θ  ≤ 40°, 1 okta ≤  ρ  ≤ 6 oktas for summer solstice, and at 20° ≤  θ  ≤ 25°, 0 okta ≤  ρ  ≤ 4 oktas for spring equinox), the use of tourmaline and cordierite results in smaller navigation uncertainties than that of calcite. 
Generally, under clear or less cloudy skies, the sky-polarimetric navigation is more accurate, but at low solar elevations its accuracy remains relatively large even at high cloudiness. For a given ρ , the absolute value of averaged peak North uncertainties dramatically decreases with increasing θ until the sign (±) change of these uncertainties. For a given θ , this absolute value can either decrease or increase with increasing ρ . The most advantageous sky situations for this navigation method are at summer solstice when the solar elevation and cloudiness are 35° ≤  θ  ≤ 40° and 2 oktas ≤  ρ  ≤ 3 oktas.

  9. Collaborative Research: Cloudiness transitions within shallow marine clouds near the Azores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mechem, David B.; de Szoeke, Simon P.; Yuter, Sandra E.

    Marine stratocumulus clouds are low, persistent, liquid phase clouds that cover large areas and play a significant role in moderating the climate by reflecting large quantities of incoming solar radiation. The deficiencies in simulating these clouds in global climate models are widely recognized. Much of the uncertainty arises from sub-grid scale variability in the cloud albedo that is not accurately parameterized in climate models. The Clouds, Aerosol and Precipitation in the Marine Boundary Layer (CAP–MBL) observational campaign and the ongoing ARM site measurements on Graciosa Island in the Azores aim to sample the Northeast Atlantic low cloud regime. These data represent the longest continuous research-quality cloud radar/lidar/radiometer/aerosol data set of open-ocean shallow marine clouds in existence. Data coverage from CAP–MBL and the series of cruises to the southeast Pacific culminating in VOCALS will both be of sufficient length to contrast the two low cloud regimes and explore the joint variability of clouds in response to several environmental factors implicated in cloudiness transitions. Our research seeks to better understand cloud system processes in an underexplored but climatologically important maritime region. Our primary goal is an improved physical understanding of low marine clouds on temporal scales of hours to days. It is well understood that aerosols, synoptic-scale forcing, surface fluxes, mesoscale dynamics, and cloud microphysics all play a role in cloudiness transitions. However, the relative importance of each mechanism as a function of different environmental conditions is unknown. To better understand cloud forcing and response, we are documenting the joint variability of observed environmental factors and associated cloud characteristics.
    In order to narrow the realm of likely parameter ranges, we assess the relative importance of parameter conditions based primarily on two criteria: how often the condition occurs (frequency) and to what degree varying that condition within its typically observed range affects cloud characteristics (magnitude of impact given the condition). In this manner we will be able to address the relative importance of individual factors within a multivariate range of environmental conditions. We will determine the relative roles of the thermodynamic, aerosol, and synoptic environmental factors on low cloud and drizzle formation and lifetime.

  10. On the Interaction between Marine Boundary Layer Cellular Cloudiness and Surface Heat Fluxes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kazil, J.; Feingold, G.; Wang, Hailong

    2014-01-02

    The interaction between marine boundary layer cellular cloudiness and surface fluxes of sensible and latent heat is investigated. The investigation focuses on the non-precipitating closed-cell state and the precipitating open-cell state at low geostrophic wind speed. The Advanced Research WRF model is used to conduct cloud-system-resolving simulations with interactive surface fluxes of sensible heat, latent heat, and of sea salt aerosol, and with a detailed representation of the interaction between aerosol particles and clouds. The mechanisms responsible for the temporal evolution and spatial distribution of the surface heat fluxes in the closed- and open-cell state are investigated and explained. It is found that the horizontal spatial structure of the closed-cell state determines, by entrainment of dry free-tropospheric air, the spatial distribution of surface air temperature and water vapor, and, to a lesser degree, of the surface sensible and latent heat flux. The synchronized dynamics of the open-cell state drives oscillations in surface air temperature, water vapor, and in the surface fluxes of sensible and latent heat, and of sea salt aerosol. Open-cell cloud formation, cloud optical depth and liquid water path, and cloud and rain water path are identified as good predictors of the spatial distribution of surface air temperature and sensible heat flux, but not of surface water vapor and latent heat flux. It is shown that by enhancing the surface sensible heat flux, the open-cell state creates conditions by which it is maintained. While the open-cell state under consideration is not depleted in aerosol, and is insensitive to variations in sea-salt fluxes, it also enhances the sea-salt flux relative to the closed-cell state. In aerosol-depleted conditions, this enhancement may replenish the aerosol needed for cloud formation, and hence contribute to the perpetuation of the open-cell state as well.
    Spatial homogenization of the surface fluxes is found to have only a small effect on cloud properties in the investigated cases. This indicates that sub-grid scale spatial variability in the surface flux of sensible and latent heat and of sea salt aerosol may not be required in large scale and global models to describe marine boundary layer cellular cloudiness.

  11. Parallel Grand Canonical Monte Carlo (ParaGrandMC) Simulation Code

    NASA Technical Reports Server (NTRS)

    Yamakov, Vesselin I.

    2016-01-01

    This report provides an overview of the Parallel Grand Canonical Monte Carlo (ParaGrandMC) simulation code. This is a highly scalable parallel FORTRAN code for simulating the thermodynamic evolution of metal alloy systems at the atomic level, and predicting the thermodynamic state, phase diagram, chemical composition and mechanical properties. The code is designed to simulate multi-component alloy systems, predict solid-state phase transformations such as austenite-martensite transformations, precipitate formation, recrystallization, capillary effects at interfaces, surface absorption, etc., which can aid the design of novel metallic alloys. While the software is mainly tailored for modeling metal alloys, it can also be used for other types of solid-state systems, and to some degree for liquid or gaseous systems, including multiphase systems forming solid-liquid-gas interfaces.
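
    The grand-canonical moves behind such a code can be illustrated with the Metropolis acceptance rule for an identity swap (a sketch under deliberately simple assumptions — non-interacting sites and energies in units of kT; this is not ParaGrandMC's actual move set or interface):

```python
import math
import random

def accept_identity_swap(delta_e, delta_mu, kT, rng):
    """Metropolis rule for a semi-grand-canonical identity swap:
    accept with probability min(1, exp(-(dE - dMu) / kT))."""
    return rng.random() < math.exp(min(0.0, -(delta_e - delta_mu) / kT))

# Toy equilibrium composition on non-interacting sites (dE = 0): with
# mu_B - mu_A = 1 kT, the B fraction should approach e / (1 + e) ~ 0.73.
rng = random.Random(0)
sites = ["A"] * 1000
for _ in range(20000):
    i = rng.randrange(len(sites))
    new = "B" if sites[i] == "A" else "A"
    d_mu = 1.0 if new == "B" else -1.0   # chemical-potential change of the swap
    if accept_identity_swap(0.0, d_mu, 1.0, rng):
        sites[i] = new
print(round(sites.count("B") / len(sites), 2))  # ~0.73
```

    In the real code, delta_e comes from an interatomic potential evaluated in parallel, which is where the alloy thermodynamics and phase behavior enter.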

  12. Some issues and subtleties in numerical simulation of X-ray FEL's

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fawley, William M.

    Part of the overall design effort for X-ray FELs such as the LCLS and TESLA projects has involved extensive use of particle simulation codes to predict their output performance and underlying sensitivity to various input parameters (e.g. electron beam emittance). This paper discusses some of the numerical issues that must be addressed by simulation codes in this regime. We first give a brief overview of the standard approximations and simulation methods adopted by time-dependent (i.e. polychromatic) codes such as GINGER, GENESIS, and FAST3D, including the effects of temporal discretization and the resultant limited spectral bandpass, and then discuss the accuracies and inaccuracies of these codes in predicting incoherent spontaneous emission (i.e. the extremely low-gain regime).

  13. Unstable behaviour of an upper ocean-atmosphere coupled model: role of atmospheric radiative processes and oceanic heat transport

    NASA Astrophysics Data System (ADS)

    Cohen-Solal, E.; Le Treut, H.

    We describe the initial bias of the climate simulated by a coupled ocean-atmosphere model. The atmospheric component is a state-of-the-art atmospheric general circulation model, whereas the ocean component is limited to the upper ocean and includes a mixed layer whose depth is computed by the model. As the full ocean general circulation is not computed by the model, the heat transport within the ocean is prescribed. When modifying the prescribed heat transport we also affect the initial drift of the model. We analyze here one of the experiments where this drift is very strong, in order to study the key processes relating the changes in the ocean transport and the evolution of the model's climate. In this simulation, the ocean surface temperature cools by 1.5°C in 20 y. We can distinguish two different phases. During the first period of 5 y, the sea surface temperatures become cooler, particularly in the intertropical area, but the outgoing longwave radiation at the top-of-the-atmosphere increases very quickly, in particular at the end of the period. An off-line version of the model radiative code enables us to decompose this behaviour into different contributions (cloudiness, specific humidity, air and surface temperatures, surface albedo). This partitioning shows that the longwave radiation evolution is due to a decrease of high-level cirrus clouds in the intertropical troposphere. The decrease of the cloud cover also leads to a decrease of the planetary albedo and therefore an increase of the net shortwave radiation absorbed by the system. But the dominant factor is the strong destabilization by the longwave cooling, which is able to throw the system out of equilibrium. During the remainder of the simulation (second phase), the cooling induced by the destabilization at the top-of-the-atmosphere is transmitted to the surface by various processes of the climate system.
Hence, we show that small variations of ocean heat transport can force the model from a stable to an unstable state via atmospheric processes which arise when the tropics are cooling. Even if possibly overestimated by our GCM, this mechanism may be pertinent to the maintenance of present climatic conditions in the tropics. The simplifications inherent in our model's design allow us to investigate the mechanism in some detail.

  14. Exploring the Ability of a Coarse-grained Potential to Describe the Stress-strain Response of Glassy Polystyrene

    DTIC Science & Technology

    2012-10-01

    using the open-source code Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) (http://lammps.sandia.gov) (23). The commercial … parameters are proprietary and cannot be ported to the LAMMPS simulation code. In our molecular dynamics simulations at the atomistic resolution, we … Acronyms: IBI, iterative Boltzmann inversion; LAMMPS, Large-scale Atomic/Molecular Massively Parallel Simulator; MAPS, Materials Processes and Simulations; MS

  15. Reconciling Simulated and Observed Views of Clouds: MODIS, ISCCP, and the Limits of Instrument Simulators

    NASA Technical Reports Server (NTRS)

    Ackerman, Steven A.; Hemler, Richard S.; Hofman, Robert J. Patrick; Pincus, Robert; Platnick, Steven

    2011-01-01

    The properties of clouds that may be observed by satellite instruments, such as optical depth and cloud top pressure, are only loosely related to the way clouds are represented in models of the atmosphere. One way to bridge this gap is through "instrument simulators," diagnostic tools that map the model representation to synthetic observations so that differences between simulator output and observations can be interpreted unambiguously as model error. But simulators may themselves be restricted by limited information available from the host model or by internal assumptions. This paper considers the extent to which instrument simulators are able to capture essential differences between MODIS and ISCCP, two similar but independent estimates of cloud properties. The authors review the measurements and algorithms underlying these two cloud climatologies, introduce a MODIS simulator, and detail data sets developed for comparison with global models using the ISCCP and MODIS simulators. In nature MODIS observes less mid-level cloudiness than ISCCP, consistent with the different methods used to determine cloud top pressure; aspects of this difference are reproduced by the simulators running in a climate model. But stark differences between MODIS and ISCCP observations of total cloudiness and the distribution of cloud optical thickness can be traced to different approaches to marginal pixels, which MODIS excludes and ISCCP treats as homogeneous. These pixels, which likely contain broken clouds, cover about 15% of the planet and contain almost all of the optically thinnest clouds observed by either instrument. Instrument simulators cannot reproduce these differences because the host model does not consider unresolved spatial scales and so cannot produce broken pixels. Nonetheless, MODIS and ISCCP observations are consistent for all but the optically thinnest clouds, and models can be robustly evaluated using instrument simulators by excluding ambiguous observations.

  16. Global MHD simulation of magnetosphere using HPF

    NASA Astrophysics Data System (ADS)

    Ogino, T.

    We have translated a 3-dimensional magnetohydrodynamic (MHD) simulation code of the Earth's magnetosphere from VPP Fortran to HPF/JA on the Fujitsu VPP5000/56 vector-parallel supercomputer and the MHD code was fully vectorized and fully parallelized in VPP Fortran. The entire performance and capability of the HPF MHD code could be shown to be almost comparable to that of VPP Fortran. A 3-dimensional global MHD simulation of the earth's magnetosphere was performed at a speed of over 400 Gflops with an efficiency of 76.5% using 56 PEs of Fujitsu VPP5000/56 in vector and parallel computation that permitted comparison with catalog values. We have concluded that fluid and MHD codes that are fully vectorized and fully parallelized in VPP Fortran can be translated with relative ease to HPF/JA, and a code in HPF/JA may be expected to perform comparably to the same code written in VPP Fortran.

  17. WEC3: Wave Energy Converter Code Comparison Project: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Combourieu, Adrien; Lawson, Michael; Babarit, Aurelien

    This paper describes the recently launched Wave Energy Converter Code Comparison (WEC3) project and presents preliminary results from this effort. The objectives of WEC3 are to verify and validate numerical modelling tools that have been developed specifically to simulate wave energy conversion devices and to inform the upcoming IEA OES Annex VI Ocean Energy Modelling Verification and Validation project. WEC3 is divided into two phases. Phase I consists of a code-to-code verification and Phase II entails code-to-experiment validation. WEC3 focuses on mid-fidelity codes that simulate WECs using time-domain multibody dynamics methods to model device motions and hydrodynamic coefficients to model hydrodynamic forces. Consequently, high-fidelity numerical modelling tools, such as Navier-Stokes computational fluid dynamics simulation, and simple frequency-domain modelling tools were not included in the WEC3 project.

  18. Free-tropospheric BrO investigations based on GOME

    NASA Astrophysics Data System (ADS)

    Post, P.; van Roozendael, M.; Backman, L.; Damski, J.; Thölix, L.; Fayt, C.; Taalas, P.

    2003-04-01

    Bromine compounds contribute significantly to stratospheric ozone depletion. However, measurements of most bromine compounds are sparse or non-existent, and experimental studies essentially rely on BrO observations. The differences between balloon and ground-based measurements of stratospheric BrO columns and satellite total column measurements are too large to be explained by measurement uncertainties. Therefore, it has been assumed that there is a BrO concentration of about 1-3 ppt in the free troposphere. In a previous work, we calculated the tropospheric BrO abundance as the difference between total BrO and stratospheric BrO columns. The total vertical column densities of BrO are extracted from GOME measurements using IASB-BIRA algorithms. The stratospheric amount has been calculated using chemical transport models (CTMs); results from SLIMCAT and FinROSE simulations are used for this purpose. SLIMCAT is a widely used 3D CTM that has been tested against balloon measurements. FinROSE is a 3D CTM developed at FMI. We have tried several different tropospheric BrO profiles. Our results show that a profile with high BrO concentrations in the boundary layer usually gives unrealistically high tropospheric column values over areas of low albedo (like oceans). This suggests that the tropospheric BrO would be predominantly distributed in the free troposphere. In this work, attempts are made to identify the signature of a free-tropospheric BrO content when comparing cloudy and non-cloudy scenes. The possible impact of orography on measured BrO columns is also investigated.

  19. Validation of a weather forecast model at radiance level against satellite observations allowing quantification of temperature, humidity, and cloud-related biases

    NASA Astrophysics Data System (ADS)

    Bani Shahabadi, Maziar; Huang, Yi; Garand, Louis; Heilliette, Sylvain; Yang, Ping

    2016-09-01

    An established radiative transfer model (RTM) is adapted for simulating all-sky infrared radiance spectra from the Canadian Global Environmental Multiscale (GEM) model in order to validate its forecasts at the radiance level against Atmospheric InfraRed Sounder (AIRS) observations. Synthetic spectra are generated for 2 months from short-term (3-9 h) GEM forecasts. The RTM uses a monthly climatological land surface emissivity/reflectivity atlas. An updated ice particle optical property library was introduced for cloudy radiance calculations. Forward model brightness temperature (BT) biases are assessed to be of the order of ˜1 K for both clear-sky and overcast conditions. To quantify biases in the GEM forecast meteorological variables, spectral sensitivity kernels are generated and used to attribute radiance biases to surface and atmospheric temperature, atmospheric humidity, and cloud biases. The kernel method, supplemented with retrieved profiles based on AIRS observations in collocation with a microwave sounder, achieves good closure in explaining clear-sky radiance biases, which are attributed mostly to surface temperature and upper tropospheric water vapor biases. Cloudy-sky radiance biases are dominated by cloud-induced radiance biases. Prominent GEM biases are identified as: (1) too low surface temperature over land, causing about -5 K bias in the atmospheric window region; (2) too high upper tropospheric water vapor, inducing about -3 K bias in the water vapor absorption band; (3) too few high clouds in the convective regions, generating about +10 K bias in the window band and about +6 K bias in the water vapor band.
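
    The kernel attribution above is, at heart, a linear decomposition: the brightness-temperature bias in each channel is approximated as a sum of sensitivity kernels multiplied by the biases of the underlying variables. A toy numerical sketch of that bookkeeping (all kernel values and biases are invented for illustration, not GEM/AIRS numbers):

```python
import numpy as np

# Toy kernel attribution: delta_BT ≈ K_Ts*dTs + dq·K_q + K_cld*dcld.
# Every number below is invented for illustration.
K_Ts  = np.array([0.9, 0.2, 0.1])      # BT change (K) per K of surface-T bias
K_q   = np.array([[0.1, 0.6, 0.2],     # BT change per unit humidity bias,
                  [0.0, 0.3, 0.1]])    # one row per atmospheric layer
K_cld = np.array([0.3, 0.2, 0.8])      # BT change per unit cloud bias

dTs  = -5.0                            # surface too cold over land
dq   = np.array([0.5, 1.0])            # too-moist upper troposphere
dcld = 0.0

delta_bt = K_Ts * dTs + dq @ K_q + K_cld * dcld
print(delta_bt)
```

    The closure test described in the abstract runs in the reverse direction: checking that the attributed terms sum to the radiance bias actually observed.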

  20. A Parameterization of Dry Thermals and Shallow Cumuli for Mesoscale Numerical Weather Prediction

    NASA Astrophysics Data System (ADS)

    Pergaud, Julien; Masson, Valéry; Malardel, Sylvie; Couvreux, Fleur

    2009-07-01

    For numerical weather prediction models and models resolving deep convection, shallow convective ascents are subgrid processes that are not parameterized by classical local turbulent schemes. The mass flux formulation of convective mixing is now largely accepted as an efficient approach for parameterizing the contribution of larger plumes in convective dry and cloudy boundary layers. We propose a new formulation of the EDMF scheme (for Eddy Diffusivity/Mass Flux) based on a single updraft that improves the representation of dry thermals and shallow convective clouds and conserves a correct representation of stratocumulus in mesoscale models. The definition of entrainment and detrainment in the dry part of the updraft is original, and is specified as proportional to the ratio of buoyancy to vertical velocity. In the cloudy part of the updraft, the classical buoyancy sorting approach is chosen. The main closure of the scheme is based on the mass flux near the surface, which is proportional to the sub-cloud layer convective velocity scale w*. The link with the prognostic grid-scale cloud content and cloud cover and the projection on the non-conservative variables is processed by the cloud scheme. The validation of this new formulation using large-eddy simulations focused on showing the robustness of the scheme to represent three different boundary layer regimes. For dry convective cases, this parameterization enables a correct representation of the countergradient zone where the mass flux part represents the top entrainment (IHOP case). It can also handle the diurnal cycle of boundary-layer cumulus clouds (EUROCS/ARM) and conserve a realistic evolution of stratocumulus (EUROCS/FIRE).
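
    The dry-updraft closure described above can be sketched schematically. The coefficients below are invented, and the squared vertical velocity in the denominator is an assumption made here for dimensional consistency (an entrainment rate has units of inverse length), not necessarily the paper's exact form:

```python
# Hypothetical sketch of an EDMF-style dry-updraft closure: entrainment and
# detrainment rates proportional to the ratio of buoyancy to (squared)
# vertical velocity. Coefficients c_eps and c_delta are invented.

def entrain_detrain(buoyancy, w, c_eps=0.5, c_delta=0.5):
    ratio = buoyancy / (w * w)
    eps = c_eps * max(ratio, 0.0)        # entrain where the parcel is buoyant
    delta = c_delta * max(-ratio, 0.0)   # detrain where negatively buoyant
    return eps, delta

print(entrain_detrain(0.02, 2.0))    # buoyant updraft: entrainment only
print(entrain_detrain(-0.02, 2.0))   # overshoot region: detrainment only
```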

  1. Towards Improved Radiative Transfer Simulations of Hyperspectral Measurements for Cloudy Atmospheres

    NASA Astrophysics Data System (ADS)

    Natraj, V.; Li, C.; Aumann, H. H.; Yung, Y. L.

    2016-12-01

    Usage of hyperspectral measurements in the infrared for weather forecasting requires radiative transfer (RT) models that can accurately compute radiances given the atmospheric state. On the other hand, the RT models need to be fast enough to meet operational processing requirements. Until recently, this has proven to be a very hard challenge. In the last decade, however, significant progress has been made in this regard, due to computer speed increases and improved and optimized RT models. This presentation will introduce a new technique, based on principal component analysis (PCA) of the inherent optical properties (such as profiles of trace gas absorption and single scattering albedo), to perform fast and accurate hyperspectral RT calculations in clear or cloudy atmospheres. PCA is a technique to compress data while capturing most of the variability in the data. By performing PCA on the optical properties, we limit the number of computationally expensive multiple scattering RT calculations to the PCA-reduced data set, and develop a series of PC-based correction factors to obtain the hyperspectral radiances. This technique has been shown to deliver accuracies of 0.1% or better with respect to brute-force, line-by-line (LBL) models such as LBLRTM and DISORT, but is orders of magnitude faster than the LBL models. We will compare the performance of this method against other models on a large atmospheric state data set (7377 profiles) that includes a wide range of thermodynamic and cloud profiles, along with viewing geometry and surface emissivity information.
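
    The PCA compression step described above can be sketched with standard linear algebra. The data here are random stand-ins for real optical-property profiles, and the cheap projection/reconstruction takes the place of the expensive multiple-scattering calls:

```python
import numpy as np

# Illustrative sketch (not the operational code): PCA compresses a set of
# spectral optical-property profiles so an expensive multiple-scattering
# solver need only be run on a few principal components.

rng = np.random.default_rng(0)
spectra = rng.normal(size=(500, 40))          # 500 spectral points, 40 "layers"

mean = spectra.mean(axis=0)
centered = spectra - mean
# Principal components via SVD of the centered data matrix
_, s, vt = np.linalg.svd(centered, full_matrices=False)

n_pc = 4                                      # keep only the leading PCs
scores = centered @ vt[:n_pc].T               # cheap projection
approx = mean + scores @ vt[:n_pc]            # reconstruction from few PCs

# Fraction of variance captured by the retained components
explained = (s[:n_pc] ** 2).sum() / (s ** 2).sum()
print(f"variance captured by {n_pc} PCs: {explained:.2f}")
```

    In the RT application, the correction factors are then fitted in the reduced PC space rather than channel by channel.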

  2. Foehn-induced effects on local dust pollution, frontal clouds and solar radiation in the Dead Sea valley

    NASA Astrophysics Data System (ADS)

    Kishcha, Pavel; Starobinets, Boris; Savir, Amit; Alpert, Pinhas; Kaplan, Michael

    2018-06-01

    Despite the long history of investigation of foehn phenomena, there are few studies of the influence of foehn winds on air pollution and none in the Dead Sea valley. For the first time the foehn phenomenon and its effects on local dust pollution, frontal cloudiness and surface solar radiation were analyzed in the Dead Sea valley, as it occurred on 22 March 2013. This was carried out using both numerical simulations and observations. The foehn winds intensified local dust emissions, while the foehn-induced temperature inversion trapped dust particles beneath this inversion. These two factors caused extreme surface dust concentration in the western Dead Sea valley. The dust pollution was transported by west winds eastward, to the central Dead Sea valley, where the speed of these winds sharply decreased. The transported dust was captured by the ascending airflow contributing to the maximum aerosol optical depth (AOD) over the central Dead Sea valley. On the day under study, the maximum surface dust concentration did not coincide with the maximum AOD: this being one of the specific effects of the foehn phenomenon on dust pollution in the Dead Sea valley. Radar data showed a passage of frontal cloudiness through the area of the Dead Sea valley leading to a sharp drop in noon solar radiation. The descending airflow over the downwind side of the Judean Mountains led to the formation of a cloud-free band followed by only the partial recovery of solar radiation because of the extreme dust pollution caused by foehn winds.

  3. Minimization of Impact from Electric Vehicle Supply Equipment to the Electric Grid Using a Dynamically Controlled Battery Bank for Peak Load Shaving

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castello, Charles C

    This research presents a comparison of two control systems for peak load shaving using local solar power generation (i.e., a photovoltaic array) and local energy storage (i.e., a battery bank). The purpose is to minimize the load demand of electric vehicle supply equipment (EVSE) on the electric grid. Static and dynamic control systems are compared to decrease demand from EVSE. Static control of the battery bank is based on charging and discharging to the electric grid at fixed times. Dynamic control, with 15-minute resolution, forecasts EVSE load based on analysis of collected data. In the proposed dynamic control system, the sigmoid function is used to shave peak loads while limiting scenarios that can quickly drain the battery bank. These control systems are applied to Oak Ridge National Laboratory's (ORNL) solar-assisted electric vehicle (EV) charging stations. This installation is composed of three independently grid-tied sub-systems: (1) 25 EVSE; (2) a 47 kW photovoltaic (PV) array; and (3) a 60 kWh battery bank. The dynamic control system achieved the greatest peak load shaving, up to 34% on a cloudy day and 38% on a sunny day. The static control system was not ideal; peak load shaving was 14.6% on a cloudy day and 12.7% on a sunny day. Simulations based on ORNL data show that solar-assisted EV charging stations combined with the proposed dynamic battery control system can negate up to 89% of EVSE load demand on sunny days.
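
    A sigmoid-limited discharge rule of the kind described can be sketched as follows. All thresholds, capacities and the exact control law here are invented for illustration, not ORNL's implementation:

```python
import math

# Hypothetical dynamic-control sketch: discharge power scales with a sigmoid
# of the forecast load excess over a threshold, shaving peaks while tapering
# off as the battery state of charge (soc) drops. All numbers are invented.

def discharge_kw(forecast_load_kw, threshold_kw, soc, max_kw=20.0, k=0.5):
    """Battery discharge command for one 15-minute interval."""
    excess = forecast_load_kw - threshold_kw
    demand_factor = 1.0 / (1.0 + math.exp(-k * excess))  # sigmoid in excess load
    return max_kw * demand_factor * soc                  # taper with state of charge

print(discharge_kw(60.0, 40.0, soc=0.9))   # large excess, healthy battery
print(discharge_kw(35.0, 40.0, soc=0.9))   # below threshold: near zero
```

    The soc factor is one simple way to express "limiting scenarios that can quickly drain the battery bank": the same forecast excess draws less power from a depleted bank.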

  4. Nonlinear to Linear Elastic Code Coupling in 2-D Axisymmetric Media.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Leiph

    Explosions within the earth nonlinearly deform the local media, but at typical seismological observation distances the seismic waves can be considered linear. Although nonlinear algorithms can simulate explosions in the very near field well, these codes are computationally expensive and inaccurate at propagating these signals to great distances. A linearized wave propagation code, coupled to a nonlinear code, provides an efficient mechanism both to accurately simulate the explosion itself and to propagate these signals to distant receivers. To this end we have coupled Sandia's nonlinear simulation algorithm CTH to a linearized elastic wave propagation code for 2-D axisymmetric media (axiElasti) by passing information from the nonlinear to the linear code via time-varying boundary conditions. In this report, we first develop the 2-D axisymmetric elastic wave equations in cylindrical coordinates. Next we show how we design the time-varying boundary conditions passing information from CTH to axiElasti, and finally we demonstrate the coupling code via a simple study of the elastic radius.

  5. Multi-Region Boundary Element Analysis for Coupled Thermal-Fracturing Processes in Geomaterials

    NASA Astrophysics Data System (ADS)

    Shen, Baotang; Kim, Hyung-Mok; Park, Eui-Seob; Kim, Taek-Kon; Wuttke, Manfred W.; Rinne, Mikael; Backers, Tobias; Stephansson, Ove

    2013-01-01

    This paper describes a boundary element code development for coupled thermal-mechanical processes of rock fracture propagation. The development was based on the fracture mechanics code FRACOD, previously developed by Shen and Stephansson (Int J Eng Fracture Mech 47:177-189, 1993) and FRACOM (A fracture propagation code—FRACOD, User's manual, FRACOM Ltd. 2002), which simulates complex fracture propagation in rocks governed by both tensile and shear mechanisms. For the coupled thermal-fracturing analysis, an indirect boundary element method, namely the fictitious heat source method, was implemented in FRACOD to simulate the temperature change and thermal stresses in rocks. This indirect method is particularly suitable for the thermal-fracturing coupling in FRACOD, where the displacement discontinuity method is used for the mechanical simulation. The coupled code was also extended to simulate multi-region problems in which rock mass, concrete linings and insulation layers with different thermal and mechanical properties are present. Both verification and application cases are presented, in which a point heat source in a 2D infinite medium and a pilot LNG underground cavern were solved and studied using the coupled code. Good agreement was observed between the simulation results, analytical solutions and in situ measurements, which validates the applicability of the developed coupled code.

  6. Investigation of Different Constituent Encoders in a Turbo-code Scheme for Reduced Decoder Complexity

    NASA Technical Reports Server (NTRS)

    Kwatra, S. C.

    1998-01-01

    A large number of papers have been published attempting to give some analytical basis for the performance of Turbo-codes. It has been shown that performance improves with increased interleaver length, and procedures have been given to pick the best constituent recursive systematic convolutional codes (RSCC's). However, testing by computer simulation is still required to verify these results. This thesis begins by describing the encoding and decoding schemes used. Next, simulation results on several memory-4 RSCC's are shown. It is found that the best BER performance at low E(sub b)/N(sub o) is not given by the RSCC's that were found using the analytic techniques given so far. Next, the results are given from simulations using a smaller-memory RSCC for one of the constituent encoders; significant reduction in decoding complexity is obtained with minimal loss in performance. Simulation results are then given for a rate 1/3 Turbo-code, with the result that this code performed as well as a rate 1/2 Turbo-code as measured by the distance from their respective Shannon limits. Finally, the results of simulations where an inaccurate noise variance measurement was used are given. From this it was observed that Turbo-decoding is fairly stable with regard to noise variance measurement.
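
    For concreteness, a recursive systematic convolutional constituent encoder of the kind discussed can be sketched in a few lines. This toy uses memory 2 with octal generators (1, 5/7) for brevity, rather than the memory-4 codes studied in the thesis:

```python
def rsc_encode(bits):
    """Toy rate-1/2 recursive systematic convolutional encoder,
    generators (1, 5/7) octal, memory 2 (the work above uses memory 4)."""
    s1 = s2 = 0
    systematic, parity = [], []
    for b in bits:
        a = b ^ s1 ^ s2          # feedback polynomial 1 + D + D^2 (7 octal)
        p = a ^ s2               # feedforward polynomial 1 + D^2 (5 octal)
        systematic.append(b)     # systematic output = input bit
        parity.append(p)
        s2, s1 = s1, a           # shift-register update
    return systematic, parity

sys_bits, par_bits = rsc_encode([1, 0, 1, 1])
print(sys_bits, par_bits)
```

    In a Turbo-code, two such encoders run in parallel, the second fed by an interleaved copy of the input; the decoder complexity discussed above grows with the constituent memory, which is why a smaller-memory constituent is attractive.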

  7. Analysis and Simulation of Narrowband GPS Jamming Using Digital Excision Temporal Filtering.

    DTIC Science & Technology

    1994-12-01

    the sequence of stored values from the P-code sampled at a 20 MHz rate. When correlated with a reference vector of the same length to simulate a GPS ...rate required for the GPS signals (20 MHz sampling rate for the P-code signal), the personal computer (PC) used to run the simulation could not perform... This subroutine is used to perform a fast FFT-based biased cross correlation. Written by Capt Gerry Falen, USAF, 16 AUG 94 % start of code

  8. Implementation issues in source coding

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Yun-Chung; Hadenfeldt, A. C.

    1989-01-01

    An edge preserving image coding scheme which can be operated in both a lossy and a lossless manner was developed. The technique is an extension of the lossless encoding algorithm developed for the Mars observer spectral data. It can also be viewed as a modification of the DPCM algorithm. A packet video simulator was also developed from an existing modified packet network simulator. The coding scheme for this system is a modification of the mixture block coding (MBC) scheme described in the last report. Coding algorithms for packet video were also investigated.
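
    The DPCM idea underlying the scheme above is simply to code the difference between each sample and its predecessor, which is lossless when the residuals are transmitted exactly. A minimal sketch (not the Mars observer algorithm itself, which adds edge-preserving prediction):

```python
def dpcm_encode(samples):
    """Lossless DPCM: transmit the difference from the previous sample."""
    prev = 0
    residuals = []
    for s in samples:
        residuals.append(s - prev)
        prev = s
    return residuals

def dpcm_decode(residuals):
    """Invert the encoder by accumulating the residuals."""
    prev = 0
    out = []
    for r in residuals:
        prev += r
        out.append(prev)
    return out

data = [100, 102, 101, 105]
print(dpcm_encode(data))
```

    The payoff is that smooth signals produce small residuals, which an entropy coder can represent in far fewer bits; the lossy variant of such a scheme quantizes the residuals.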

  9. Large Eddy Simulation of Flow in Turbine Cascades Using LESTool and UNCLE Codes

    NASA Technical Reports Server (NTRS)

    Huang, P. G.

    2004-01-01

    Between December 23, 1997 and August 31, 2004, we accomplished the development of two CFD codes for DNS/LES/RANS simulation of turbine cascade flows, namely LESTool and UNCLE. LESTool is a structured code making use of a 5th-order upwind differencing scheme, and UNCLE is a second-order-accuracy unstructured code. LESTool has both Dynamic SGS and Spalart's DES models, and UNCLE makes use of URANS and DES models. The current report provides a description of the methodologies used in the codes.

  10. Large Eddy Simulation of Flow in Turbine Cascades Using LEST and UNCLE Codes

    NASA Technical Reports Server (NTRS)

    Ashpis, David (Technical Monitor); Huang, P. G.

    2004-01-01

    Between December 23, 1997 and August 31, 2004, we accomplished the development of two CFD codes for DNS/LES/RANS simulation of turbine cascade flows, namely LESTool and UNCLE. LESTool is a structured code making use of a 5th-order upwind differencing scheme, and UNCLE is a second-order-accuracy unstructured code. LESTool has both Dynamic SGS and Spalart's DES models, and UNCLE makes use of URANS and DES models. The current report provides a description of the methodologies used in the codes.

  11. QR code for medical information uses.

    PubMed

    Fontelo, Paul; Liu, Fang; Ducut, Erick G

    2008-11-06

    We developed QR code online tools, simulated and tested QR code applications for medical information uses including scanning QR code labels, URLs and authentication. Our results show possible applications for QR code in medicine.

  12. AG Dra -- a high density plasma laboratory

    NASA Astrophysics Data System (ADS)

    Young, Peter

    2002-07-01

    A STIS observation of the symbiotic star AG Draconis yielding spectra in the range 1150--10 000 Angstrom is requested. AG Dra is a non-eclipsing binary that shows strong, narrow nebular emission lines that originate in the wind of a K giant, photoionized by a hot white dwarf. The density of the nebula is around 10^10 electrons/cm^3, making it the perfect laboratory for testing the plasma modeling codes Cloudy and XSTAR at high densities. These codes are used for a wide range of astrophysical objects including stellar winds, accretion disks, active galactic nuclei and Seyfert galaxies, and calibrating them against high signal-to-noise spectra from comparatively simple systems is essential. AG Dra is the perfect high-density laboratory for this work. In addition, many previously undetected emission lines will be found through the high sensitivity of STIS, which will allow new plasma diagnostics to be tested. These twin objectives are particularly pertinent as the high sensitivity of HST/COS will permit similar high-resolution spectroscopy to be applied to a whole new regime of extragalactic objects. By combining far-UV data from FUSE with complementary data from STIS, we will determine ratios of emission lines from the same ion, or ions of similar ionization level. These will permit a more complete set of diagnostics than are obtainable from one instrument alone.

  13. Nebular Continuum and Line Emission in Stellar Population Synthesis Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byler, Nell; Dalcanton, Julianne J.; Conroy, Charlie

    Accounting for nebular emission when modeling galaxy spectral energy distributions (SEDs) is important, as both line and continuum emission can contribute significantly to the total observed flux. In this work, we present a new nebular emission model integrated within the Flexible Stellar Population Synthesis (FSPS) code that computes the line and continuum emission for complex stellar populations using the photoionization code Cloudy. The self-consistent coupling of the nebular emission to the matched ionizing spectrum produces emission line intensities that correctly scale with the stellar population as a function of age and metallicity. This more complete model of galaxy SEDs will improve estimates of global gas properties derived with diagnostic diagrams, star formation rates based on Hα, and physical properties derived from broadband photometry. Our models agree well with results from other photoionization models and are able to reproduce observed emission from H II regions and star-forming galaxies. Our models show improved agreement with the observed H II regions in the Ne III/O II plane and show satisfactory agreement with He II emission from z = 2 galaxies when including rotating stellar models. Models including post-asymptotic giant branch stars are able to reproduce line ratios consistent with low-ionization emission regions. The models are integrated into current versions of FSPS and include self-consistent nebular emission predictions for MIST and Padova+Geneva evolutionary tracks.

  14. CBP Toolbox Version 3.0 “Beta Testing” Performance Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, III, F. G.

    2016-07-29

    One function of the Cementitious Barriers Partnership (CBP) is to assess available models of cement degradation and to assemble suitable models into a “Toolbox” that would be made available to members of the partnership, as well as the DOE Complex. To this end, SRNL and Vanderbilt University collaborated to develop an interface, using the GoldSim software, to the STADIUM® code developed by SIMCO Technologies, Inc. and LeachXS/ORCHESTRA developed by the Energy research Centre of the Netherlands (ECN). Release of Version 3.0 of the CBP Toolbox is planned in the near future. As a part of this release, an increased level of quality assurance for the partner codes and the GoldSim interface has been developed. This report documents results from evaluation testing of the ability of CBP Toolbox 3.0 to perform simulations of concrete degradation applicable to performance assessment of waste disposal facilities. Simulations of the behavior of Savannah River Saltstone Vault 2 and Vault 1/4 concrete subject to sulfate attack and carbonation over a 500- to 1000-year time period were run using a new and upgraded version of the STADIUM® code and the version of LeachXS/ORCHESTRA released in Version 2.0 of the CBP Toolbox. Running both codes allowed comparison of results from two models which take very different approaches to simulating cement degradation. In addition, simulations of chloride attack on the two concretes were made using the STADIUM® code. The evaluation sought to demonstrate that: (1) the codes are capable of running extended, realistic simulations in a reasonable amount of time; (2) the codes produce “reasonable” results (the code developers have provided validation test results as part of their code QA documentation); and (3) the two codes produce results that are consistent with one another. Results of the evaluation testing showed that the three criteria listed above were met by the CBP partner codes. Therefore, it is concluded that the codes can be used to support performance assessment. This conclusion takes into account the QA documentation produced for the partner codes and for the CBP Toolbox.

  15. Preparation of macroconstants to simulate the core of the VVER-1000 reactor

    NASA Astrophysics Data System (ADS)

    Seleznev, V. Y.

    2017-01-01

    A dynamic model is used in VVER-1000 reactor simulators for training operating staff and students. The DYNCO code is used to simulate the neutron-physical characteristics; it performs calculations of stationary, transient and emergency processes in real time for different reactor lattice geometries [1]. To perform calculations with this code, macroconstants must be prepared for each FA (fuel assembly). One way of obtaining macroconstants is to use the WIMS code, which is based on its own 69-group macroconstants library. This paper presents the results of FA calculations obtained with the WIMS code for the VVER-1000 reactor with different fuel and coolant parameters, as well as the method of selecting energy groups for the subsequent calculation of macroconstants.
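
    The generic recipe behind preparing few-group macroconstants from a fine-group library such as WIMS's 69-group set is flux weighting: each broad-group constant is the average of the fine-group cross sections weighted by the flux spectrum. A sketch with invented numbers (not actual WIMS data):

```python
# Flux-weighted collapse of fine-group cross sections into broad-group
# macroconstants. Cross sections and fluxes below are invented.

def collapse(sigma, flux, groups):
    """sigma, flux: fine-group values; groups: list of (start, end) ranges."""
    out = []
    for lo, hi in groups:
        phi = sum(flux[lo:hi])                       # total flux in broad group
        weighted = sum(s * f for s, f in zip(sigma[lo:hi], flux[lo:hi]))
        out.append(weighted / phi)                   # flux-weighted average
    return out

sigma = [1.0, 2.0, 4.0, 8.0]       # fine-group cross sections (toy)
flux  = [2.0, 2.0, 1.0, 1.0]       # fine-group flux spectrum (toy)
print(collapse(sigma, flux, [(0, 2), (2, 4)]))
```

    The group-selection question studied in the paper amounts to choosing the (start, end) boundaries so the collapsed constants preserve the reaction rates of the fine-group solution.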

  16. Physical models, cross sections, and numerical approximations used in MCNP and GEANT4 Monte Carlo codes for photon and electron absorbed fraction calculation.

    PubMed

    Yoriyaz, Hélio; Moralles, Maurício; Siqueira, Paulo de Tarso Dalledone; Guimarães, Carla da Costa; Cintra, Felipe Belonsi; dos Santos, Adimir

    2009-11-01

    Radiopharmaceutical applications in nuclear medicine require a detailed dosimetric estimate of the radiation energy delivered to human tissues. Over the past years, several publications have addressed the problem of internal dose estimation in volumes of several sizes considering photon and electron sources, most of them using Monte Carlo radiation transport codes. Despite the widespread use of these codes due to the variety of resources and potentials they offer to carry out dose calculations, several aspects like physical models, cross sections, and numerical approximations used in the simulations still remain an object of study. An accurate dose estimate depends on the correct selection of a set of simulation options that should be carefully chosen. This article presents an analysis of several simulation options provided by two of the most used codes worldwide: MCNP and GEANT4. For this purpose, comparisons of absorbed fraction estimates obtained with different physical models, cross sections, and numerical approximations are presented for spheres of several sizes composed of five different biological tissues. Considerable discrepancies have been found in some cases, not only between the different codes but also between different cross sections and algorithms in the same code. Maximum differences found between the two codes are 5.0% and 10%, respectively, for photons and electrons. Even for problems as simple as spheres and uniform radiation sources, the set of parameters chosen by any Monte Carlo code significantly affects the final results of a simulation, demonstrating the importance of the correct choice of parameters.
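
    To illustrate the kind of quantity being compared, a drastically simplified absorbed fraction can be estimated by Monte Carlo sampling in a few lines. This toy model (photons born at the centre of a uniform sphere, exponential free paths, no scattering) is a hypothetical stand-in for what MCNP and GEANT4 actually compute with full physics:

```python
import math
import random

# Toy Monte Carlo absorbed fraction: a photon is "absorbed" if its first
# free path ends inside the sphere. Analytically this is 1 - exp(-mu*R).

def absorbed_fraction(radius_cm, mu_cm, n=100_000, seed=1):
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(n):
        path = rng.expovariate(mu_cm)        # free path, mean 1/mu
        if path < radius_cm:
            absorbed += 1
    return absorbed / n

f = absorbed_fraction(radius_cm=5.0, mu_cm=0.2)
print(f)   # analytic value: 1 - exp(-0.2 * 5) = 1 - 1/e
```

    Real codes differ precisely in the pieces this sketch omits: scattering physics, cross-section libraries, and transport approximations, which is where the 5-10% discrepancies above come from.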

  17. Real-time visual simulation of APT system based on RTW and Vega

    NASA Astrophysics Data System (ADS)

    Xiong, Shuai; Fu, Chengyu; Tang, Tao

    2012-10-01

    The Matlab/Simulink simulation model of the APT (acquisition, pointing and tracking) system is analyzed and established. The model's C code, which can be used for real-time simulation, is then generated by RTW (Real-Time Workshop). Practical experiments show that running the C code gives the same simulation result as running the Simulink model directly in the Matlab environment. MultiGen-Vega is a real-time 3D scene simulation software system; with it and OpenGL, the APT scene simulation platform is developed and used to render and display the virtual scenes of the APT system. To add necessary graphics effects to the virtual scenes in real time, GLSL (OpenGL Shading Language) shaders are used on the programmable GPU. By calling the C code, the scene simulation platform can adjust the system parameters on-line and obtain the APT system's real-time simulation data to drive the scenes. Practical application shows that this visual simulation platform has high efficiency, low cost and good simulation fidelity.

  18. Nexus: A modular workflow management system for quantum simulation codes

    NASA Astrophysics Data System (ADS)

    Krogel, Jaron T.

    2016-01-01

    The management of simulation workflows represents a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.
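
    The dependency-ordered job handling described above can be reduced to a few lines. This is a sketch of the idea only, not Nexus's actual interface; the step names and the helper function are invented:

```python
# Minimal sketch of dependency-ordered workflow execution: each simulation
# step declares prerequisites and steps run in topological order, the way a
# workflow manager chains DFT, conversion, and QMC jobs.

def run_workflow(steps, deps):
    """steps: {name: callable}; deps: {name: [prerequisite names]}."""
    done, order = set(), []
    def visit(name):
        if name in done:
            return
        for d in deps.get(name, []):
            visit(d)                  # run prerequisites first
        steps[name]()
        done.add(name)
        order.append(name)
    for name in steps:
        visit(name)
    return order

log = []
steps = {
    "dft":  lambda: log.append("scf with DFT code"),
    "conv": lambda: log.append("convert orbitals"),
    "qmc":  lambda: log.append("run QMC"),
}
order = run_workflow(steps, {"conv": ["dft"], "qmc": ["conv"]})
print(order)
```

    A real manager like Nexus adds what this sketch omits: job submission to schedulers, monitoring, restarts, and passing files between steps.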

  19. Turbulence dissipation challenge: particle-in-cell simulations

    NASA Astrophysics Data System (ADS)

    Roytershteyn, V.; Karimabadi, H.; Omelchenko, Y.; Germaschewski, K.

    2015-12-01

    We discuss the application of three particle-in-cell (PIC) codes to problems relevant to the turbulence dissipation challenge. VPIC is a fully kinetic code extensively used to study a variety of diverse problems ranging from laboratory plasmas to astrophysics. PSC is a flexible fully kinetic code offering a variety of algorithms that can be advantageous to turbulence simulations, including high-order particle shapes, dynamic load balancing, and the ability to run efficiently on Graphics Processing Units (GPUs). Finally, HYPERS is a novel hybrid (kinetic ions + fluid electrons) code, which utilizes asynchronous time advance and a number of other advanced algorithms. We present examples drawn both from large-scale turbulence simulations and from the test problems outlined by the turbulence dissipation challenge. Special attention is paid to such issues as the small-scale intermittency of inertial-range turbulence, the mode content of the sub-proton range of scales, the formation of electron-scale current sheets and the role of magnetic reconnection, as well as the numerical challenges of applying PIC codes to simulations of astrophysical turbulence.
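
    At the innermost level, every PIC code shares the same kernel: a particle push. A deliberately minimal sketch (the field gather, charge deposition and field solve, which the codes above implement in very different ways, are all omitted, and the numbers are invented):

```python
# Minimal semi-implicit leapfrog push, the core particle update inside a PIC
# code. A linear restoring field E = -x stands in for a self-consistently
# solved field, so the particle traces a numerically bounded orbit.

def push(x, v, E, qm=1.0, dt=0.1):
    """Advance one particle: velocity first, then position (symplectic)."""
    v = v + qm * E * dt
    x = x + v * dt
    return x, v

x, v = 0.0, 1.0
for _ in range(10):
    x, v = push(x, v, E=-x)
print(x, v)   # x^2 + v^2 stays close to its initial value of 1
```

    The symplectic update order (velocity before position) is what keeps the orbit bounded over long runs, a property turbulence simulations depend on.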

  20. Combining ground-based microwave radiometer and the AROME convective scale model through 1DVAR retrievals in complex terrain: an Alpine valley case study

    NASA Astrophysics Data System (ADS)

    Martinet, Pauline; Cimini, Domenico; De Angelis, Francesco; Canut, Guylaine; Unger, Vinciane; Guillot, Remi; Tzanos, Diane; Paci, Alexandre

    2017-09-01

    A RPG-HATPRO ground-based microwave radiometer (MWR) was operated in a deep Alpine valley during the Passy-2015 field campaign. This experiment aims to investigate how stable boundary layers during wintertime conditions drive the accumulation of pollutants. In order to understand the atmospheric processes in the valley, MWRs continuously provide vertical profiles of temperature and humidity at a high time frequency, providing valuable information to follow the evolution of the boundary layer. A one-dimensional variational (1DVAR) retrieval technique has been implemented during the field campaign to optimally combine an MWR and 1 h forecasts from the French convective scale model AROME. Retrievals were compared to radiosonde data launched at least every 3 h during two intensive observation periods (IOPs). An analysis of the AROME forecast errors during the IOPs has shown a large underestimation of the surface cooling during the strongest stable episode. MWR brightness temperatures were monitored against simulations from the radiative transfer model ARTS2 (Atmospheric Radiative Transfer Simulator) and radiosonde launched during the field campaign. Large errors were observed for most transparent channels (i.e., 51-52 GHz) affected by absorption model and calibration uncertainties while a good agreement was found for opaque channels (i.e., 54-58 GHz). Based on this monitoring, a bias correction of raw brightness temperature measurements was applied before the 1DVAR retrievals. 1DVAR retrievals were found to significantly improve the AROME forecasts up to 3 km but mainly below 1 km and to outperform usual statistical regressions above 1 km. With the present implementation, a root-mean-square error (RMSE) of 1 K through all the atmospheric profile was obtained with values within 0.5 K below 500 m in clear-sky conditions. The use of lower elevation angles (up to 5°) in the MWR scanning and the bias correction were found to improve the retrievals below 1000 m. 
MWR retrievals were found to catch deep near-surface temperature inversions very well. Larger errors were observed in cloudy conditions due to the difficulty of ground-based MWRs to resolve high level inversions that are still challenging. Finally, 1DVAR retrievals were optimized for the analysis of the IOPs by using radiosondes as backgrounds in the 1DVAR algorithm instead of the AROME forecasts. A significant improvement of the retrievals in cloudy conditions and below 1000 m in clear-sky conditions was observed. From this study, we can conclude that MWRs are expected to bring valuable information into numerical weather prediction models up to 3 km in altitude both in clear-sky and cloudy-sky conditions with the maximum improvement found around 500 m. With an accuracy between 0.5 and 1 K in RMSE, our study has also proven that MWRs are capable of resolving deep near-surface temperature inversions observed in complex terrain during highly stable boundary layer conditions.
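
    In its linear form, the 1DVAR combination of a background forecast with MWR observations is the optimal-estimation analysis step x_a = x_b + K(y - Hx_b), with gain K = BH^T(HBH^T + R)^-1. A toy two-level sketch with invented matrices (not the AROME/HATPRO configuration):

```python
import numpy as np

# Schematic linear 1DVAR analysis step. All matrices below are toy
# stand-ins: a real retrieval uses profile-sized states and a radiative
# transfer Jacobian for H.

x_b = np.array([270.0, 265.0])          # background temperatures (K)
H   = np.array([[0.7, 0.3],
                [0.2, 0.8]])            # toy observation operator
B   = np.diag([4.0, 4.0])               # background error covariance
R   = np.diag([1.0, 1.0])               # observation error covariance
truth = np.array([272.0, 266.0])
y   = H @ truth                         # synthetic (noise-free) observations

K   = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # optimal-estimation gain
x_a = x_b + K @ (y - H @ x_b)                    # analysis
print(x_a)
```

    Because B is large relative to R here, the analysis is drawn strongly toward the observed state; in the campaign setting the same machinery pulls the AROME background toward the MWR brightness temperatures.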

  1. The representation of low-level clouds during the West African monsoon in weather and climate models

    NASA Astrophysics Data System (ADS)

    Kniffka, Anke; Hannak, Lisa; Knippertz, Peter; Fink, Andreas

    2016-04-01

    The West African monsoon is one of the most important large-scale circulation features in the tropics, and the associated seasonal rainfall is crucial to rain-fed agriculture and water resources for hundreds of millions of people. However, numerical weather and climate models still struggle to realistically represent salient features of the monsoon across a wide range of scales. Recently, it has been shown that substantial errors in radiation and clouds exist in the southern parts of West Africa (8°W-8°E, 5-10°N) during summer. This area is characterised by strong low-level jets associated with the formation of extensive ultra-low stratus clouds. Often persisting long after sunrise, these clouds have a substantial impact on the radiation budget at the surface and thus on the diurnal evolution of the planetary boundary layer (PBL). Here we present first results from a detailed analysis of the representation of these clouds and the associated PBL features across a range of weather and climate models. Recent climate model simulations for the period 1991-2010, run in the framework of the Year of Tropical Convection (YOTC), offer a great opportunity for this analysis. The models are those used for the latest Assessment Report of the Intergovernmental Panel on Climate Change, but for YOTC the model output has a much better temporal resolution, allowing the diurnal cycle to be resolved, and includes diabatic terms, allowing a much better assessment of the physical reasons for errors in low-level temperature, moisture and thus cloudiness. These statistical climate-model analyses are complemented by experiments using ICON (Icosahedral non-hydrostatic general circulation model), the new numerical weather prediction model of the German Weather Service and the Max Planck Institute for Meteorology. ICON allows testing sensitivities to model resolution and numerical schemes. These model simulations are validated against (re-)analysis data, satellite observations (e.g. CM SAF cloud and radiation data), ground-based eye observations of clouds, and radiation measurements from weather stations. Our results show that many of the climate models have great difficulties representing the diurnal cycle of winds and clouds, leading to associated errors in radiation. Typical errors include a substantial underestimation of the lowest clouds accompanied by an overestimation of clouds at the top of the monsoon layer, indicating systematic problems in vertical exchange processes, which are also reflected in large errors in jet speed. Consequently, many models show too flat a diurnal cycle in cloudiness. This contribution is part of the EU-funded DACCIWA (Dynamics-Aerosol-Chemistry-Cloud Interactions in West Africa) project, which aims to investigate the impact of the drastic increase in anthropogenic emissions in West Africa on the local weather and climate, for example through cloud-aerosol interactions. The analysis presented here of the capability of state-of-the-art numerical models to represent low-level cloudiness is an important prerequisite for the planned assessments of the influence of anthropogenic aerosol.

  2. Prediction of emission line fluxes of gravitationally lensed very high-z galaxies

    NASA Astrophysics Data System (ADS)

    Inoue, Akio; Shimizu, Ikkoh; Okamoto, Takashi; Yoshida, Naoki; Matsuo, Hiroshi; Tamura, Yoichi

    2015-08-01

    Spectroscopic confirmation of very high-z galaxy candidates is extremely valuable because it provides direct proof of the existence of galaxies in the early Universe and puts a strong constraint on structure formation theory, which must produce such galaxies within the limited age of the Universe at that epoch. Before the completion of cosmic reionization, the hydrogen Lyman-alpha emission line is difficult to observe, and other emission lines are needed to confirm the redshifts of galaxies. Using a state-of-the-art cosmological hydrodynamics simulation of galaxy formation and evolution, coupled with an emission line model based on Cloudy, we predict the line fluxes of some gravitationally lensed very high-z galaxy candidates. We also discuss their detectability with current and future telescopes.

  3. High Resolution IRS Mapping of the Star-Forming Region NGC 6334 A

    NASA Astrophysics Data System (ADS)

    Sarma, Anuj; Abel, Nicholas; Ferland, Gary; Mayo, Elizabeth; Troland, Thomas

    2005-06-01

    Star formation involves the interplay of thermal, gravitational and magnetic forces. These processes lead to a dynamically evolving region in which O stars ionize the surrounding medium, and the ionized gas expands into the molecular cloud. Of these forces, magnetic effects are the least understood. A detailed analysis of the conditions in star-forming environments requires that one combine magnetic field observations with observations of the ionized, atomic, and molecular gas along with dust. We propose to carry out high-resolution IRS spectroscopy between 9.9 and 37.2 microns of the nearby (1.7 kpc) star-forming region NGC 6334 A. Maps of the magnetic field strength in the molecular gas exist for NGC 6334 A, yet the conditions in the H II region, the surrounding photodissociated region (PDR), and the dynamical interaction between the two regions are poorly understood. In the H II region, our proposed observation will allow us to use well-known infrared diagnostic ratios to determine the electron density, temperature, and the hardness of the continuum source. Spitzer observations of rotational transitions of molecular hydrogen and PAH emission, combined with previous observations, will allow us to determine the hydrogen density, UV radiation flux, and temperature in the PDR. We will combine our observations with theoretical calculations, using the spectral synthesis code Cloudy. Recent improvements to Cloudy include a ~1000-reaction molecular network, the ability to treat the dynamical flow of ionized gas into a molecular cloud, and the effects of magnetic pressure. Matching the observed spectra with theoretical calculations will tell us the physical conditions in the H II region and PDR, the role of magnetic fields in NGC 6334 A, and the importance of dynamics in the region. Overall, IRS observations of NGC 6334 A offer a unique opportunity to study, at high spatial resolution, many of the physical processes in star-forming regions.

  4. The molecular gas reservoir of 6 low-metallicity galaxies from the Herschel Dwarf Galaxy Survey. A ground-based follow-up survey of CO(1-0), CO(2-1), and CO(3-2)

    NASA Astrophysics Data System (ADS)

    Cormier, D.; Madden, S. C.; Lebouteiller, V.; Hony, S.; Aalto, S.; Costagliola, F.; Hughes, A.; Rémy-Ruyer, A.; Abel, N.; Bayet, E.; Bigiel, F.; Cannon, J. M.; Cumming, R. J.; Galametz, M.; Galliano, F.; Viti, S.; Wu, R.

    2014-04-01

    Context. Observations of nearby starburst and spiral galaxies have revealed that molecular gas is the driver of star formation. However, some nearby low-metallicity dwarf galaxies are actively forming stars, yet CO, the most common tracer of this reservoir, is faint, leaving us with a puzzle about how star formation proceeds in these environments. Aims: We aim to quantify the molecular gas reservoir in a subset of 6 galaxies from the Herschel Dwarf Galaxy Survey with newly acquired CO data and to link this reservoir to the observed star formation activity. Methods: We present CO(1-0), CO(2-1), and CO(3-2) observations obtained at the ATNF Mopra 22-m, APEX, and IRAM 30-m telescopes, as well as [C ii] 157μm and [O i] 63μm observations obtained with the Herschel/PACS spectrometer, in the 6 low-metallicity dwarf galaxies Haro 11, Mrk 1089, Mrk 930, NGC 4861, NGC 625, and UM 311. We derived their molecular gas masses using several methods, including the CO-to-H2 conversion factor XCO (both Galactic and metallicity-scaled values) and dust measurements. The molecular and atomic gas reservoirs were compared to the star formation activity. We also constrained the physical conditions of the molecular clouds using the non-LTE code RADEX and the spectral synthesis code Cloudy. Results: We detect CO in 5 of the 6 galaxies, including first detections in Haro 11 (Z ~ 0.4 Z⊙), Mrk 930 (0.2 Z⊙), and UM 311 (0.5 Z⊙), but CO remains undetected in NGC 4861 (0.2 Z⊙). The CO luminosities are low, while [C ii] is bright in these galaxies, resulting in [C ii]/CO(1-0) ≥ 10 000. Our dwarf galaxies are in relatively good agreement with the Schmidt-Kennicutt relation for total gas. They show short molecular depletion timescales, even when considering metallicity-scaled XCO factors. These galaxies are dominated by their H i gas, except Haro 11, which has high star formation efficiency and is dominated by ionized and molecular gas. 
We determine the mass of each ISM phase in Haro 11 using Cloudy and estimate an equivalent XCO factor that is 10 times higher than the Galactic value. Overall, our results confirm the emerging picture that CO suffers from significant selective photodissociation in low-metallicity dwarf galaxies.
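The molecular gas masses referenced above follow from conversions of the CO line luminosity of the form M_H2 = α_CO · L'_CO, with α_CO increased at low metallicity. A hedged sketch follows; the Galactic value and the power-law scaling index are common literature choices used here as illustrative assumptions, not the exact calibration adopted by the authors.

```python
ALPHA_CO_GALACTIC = 4.3  # Msun / (K km s^-1 pc^2), a commonly used value

def alpha_co_metallicity(z_over_zsun, index=-2.0):
    """Metallicity-scaled conversion factor, alpha_CO ∝ (Z/Zsun)^index.
    The index is an assumption; published scalings vary."""
    return ALPHA_CO_GALACTIC * z_over_zsun ** index

def h2_mass(l_co, z_over_zsun=1.0):
    """Molecular gas mass (Msun) from a CO(1-0) line luminosity
    given in K km s^-1 pc^2."""
    return alpha_co_metallicity(z_over_zsun) * l_co

# With index = -2, a galaxy at Z ~ 0.2 Zsun yields 25x more H2 mass
# than the Galactic conversion for the same CO luminosity.
m_gal = h2_mass(1e6)            # Galactic conversion
m_dwarf = h2_mass(1e6, 0.2)     # metallicity-scaled conversion
```

This steep scaling is one way to express the paper's conclusion that CO underrepresents the molecular reservoir at low metallicity; the authors' Cloudy-based estimate for Haro 11 corresponds to a factor of ~10 over the Galactic value.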

  5. Recent Developments in the Code RITRACKS (Relativistic Ion Tracks)

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Ponomarev, Artem L.; Blattnig, Steve R.

    2018-01-01

    The code RITRACKS (Relativistic Ion Tracks) was developed to simulate detailed stochastic radiation track structures of ions of different types and energies. Many new capabilities have been added to the code in recent years. Several options were added to specify the times at which the tracks appear in the irradiated volume, allowing the simulation of dose-rate effects. The code has been used to simulate energy deposition in several targets: spherical, ellipsoidal, and cylindrical. More recently, density changes as well as a spherical shell were implemented for spherical targets, in order to simulate energy deposition in walled tissue-equivalent proportional counters. RITRACKS is used as a part of the new program BDSTracks (Biological Damage by Stochastic Tracks) to simulate several types of chromosome aberrations in various irradiation conditions. The simulation of damage to various DNA structures (linear and chromatin fiber) by direct and indirect effects has been improved and is ongoing. Many improvements were also made to the graphic user interface (GUI), including the addition of several labels allowing changes of units. A new GUI has been added to display the electron ejection vectors. The parallel calculation capabilities, notably the pre- and post-simulation processing on Windows and Linux machines, have been reviewed to make them more portable between different systems. The calculation part is currently maintained in an Atlassian Stash® repository for code tracking and possibly future collaboration.

  6. Kinetic modeling of x-ray laser-driven solid Al plasmas via particle-in-cell simulation

    NASA Astrophysics Data System (ADS)

    Royle, R.; Sentoku, Y.; Mancini, R. C.; Paraschiv, I.; Johzaki, T.

    2017-06-01

    Solid-density plasmas driven by intense x-ray free-electron laser (XFEL) radiation are seeded by sources of nonthermal photoelectrons and Auger electrons that ionize and heat the target via collisions. Simulation codes that are commonly used to model such plasmas, such as collisional-radiative (CR) codes, typically assume a Maxwellian distribution and thus instantaneous thermalization of the source electrons. In this study, we present a detailed description and initial applications of a collisional particle-in-cell code, picls, that has been extended with a self-consistent radiation transport model and Monte Carlo models for photoionization and KLL Auger ionization, enabling the fully kinetic simulation of XFEL-driven plasmas. The code is used to simulate two experiments previously performed at the Linac Coherent Light Source investigating XFEL-driven solid-density Al plasmas. It is shown that picls-simulated pulse transmissions using the Ecker-Kröll continuum-lowering model agree much better with measurements than do simulations using the Stewart-Pyatt model. Good quantitative agreement is also found between the time-dependent picls results and those of analogous simulations by the CR code scfly, which was used in the analysis of the experiments to accurately reproduce the observed Kα emissions and pulse transmissions. Finally, it is shown that the effects of the nonthermal electrons are negligible for the conditions of the particular experiments under investigation.

  7. Automated Concurrent Blackboard System Generation in C++

    NASA Technical Reports Server (NTRS)

    Kaplan, J. A.; McManus, J. W.; Bynum, W. L.

    1999-01-01

    In his 1992 Ph.D. thesis, "Design and Analysis Techniques for Concurrent Blackboard Systems", John McManus defined several performance metrics for concurrent blackboard systems and developed a suite of tools for creating and analyzing such systems. These tools allow a user to analyze a concurrent blackboard system design and predict the performance of the system before any code is written. The design can be modified until simulated performance is satisfactory. Then, the code generator can be invoked to generate automatically all of the code required for the concurrent blackboard system except for the code implementing the functionality of each knowledge source. We have completed the port of the source code generator and a simulator for a concurrent blackboard system. The source code generator generates the necessary C++ source code to implement the concurrent blackboard system using Parallel Virtual Machine (PVM) running on a heterogeneous network of UNIX™ workstations. The concurrent blackboard simulator uses the blackboard specification file to predict the performance of the concurrent blackboard design. The only part of the source code for the concurrent blackboard system that the user must supply is the code implementing the functionality of the knowledge sources.

  8. Relationship between high daily erythemal UV doses, total ozone, surface albedo and cloudiness: An analysis of 30 years of data from Switzerland and Austria

    NASA Astrophysics Data System (ADS)

    Rieder, H. E.; Staehelin, J.; Weihs, P.; Vuilleumier, L.; Maeder, J. A.; Holawe, F.; Blumthaler, M.; Lindfors, A.; Peter, T.; Simic, S.; Spichtinger, P.; Wagner, J. E.; Walker, D.; Ribatet, M.

    2010-10-01

    This work investigates the occurrence frequency of days with high erythemal UV doses at three stations in Switzerland and Austria (Davos, Hoher Sonnblick and Vienna) for the time period 1974-2003. While several earlier studies have reported increases in erythemal UV dose of up to 10% during the last decades, this study focuses on days with high erythemal UV dose, defined as a daily dose at least 15% higher than under 1950s clear-sky conditions (which represent preindustrial conditions with respect to anthropogenic chlorine). Furthermore, the influence of low column ozone, clear-sky/partly cloudy conditions and surface albedo on UV irradiance has been analyzed on an annual and seasonal basis. The results of this study show that in the Central Alpine Region the number of days with high UV dose increased strongly in the early 1990s. A large fraction of all days with high UV dose occurring in the period 1974-2003 was found during the years 1994-2003, namely 40% at Davos, 54% at Hoher Sonnblick and 65% at Vienna. The importance of total ozone, clear-sky/partly cloudy conditions and surface albedo (e.g., depending on snow cover) varies strongly among the seasons. However, overall, the interplay of low total ozone and clear-sky/partly cloudy conditions led to the largest fraction of days showing high erythemal UV dose. Furthermore, an analysis of the synoptic weather situation showed that days with high erythemal UV dose, low total ozone and high relative sunshine duration occur at all three stations more frequently during situations with low pressure gradients or southerly advection.
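The "high UV dose" criterion used above reduces to a simple threshold test against a clear-sky reference. A minimal sketch follows; the reference dose value is a hypothetical input, not data from the study.

```python
def is_high_uv_day(daily_dose, clearsky_1950s_dose, excess=0.15):
    """A day counts as a high-UV-dose day if its erythemal dose is at
    least `excess` (default 15%) above the 1950s clear-sky reference
    for that calendar day. Doses in any consistent unit (e.g. kJ/m^2)."""
    return daily_dose >= (1.0 + excess) * clearsky_1950s_dose

# Hypothetical example: reference dose 3.0, observed dose 3.5.
flag = is_high_uv_day(3.5, 3.0)
```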

  9. Night-time activity forecast by season and weather in a longitudinal design - natural light effects on three years' rest-activity cycles in nursing home residents with dementia.

    PubMed

    Wahnschaffe, Amely; Nowozin, Claudia; Rath, Andreas; Floessner, Theresa; Appelhoff, Stefan; Münch, Mirjam; Kunz, Dieter

    2017-12-01

    Background: Night-time agitation is a frequent symptom of dementia. It often causes nursing home admission and has been linked to circadian rhythm disturbances. A positive influence of light interventions on night-time agitation was shown in several studies. The aim of our study was to investigate whether there is a long-term association between regional weather data (as an indicator of daylight availability) and 24-hour variations of motor activity. Motor activity of 20 elderly nursing home residents living with dementia was analyzed using recordings of continuously worn wrist activity monitors over a three-year period. The average recording duration was 479 ± 206 days per participant (mean ± SD). Regional cloud amount and day length data from the local weather station (latitude: 52°56'N) were included in the analysis to investigate their effects on several activity variables. Nocturnal rest, here defined as the five consecutive hours with the least motor activity during 24 hours (L5), was the most predictable activity variable per participant. There was a significant interaction of night-time activity with day length and cloud amount (F 1,1174 = 4.39; p = 0.036). Night-time activity was higher on cloudy short days than on clear short days (p = 0.007), and it was also higher on cloudy short days than on cloudy long days (p = 0.032). Sufficient zeitgeber (time cue) strength is crucial for elderly people living with dementia, especially during winter, when days are short and skies are cloudy. An activity forecast by season and weather might be a valuable approach for anticipating the appropriate complementary use of electric light and thereby fostering lower night-time activity.

  10. Select strengths and biases of models in representing the Arctic winter boundary layer over sea ice: the Larcform 1 single column model intercomparison

    DOE PAGES

    Pithan, Felix; Ackerman, Andrew; Angevine, Wayne M.; ...

    2016-08-27

    Weather and climate models struggle to represent lower tropospheric temperature and moisture profiles and surface fluxes in Arctic winter, partly because they lack or misrepresent physical processes that are specific to high latitudes. Observations have revealed two preferred states of the Arctic winter boundary layer. In the cloudy state, cloud liquid water limits surface radiative cooling, and temperature inversions are weak and elevated. In the radiatively clear state, strong surface radiative cooling leads to the build-up of surface-based temperature inversions. Many large-scale models lack the cloudy state, and some substantially underestimate inversion strength in the clear state. The transformation from a moist to a cold dry air mass is modeled using an idealized Lagrangian perspective. The trajectory includes both boundary layer states, and the single-column experiment is the first Lagrangian Arctic air formation experiment (Larcform 1) organized within GEWEX GASS (Global atmospheric system studies). The intercomparison reproduces the typical biases of large-scale models: some models lack the cloudy state of the boundary layer due to the representation of mixed-phase microphysics or to the interaction between micro- and macrophysics. In some models, high emissivities of ice clouds or the lack of an insulating snow layer prevent the build-up of surface-based inversions in the radiatively clear state. Models substantially disagree on the amount of cloud liquid water in the cloudy state and on turbulent heat fluxes under clear skies. Finally, observations of air mass transformations including both boundary layer states would allow for a tighter constraint of model behavior.

  11. Select strengths and biases of models in representing the Arctic winter boundary layer over sea ice: the Larcform 1 single column model intercomparison

    PubMed Central

    Pithan, Felix; Ackerman, Andrew; Angevine, Wayne M.; Hartung, Kerstin; Ickes, Luisa; Kelley, Maxwell; Medeiros, Brian; Sandu, Irina; Steeneveld, Gert-Jan; Sterk, HAM; Svensson, Gunilla; Vaillancourt, Paul A.; Zadra, Ayrton

    2017-01-01

    Weather and climate models struggle to represent lower tropospheric temperature and moisture profiles and surface fluxes in Arctic winter, partly because they lack or misrepresent physical processes that are specific to high latitudes. Observations have revealed two preferred states of the Arctic winter boundary layer. In the cloudy state, cloud liquid water limits surface radiative cooling, and temperature inversions are weak and elevated. In the radiatively clear state, strong surface radiative cooling leads to the build-up of surface-based temperature inversions. Many large-scale models lack the cloudy state, and some substantially underestimate inversion strength in the clear state. Here, the transformation from a moist to a cold dry air mass is modelled using an idealized Lagrangian perspective. The trajectory includes both boundary layer states, and the single-column experiment is the first Lagrangian Arctic air formation experiment (Larcform 1) organized within GEWEX GASS (Global atmospheric system studies). The intercomparison reproduces the typical biases of large-scale models: some models lack the cloudy state of the boundary layer due to the representation of mixed-phase microphysics or to the interaction between micro- and macrophysics. In some models, high emissivities of ice clouds or the lack of an insulating snow layer prevent the build-up of surface-based inversions in the radiatively clear state. Models substantially disagree on the amount of cloud liquid water in the cloudy state and on turbulent heat fluxes under clear skies. Observations of air mass transformations including both boundary layer states would allow for a tighter constraint of model behaviour. PMID:28966718

  12. A cloudiness transition in a marine boundary layer

    NASA Technical Reports Server (NTRS)

    Betts, Alan K.; Boers, Reinout

    1990-01-01

    Boundary layer cloudiness plays several important roles in the energy budget of the earth. Low-level stratocumulus are highly reflective clouds which reduce the net incoming shortwave radiation at the earth's surface. Climatically, the transition to a small area fraction of scattered cumulus clouds occurs as the air flows over warmer water. Although these clouds reflect less sunlight, they still play an important role in the boundary layer equilibrium by transporting water vapor upwards and enhancing the surface evaporation. The First ISCCP (International Satellite Cloud Climatology Project) Regional Experiment (FIRE) included a marine stratocumulus experiment off the southern California coast from June 29 to July 19, 1987. The objectives of this experiment were to study the controls on fractional cloudiness, and to assess the role of cloud-top entrainment instability (CTEI) and mesoscale structure in determining cloud type. The focus is on one research day, July 7, 1987, when coordinated aircraft missions were flown by four research aircraft, centered on a LANDSAT scene at 1830 UTC. The remarkable feature of this LANDSAT scene is the transition from a clear sky in the west through broken cumulus to solid stratocumulus in the east. The dynamic and thermodynamic structure of this transition in cloudiness is analyzed using data from the NCAR Electra. By averaging the aircraft data, the internal structure of the different cloud regimes is documented, and it is shown that the transition between broken cumulus and stratocumulus is associated with a change in structure with respect to the CTEI condition. However, this results not from sea surface temperature changes, but mostly from a transition in the air above the inversion, and the breakup appears to occur at a structure on the unstable side of the wet virtual adiabat.

  13. Colonic availability of polyphenols and D-(-)-quinic acid after apple smoothie consumption.

    PubMed

    Hagl, Stephanie; Deusser, Hannah; Soyalan, Buelent; Janzowski, Christine; Will, Frank; Dietrich, Helmut; Albert, Franz Werner; Rohner, Simone; Richling, Elke

    2011-03-01

    The aim of this study was to determine the amounts of polyphenols and D-(-)-quinic acid reaching the ileostomy bags of probands (and thus the colon in healthy humans) after ingestion of apple smoothie, a beverage containing 60% cloudy apple juice and 40% apple puree. Ten healthy ileostomy subjects each ingested 0.7 L of apple smoothie (one bottle). Their ileostomy bags were collected directly before and 1, 2, 4, 6 and 8 h after smoothie consumption, and the polyphenol and D-(-)-quinic acid contents of the ileostomy fluids were examined using HPLC-DAD and HPLC-MS/MS. The total polyphenol and D-(-)-quinic acid content of the apple smoothie was determined to be 1955.6±124.6 mg/0.7 L, which is very high compared to cloudy apple juices. The most abundant substances found in the ileostomy bags were oligomeric procyanidins (705.6±197.9 mg), D-(-)-quinic acid (363.4±235.5 mg) and 5-caffeoylquinic acid (76.7±26.8 mg). Overall recovery of ingested polyphenols and D-(-)-quinic acid in the ileostomy bags was 63.3±16.1%. The amounts of polyphenols and D-(-)-quinic acid reaching the ileostomy bags are considerably higher after apple smoothie consumption than after the consumption of cloudy apple juice or cider. These results suggest that the food matrix might affect the colonic availability of polyphenols, and apple smoothies could be more effective in the prevention of chronic colon diseases than both cloudy apple juice and apple cider. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. How well does the Rayleigh model describe the E-vector distribution of skylight in clear and cloudy conditions? A full-sky polarimetric study.

    PubMed

    Suhai, Bence; Horváth, Gábor

    2004-09-01

    We present the first high-resolution maps of Rayleigh behavior in clear and cloudy sky conditions measured by full-sky imaging polarimetry at the wavelengths of 650 nm (red), 550 nm (green), and 450 nm (blue) versus the solar elevation angle θ_s. Our maps display those celestial areas at which the deviation Δα = |α_meas − α_Rayleigh| is below the threshold α_thres = 5°, where α_meas is the angle of polarization of skylight measured by full-sky imaging polarimetry, and α_Rayleigh is the celestial angle of polarization calculated on the basis of the single-scattering Rayleigh model. From these maps we derived the proportion r of the full sky for which the single-scattering Rayleigh model describes well (with an accuracy of Δα = 5°) the E-vector alignment of skylight. Depending on θ_s, r is high for clear skies, especially for low solar elevations (40% < r < 70% for θ_s ≤ 13°). Depending on the cloud cover and the solar illumination, r decreases more or less under cloudy conditions, but sometimes its value remains remarkably high, especially at low solar elevations (r_max = 69% for θ_s = 0°). The proportion r of the sky that follows the Rayleigh model is usually higher at shorter wavelengths under clear as well as cloudy sky conditions. This partly explains why shorter wavelengths are generally preferred by animals navigating by means of celestial polarization. We found that the celestial E-vector pattern generally follows the Rayleigh pattern well, which is a fundamental hypothesis in studies of animal orientation and human navigation (e.g., in aircraft flying near the geomagnetic poles and using a polarization sky compass) with the use of the celestial α pattern.
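In the single-scattering Rayleigh model referenced above, the skylight polarization at a sky point depends only on the scattering angle γ between the sun direction and the viewing direction: the E-vector lies perpendicular to the scattering plane, and the degree of polarization is sin²γ / (1 + cos²γ). A sketch under simple geometric assumptions (the elevation/azimuth convention here is illustrative, not the paper's):

```python
import numpy as np

def unit_vector(elevation_deg, azimuth_deg):
    """Unit vector from elevation and azimuth in degrees."""
    el, az = np.radians(elevation_deg), np.radians(azimuth_deg)
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

def rayleigh_degree_of_polarization(sun_dir, view_dir):
    """d = sin^2(gamma) / (1 + cos^2(gamma)) for single Rayleigh
    scattering, where gamma is the sun-view scattering angle."""
    cosg = np.clip(np.dot(sun_dir, view_dir), -1.0, 1.0)
    return (1.0 - cosg ** 2) / (1.0 + cosg ** 2)

sun = unit_vector(13.0, 180.0)   # low sun, as in the study's theta_s <= 13 deg
zenith = unit_vector(90.0, 0.0)
d = rayleigh_degree_of_polarization(sun, zenith)  # strong polarization ~77 deg from the sun
```

Polarization is maximal 90° from the sun and vanishes toward and away from it, which is why low solar elevations give the large well-polarized sky fractions the study reports.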

  15. Select strengths and biases of models in representing the Arctic winter boundary layer over sea ice: the Larcform 1 single column model intercomparison.

    PubMed

    Pithan, Felix; Ackerman, Andrew; Angevine, Wayne M; Hartung, Kerstin; Ickes, Luisa; Kelley, Maxwell; Medeiros, Brian; Sandu, Irina; Steeneveld, Gert-Jan; Sterk, Ham; Svensson, Gunilla; Vaillancourt, Paul A; Zadra, Ayrton

    2016-09-01

    Weather and climate models struggle to represent lower tropospheric temperature and moisture profiles and surface fluxes in Arctic winter, partly because they lack or misrepresent physical processes that are specific to high latitudes. Observations have revealed two preferred states of the Arctic winter boundary layer. In the cloudy state, cloud liquid water limits surface radiative cooling, and temperature inversions are weak and elevated. In the radiatively clear state, strong surface radiative cooling leads to the build-up of surface-based temperature inversions. Many large-scale models lack the cloudy state, and some substantially underestimate inversion strength in the clear state. Here, the transformation from a moist to a cold dry air mass is modelled using an idealized Lagrangian perspective. The trajectory includes both boundary layer states, and the single-column experiment is the first Lagrangian Arctic air formation experiment (Larcform 1) organized within GEWEX GASS (Global atmospheric system studies). The intercomparison reproduces the typical biases of large-scale models: some models lack the cloudy state of the boundary layer due to the representation of mixed-phase microphysics or to the interaction between micro- and macrophysics. In some models, high emissivities of ice clouds or the lack of an insulating snow layer prevent the build-up of surface-based inversions in the radiatively clear state. Models substantially disagree on the amount of cloud liquid water in the cloudy state and on turbulent heat fluxes under clear skies. Observations of air mass transformations including both boundary layer states would allow for a tighter constraint of model behaviour.

  16. Preliminary Analysis of the Transient Reactor Test Facility (TREAT) with PROTEUS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Connaway, H. M.; Lee, C. H.

    The neutron transport code PROTEUS has been used to perform preliminary simulations of the Transient Reactor Test Facility (TREAT). TREAT is an experimental reactor designed for the testing of nuclear fuels and other materials under transient conditions. It operated from 1959 to 1994, when it was placed on non-operational standby. The restart of TREAT to support the U.S. Department of Energy’s resumption of transient testing is currently underway. Both single-assembly and assembly-homogenized full-core models have been evaluated. Simulations were performed using a historic set of WIMS-ANL-generated cross-sections as well as a new set of Serpent-generated cross-sections. To support this work, further analyses were also performed using additional codes in order to investigate particular aspects of TREAT modeling. DIF3D and the Monte Carlo codes MCNP and Serpent were utilized in these studies. MCNP and Serpent were used to evaluate the effect of geometry homogenization on the simulation results and to support code-to-code comparisons. New meshes for the PROTEUS simulations were created using the CUBIT toolkit, with additional meshes generated via conversion of selected DIF3D models to support code-to-code verifications. All current analyses have focused on code-to-code verifications, with additional verification and validation studies planned. The analysis of TREAT with PROTEUS-SN is an ongoing project. This report documents the studies that have been performed thus far and highlights key challenges to address in future work.

  17. A hybrid gyrokinetic ion and isothermal electron fluid code for astrophysical plasma

    NASA Astrophysics Data System (ADS)

    Kawazura, Y.; Barnes, M.

    2018-05-01

    This paper describes a new code for simulating astrophysical plasmas that solves a hybrid model composed of gyrokinetic ions (GKI) and an isothermal electron fluid (ITEF) (Schekochihin et al. 2009 [9]). This model captures ion kinetic effects that are important near the ion gyro-radius scale while electron kinetic effects are ordered out by an electron-ion mass ratio expansion. The code is developed by incorporating the ITEF approximation into AstroGK, an Eulerian δf gyrokinetics code specialized to a slab geometry (Numata et al. 2010 [41]). The new code treats the linear terms in the ITEF equations implicitly while the nonlinear terms are treated explicitly. We show linear and nonlinear benchmark tests to prove the validity and applicability of the simulation code. Since the fast electron timescale is eliminated by the mass ratio expansion, the Courant-Friedrichs-Lewy condition is much less restrictive than in full gyrokinetic codes; the present hybrid code runs ~2√(mi/me) ≈ 100 times faster than AstroGK with a single ion species and kinetic electrons, where mi/me is the ion-electron mass ratio. The improvement in computational time makes it feasible to execute ion-scale gyrokinetic simulations with high velocity-space resolution and to run multiple simulations to determine the dependence of turbulent dynamics on parameters such as the electron-ion temperature ratio and plasma beta.
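    As a sanity check on the quoted factor, 2√(mi/me) can be evaluated directly for hydrogen; this is a back-of-the-envelope sketch, not a calculation from the paper:

    ```python
    import math

    # Ion-to-electron mass ratio for hydrogen (proton/electron).
    mass_ratio = 1836.15

    # Hybrid-code speedup over full gyrokinetics quoted in the
    # abstract: ~ 2 * sqrt(mi/me).
    speedup = 2.0 * math.sqrt(mass_ratio)
    print(round(speedup))  # -> 86, i.e. of order 100
    ```

    The factor comes from eliminating the fast electron timescale: the CFL-limiting speed drops by √(mi/me), and the electron velocity-space grid is removed.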

  18. Particle kinetic simulation of high altitude hypervelocity flight

    NASA Technical Reports Server (NTRS)

    Boyd, Iain; Haas, Brian L.

    1994-01-01

    Rarefied flows about hypersonic vehicles entering the upper atmosphere or through nozzles expanding into a near vacuum may only be simulated accurately with a direct simulation Monte Carlo (DSMC) method. Under this grant, researchers enhanced the models employed in the DSMC method and performed simulations in support of existing NASA projects or missions. DSMC models were developed and validated for simulating rotational, vibrational, and chemical relaxation in high-temperature flows, including effects of quantized anharmonic oscillators and temperature-dependent relaxation rates. State-of-the-art advancements were made in simulating coupled vibration-dissociation recombination for post-shock flows. Models were also developed to compute vehicle surface temperatures directly in the code rather than requiring isothermal estimates. These codes were instrumental in simulating aerobraking of NASA's Magellan spacecraft during orbital maneuvers to assess heat transfer and aerodynamic properties of the delicate satellite. NASA also depended upon simulations of entry of the Galileo probe into the atmosphere of Jupiter to provide drag and flow field information essential for accurate interpretation of an onboard experiment. Finally, the codes have been used extensively to simulate expanding nozzle flows in low-power thrusters in support of propulsion activities at NASA-Lewis. Detailed comparisons between continuum calculations and DSMC results helped to quantify the limitations of continuum CFD codes in rarefied applications.

  19. Metrics for comparing dynamic earthquake rupture simulations

    USGS Publications Warehouse

    Barall, Michael; Harris, Ruth A.

    2014-01-01

    Earthquakes are complex events that involve a myriad of interactions among multiple geologic features and processes. One of the tools available to assist with their study is computer simulation, particularly dynamic rupture simulation. A dynamic rupture simulation is a numerical model of the physical processes that occur during an earthquake. Starting with the fault geometry, friction constitutive law, initial stress conditions, and assumptions about the condition and response of the near‐fault rocks, a dynamic earthquake rupture simulation calculates the evolution of fault slip and stress over time as part of the elastodynamic numerical solution (Ⓔ see the simulation description in the electronic supplement to this article). The complexity of the computations in a dynamic rupture simulation makes it challenging to verify that the computer code is operating as intended, because there are no exact analytic solutions against which these codes’ results can be directly compared. One approach for checking if dynamic rupture computer codes are working satisfactorily is to compare each code’s results with the results of other dynamic rupture codes running the same earthquake simulation benchmark. To perform such a comparison consistently, it is necessary to have quantitative metrics. In this paper, we present a new method for quantitatively comparing the results of dynamic earthquake rupture computer simulation codes.
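    As an illustration of what such a quantitative metric can look like, the sketch below computes a normalized RMS misfit between two codes' slip-rate time series. This is a generic example, not the specific metric the paper defines:

    ```python
    import numpy as np

    def rms_misfit(a, b):
        """Normalized RMS difference between two equally sampled time
        series from different rupture codes (illustrative metric only,
        not the paper's definition)."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        return np.sqrt(np.mean((a - b) ** 2)) / np.sqrt(np.mean(a ** 2))

    t = np.linspace(0.0, 1.0, 200)
    code_a = np.sin(2 * np.pi * t)          # slip rate from code A
    code_b = np.sin(2 * np.pi * t) + 0.01   # code B, small constant offset
    misfit = rms_misfit(code_a, code_b)
    print(misfit)  # small value, ~1% relative misfit
    ```

    In benchmark exercises such a scalar per station and per quantity lets many code pairs be compared consistently.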

  20. Analyzing simulation-based PRA data through traditional and topological clustering: A BWR station blackout case study

    DOE PAGES

    Maljovec, D.; Liu, S.; Wang, B.; ...

    2015-07-14

    Here, dynamic probabilistic risk assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP and MELCOR) with simulation controller codes (e.g., RAVEN and ADAPT). Whereas system simulator codes model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic and operating procedures) and stochastic (e.g., component failures and parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by sampling values of a set of parameters and simulating the system behavior for that specific set of parameter values. For complex systems, a major challenge in using DPRA methodologies is to analyze the large number of scenarios generated, where clustering techniques are typically employed to better organize and interpret the data. In this paper, we focus on the analysis of two nuclear simulation datasets that are part of the risk-informed safety margin characterization (RISMC) boiling water reactor (BWR) station blackout (SBO) case study. We provide the domain experts a software tool that encodes traditional and topological clustering techniques within an interactive analysis and visualization environment, for understanding the structures of such high-dimensional nuclear simulation datasets. We demonstrate through our case study that both types of clustering techniques complement each other for enhanced structural understanding of the data.
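    The "traditional" side of such an analysis can be sketched with plain k-means (Lloyd's algorithm) on synthetic scenario outcomes. The variables and values below are illustrative stand-ins, not data from the RISMC BWR SBO study:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic stand-in for DPRA scenario outcomes:
    # (peak clad temperature [K], time to battery depletion [h]).
    ok   = rng.normal([900.0,  6.0], [40.0, 0.5], size=(50, 2))
    fail = rng.normal([1400.0, 2.0], [40.0, 0.5], size=(50, 2))
    data = np.vstack([ok, fail])

    # Plain k-means: alternate nearest-center assignment and
    # centroid update until the two scenario groups separate.
    centers = data[rng.choice(len(data), 2, replace=False)]
    for _ in range(20):
        labels = np.argmin(np.linalg.norm(data[:, None] - centers, axis=2),
                           axis=1)
        centers = np.array([data[labels == k].mean(axis=0)
                            if np.any(labels == k) else centers[k]
                            for k in range(2)])

    temps = np.sort(centers[:, 0])   # the two temperature centroids
    print(temps.round())
    ```

    Topological clustering, by contrast, partitions the data by the structure of a response surface rather than by distance alone, which is why the paper treats the two approaches as complementary.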

  1. Electro-Thermal-Mechanical Simulation Capability Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, D

    This is the Final Report for LDRD 04-ERD-086, 'Electro-Thermal-Mechanical Simulation Capability'. The accomplishments are well documented in five peer-reviewed publications and six conference presentations and hence will not be detailed here. The purpose of this LDRD was to research and develop numerical algorithms for three-dimensional (3D) Electro-Thermal-Mechanical simulations. LLNL has long been a world leader in the area of computational mechanics, and recently several mechanics codes have become 'multiphysics' codes with the addition of fluid dynamics, heat transfer, and chemistry. However, these multiphysics codes do not incorporate the electromagnetics that is required for a coupled Electro-Thermal-Mechanical (ETM) simulation. There are numerous applications for an ETM simulation capability, such as explosively-driven magnetic flux compressors, electromagnetic launchers, inductive heating and mixing of metals, and MEMS. A robust ETM simulation capability will enable LLNL physicists and engineers to better support current DOE programs, and will prepare LLNL for some very exciting long-term DoD opportunities. We define a coupled Electro-Thermal-Mechanical (ETM) simulation as a simulation that solves, in a self-consistent manner, the equations of electromagnetics (primarily statics and diffusion), heat transfer (primarily conduction), and non-linear mechanics (elastic-plastic deformation, and contact with friction). There is no existing parallel 3D code for simulating ETM systems at LLNL or elsewhere. While there are numerous magnetohydrodynamic codes, these codes are designed for astrophysics, magnetic fusion energy, laser-plasma interaction, etc. and do not attempt to accurately model electromagnetically driven solid mechanics. 
This project responds to the Engineering R&D Focus Areas of Simulation and Energy Manipulation, and addresses the specific problem of Electro-Thermal-Mechanical simulation for design and analysis of energy manipulation systems such as magnetic flux compression generators and railguns. This project complements ongoing DNT projects that have an experimental emphasis. Our research efforts have been encapsulated in the Diablo and ALE3D simulation codes. This new ETM capability already has both internal and external users, and has spawned additional research in plasma railgun technology. By developing this capability Engineering has become a world-leader in ETM design, analysis, and simulation. This research has positioned LLNL to be able to compete for new business opportunities with the DoD in the area of railgun design. We currently have a three-year $1.5M project with the Office of Naval Research to apply our ETM simulation capability to railgun bore life issues and we expect to be a key player in the railgun community.

  2. A Proposal of Monitoring and Forecasting Method for Crustal Activity in and around Japan with 3-dimensional Heterogeneous Medium Using a Large-scale High-fidelity Finite Element Simulation

    NASA Astrophysics Data System (ADS)

    Hori, T.; Agata, R.; Ichimura, T.; Fujita, K.; Yamaguchi, T.; Takahashi, N.

    2017-12-01

    Although we can now obtain continuous, dense surface deformation data on land and partly on the sea floor, the obtained data are not fully utilized for monitoring and forecasting of crustal activity, such as spatio-temporal variation in slip velocity on the plate interface including earthquakes, seismic wave propagation, and crustal deformation. To construct a system for monitoring and forecasting, it is necessary to develop a physics-based data analysis system including (1) a structural model with the 3D geometry of the plate interface and material properties such as elasticity and viscosity, (2) a calculation code for crustal deformation and seismic wave propagation using (1), and (3) an inverse analysis or data assimilation code for both structure and fault slip using (1) & (2). To accomplish this, it is at least necessary to develop a highly reliable large-scale simulation code to calculate crustal deformation and seismic wave propagation in a 3D heterogeneous structure. An unstructured FE non-linear seismic wave simulation code has been developed, which achieved physics-based urban earthquake simulation enhanced by 1.08 T DOF x 6.6 K time-steps. A high-fidelity FEM simulation code with a mesh generator has also been developed to calculate crustal deformation in and around Japan with complicated surface topography and subducting plate geometry at 1 km mesh resolution. The crustal deformation code has been further improved and achieved 2.05 T DOF with 45 m resolution on the plate interface. This high-resolution analysis enables computation of the change of stress acting on the plate interface. Further, for inverse analyses, a waveform inversion code for modeling 3D crustal structure has been developed, and the high-fidelity FEM code has been extended with an adjoint method for estimating fault slip and asthenosphere viscosity. Hence, we have large-scale simulation and analysis tools for monitoring. 
We are developing methods for forecasting the slip velocity variation on the plate interface. Although the prototype is for an elastic half-space model, we are applying it to a 3D heterogeneous structure with the high-fidelity FE model. Furthermore, the large-scale simulation codes for monitoring are being implemented on GPU clusters, and the analysis tools are being developed to include other functions such as examination of model errors.

  3. LOOPREF: A Fluid Code for the Simulation of Coronal Loops

    NASA Technical Reports Server (NTRS)

    deFainchtein, Rosalinda; Antiochos, Spiro; Spicer, Daniel

    1998-01-01

    This report documents the code LOOPREF. LOOPREF is a semi-one dimensional finite element code that is especially well suited to simulate coronal-loop phenomena. It has a full implementation of adaptive mesh refinement (AMR), which is crucial for this type of simulation. The AMR routines are an improved version of AMR1D. LOOPREF's versatility makes it suitable for simulating a wide variety of problems. In addition to efficiently providing very high resolution in rapidly changing regions of the domain, it is equipped to treat loops of variable cross section, any non-linear form of heat conduction, shocks, gravitational effects, and radiative loss.

  4. Development of the V4.2m5 and V5.0m0 Multigroup Cross Section Libraries for MPACT for PWR and BWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Clarno, Kevin T.; Gentry, Cole

    2017-03-01

    The MPACT neutronics module of the Consortium for Advanced Simulation of Light Water Reactors (CASL) core simulator is a 3-D whole-core transport code being developed for the CASL toolset, the Virtual Environment for Reactor Analysis (VERA). Key characteristics of the MPACT code include (1) a subgroup method for resonance self-shielding and (2) a whole-core transport solver with a 2-D/1-D synthesis method. The MPACT code requires a cross section library to support all of its core simulation capabilities; this library is the component with the greatest influence on simulation accuracy.

  5. An evaluation of TRAC-PF1/MOD1 computer code performance during posttest simulations of Semiscale MOD-2C feedwater line break transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, D.G.; Watkins, J.C.

    This report documents an evaluation of the TRAC-PF1/MOD1 reactor safety analysis computer code during computer simulations of feedwater line break transients. The experimental data base for the evaluation included the results of three bottom feedwater line break tests performed in the Semiscale Mod-2C test facility. The tests modeled 14.3% (S-FS-7), 50% (S-FS-11), and 100% (S-FS-6B) breaks. The test facility and the TRAC-PF1/MOD1 model used in the calculations are described. Evaluations of the accuracy of the calculations are presented in the form of comparisons of measured and calculated histories of selected parameters associated with the primary and secondary systems. In addition to evaluating the accuracy of the code calculations, the computational performance of the code during the simulations was assessed. A conclusion was reached that the code is capable of making feedwater line break transient calculations efficiently, but there is room for significant improvements in the simulations that were performed. Recommendations are made for follow-on investigations to determine how to improve future feedwater line break calculations and for code improvements to make the code easier to use.

  6. Coupled field effects in BWR stability simulations using SIMULATE-3K

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borkowski, J.; Smith, K.; Hagrman, D.

    1996-12-31

    The SIMULATE-3K code is the transient analysis version of the Studsvik advanced nodal reactor analysis code, SIMULATE-3. Recent developments have focused on further broadening the range of transient applications by refinement of core thermal-hydraulic models and on comparison with boiling water reactor (BWR) stability measurements performed at Ringhals unit 1, during the startups of cycles 14 through 17.

  7. Dynamic mineral clouds on HD 189733b. II. Monte Carlo radiative transfer for 3D cloudy exoplanet atmospheres: combining scattering and emission spectra

    NASA Astrophysics Data System (ADS)

    Lee, G. K. H.; Wood, K.; Dobbs-Dixon, I.; Rice, A.; Helling, Ch.

    2017-05-01

    Context. As the 3D spatial properties of exoplanet atmospheres are being observed in increasing detail by current and new generations of telescopes, the modelling of the 3D scattering effects of cloud forming atmospheres with inhomogeneous opacity structures becomes increasingly important to interpret observational data. Aims: We model the scattering and emission properties of a simulated cloud forming, inhomogeneous opacity, hot Jupiter atmosphere of HD 189733b. We compare our results to available Hubble Space Telescope (HST) and Spitzer data and quantify the effects of 3D multiple scattering on observable properties of the atmosphere. We discuss potential observational properties of HD 189733b for the upcoming Transiting Exoplanet Survey Satellite (TESS) and CHaracterising ExOPlanet Satellite (CHEOPS) missions. Methods: We developed a Monte Carlo radiative transfer code and applied it to post-process output of our 3D radiative-hydrodynamic, cloud formation simulation of HD 189733b. We employed three variance reduction techniques, i.e., next event estimation, survival biasing, and composite emission biasing, to improve the signal-to-noise ratio of the output. For cloud particle scattering events, we constructed a log-normal area distribution from the 3D cloud formation radiative-hydrodynamic results, which is stochastically sampled in order to model the Rayleigh and Mie scattering behaviour of a mixture of grain sizes. Results: Stellar photon packets incident on the eastern dayside hemisphere show predominantly Rayleigh, single-scattering behaviour, while multiple scattering occurs on the western hemisphere. Combined scattered and thermal emitted light predictions are consistent with published HST and Spitzer secondary transit observations. Our model predictions are also consistent with geometric albedo constraints from optical wavelength ground-based polarimetry and HST B band measurements. 
We predict an apparent geometric albedo for HD 189733b of 0.205 and 0.229, in the TESS and CHEOPS photometric bands respectively. Conclusions: Modelling the 3D geometric scattering effects of clouds on observables of exoplanet atmospheres provides an important contribution to the attempt to determine the cloud properties of these objects. Comparisons between TESS and CHEOPS photometry may provide qualitative information on the cloud properties of nearby hot Jupiter exoplanets.
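    The common idea behind weight-based variance reduction techniques such as those named above is to replace random packet termination with a deterministic weight. A toy 1-D absorbing-slab sketch (a generic illustration, not the paper's 3-D code):

    ```python
    import math
    import random

    random.seed(3)

    def analog_transmission(tau, n):
        # Analog Monte Carlo: a packet crosses the absorbing slab
        # only if its sampled free path exceeds the optical depth.
        hits = sum(1 for _ in range(n)
                   if -math.log(1.0 - random.random()) > tau)
        return hits / n

    def weighted_transmission(tau):
        # Weight-based estimator: instead of killing packets, carry
        # the attenuation exp(-tau) as a weight.  For this trivial
        # problem it collapses to the exact answer; real codes apply
        # the same idea per interaction event.
        return math.exp(-tau)

    tau = 3.0
    w = weighted_transmission(tau)         # exactly exp(-3)
    a_est = analog_transmission(tau, 100000)  # noisy estimate of exp(-3)
    print(w, a_est)
    ```

    The analog estimate fluctuates around exp(-τ) ≈ 0.0498, while the weighted estimator has zero variance here, which is why weighting schemes dramatically improve signal-to-noise in optically thick regions.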

  8. Design of orbital debris shields for oblique hypervelocity impact

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.

    1994-01-01

    A new impact debris propagation code was written to link CTH simulations of space debris shield perforation to the Lagrangian finite element code DYNA3D, for space structure wall impact simulations. This software (DC3D) simulates debris cloud evolution using a nonlinear elastic-plastic deformable particle dynamics model, and renders computationally tractable the supercomputer simulation of oblique impacts on Whipple shield protected structures. Comparison of three dimensional, oblique impact simulations with experimental data shows good agreement over a range of velocities of interest in the design of orbital debris shielding. Source code developed during this research is provided on the enclosed floppy disk. An abstract based on the work described was submitted to the 1994 Hypervelocity Impact Symposium.

  9. Modeling and Simulation of Explosively Driven Electromechanical Devices

    NASA Astrophysics Data System (ADS)

    Demmie, Paul N.

    2002-07-01

    Components that store electrical energy in ferroelectric materials and produce currents when their permittivity is explosively reduced are used in a variety of applications. The modeling and simulation of such devices is a challenging problem since one has to represent the coupled physics of detonation, shock propagation, and electromagnetic field generation. The high fidelity modeling and simulation of complicated electromechanical devices was not feasible prior to having the Accelerated Strategic Computing Initiative (ASCI) computers and the ASCI developed codes at Sandia National Laboratories (SNL). The EMMA computer code is used to model such devices and simulate their operation. In this paper, I discuss the capabilities of the EMMA code for the modeling and simulation of one such electromechanical device, a slim-loop ferroelectric (SFE) firing set.

  10. Computer and laboratory simulation of interactions between spacecraft surfaces and charged-particle environments

    NASA Technical Reports Server (NTRS)

    Stevens, N. J.

    1979-01-01

    Cases where the charged-particle environment acts on the spacecraft (e.g., spacecraft charging phenomena) and cases where a system on the spacecraft causes the interaction (e.g., high voltage space power systems) are considered. Both categories were studied in ground simulation facilities to understand the processes involved and to measure the pertinent parameters. Computer simulations are based on the NASA Charging Analyzer Program (NASCAP) code. Analytical models are developed in this code and verified against the experimental data. Extrapolations from the small test samples to space conditions are made with this code. Typical results from laboratory and computer simulations are presented for both types of interactions. Extrapolations from these simulations to performance in space environments are discussed.

  11. SAC: Sheffield Advanced Code

    NASA Astrophysics Data System (ADS)

    Griffiths, Mike; Fedun, Viktor; Mumford, Stuart; Gent, Frederick

    2013-06-01

    The Sheffield Advanced Code (SAC) is a fully non-linear MHD code designed for simulations of linear and non-linear wave propagation in gravitationally strongly stratified magnetized plasma. It was developed primarily for the forward modelling of helioseismological processes and for the coupling processes in the solar interior, photosphere, and corona; it is built on the well-known VAC platform that allows robust simulation of the macroscopic processes in gravitationally stratified (non-)magnetized plasmas. The code has no limitations of simulation length in time imposed by complications originating from the upper boundary, nor does it require implementation of special procedures to treat the upper boundaries. SAC inherited its modular structure from VAC, thereby allowing modification to easily add new physics.

  12. Large Eddy Simulations and Turbulence Modeling for Film Cooling

    NASA Technical Reports Server (NTRS)

    Acharya, Sumanta

    1999-01-01

    The objective of the research is to perform Direct Numerical Simulations (DNS) and Large Eddy Simulations (LES) of the film cooling process, and to evaluate and improve advanced forms of the two-equation turbulence models for turbine blade surface flow analysis. The DNS/LES were used to resolve the large eddies within the flow field near the coolant jet location. The work involved code development and application of the developed codes to film cooling problems. Five different codes were developed and utilized to perform this research. This report presents a summary of the development of the codes and their applications to analyze the turbulence properties at locations near coolant injection holes.

  13. Collaborative Simulation Grid: Multiscale Quantum-Mechanical/Classical Atomistic Simulations on Distributed PC Clusters in the US and Japan

    NASA Technical Reports Server (NTRS)

    Kikuchi, Hideaki; Kalia, Rajiv; Nakano, Aiichiro; Vashishta, Priya; Iyetomi, Hiroshi; Ogata, Shuji; Kouno, Takahisa; Shimojo, Fuyuki; Tsuruta, Kanji; Saini, Subhash

    2002-01-01

    A multidisciplinary, collaborative simulation has been performed on a Grid of geographically distributed PC clusters. The multiscale simulation approach seamlessly combines i) atomistic simulation based on the molecular dynamics (MD) method and ii) quantum mechanical (QM) calculation based on the density functional theory (DFT), so that accurate but less scalable computations are performed only where they are needed. The multiscale MD/QM simulation code has been Grid-enabled using i) a modular, additive hybridization scheme, ii) multiple QM clustering, and iii) computation/communication overlapping. The Gridified MD/QM simulation code has been used to study environmental effects of water molecules on fracture in silicon. A preliminary run of the code has achieved a parallel efficiency of 94% on 25 PCs distributed over 3 PC clusters in the US and Japan, and a larger test involving 154 processors on 5 distributed PC clusters is in progress.

  14. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    NASA Astrophysics Data System (ADS)

    Medley, S. S.; Liu, D.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.

    2016-02-01

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a ‘beam-in-a-box’ model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.
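    The generation-tracking idea can be sketched with a toy Monte Carlo in which each halo neutral either ionizes, exits the box, or charge-exchanges into the next generation. The probabilities below are illustrative assumptions, not NSTX-U physics:

    ```python
    import random

    random.seed(1)

    def mean_halo_generations(p_ionize=0.3, p_exit=0.2, n=10000):
        """Toy model of generation tracking: a history ends when the
        neutral ionizes or leaves the 'beam box'; otherwise a charge
        exchange spawns the next halo generation.  Probabilities are
        illustrative stand-ins."""
        total = 0
        for _ in range(n):
            gen = 0
            while random.random() >= p_ionize + p_exit:
                gen += 1      # charge exchange: next halo generation
            total += gen
        return total / n

    mean_gen = mean_halo_generations()
    print(mean_gen)  # expected ~(1-p)/p = 1.0 for p = 0.5
    ```

    With a termination probability of 0.5 per step, each primary deposition spawns about one halo generation on average, consistent with the abstract's point that halo neutral density can be comparable to the beam neutral density.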

  15. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Medley, S. S.; Liu, D.; Gorelenkova, M. V.

    2016-01-12

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a 'beam-in-a-box' model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.

  16. Finite element methods in a simulation code for offshore wind turbines

    NASA Astrophysics Data System (ADS)

    Kurz, Wolfgang

    1994-06-01

    Offshore installation of wind turbines will become important for electricity supply in the future. Wind conditions above sea are more favorable than on land, and appropriate locations on land are limited and restricted. The dynamic behavior of advanced wind turbines is investigated with digital simulations to reduce time and cost in the development and design phase. A wind turbine can be described and simulated as a multi-body system containing rigid and flexible bodies. Simulation of the non-linear motion of such a mechanical system using a multi-body system code is much faster than using a finite element code. However, a modal representation of the deformation field has to be incorporated in the multi-body system approach. The equations of motion of flexible bodies due to deformation are generated by finite element calculations. At Delft University of Technology the simulation code DUWECS has been developed, which simulates the non-linear behavior of wind turbines in the time domain. The wind turbine is divided into subcomponents which are represented by modules (e.g. rotor, tower, etc.).

  17. Schnek: A C++ library for the development of parallel simulation codes on regular grids

    NASA Astrophysics Data System (ADS)

    Schmitz, Holger

    2018-05-01

    A large number of algorithms across the field of computational physics are formulated on grids with a regular topology. We present Schnek, a library that enables fast development of parallel simulations on regular grids. Schnek contains a number of easy-to-use modules that greatly reduce the amount of administrative code for large-scale simulation codes. The library provides an interface for reading simulation setup files with a hierarchical structure. The structure of the setup file is translated into a hierarchy of simulation modules that the developer can specify. The reader parses and evaluates mathematical expressions and initialises variables or grid data. This enables developers to write modular and flexible simulation codes with minimal effort. Regular grids of arbitrary dimension are defined as well as mechanisms for defining physical domain sizes, grid staggering, and ghost cells on these grids. Ghost cells can be exchanged between neighbouring processes using MPI with a simple interface. The grid data can easily be written into HDF5 files using serial or parallel I/O.
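    The ghost-cell bookkeeping that such a library automates can be illustrated in one dimension. The sketch below fills ghost cells from neighbour data directly rather than via MPI, and the function name is hypothetical, not Schnek's API:

    ```python
    import numpy as np

    def exchange_ghost_cells(local, left_edge, right_edge, ng=1):
        """Fill ng ghost cells on each side of a 1-D local grid with
        neighbour data -- the operation a grid library performs via
        MPI.  Here the neighbour values are passed in directly; this
        illustrates the bookkeeping only."""
        padded = np.empty(local.size + 2 * ng, dtype=local.dtype)
        padded[ng:-ng] = local       # interior cells owned by this rank
        padded[:ng] = left_edge      # ghost cells from left neighbour
        padded[-ng:] = right_edge    # ghost cells from right neighbour
        return padded

    # Two "processes" each own half of a periodic 1-D grid.
    a = np.array([0.0, 1.0, 2.0, 3.0])
    b = np.array([4.0, 5.0, 6.0, 7.0])
    a_pad = exchange_ghost_cells(a, left_edge=b[-1:], right_edge=b[:1])
    print(a_pad)  # [7. 0. 1. 2. 3. 4.]
    ```

    With ghost cells in place, a stencil update can sweep the interior of `a_pad` without special-casing the subdomain boundaries, which is exactly what makes the library's simple exchange interface useful.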

  18. MuSim, a Graphical User Interface for Multiple Simulation Programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roberts, Thomas; Cummings, Mary Anne; Johnson, Rolland

    2016-06-01

    MuSim is a new user-friendly program designed to interface to many different particle simulation codes, regardless of their data formats or geometry descriptions. It presents the user with a compelling graphical user interface that includes a flexible 3-D view of the simulated world plus powerful editing and drag-and-drop capabilities. All aspects of the design can be parametrized so that parameter scans and optimizations are easy. It is simple to create plots and display events in the 3-D viewer (with a slider to vary the transparency of solids), allowing for an effortless comparison of different simulation codes. Simulation codes: G4beamline, MAD-X, and MCNP; more coming. Many accelerator design tools and beam optics codes were written long ago, with primitive user interfaces by today's standards. MuSim is specifically designed to make it easy to interface to such codes, providing a common user experience for all, and permitting the construction and exploration of models with very little overhead. For today's technology-driven students, graphical interfaces meet their expectations far better than text-based tools, and education in accelerator physics is one of our primary goals.

  19. Simulation studies of chemical erosion on carbon based materials at elevated temperatures

    NASA Astrophysics Data System (ADS)

    Kenmotsu, T.; Kawamura, T.; Li, Zhijie; Ono, T.; Yamamura, Y.

    1999-06-01

    We simulated the fluence dependence of the methane reaction yield in carbon under hydrogen bombardment using the ACAT-DIFFUSE code, a Monte Carlo simulation code based on the binary collision approximation combined with the solution of diffusion equations. The chemical reaction model in carbon has been studied by Roth and other researchers. Roth's model is suitable for the steady-state methane reaction, but it cannot estimate the fluence dependence of the reaction. We therefore derived an empirical formula, based on Roth's model, for the methane reaction. In this formula we assume a reaction region in which chemical sputtering due to methane formation takes place; this region corresponds to the peak of the incident hydrogen range distribution in the target material. We incorporated the empirical formula into the ACAT-DIFFUSE code. The simulation results show a fluence dependence similar to the experimental result, although the fluence required to reach steady state differs between experiment and simulation.

  20. Computational methods for coupling microstructural and micromechanical materials response simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.

    2000-04-01

    Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.

  1. Neptune: An astrophysical smooth particle hydrodynamics code for massively parallel computer architectures

    NASA Astrophysics Data System (ADS)

    Sandalski, Stou

    Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU-accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named Neptune after the Roman god of water. It is written in OpenMP-parallelized C++ and OpenCL and includes octree-based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.

  2. Efficient Modeling of Laser-Plasma Accelerators with INF&RNO

    NASA Astrophysics Data System (ADS)

    Benedetti, C.; Schroeder, C. B.; Esarey, E.; Geddes, C. G. R.; Leemans, W. P.

    2010-11-01

    The numerical modeling code INF&RNO (INtegrated Fluid & paRticle simulatioN cOde, pronounced "inferno") is presented. INF&RNO is an efficient 2D cylindrical code that models the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser, while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations; a set of validation tests is presented here, together with a discussion of the code's performance.

  3. Micromagnetic Code Development of Advanced Magnetic Structures Final Report CRADA No. TC-1561-98

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerjan, Charles J.; Shi, Xizeng

    The specific goals of this project were to: further develop the previously written micromagnetic code DADIMAG (DOE code release number 980017); and validate the code. The resulting code was expected to be more realistic and useful for simulations of magnetic structures of specific interest to Read-Rite programs. We also planned to extend the code for use in internal LLNL programs. This project complemented LLNL CRADA TC-840-94 between LLNL and Read-Rite, which allowed for simulations of the advanced magnetic head development completed under that CRADA. TC-1561-98 was effective concurrently with LLNL non-exclusive copyright license TL-1552-98 to Read-Rite for the DADIMAG Version 2 executable code.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold H. Kritz

    PTRANSP, which is the predictive version of the TRANSP code, was developed in a collaborative effort involving the Princeton Plasma Physics Laboratory, General Atomics Corporation, Lawrence Livermore National Laboratory, and Lehigh University. The PTRANSP/TRANSP suite of codes is the premier integrated tokamak modeling software in the United States. A production service for PTRANSP/TRANSP simulations is maintained at the Princeton Plasma Physics Laboratory; the server has a simple command-line client interface and is subscribed to by about 100 researchers from tokamak projects in the US, Europe, and Asia. This service produced nearly 13,000 PTRANSP/TRANSP simulations in the four-year period FY 2005 through FY 2008. Major archives of TRANSP results are maintained at PPPL, MIT, General Atomics, and JET. Recent utilization, counting experimental analysis simulations as well as predictive simulations, more than doubled from slightly over 2000 simulations per year in FY 2005 and FY 2006 to over 4300 simulations per year in FY 2007 and FY 2008. PTRANSP predictive simulations applied to ITER increased eightfold, from 30 simulations per year in FY 2005 and FY 2006 to 240 simulations per year in FY 2007 and FY 2008, accounting for more than half of the combined PTRANSP/TRANSP service CPU resource utilization in FY 2008. PTRANSP studies focused on ITER played a key role in journal articles. Examples of validation studies carried out for momentum transport in PTRANSP simulations were presented at the 2008 IAEA conference. The number of PTRANSP simulations has continued to increase (more than 7000 TRANSP/PTRANSP simulations in 2010), and results of PTRANSP simulations appear in conference proceedings, for example the 2010 IAEA conference, and in peer-reviewed papers. PTRANSP provides a bridge to the Fusion Simulation Program (FSP) and to the future of integrated modeling.
Through years of widespread usage, each of the many parts of the PTRANSP suite of codes has been thoroughly validated against experimental data and benchmarked against other codes. At the same time, architectural modernizations are improving the modularity of the PTRANSP code base. The NUBEAM neutral beam and fusion products fast-ion model, the Plasma State data repository (developed originally in the SWIM SciDAC project and adapted for use in PTRANSP), and other components are already shared with the SWIM, FACETS, and CPES SciDAC FSP prototype projects. Thus, the PTRANSP code is already serving as a bridge between our present integrated modeling capability and future capability. As the Fusion Simulation Program builds toward the capability currently available in the PTRANSP suite of codes, early versions of the FSP core plasma model will need to be benchmarked against PTRANSP simulations. This will be necessary to build user confidence in FSP, but this benchmarking can only be done if PTRANSP itself is maintained and developed.

  5. Integrated Devices and Systems | Grid Modernization | NREL

    Science.gov Websites

    Topics include microgrids, grid simulation and power hardware-in-the-loop, and grid standards and codes. Contact: Barry Mather, Ph.D.

  6. MicroHH 1.0: a computational fluid dynamics code for direct numerical simulation and large-eddy simulation of atmospheric boundary layer flows

    NASA Astrophysics Data System (ADS)

    van Heerwaarden, Chiel C.; van Stratum, Bart J. H.; Heus, Thijs; Gibbs, Jeremy A.; Fedorovich, Evgeni; Mellado, Juan Pedro

    2017-08-01

    This paper describes MicroHH 1.0, a new and open-source (www.microhh.org) computational fluid dynamics code for the simulation of turbulent flows in the atmosphere. It is primarily made for direct numerical simulation but also supports large-eddy simulation (LES). The paper covers the description of the governing equations, their numerical implementation, and the parameterizations included in the code. Furthermore, the paper presents the validation of the dynamical core in the form of convergence and conservation tests, and comparison of simulations of channel flows and slope flows against well-established test cases. The full numerical model, including the associated parameterizations for LES, has been tested for a set of cases under stable and unstable conditions, under the Boussinesq and anelastic approximations, and with dry and moist convection under stationary and time-varying boundary conditions. The paper presents performance tests showing good scaling from 256 to 32 768 processes. The graphical processing unit (GPU)-enabled version of the code can reach a speedup of more than an order of magnitude for simulations that fit in the memory of a single GPU.

  7. electromagnetics, eddy current, computer codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gartling, David

    TORO Version 4 is designed for finite element analysis of steady, transient and time-harmonic, multi-dimensional, quasi-static problems in electromagnetics. The code allows simulation of electrostatic fields, steady current flows, magnetostatics and eddy current problems in plane or axisymmetric, two-dimensional geometries. TORO is easily coupled to heat conduction and solid mechanics codes to allow multi-physics simulations to be performed.

  8. Validation of a Monte Carlo code system for grid evaluation with interference effect on Rayleigh scattering

    NASA Astrophysics Data System (ADS)

    Zhou, Abel; White, Graeme L.; Davidson, Rob

    2018-02-01

    Anti-scatter grids are commonly used in x-ray imaging systems to reduce the scatter radiation reaching the image receptor. Anti-scatter grid performance can be simulated and validated using Monte Carlo (MC) methods. Our recently reported work modified existing MC codes, resulting in improved performance when simulating x-ray imaging. The aim of this work is to validate the transmission of x-ray photons through grids computed with the new MC codes against experimental results and results previously reported in the literature. The results show that the scatter-to-primary ratio (SPR) and the transmissions of primary (Tp), scatter (Ts), and total (Tt) radiation determined with this new MC code system agree strongly with the experimental results and with the results reported in the literature; the Tp, Ts, Tt, and SPR values determined by the new MC simulation code system are therefore valid. The results also show that the interference effect on Rayleigh scattering should not be neglected in the evaluation of either mammographic or general-purpose grids. Our new MC simulation code system has been shown to be valid and can be used for analysing and evaluating grid designs.
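The quantities named above follow standard grid figures of merit; a minimal sketch (with illustrative photon counts, not values from the paper) shows how they relate:

```python
def grid_metrics(primary_in, scatter_in, primary_out, scatter_out):
    """Standard anti-scatter grid figures of merit from photon counts
    recorded without ("in") and with ("out") the grid in place.
    Counts here are illustrative, not results from the study."""
    Tp = primary_out / primary_in                  # primary transmission
    Ts = scatter_out / scatter_in                  # scatter transmission
    Tt = (primary_out + scatter_out) / (primary_in + scatter_in)
    spr = scatter_out / primary_out                # SPR behind the grid
    return Tp, Ts, Tt, spr

Tp, Ts, Tt, spr = grid_metrics(1000.0, 800.0, 700.0, 80.0)
print(f"Tp={Tp:.2f} Ts={Ts:.2f} Tt={Tt:.2f} SPR={spr:.2f}")
```

A good grid keeps Tp high while driving Ts (and hence the residual SPR) low; tallying these four numbers is exactly what the MC simulation and the bench experiment are compared on.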

  9. Recent Progress and Future Plans for Fusion Plasma Synthetic Diagnostics Platform

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Kramer, Gerrit; Tang, William; Tobias, Benjamin; Valeo, Ernest; Churchill, Randy; Hausammann, Loic

    2015-11-01

    The Fusion Plasma Synthetic Diagnostics Platform (FPSDP) is a Python package developed at the Princeton Plasma Physics Laboratory. It is dedicated to providing an integrated programmable environment for applying a modern ensemble of synthetic diagnostics to the experimental validation of fusion plasma simulation codes. FPSDP allows physicists to directly compare key laboratory measurements with simulation results. This enables deeper understanding of experimental data, more realistic validation of simulation codes, quantitative assessment of existing diagnostics, and new capabilities for the design and optimization of future diagnostics. FPSDP currently has data interfaces for the GTS and XGC-1 global particle-in-cell simulation codes, with synthetic diagnostic modules including (i) 2D and 3D reflectometry, (ii) beam emission spectroscopy, and (iii) 1D electron cyclotron emission. Results will be reported on the delivery of interfaces for the global electromagnetic PIC code GTC, the extended MHD code M3D-C1, and the electromagnetic hybrid eigenmode code NOVA-K. Progress toward development of a more comprehensive 2D electron cyclotron emission module will also be discussed. This work is supported by DOE contract #DE-AC02-09CH11466.

  10. Parser for Sabin-to-Mahoney Transition Model of Quasispecies Replication

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ecale Zhou, Carol

    2016-01-03

    This code is a data parser for preparing output from the Qspp agent-based stochastic simulation model for plotting in Excel. It is specific to a set of simulations that were run for the purpose of preparing data for a publication. Releasing this code as open source is necessary in order to publish the model code (Qspp), which has already been released.

  11. Capabilities overview of the MORET 5 Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Cochet, B.; Jinaphanh, A.; Heulers, L.; Jacquet, O.

    2014-06-01

    The MORET code is a simulation tool that solves the transport equation for neutrons using the Monte Carlo method. It allows users to model complex three-dimensional geometrical configurations, describe the materials, and define their own tallies in order to analyse the results. The MORET code was initially designed to perform calculations for criticality safety assessments. New features have been introduced in the MORET 5 code to expand its use to reactor applications. This paper presents an overview of the MORET 5 code's capabilities, covering the description of materials, geometry modelling, transport simulation, and the definition of outputs.

  12. LDPC Codes with Minimum Distance Proportional to Block Size

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. 
Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. Any size LDPC code can be obtained by copying the protograph structure N times and then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides the minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) needed to achieve zero error rates as the code block size goes to infinity, for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
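The copy-and-permute ("lifting") construction described above can be sketched for a toy binary base matrix without parallel edges; the base matrix below is an illustrative placeholder, not one of the article's actual protographs:

```python
import numpy as np

def lift_protograph(B, N, seed=None):
    """Copy-and-permute construction: copy the protograph N times by
    replacing each edge (entry 1) of the base matrix B with a random
    N x N permutation matrix, and each absent edge with the zero block.
    Real protographs may have parallel edges; this toy version assumes
    a 0/1 base matrix."""
    rng = np.random.default_rng(seed)
    rows, cols = B.shape
    H = np.zeros((rows * N, cols * N), dtype=int)
    for i in range(rows):
        for j in range(cols):
            if B[i, j]:
                perm = np.eye(N, dtype=int)[rng.permutation(N)]
                H[i*N:(i+1)*N, j*N:(j+1)*N] = perm
    return H

B = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]])      # toy base matrix (4 variable, 2 check nodes)
H = lift_protograph(B, N=4, seed=0)
print(H.shape)                    # code size grows linearly with N
```

Because row and column weights of H are inherited from B, structural properties such as the degree distribution, and hence the minimum-distance behaviour discussed above, are controlled at the protograph level.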

  13. Nexus: a modular workflow management system for quantum simulation codes

    DOE PAGES

    Krogel, Jaron T.

    2015-08-24

    The management of simulation workflows is a significant task for the individual computational researcher. Automation of the tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and on resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.

  14. Global linear gyrokinetic simulations for LHD including collisions

    NASA Astrophysics Data System (ADS)

    Kauffmann, K.; Kleiber, R.; Hatzky, R.; Borchardt, M.

    2010-11-01

    The code EUTERPE uses a Particle-In-Cell (PIC) method to solve the gyrokinetic equation globally (full radius, full flux surface) for three-dimensional equilibria calculated with VMEC. Recently this code has been extended to include multiple kinetic species and electromagnetic effects. Additionally, a pitch-angle scattering operator has been implemented in order to include collisional effects in the simulation of instabilities and to be able to simulate neoclassical transport. As a first application of this extended code we study the effects of collisions on electrostatic ion-temperature-gradient (ITG) instabilities in LHD.

  15. Simulations of the plasma dynamics in high-current ion diodes

    NASA Astrophysics Data System (ADS)

    Boine-Frankenheim, O.; Pointon, T. D.; Mehlhorn, T. A.

    Our time-implicit fluid/particle-in-cell (PIC) code DYNAID [1] is applied to problems relevant to applied-B ion diode operation. We present simulations of the laser ion source, which will soon be employed on the SABRE accelerator at SNL, and of the dynamics of the anode source plasma in the applied electric and magnetic fields. DYNAID is still a test-bed for a higher-dimensional simulation code. Nevertheless, the code can already give new theoretical insight into the dynamics of plasmas in pulsed power devices.

  16. FIRE aircraft observations of horizontal and vertical transport in marine stratocumulus

    NASA Technical Reports Server (NTRS)

    Paluch, Ilga R.; Lenschow, Donald H.

    1990-01-01

    A major goal of research on marine stratocumulus is to try to understand the processes that generate and dissipate them. One approach to studying this problem is to investigate the boundary layer structure in the vicinity of a transition from a cloudy to a cloud-free region to document the differences in structure on each side of the transition. Since stratiform clouds have a major impact on the radiation divergence in the boundary layer, the transition from a cloudy to a clear boundary layer is a region of large horizontal inhomogeneity in air temperature and turbulence intensity. This leads to a considerable difference in horizontal and vertical transports between the cloudy and cloud-free regions. Measurements are used from the NCAR Electra aircraft during flights 5 (7 July 1987) and 10 (18 July 1987) of FIRE for this purpose. Flight 5 coincided with a LANDSAT overflight, and was designed to investigate the transition across a well-defined N-S cloud boundary, since the LANDSAT image can document the cloud cover in considerable detail. Turbulence legs were flown about 60 km on both sides of the cloud boundary. Flight 10 was flown at night in an area of scattered small cumuli and broken cloud patches.

  17. Effectiveness of solar disinfection using batch reactors with non-imaging aluminium reflectors under real conditions: Natural well-water and solar light.

    PubMed

    Navntoft, C; Ubomba-Jaswa, E; McGuigan, K G; Fernández-Ibáñez, P

    2008-12-11

    Inactivation kinetics are reported for suspensions of Escherichia coli in well-water using compound parabolic collector (CPC) mirrors to enhance the efficiency of solar disinfection (SODIS) in batch reactors under real solar radiation (cloudy and cloudless) conditions. On clear days, the system with CPC reflectors achieved complete inactivation (more than a 5-log-unit reduction in bacterial population, to below the detection limit of 4 CFU/mL) one hour sooner than the system fitted with no CPC. On cloudy days, only systems fitted with CPCs achieved complete inactivation. Degradation of the mirrors under field conditions was also evaluated. The reflectivity of CPC systems that had been in use outdoors for at least 3 years deteriorated in a non-homogeneous fashion; reflectivity values for these older systems varied between 27% and 72%, compared to uniform values of 87% for new CPC systems. CPCs have proven to be a good technological enhancement for inactivating bacteria under real conditions on both clear and cloudy days. A comparison between optical enhancement and thermal effects is also discussed.

  18. Multi-Temporal Land Cover Classification with Sequential Recurrent Encoders

    NASA Astrophysics Data System (ADS)

    Rußwurm, Marc; Körner, Marco

    2018-03-01

    Earth observation (EO) sensors deliver data with daily or weekly temporal resolution. Most land use and land cover (LULC) approaches, however, expect cloud-free and mono-temporal observations. The increasing temporal capabilities of today's sensors enable the use of temporal features alongside spectral and spatial ones. Domains such as speech recognition and neural machine translation work with inherently temporal data and today achieve impressive results using sequential encoder-decoder structures. Inspired by these sequence-to-sequence models, we adapt an encoder structure with convolutional recurrent layers in order to approximate a phenological model for vegetation classes based on a temporal sequence of Sentinel 2 (S2) images. In our experiments, we visualize internal activations over a sequence of cloudy and non-cloudy images and find several recurrent cells that reduce the input activity for cloudy observations. Hence, we assume that our network has learned cloud-filtering schemes solely from input data, which could alleviate the need for tedious cloud filtering as a preprocessing step for many EO approaches. Moreover, using unfiltered temporal series of top-of-atmosphere (TOA) reflectance data, we achieved state-of-the-art classification accuracies on a large number of crop classes with minimal preprocessing compared to other classification approaches.

  19. The analysis of polar clouds from AVHRR satellite data using pattern recognition techniques

    NASA Technical Reports Server (NTRS)

    Smith, William L.; Ebert, Elizabeth

    1990-01-01

    The cloud cover in a set of summertime and wintertime AVHRR data from the Arctic and Antarctic regions was analyzed using a pattern recognition algorithm. The data were collected by the NOAA-7 satellite on 6 to 13 Jan. and 1 to 7 Jul. 1984 between 60 deg and 90 deg north and south latitude in 5 spectral channels, at the Global Area Coverage (GAC) resolution of approximately 4 km. This data embodied a Polar Cloud Pilot Data Set which was analyzed by a number of research groups as part of a polar cloud algorithm intercomparison study. This study was intended to determine whether the additional information contained in the AVHRR channels (beyond the standard visible and infrared bands on geostationary satellites) could be effectively utilized in cloud algorithms to resolve some of the cloud detection problems caused by low visible and thermal contrasts in the polar regions. The analysis described makes use of a pattern recognition algorithm which estimates the surface and cloud classification, cloud fraction, and surface and cloudy visible (channel 1) albedo and infrared (channel 4) brightness temperatures on a 2.5 x 2.5 deg latitude-longitude grid. In each grid box several spectral and textural features were computed from the calibrated pixel values in the multispectral imagery, then used to classify the region into one of eighteen surface and/or cloud types using the maximum likelihood decision rule. A slightly different version of the algorithm was used for each season and hemisphere because of differences in categories and because of the lack of visible imagery during winter. The classification of the scene is used to specify the optimal AVHRR channel for separating clear and cloudy pixels using a hybrid histogram-spatial coherence method. This method estimates values for cloud fraction, clear and cloudy albedos and brightness temperatures in each grid box. 
The choice of a class-dependent AVHRR channel allows for better separation of clear and cloudy pixels than does a global choice of a visible and/or infrared threshold. The classification also prevents erroneous estimates of large fractional cloudiness in areas of cloudfree snow and sea ice. The hybrid histogram-spatial coherence technique and the advantages of first classifying a scene in the polar regions are detailed. The complete Polar Cloud Pilot Data Set was analyzed and the results are presented and discussed.
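The maximum likelihood decision rule used for classification above can be sketched with Gaussian class models; the two classes and their statistics below are illustrative placeholders, not the study's eighteen surface/cloud types:

```python
import numpy as np

def ml_classify(x, class_means, class_covs):
    """Maximum-likelihood decision rule: assign feature vector x to the
    class whose multivariate Gaussian model gives the highest
    log-likelihood (the shared -d/2*log(2*pi) constant and equal class
    priors are omitted, since they do not affect the argmax)."""
    best, best_ll = None, -np.inf
    for label in class_means:
        mu, cov = class_means[label], class_covs[label]
        diff = x - mu
        ll = -0.5 * (diff @ np.linalg.solve(cov, diff)
                     + np.log(np.linalg.det(cov)))
        if ll > best_ll:
            best, best_ll = label, ll
    return best

# Toy features: (visible albedo, channel-4 brightness temperature in K)
means = {"clear": np.array([0.2, 260.0]), "cloudy": np.array([0.7, 240.0])}
covs = {k: np.diag([0.01, 25.0]) for k in means}
print(ml_classify(np.array([0.65, 245.0]), means, covs))  # prints "cloudy"
```

In the study the same rule is applied to spectral and textural features per grid box, with class statistics trained per season and hemisphere.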

  20. The Q Continuum: Encounter with the Cloud Mask

    NASA Astrophysics Data System (ADS)

    Ackerman, S. A.; Frey, R.; Holz, R.; Philips, C.; Dutcher, S.

    2017-12-01

    We are developing a common cloud mask for MODIS and VIIRS observations, referred to as the MODIS VIIRS Continuity Mask (MVCM). Our focus is on extending the MODIS-heritage cloud detection approach in order to generate appropriate climate data records for clouds and climate studies. The MVCM is based on heritage from the MODIS cloud mask (MOD35 and MYD35) and employs a series of tests on MODIS reflectances and brightness temperatures. Cloud detection is based on contrasts (i.e., cloud versus background surface) at pixel resolution, and the MVCM follows the same approach. These cloud masks use multiple cloud detection tests to indicate the confidence level that the observation is of a clear-sky scene. The outcome of a test ranges from 0 (cloudy) to 1 (clear-sky scene). Because the various spectral tests overlap in their sensitivities to cloud type, each test is assigned to one of several groups. The final cloud mask is determined from the product of the minimum confidence of each group and is referred to as the Q value, as defined in Ackerman et al. (1998). In MOD35 and MYD35 processing, the Q value is not output; rather, predetermined Q thresholds determine the result: if Q ≥ 0.99 the scene is clear; if 0.95 ≤ Q < 0.99 the pixel is probably clear; if 0.66 ≤ Q < 0.95 it is probably cloudy; and if Q < 0.66 it is cloudy. Q is thus represented discretely rather than as a continuum. For the MVCM, the numerical value of Q is output along with the classification of clear, probably clear, probably cloudy, or cloudy. Through comparisons with collocated CALIOP and MODIS observations, we will assess the categorization of the Q values as a function of scene type. While validation studies have indicated the utility and statistical correctness of the cloud mask approach, the algorithm does not possess immeasurable power and perfection.
This comparison will assess the time and space dependence of Q and assure that the laws of physics are followed, at least according to normal human notions. Using CALIOP as truth, a receiver operating characteristic (ROC) curve will be analyzed to determine the optimum Q for various scenes and seasons, thus providing a continuum of discriminating thresholds.
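The Q computation and its discrete MOD35-style interpretation, as described above, can be sketched directly; the test groupings and confidence values below are illustrative, not actual MVCM test outputs:

```python
def q_value(group_confidences):
    """Final clear-sky confidence: the product over groups of the
    minimum test confidence within each group (each confidence lies in
    [0, 1], with 0 = cloudy and 1 = clear)."""
    q = 1.0
    for group in group_confidences:
        q *= min(group)
    return q

def categorize(q):
    """Discrete MOD35-style thresholds applied to the Q continuum."""
    if q >= 0.99:
        return "clear"
    if q >= 0.95:
        return "probably clear"
    if q >= 0.66:
        return "probably cloudy"
    return "cloudy"

q = q_value([[1.0, 0.99], [0.97]])   # two illustrative test groups
print(round(q, 4), categorize(q))
```

Taking the minimum within a group means one confident cloud detection dominates its group, while the product across groups lets independent lines of evidence reinforce each other; the MVCM keeps the continuous q alongside the category.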

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vassilevska, Tanya

    This is the first code, designed to run on a desktop, that models intracellular replication and cell-to-cell infection and demonstrates virus evolution at the molecular level. The code simulates the infection of a population of "idealized biological cells" (represented as objects that do not divide or have metabolism) with a "virus" (represented by its genetic sequence), along with the replication and simultaneous mutation of the virus, which leads to the evolution of a population of genetically diverse viruses. The code is built to simulate single-stranded RNA viruses. The inputs to the code are (1) the number of biological cells in the culture, (2) the initial composition of the virus population, (3) the reference genome of the RNA virus, (4) the coordinates of the genome regions and their significance, and (5) parameters determining the dynamics of virus replication, such as the mutation rate. The simulation ends when all cells have been infected or when no more infections occur after a given number of attempts. The code can simulate the evolution of the virus in serial passage of cell "cultures": after the end of a simulation, a new one is immediately scheduled with a new culture of infected cells. The code outputs characteristics of the resulting virus population dynamics and of the genetic composition of the virus population, such as the top dominant genomes and the percentage of a genome with specific characteristics.
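The replication-with-mutation step of such a simulation can be sketched as follows; this is a minimal illustration of the mutation step only (function name and per-base substitution model are assumptions), not the released code, which also tracks cells, infection, and genome regions:

```python
import random

def replicate(genome, mu, rng=random):
    """Copy an RNA genome, substituting each base with probability mu
    (a different base is chosen uniformly when a substitution occurs)."""
    bases = "ACGU"
    return "".join(
        rng.choice([b2 for b2 in bases if b2 != b]) if rng.random() < mu else b
        for b in genome
    )

random.seed(1)
parent = "AUGGCUAACGUU"
child = replicate(parent, mu=0.2)
print(sum(a != b for a, b in zip(parent, child)), "substitutions")
```

Repeating this step for every replication event inside every infected cell is what generates the genetically diverse quasispecies whose dominant genomes the code reports.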

  2. Using Large Signal Code TESLA for Wide Band Klystron Simulations

    DTIC Science & Technology

    2006-04-01

    UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADP022454 TITLE: Using Large Signal Code TESLA for Wide Band Klystron Simulations...TESLA accurately simulates the tuning procedure of high-power klystrons [3], solving for the actual eigenmodes of the structure...wide-band klystrons very often...the decomposition of the simulation region into an external...results of TESLA simulations for NRL S-band klystrons with two-gap two-mode resonators

  3. Aviation Safety Modeling and Simulation (ASMM) Propulsion Fleet Modeling: A Tool for Semi-Automatic Construction of CORBA-based Applications from Legacy Fortran Programs

    NASA Technical Reports Server (NTRS)

    Sang, Janche

    2003-01-01

    Within NASA's Aviation Safety Program, NASA GRC participates in the Modeling and Simulation Project called ASMM. NASA GRC's focus is to characterize propulsion system performance from a fleet management and maintenance perspective by modeling, and through simulation to predict the characteristics of two classes of commercial engines (CFM56 and GE90). In prior years, the High Performance Computing and Communication (HPCC) program funded NASA Glenn to develop large-scale, detailed simulations for the analysis and design of aircraft engines, called the Numerical Propulsion System Simulation (NPSS). Three major aspects of this modeling, the integration of different engine components, the coupling of multiple disciplines, and engine component zooming at an appropriate level of fidelity, require relatively tight coupling of different analysis codes. Most of these codes in aerodynamics and solid mechanics are written in Fortran. Refitting these legacy Fortran codes with distributed objects can increase their reusability. Aviation Safety's modeling and simulation use in characterizing fleet management has similar needs. The modeling and simulation of these propulsion systems use existing Fortran and C codes that are instrumental in determining the performance of the fleet. The research centers on building a CORBA-based development environment in which programmers can easily wrap and couple legacy Fortran codes. This environment consists of a C++ wrapper library to hide the details of CORBA and an efficient remote variable scheme to facilitate data exchange between the client and the server model. Additionally, a Web Service model will be constructed to evaluate this technology's use over the next two to three years.

  4. Adoption of Test Driven Development and Continuous Integration for the Development of the Trick Simulation Toolkit

    NASA Technical Reports Server (NTRS)

    Penn, John M.

    2013-01-01

    This paper describes the adoption of a Test Driven Development approach and a Continuous Integration system in the development of the Trick Simulation Toolkit, a generic simulation development environment for creating high-fidelity training and engineering simulations at the NASA/Johnson Space Center and many other NASA facilities. It describes what was learned and the significant benefits seen, such as fast, thorough, and clear test feedback every time code is checked in to the code repository. It also describes a system that encourages development of code that is much more flexible, maintainable, and reliable. The Trick Simulation Toolkit development environment provides a common architecture for user-defined simulations. Trick builds executable simulations using user-supplied simulation-definition files (S_define) and user-supplied "model code". For each Trick-based simulation, Trick automatically provides job scheduling, checkpoint/restore, data recording, interactive variable manipulation (variable server), and an input processor. Also included are tools for plotting recorded data and various other supporting tools and libraries. Trick is written in C/C++ and Java and supports both Linux and Mac OS X. Prior to adopting this new development approach, Trick testing consisted primarily of running a few large simulations, in the hope that their complexity and scale would exercise most of Trick's code and expose any recently introduced bugs. Unsurprisingly, this approach yielded inconsistent results. It was obvious that a more systematic, thorough approach was required. After seeing examples of some Java-based projects that used the JUnit test framework, similar test frameworks for C and C++ were sought. Several were found, all clearly inspired by JUnit. Googletest, a freely available open-source testing framework, was selected as the most appropriate and capable. 
The new approach was implemented while rewriting the Trick memory management component to eliminate a fundamental design flaw. The benefits became obvious almost immediately, not just in the correctness of the individual functions and classes but also in the correctness and flexibility being added to the overall design. Creating code to be testable, and testing it as it was created, resulted not only in better working code, but also in better-organized, flexible, and readable (i.e., articulate) code. This was, in essence, the Test-Driven Development (TDD) methodology created by Kent Beck. Seeing the benefits of Test Driven Development, other Trick components were refactored to make them more testable, and tests were designed and implemented for them.
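
    The xUnit-style, test-first pattern described above (JUnit for Java, Googletest for C++) can be illustrated with Python's built-in `unittest`, the same family of frameworks. The `RefCountingPool` class is a purely hypothetical stand-in for a memory-management component, not Trick code; the point is the shape of the tests, written before or alongside the unit they exercise.

    ```python
    import unittest

    class RefCountingPool:
        """Hypothetical toy stand-in for a memory-management component."""
        def __init__(self):
            self._refs = {}

        def alloc(self, name):
            self._refs[name] = self._refs.get(name, 0) + 1

        def free(self, name):
            if self._refs.get(name, 0) == 0:
                raise ValueError(f"double free of {name}")
            self._refs[name] -= 1

    class TestRefCountingPool(unittest.TestCase):
        # Each small behavior gets its own test, so a check-in that breaks
        # one behavior produces a precise, immediate failure message.
        def test_alloc_then_free(self):
            pool = RefCountingPool()
            pool.alloc("buf")
            pool.free("buf")          # must not raise

        def test_double_free_raises(self):
            pool = RefCountingPool()
            pool.alloc("buf")
            pool.free("buf")
            with self.assertRaises(ValueError):
                pool.free("buf")      # the design flaw TDD would catch
    ```

    Run with `python -m unittest` from the project root; a continuous-integration job doing exactly this on every check-in is what provides the fast feedback the paper describes.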

  5. Effects of climate change on daily minimum and maximum temperatures and cloudiness in the Shikoku region: a statistical downscaling model approach

    NASA Astrophysics Data System (ADS)

    Tatsumi, Kenichi; Oizumi, Tsutao; Yamashiki, Yosuke

    2015-04-01

    In this study, we present a detailed analysis of the effect of changes in cloudiness (CLD) between a future period (2071-2099) and the base period (1961-1990) on daily minimum temperature (TMIN) and maximum temperature (TMAX) over the same periods for the Shikoku region, Japan. This analysis was performed using climate data obtained with the Statistical DownScaling Model (SDSM). We calibrated the SDSM using the National Center for Environmental Prediction (NCEP) reanalysis dataset as the SDSM input and daily time series of temperature and CLD from 10 surface data points (SDP) in Shikoku. Subsequently, we validated the SDSM outputs, specifically TMIN, TMAX, and CLD, obtained with the NCEP reanalysis dataset and general circulation model (GCM) data, against the SDP. The GCM data used in the validation procedure were those from the Hadley Centre Coupled Model, version 3 (HadCM3) for the Special Report on Emission Scenarios (SRES) A2 and B2 scenarios and from the third-generation Coupled Global Climate Model (CGCM3) for the SRES A2 and A1B scenarios. Finally, the validated SDSM was run to study the effect of future changes in CLD on TMIN and TMAX. Our analysis showed that (1) the negative linear fit between changes in TMAX and those in CLD was statistically significant in winter, while the relationship between the two changes was not evident in summer; (2) the dependency of future changes in TMAX and TMIN on future changes in CLD was more evident in winter than in other seasons with the present SDSM; (3) the diurnal temperature range (DTR) decreased in the southern part of Shikoku in summer in all the SDSM projections, while DTR increased in the northern part of Shikoku in the same season; and (4) the dependencies of changes in DTR on changes in CLD were unclear in summer and winter. 
Results of the SDSM simulations performed for climate change scenarios such as those from this study contribute to local-scale agricultural and hydrological simulations and development of agricultural and hydrological models.
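
    The "negative linear fit" in point (1) is an ordinary least-squares line between projected changes in CLD and in TMAX. The sketch below shows that calculation on invented, winter-like numbers; the values are illustrative only, not SDSM output.

    ```python
    # Ordinary least-squares fit between change in cloudiness (dCLD, %)
    # and change in daily maximum temperature (dTMAX, K).
    # Data points are invented for illustration.

    def ols_slope_intercept(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        sxx = sum((xi - mx) ** 2 for xi in x)
        slope = sxy / sxx
        return slope, my - slope * mx

    d_cld = [-4.0, -2.0, 0.0, 2.0, 4.0]   # projected cloudiness changes
    d_tmax = [3.1, 2.6, 2.0, 1.5, 0.9]    # projected TMAX changes (winter-like)
    slope, intercept = ols_slope_intercept(d_cld, d_tmax)
    print(round(slope, 3), round(intercept, 3))
    ```

    A negative slope, as here, is the signature the study reports for winter: grid points that lose cloud cover warm more at midday.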

  6. Edge-relevant plasma simulations with the continuum code COGENT

    NASA Astrophysics Data System (ADS)

    Dorf, M.; Dorr, M.; Ghosh, D.; Hittinger, J.; Rognlien, T.; Cohen, R.; Lee, W.; Schwartz, P.

    2016-10-01

    We describe recent advances in cross-separatrix and other edge-relevant plasma simulations with COGENT, a continuum gyro-kinetic code being developed by the Edge Simulation Laboratory (ESL) collaboration. The distinguishing feature of the COGENT code is its high-order finite-volume discretization methods, which employ arbitrary mapped multiblock grid technology (nearly field-aligned on blocks) to handle the complexity of tokamak divertor geometry with high accuracy. This paper discusses the 4D (axisymmetric) electrostatic version of the code, and the presented topics include: (a) initial simulations with kinetic electrons and development of reduced fluid models; (b) development and application of implicit-explicit (IMEX) time integration schemes; and (c) conservative modeling of drift-waves and the universal instability. Work performed for USDOE, at LLNL under contract DE-AC52-07NA27344 and at LBNL under contract DE-AC02-05CH11231.

  7. The VENUS/NWChem software package. Tight coupling between chemical dynamics simulations and electronic structure theory

    NASA Astrophysics Data System (ADS)

    Lourderaj, Upakarasamy; Sun, Rui; Kohale, Swapnil C.; Barnes, George L.; de Jong, Wibe A.; Windus, Theresa L.; Hase, William L.

    2014-03-01

    The interface for VENUS and NWChem, and the resulting software package for direct dynamics simulations are described. The coupling of the two codes is considered to be a tight coupling since the two codes are compiled and linked together and act as one executable with data being passed between the two codes through routine calls. The advantages of this type of coupling are discussed. The interface has been designed to have as little interference as possible with the core codes of both VENUS and NWChem. VENUS is the code that propagates the direct dynamics trajectories and, therefore, is the program that drives the overall execution of VENUS/NWChem. VENUS has remained an essentially sequential code, which uses the highly parallel structure of NWChem. Subroutines of the interface that accomplish the data transmission and communication between the two computer programs are described. Recent examples of the use of VENUS/NWChem for direct dynamics simulations are summarized.

  8. The Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE

    NASA Astrophysics Data System (ADS)

    Vandenbroucke, B.; Wood, K.

    2018-04-01

    We present the public Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE, which can be used to simulate the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm, which uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given time, with a standard first-order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but it can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure used to discretize the system, allowing the code to be run both as a standard fixed-grid code and as a moving-mesh code.

  9. Global Magnetohydrodynamic Simulation Using High Performance FORTRAN on Parallel Computers

    NASA Astrophysics Data System (ADS)

    Ogino, T.

    High Performance Fortran (HPF) is one of the modern and common techniques for achieving high-performance parallel computation. We have translated a three-dimensional magnetohydrodynamic (MHD) simulation code of the Earth's magnetosphere from VPP Fortran to HPF/JA on the Fujitsu VPP5000/56 vector-parallel supercomputer; the MHD code was fully vectorized and fully parallelized in VPP Fortran. The overall performance and capability of the HPF MHD code proved almost comparable to that of the VPP Fortran version. A three-dimensional global MHD simulation of the Earth's magnetosphere was performed at a speed of over 400 Gflops, with an efficiency of 76.5% relative to the catalog peak performance of the VPP5000/56 in vector and parallel computation. We conclude that fluid and MHD codes that are fully vectorized and fully parallelized in VPP Fortran can be translated with relative ease to HPF/JA, and that a code in HPF/JA can be expected to perform comparably to the same code written in VPP Fortran.

  10. Four-Dimensional Continuum Gyrokinetic Code: Neoclassical Simulation of Fusion Edge Plasmas

    NASA Astrophysics Data System (ADS)

    Xu, X. Q.

    2005-10-01

    We are developing a continuum gyrokinetic code, TEMPEST, to simulate edge plasmas. Our code represents velocity space via a grid in equilibrium energy and magnetic moment variables, and configuration space via poloidal magnetic flux and poloidal angle. The geometry is that of a fully diverted tokamak (single or double null) and so includes boundary conditions for both closed magnetic flux surfaces and open field lines. The 4-dimensional code includes kinetic electrons and ions, and electrostatic field-solver options, and simulates neoclassical transport. The present implementation is a Method of Lines approach where spatial finite-differences (higher order upwinding) and implicit time advancement are used. We present results of initial verification and validation studies: transition from collisional to collisionless limits of parallel end-loss in the scrape-off layer, self-consistent electric field, and the effect of the real X-point geometry and edge plasma conditions on the standard neoclassical theory, including a comparison of our 4D code with other kinetic neoclassical codes and experiments.

  11. MMAPDNG: A new, fast code backed by a memory-mapped database for simulating delayed γ-ray emission with MCNPX package

    NASA Astrophysics Data System (ADS)

    Lou, Tak Pui; Ludewigt, Bernhard

    2015-09-01

    The simulation of the emission of beta-delayed gamma rays following nuclear fission and the calculation of time-dependent energy spectra is a computational challenge. The widely used radiation transport code MCNPX includes a delayed gamma-ray routine that is inefficient and not suitable for simulating complex problems. This paper describes the code "MMAPDNG" (Memory-Mapped Delayed Neutron and Gamma), an optimized delayed gamma module written in C, discusses the usage and merits of the code, and presents results. The approach is based on storing the required Fission Product Yield (FPY) data, decay data, and delayed particle data in a memory-mapped file. When compared to the original delayed gamma-ray code in MCNPX, memory utilization is reduced by two orders of magnitude and delayed gamma-ray sampling is sped up by three orders of magnitude. Other delayed particles such as neutrons and electrons can be implemented in future versions of the MMAPDNG code using its existing framework.
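
    The memory-mapped-file idea is simple to sketch: fixed-width binary records are written once, and any record can then be read by offset without loading the table into memory, with the operating system paging in only what is touched. The sketch below is illustrative only (MMAPDNG is written in C, and the record layout and yields here are invented); it shows the access pattern, not the real data format.

    ```python
    import mmap
    import os
    import struct
    import tempfile

    # One fixed-width record per entry: (mass number, fission yield).
    RECORD = struct.Struct("<id")   # 4-byte int + 8-byte double, little-endian

    def write_table(path, records):
        """Write fixed-width records sequentially (done once, offline)."""
        with open(path, "wb") as f:
            for rec in records:
                f.write(RECORD.pack(*rec))

    def lookup(mm, i):
        """Random access by index: pure offset arithmetic, no parsing pass."""
        off = i * RECORD.size
        return RECORD.unpack(mm[off:off + RECORD.size])

    path = os.path.join(tempfile.mkdtemp(), "fpy.bin")
    write_table(path, [(135, 0.064), (137, 0.061), (90, 0.058)])  # invented yields

    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        print(lookup(mm, 1))   # second record
        mm.close()
    ```

    Because sampling only ever touches a handful of records per history, the resident memory stays small even when the backing file is large, which is the effect the abstract quantifies.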

  12. Capabilities needed for the next generation of thermo-hydraulic codes for use in real time applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arndt, S.A.

    1997-07-01

    The real-time reactor simulation field is currently at a crossroads in terms of the capability to perform real-time analysis using the most sophisticated computer codes. Current-generation safety analysis codes are being modified to replace simplified codes that were specifically designed to meet the competing requirements of real-time applications. The next generation of thermo-hydraulic codes will need to include in their specifications the specific requirement for use in a real-time environment. Use of the codes in real-time applications imposes much stricter requirements on robustness, reliability and repeatability than do design and analysis applications. In addition, the need for code use by a variety of users is a critical issue for real-time users, trainers and emergency planners who currently use real-time simulation, and for PRA practitioners who will increasingly use real-time simulation to evaluate PRA success criteria in near real time and validate PRA results for specific configurations and plant system unavailabilities.

  13. Enhanced Verification Test Suite for Physics Simulation Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamm, J R; Brock, J S; Brandon, S T

    2008-10-10

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and the technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) hydrodynamics; (b) transport processes; and (c) dynamic strength of materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code is evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary, but not sufficient, step that builds confidence in physics and engineering simulation codes. 
More complicated test cases, including physics models of greater sophistication or other physics regimes (e.g., energetic material response, magneto-hydrodynamics), would represent a scientifically desirable complement to the fundamental test cases discussed in this report. The authors believe that this document can be used to enhance the verification analyses undertaken at the DOE WP Laboratories and, thus, to improve the quality, credibility, and usefulness of the simulation codes that are analyzed with these problems.
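
    A standard calculation behind such verification benchmarks, assuming errors are measured against an exact solution on successively refined grids, is the observed order of convergence: if error ~ C·h^p, then p can be estimated from each pair of grids. The errors below are invented to illustrate a second-order scheme.

    ```python
    import math

    # Observed order of convergence p from errors on two grid levels,
    # assuming error ~ C * h^p and a fixed refinement ratio r = h_coarse/h_fine.

    def observed_order(e_coarse, e_fine, refinement_ratio=2.0):
        return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

    errors = [4.0e-2, 1.0e-2, 2.5e-3]   # illustrative errors on grids h, h/2, h/4
    orders = [observed_order(errors[i], errors[i + 1]) for i in range(2)]
    print([round(p, 3) for p in orders])
    ```

    An acceptance criterion of the "strong sense benchmark" kind can then be phrased quantitatively, e.g. that the observed order approach the scheme's formal order as the grid is refined.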

  14. Energy dynamics and current sheet structure in fluid and kinetic simulations of decaying magnetohydrodynamic turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makwana, K. D., E-mail: kirit.makwana@gmx.com; Cattaneo, F.; Zhdankin, V.

    Simulations of decaying magnetohydrodynamic (MHD) turbulence are performed with a fluid and a kinetic code. The initial condition is an ensemble of long-wavelength, counter-propagating, shear-Alfvén waves, which interact and rapidly generate strong MHD turbulence. The total energy is conserved and the rate of turbulent energy decay is very similar in both codes, although the fluid code has numerical dissipation, whereas the kinetic code has kinetic dissipation. The inertial range power spectrum index is similar in both codes. The fluid code shows a perpendicular wavenumber spectral slope of k⊥^(−1.3). The kinetic code shows a spectral slope of k⊥^(−1.5) for the smaller simulation domain, and k⊥^(−1.3) for the larger domain. We estimate that collisionless damping mechanisms in the kinetic code can account for the dissipation of the observed nonlinear energy cascade. Current sheets are geometrically characterized. Their lengths and widths are in good agreement between the two codes. The length scales linearly with the driving scale of the turbulence. In the fluid code, their thickness is determined by the grid resolution as there is no explicit diffusivity. In the kinetic code, their thickness is very close to the skin depth, irrespective of the grid resolution. This work shows that kinetic codes can reproduce the MHD inertial range dynamics at large scales, while at the same time capturing important kinetic physics at small scales.
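
    A spectral slope such as the k⊥^(−1.3) quoted above is typically measured by fitting a straight line to log E(k) versus log k over an assumed inertial range. The sketch below does that fit on a synthetic power-law spectrum, not on simulation data.

    ```python
    import math

    # Least-squares slope of log E(k) vs log k: for E(k) ~ k^s this
    # recovers the spectral index s.

    def fit_slope(ks, es):
        xs = [math.log(k) for k in ks]
        ys = [math.log(e) for e in es]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        return num / den

    ks = [2.0 ** i for i in range(1, 8)]     # wavenumbers in the fit range
    es = [k ** -1.3 for k in ks]             # synthetic E(k) ∝ k^-1.3
    print(round(fit_slope(ks, es), 3))
    ```

    In practice the sensitivity of the fitted index to the chosen fit range is one reason the smaller kinetic domain above can yield −1.5 while the larger one yields −1.3.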

  15. DYNECHARM++: a toolkit to simulate coherent interactions of high-energy charged particles in complex structures

    NASA Astrophysics Data System (ADS)

    Bagli, Enrico; Guidi, Vincenzo

    2013-08-01

    A toolkit for the simulation of coherent interactions between high-energy charged particles and complex crystal structures, called DYNECHARM++, has been developed. The code is written in C++, taking advantage of object-oriented programming methods. It is capable of evaluating the electrical characteristics of complex atomic structures and of simulating and tracking particle trajectories within them. A calculation method for the electrical characteristics based on their expansion in Fourier series has been adopted. Two different approaches to simulating the interaction have been adopted, relying on full integration of particle trajectories under the continuum potential approximation and on the definition of cross-sections of coherent processes. Finally, the code has proved able to reproduce experimental results and to simulate the interaction of charged particles with complex structures.

  16. RTM user's guide

    NASA Technical Reports Server (NTRS)

    Claus, Steven J.; Loos, Alfred C.

    1989-01-01

    RTM is a Fortran 77 computer code which simulates the infiltration of textile reinforcements and the kinetics of thermosetting polymer resin systems. The computer code is based on the process simulation model developed by the author. The compaction of dry, woven textile composites is simulated to describe the increase in fiber volume fraction with increasing compaction pressure. Infiltration is assumed to follow Darcy's law for Newtonian viscous fluids. The chemical changes which occur in the resin during processing are simulated with a thermo-kinetics model. The computer code is discussed in terms of the required input data, the output files, and some comments on how to interpret the results. An example problem is solved and a complete listing is included.
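
    For the Darcy-flow assumption above, a textbook one-dimensional result (not taken from the RTM code itself) is that under constant injection pressure the resin flow front advances as x(t) = sqrt(2·K·ΔP·t / (μ·φ)), with permeability K, pressure drop ΔP, resin viscosity μ and porosity φ. The sketch below evaluates this relation on invented, order-of-magnitude SI values.

    ```python
    import math

    # 1-D constant-pressure infiltration front from Darcy's law:
    #   x(t) = sqrt(2 * K * dP * t / (mu * phi))
    # All parameter values below are illustrative, not RTM inputs.

    def front_position(K, dP, mu, phi, t):
        return math.sqrt(2.0 * K * dP * t / (mu * phi))

    K   = 1e-10   # permeability, m^2
    dP  = 2e5     # injection pressure drop, Pa
    mu  = 0.2     # resin viscosity, Pa*s
    phi = 0.5     # porosity

    for t in (10.0, 40.0, 90.0):          # seconds
        print(t, round(front_position(K, dP, mu, phi, t), 4), "m")
    ```

    The square-root-of-time advance (front position tripling when time increases ninefold) is the characteristic signature of pressure-driven Darcy infiltration that such a process model reproduces.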

  17. Gamma irradiator dose mapping simulation using the MCNP code and benchmarking with dosimetry.

    PubMed

    Sohrabpour, M; Hassanzadeh, M; Shahriari, M; Sharifzadeh, M

    2002-10-01

    The Monte Carlo transport code MCNP has been applied to simulate the dose rate distribution in the IR-136 gamma irradiator system. Isodose curves, cumulative dose values, and system design data such as throughputs, over-dose ratios, and efficiencies have been simulated as functions of product density. Simulated isodose curves and cumulative dose values were compared with dosimetry values obtained using polymethyl methacrylate, Fricke, ethanol-chlorobenzene, and potassium dichromate dosimeters. The produced system design data were also found to agree quite favorably with the system manufacturer's data. MCNP has thus been found to be an effective transport code for handling various dose mapping exercises for gamma irradiators.

  18. Use, Assessment, and Improvement of the Loci-CHEM CFD Code for Simulation of Combustion in a Single Element GO2/GH2 Injector and Chamber

    NASA Technical Reports Server (NTRS)

    Westra, Douglas G.; Lin, Jeff; West, Jeff; Tucker, Kevin

    2006-01-01

    This document is a viewgraph presentation of a paper that documents a continuing effort at Marshall Space Flight Center (MSFC) to use, assess, and continually improve CFD codes to the point of material utility in the design of rocket engine combustion devices. This paper describes how the code is presently being used to simulate combustion in a single element combustion chamber with shear coaxial injectors using gaseous oxygen and gaseous hydrogen propellants. The ultimate purpose of the efforts documented is to assess and further improve the Loci-CHEM code and the implementation of it. Single element shear coaxial injectors were tested as part of the Staged Combustion Injector Technology (SCIT) program, where detailed chamber wall heat fluxes were measured. Data was taken over a range of chamber pressures for propellants injected at both ambient and elevated temperatures. Several test cases are simulated as part of the effort to demonstrate use of the Loci-CHEM CFD code and to enable us to make improvements in the code as needed. The simulations presented also include a grid independence study on hybrid grids. Several two-equation eddy viscosity low Reynolds number turbulence models are also evaluated as part of the study. All calculations are presented with a comparison to the experimental data. Weaknesses of the code relative to test data are discussed and continuing efforts to improve the code are presented.

  19. Reducing EnergyPlus Run Time For Code Compliance Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.

    2014-09-12

    Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code-baseline building models, together with mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation period using 4 weeks of hourly weather data (one per quarter) to an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used to determine the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
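
    One plausible way to turn four simulated weeks into an annual figure, sketched here as an assumption rather than the paper's documented method, is to scale each representative week's result by the number of weeks in its quarter. The energy values are invented, not EnergyPlus output.

    ```python
    # Annualizing one representative simulated week per quarter by
    # weighting each week by its quarter's length (13 weeks).
    # This weighting scheme and all numbers are illustrative assumptions.

    def annual_estimate(weekly_kwh, weeks_per_quarter=13):
        return sum(w * weeks_per_quarter for w in weekly_kwh)

    four_weeks = [950.0, 700.0, 1020.0, 880.0]   # one simulated week per quarter
    print(annual_estimate(four_weeks), "kWh")
    ```

    Since the compliance index is a ratio of the proposed model's result to the baseline model's result, a consistent scaling applied to both tends to cancel, which is consistent with the sub-1% differences the study reports.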

  20. MHD code using multi graphical processing units: SMAUG+

    NASA Astrophysics Data System (ADS)

    Gyenge, N.; Griffiths, M. K.; Erdélyi, R.

    2018-01-01

    This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems on multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques, and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with Brio-Wu shock tube simulations with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slow-downs, depending on the granularity and the communication overhead of particular parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid, large 2D MHD problems.
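
    The granularity trade-off behind those speed-ups and slow-downs can be estimated with back-of-envelope arithmetic: splitting an N × N grid over G GPUs shrinks per-GPU work like N²/G, while per-GPU halo (ghost-cell) traffic shrinks only like N/√G, so communication overhead grows relatively with G. The sketch below is a rough model, not SMAUG+ code; the halo width of 2 cells is an assumption.

    ```python
    import math

    # Relative communication overhead of a square domain decomposition:
    # per-GPU work ~ N*N/G interior cells, per-GPU halo traffic ~ four
    # tile edges of length N/sqrt(G) times an assumed halo width.

    def per_gpu_work(n, g):
        return n * n / g

    def per_gpu_halo(n, g, halo_width=2):
        side = n / math.sqrt(g)            # tile edge length
        return 4 * side * halo_width       # cells exchanged per step

    n = 8000                               # resolution from the abstract
    for g in (4, 16, 64, 100):             # GPU counts from the abstract
        ratio = per_gpu_halo(n, g) / per_gpu_work(n, g)
        print(g, round(ratio, 5))
    ```

    The ratio scales as 8·√G/N: large grids on modest GPU counts stay compute-bound (the speed-ups), while small grids on many GPUs become communication-bound (the slow-downs).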

  1. Impacts of the land-lake breeze of the Volta reservoir on the diurnal cycle of cloudiness and precipitation

    NASA Astrophysics Data System (ADS)

    Buchholz, Marcel; Fink, Andreas H.; Knippertz, Peter; Yorke, Charles

    2017-04-01

    Lake Volta in Ghana is the artificial lake with the largest surface area on Earth (8,502 km2). It was constructed in the early 1960s, with the lake being filled around 1966. Land-lake breezes and their effects on the diurnal cycle of local wind systems, cloudiness, and precipitation have been studied for several tropical lakes, among which studies on the effects of Lake Victoria in East Africa are among the best known. To date, no studies on the strength and effects of the land-lake breeze of the Volta reservoir are known to the authors. Using surface station data, a variety of satellite data on clouds and precipitation, and a convection-resolving regional model, the land-lake breeze and its impacts were studied for Lake Volta between 1998 and 2015. The observational data sets confirm a significant land-lake circulation. The only manned weather station operated by the Ghana Meteorological Service that is situated at the lake is Kete Krachi. Hourly observations for 2006 and 2014 show on several days a clearing of skies in the afternoon associated with a shift in the surface winds from southwest to southeast, the latter potentially indicating a lake breeze effect. Cloud occurrence frequencies derived from the CLARA-A2, MODIS, and CLAAS2 cloud masks, and the cloud physical properties from CLAAS2, clearly show the development of clouds at the lake breeze front in the course of the morning and around mid-day. This effect is most pronounced in March, when the difference between the surface temperatures of the lake and the desiccated land surface is also strongest. During the peak of the wet season in July, the lake breeze cloudiness is masked by high background cloudiness and is likely also weaker, owing to the strong southwesterly monsoon flow that tends to weaken the land-lake circulation. 
However, the precipitation signal was found to be strongest in July, most probably because in boreal fall, winter and spring the lake breeze cloudiness often fails to develop into afternoon showers or thunderstorms, or, if it does, the showers are short-lived with substantial below-cloud evaporation. Two cases in 2007 and 2014 were synoptically analyzed with weather charts and modeled using the COSMO model, the current regional operational weather forecasting model of the German Weather Service (DWD). The COSMO experiments with and without the lake were integrated for 48 hours at a convection-resolving resolution of 2.8 km. Initial and boundary conditions were taken from the ECMWF operational analysis. Model results confirm the development of the daytime lake breeze and suggest that the existence of the lake has substantially changed the local circulation, cloudiness and precipitation regime. Our results imply a significant impact of the artificial lake on the local climate and ecosystems that warrants further study.

  2. Monte Carlo Modeling of the Initial Radiation Emitted by a Nuclear Device in the National Capital Region

    DTIC Science & Technology

    2013-07-01

    also simulated in the models. Data was derived from calculations using the three-dimensional Monte Carlo radiation transport code MCNP (Monte Carlo N-Particle)...input deck for the MCNP, Monte Carlo N-Particle, radiation transport code. MCNP is a general-purpose code designed to simulate neutron, photon

  3. Implicit Coupling Approach for Simulation of Charring Carbon Ablators

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq; Gokcen, Tahir

    2013-01-01

    This study demonstrates that coupling of a material thermal response code and a flow solver with nonequilibrium gas/surface interaction for simulation of charring carbon ablators can be performed using an implicit approach. The material thermal response code used in this study is the three-dimensional version of the Fully Implicit Ablation and Thermal response program, which predicts charring material thermal response and shape change on hypersonic space vehicles. The flow code solves the reacting Navier-Stokes equations using the Data Parallel Line Relaxation method. Coupling between the material response and flow codes is performed by solving the surface mass balance in the flow solver and the surface energy balance in the material response code. Thus, the material surface recession is predicted in the flow code, and the surface temperature and pyrolysis gas injection rate are computed in the material response code. It is demonstrated that the time-lagged explicit approach is sufficient for simulations at low surface heating conditions, in which the surface ablation rate is not a strong function of the surface temperature. At elevated surface heating conditions, the implicit approach must be taken, because the carbon ablation rate becomes a stiff function of the surface temperature, and the explicit approach then results in severe numerical oscillations of the predicted surface temperature. Implicit coupling for simulation of arc-jet models is performed, and the predictions are compared with measured data. Implicit coupling for trajectory-based simulation of the Stardust fore-body heat shield is also conducted. The predicted stagnation point total recession is compared with that predicted using the chemical equilibrium surface assumption.
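    The stiffness argument above can be made concrete with a zero-dimensional sketch: if the ablation mass flux is an Arrhenius-like function of surface temperature, a Newton (implicit) iteration on the surface energy balance converges robustly where an explicit lag would oscillate. All constants and the ablation law below are illustrative assumptions, not values from the paper.

```python
import math

# Toy surface energy balance for a charring ablator (illustrative only;
# the constants and the Arrhenius-style ablation law are assumptions,
# not values from the paper).
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/m^2/K^4
EPS   = 0.9           # surface emissivity (assumed)
H_ABL = 2.0e7         # effective heat of ablation, J/kg (assumed)
A     = 5.0e3         # pre-exponential factor, kg/m^2/s (assumed)
T_ACT = 4.0e4         # activation temperature, K (assumed)

def mdot(T):
    """Ablation mass flux: a stiff, Arrhenius-like function of T."""
    return A * math.exp(-T_ACT / T)

def residual(T, q_in):
    """Surface energy balance: heating in, reradiation and ablation out."""
    return q_in - EPS * SIGMA * T**4 - mdot(T) * H_ABL

def solve_surface_temperature(q_in, T0=1500.0, tol=1e-6, max_iter=50):
    """Implicit (Newton) solve of the balance, mirroring the paper's
    point that a stiff mdot(T) demands implicit treatment."""
    T = T0
    for _ in range(max_iter):
        f = residual(T, q_in)
        # analytic derivative of the residual with respect to T
        dfdT = -4.0 * EPS * SIGMA * T**3 - mdot(T) * (T_ACT / T**2) * H_ABL
        dT = -f / dfdT
        T += dT
        if abs(dT) < tol:
            break
    return T

T_surf = solve_surface_temperature(q_in=2.0e6)   # 2 MW/m^2 heating (assumed)
```

    The Newton step uses the analytic Jacobian of the balance, which is what makes the stiff ablation term tractable; a time-lagged explicit update would evaluate `mdot` at the previous temperature and can overshoot repeatedly.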

  4. 2002 Blue Marble and Developments in HDTV Technology for Public Outreach

    NASA Technical Reports Server (NTRS)

    Hasler, Fritz; Starr, David OC. (Technical Monitor)

    2001-01-01

    Fritz Hasler (NASA/Goddard) will demonstrate the latest Blue Marble Digital Earth technology. We will fly in from space through Terra and Landsat 7 to 1 m Ikonos "Spy Satellite" data of Disney World and the Orlando Convention Center. You will see the complete global cloud-free and cloudy 500 m datasets from the EOS Terra satellite. Spectacular new animations from Terra, Landsat 7, and SeaWiFS will be presented. See also animations of the hurricanes and tropical storms of the 2001 season, as well as Floyd, Georges, and Mitch, from GOES and TRMM, supported by MM5 3-D nested numerical model results. See movies assembled using new low-cost HDTV nonlinear editing equipment that is revolutionizing the way we communicate scientific results. See climate change in action with global land and ocean productivity changes over the last 20 years. Remote sensing observations of ocean SST, height, winds, color, and El Niño from GOES, AVHRR, SSMI, and SeaWiFS are put in context with atmospheric and ocean simulations. Compare symmetrical equatorial eddies observed by GOES with the simulations.

  5. Uniform-burning matrix burner

    DOEpatents

    Bohn, Mark S.; Anselmo, Mark

    2001-01-01

    Computer simulation was used in the development of an inward-burning, radial matrix gas burner and heat pipe heat exchanger. The burner and exchanger can be used to heat a Stirling engine on cloudy days, when a solar dish, the normal source of heat, cannot be used. Geometrical requirements of the application forced the use of the inward-burning approach, which makes it difficult to achieve good flow distribution and air/fuel mixing. The present invention solves this problem by providing a plenum with the right properties: good flow distribution and good air/fuel mixing with minimum residence time. CFD simulations were also used to help design the primary heat exchanger needed for this application, which includes a plurality of pins emanating from the heat pipe. The system uses multiple inlet ports, an extended distance from the fuel inlet to the burner matrix, flow divider vanes, and a ring-shaped, porous grid to obtain a high-temperature, uniform-heat radial burner. Ideal applications include dish/Stirling engines, steam reforming of hydrocarbons, glass working, and any process requiring high-temperature heating of the outside of a cylindrical surface.

  6. Epoch of Reionization: An Investigation of the Semi-Analytic 21CMMC Code

    NASA Astrophysics Data System (ADS)

    Miller, Michelle

    2018-01-01

    After the Big Bang, the universe was filled with neutral hydrogen that began to cool and collapse into the first structures. These first stars and galaxies emitted radiation that eventually ionized all of the neutral hydrogen in the universe. 21CMMC is a semi-numerical code that takes simulated boxes of this ionized universe from another code called 21cmFAST. Mock measurements are taken from the simulated boxes in 21cmFAST. Those measurements are fed into 21CMMC and help us determine three major parameters of this simulated universe: virial temperature, mean free path, and ionization efficiency. My project tests the robustness of 21CMMC on universe simulations other than 21cmFAST, to see whether 21CMMC can properly reconstruct early-universe parameters given a mock “measurement” in the form of power spectra. We determine that while two of the three EoR parameters (virial temperature and ionization efficiency) can be reconstructed to some degree, the mean free path parameter in the code is the least robust. This suggests that further development of the 21CMMC code is needed.
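    As a rough illustration of the kind of inference described above, the toy below recovers three parameters of an assumed power-spectrum model from a mock measurement using a simple Metropolis-Hastings chain. The functional form, the parameter names, and all numerical values are invented stand-ins, not the actual 21cmFAST/21CMMC model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a 21-cm power spectrum; the (amp, slope, cutoff)
# parametrization loosely mimics having three parameters to recover,
# but is NOT the physical EoR model.
k = np.logspace(-1, 0, 15)

def model(theta):
    amp, slope, cutoff = theta
    return amp * k**(-slope) * np.exp(-k / cutoff)

truth = np.array([10.0, 1.5, 0.5])
sigma = 0.05 * model(truth)                 # assumed 5% measurement errors
data = model(truth) + rng.normal(0.0, sigma)  # the mock "measurement"

def log_like(theta):
    if np.any(theta <= 0):
        return -np.inf                      # flat positivity prior
    return -0.5 * np.sum(((data - model(theta)) / sigma) ** 2)

# Simple Metropolis-Hastings chain over the three parameters.
theta = np.array([8.0, 1.0, 0.8])           # deliberately wrong start
lp = log_like(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [0.2, 0.03, 0.02])
    lp_prop = log_like(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.asarray(chain)[5000:]            # discard burn-in

recovered = chain.mean(axis=0)              # posterior-mean estimate
```

    The point of the sketch is the workflow, not the physics: a mock measurement is generated from known parameters, and the sampler's ability (or failure) to re-find them is the robustness test the abstract describes.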

  7. Efficient Modeling of Laser-Plasma Accelerators with INF&RNO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benedetti, C.; Schroeder, C. B.; Esarey, E.

    2010-11-04

    The numerical modeling code INF&RNO (INtegrated Fluid and paRticle simulatioN cOde, pronounced 'inferno') is presented. INF&RNO is an efficient 2D cylindrical code for modeling the interaction of a short laser pulse with an underdense plasma. The code is based on an envelope model for the laser, while either a PIC or a fluid description can be used for the plasma. The effect of the laser pulse on the plasma is modeled with the time-averaged ponderomotive force. These and other features allow for a speedup of 2-4 orders of magnitude compared to standard full PIC simulations while still retaining physical fidelity. The code has been benchmarked against analytical solutions and 3D PIC simulations, and a set of validation tests together with a discussion of performance is presented.

  8. Comparing Turbulence Simulation with Experiment in DIII-D

    NASA Astrophysics Data System (ADS)

    Ross, D. W.; Bravenec, R. V.; Dorland, W.; Beer, M. A.; Hammett, G. W.; McKee, G. R.; Murakami, M.; Jackson, G. L.

    2000-10-01

    Gyrofluid simulations of DIII-D discharges with the GRYFFIN code [D. W. Ross et al., Transport Task Force Workshop, Burlington, VT (2000)] are compared with transport and fluctuation measurements. The evolution of confinement-improved discharges [G. R. McKee et al., Phys. Plasmas 7, 1870 (2000)] is studied at early times following impurity injection, when E×B rotational shear plays a small role. The ion thermal transport predicted by the code is consistent with the experimental values. Experimentally, changes in density profiles resulting from the injection of neon lead to a reduction in fluctuation levels and transport following the injection. This triggers subsequent changes in the shearing rate that further reduce the turbulence [M. Murakami et al., European Physical Society, Budapest (2000); M. Murakami et al., this meeting]. Estimated uncertainties in the plasma profiles, however, make it difficult to simulate these reductions with the code. These cases will also be studied with the GS2 gyrokinetic code.

  9. Program optimizations: The interplay between power, performance, and energy

    DOE PAGES

    Leon, Edgar A.; Karlin, Ian; Grant, Ryan E.; ...

    2016-05-16

    Practical considerations for future supercomputer designs will impose limits on both instantaneous power consumption and total energy consumption. Working within these constraints while providing the maximum possible performance, application developers will need to optimize their code for speed alongside power and energy concerns. This paper analyzes the effectiveness of several code optimizations including loop fusion, data structure transformations, and global allocations. A per-component measurement and analysis of different architectures is performed, enabling the examination of code optimizations on different compute subsystems. Using an explicit hydrodynamics proxy application from the U.S. Department of Energy, LULESH, we show how code optimizations impact different computational phases of the simulation. This provides insight for simulation developers into the best optimizations to use during particular simulation compute phases when optimizing code for future supercomputing platforms. Here, we examine and contrast both x86 and Blue Gene architectures with respect to these optimizations.
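    Of the optimizations named above, loop fusion is the easiest to sketch: two passes over the same arrays are merged into one. The kernel and variable names below are invented for illustration; in a compiled hydrodynamics code the benefit comes from reduced memory traffic, not from Python-level speed.

```python
# Toy loop-fusion example (illustrative; not a LULESH kernel).
GAMMA = 1.4  # ideal-gas ratio of specific heats (assumed)

def unfused(density, energy, dt):
    # Pass 1: advance internal energy.
    for i in range(len(energy)):
        energy[i] += dt * density[i]
    # Pass 2: compute pressure, re-reading both arrays from memory.
    pressure = [0.0] * len(energy)
    for i in range(len(energy)):
        pressure[i] = (GAMMA - 1.0) * density[i] * energy[i]
    return pressure

def fused(density, energy, dt):
    # Fused loop: each density/energy element is touched once per step,
    # halving the number of array traversals.
    pressure = [0.0] * len(energy)
    for i in range(len(energy)):
        energy[i] += dt * density[i]
        pressure[i] = (GAMMA - 1.0) * density[i] * energy[i]
    return pressure
```

    Fusion is legal here because the second loop only reads values the first loop has already finalized for the same index; whether it pays off on a given machine is exactly the kind of per-subsystem question the paper measures.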

  10. Secure web-based invocation of large-scale plasma simulation codes

    NASA Astrophysics Data System (ADS)

    Dimitrov, D. A.; Busby, R.; Exby, J.; Bruhwiler, D. L.; Cary, J. R.

    2004-12-01

    We present our design and initial implementation of a web-based system for running Particle-In-Cell (PIC) codes for plasma simulations, both in parallel and in serial, with automatic post-processing and generation of visual diagnostics.

  11. The SCEC/USGS dynamic earthquake rupture code verification exercise

    USGS Publications Warehouse

    Harris, R.A.; Barall, M.; Archuleta, R.; Dunham, E.; Aagaard, Brad T.; Ampuero, J.-P.; Bhat, H.; Cruz-Atienza, Victor M.; Dalguer, L.; Dawson, P.; Day, S.; Duan, B.; Ely, G.; Kaneko, Y.; Kase, Y.; Lapusta, N.; Liu, Yajing; Ma, S.; Oglesby, D.; Olsen, K.; Pitarka, A.; Song, S.; Templeton, E.

    2009-01-01

    Numerical simulations of earthquake rupture dynamics are now common, yet it has been difficult to test the validity of these simulations because there have been few field observations and no analytic solutions with which to compare the results. This paper describes the Southern California Earthquake Center/U.S. Geological Survey (SCEC/USGS) Dynamic Earthquake Rupture Code Verification Exercise, where codes that simulate spontaneous rupture dynamics in three dimensions are evaluated and the results produced by these codes are compared using Web-based tools. This is the first time that a broad and rigorous examination of numerous spontaneous rupture codes has been performed, a significant advance in this science. The automated process developed to attain this achievement provides for a future where testing of codes is easily accomplished. Scientists who use computer simulations to understand earthquakes utilize a range of techniques. Most of these assume that earthquakes are caused by slip at depth on faults in the Earth, but beyond that the strategies vary. Among the methods used in earthquake mechanics studies are kinematic approaches and dynamic approaches. The kinematic approach uses a computer code that prescribes the spatial and temporal evolution of slip on the causative fault (or faults). These types of simulations are very helpful, especially since they can be used in seismic data inversions to relate the ground motions recorded in the field to slip on the fault(s) at depth. However, these kinematic solutions generally provide no insight into the physics driving the fault slip or information about why the involved fault(s) slipped that much (or that little).
In other words, these kinematic solutions may lack information about the physical dynamics of earthquake rupture that will be most helpful in forecasting future events. To help address this issue, some researchers use computer codes to numerically simulate earthquakes and construct dynamic, spontaneous rupture (hereafter called "spontaneous rupture") solutions. For these types of numerical simulations, rather than prescribing the slip function at each location on the fault(s), only the friction constitutive properties and initial stress conditions are prescribed. The subsequent stresses and fault slip spontaneously evolve over time as part of the elastodynamic solution. Therefore, spontaneous rupture computer simulations of earthquakes allow us to include everything that we know, or think that we know, about earthquake dynamics and to test these ideas against earthquake observations.
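    A single-degree-of-freedom spring-slider captures the contrast drawn above in miniature: only a friction law (here linear slip weakening) and an initial stress are prescribed, and the slip history then evolves dynamically rather than being imposed. All parameter values are illustrative, not from the SCEC/USGS benchmark problems.

```python
# Minimal spring-slider analogue of a spontaneous-rupture calculation
# (illustrative parameters; not the SCEC/USGS benchmark setup).
M, K, SN = 1.0, 2.0, 1.0           # mass, spring stiffness, normal stress
MU_S, MU_D, DC = 0.7, 0.4, 0.1     # static/dynamic friction, weakening slip
TAU0 = 0.72                        # initial shear stress, just above static strength

def friction(slip):
    """Linear slip-weakening friction coefficient."""
    if slip >= DC:
        return MU_D
    return MU_S - (MU_S - MU_D) * slip / DC

def run(dt=1e-3, steps=10000):
    """Semi-implicit Euler integration of the slider; no back-slip."""
    slip, v = 0.0, 0.0
    for _ in range(steps):
        tau = TAU0 - K * slip          # shear stress after elastic unloading
        strength = friction(slip) * SN
        if v == 0.0 and tau <= strength:
            continue                   # fault is locked
        a = (tau - strength) / M
        v = max(v + a * dt, 0.0)       # clamp: no back-slip
        slip += v * dt
    return slip

final_slip = run()                     # total slip of the simulated event
```

    Nothing in the loop dictates how much slip occurs or when it stops; both emerge from the interplay of elastic unloading and the weakening friction law, which is the defining property of spontaneous-rupture modeling.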

  12. Blast and the Consequences on Traumatic Brain Injury-Multiscale Mechanical Modeling of Brain

    DTIC Science & Technology

    2011-02-17

    A formulation is implemented to model the air-blast simulation. LS-DYNA, an explicit FE code, has been employed to simulate this multi-material fluid-structure interaction problem with a 3-D head model. The report also includes a biomechanics study of influencing parameters for the brain under impact, among them the impact of cerebrospinal fluid.

  13. Computational simulation of progressive fracture in fiber composites

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1986-01-01

    Computational methods for simulating and predicting progressive fracture in fiber composite structures are presented. These methods are integrated into a computer code of modular form. The modules include composite mechanics, finite element analysis, and fracture criteria. The code is used to computationally simulate progressive fracture in composite laminates with and without defects. The simulation tracks the fracture progression in terms of modes initiating fracture, damage growth, and imminent global (catastrophic) laminate fracture.

  14. Acceleration of Monte Carlo simulation of photon migration in complex heterogeneous media using Intel many-integrated core architecture.

    PubMed

    Gorshkov, Anton V; Kirillin, Mikhail Yu

    2015-08-01

    Over two decades, the Monte Carlo technique has become a gold standard for the simulation of light propagation in turbid media, including biological tissues, and technological solutions provide further advances of this technique. The Intel Xeon Phi coprocessor is a new type of accelerator for highly parallel general-purpose computing, which allows execution of a wide range of applications without substantial code modification. We present a technical approach for porting our previously developed Monte Carlo (MC) code for simulation of light transport in tissues to the Intel Xeon Phi coprocessor. We show that employing the accelerator reduces the computational time of MC simulation and yields a speed-up comparable to that of a GPU. We demonstrate the performance of the developed code for simulation of light transport in the human head and determination of the measurement volume in near-infrared spectroscopy brain sensing.
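    The transport kernel being accelerated is, at its core, a weighted random walk, which is also what makes it embarrassingly parallel across photons. A minimal slab-geometry sketch is given below (isotropic scattering, weight-based absorption, no Russian roulette, so the weight cutoff introduces a small bias); the optical coefficients are illustrative, not the head-model values from the paper.

```python
import math
import random

# Minimal Monte Carlo photon migration in a homogeneous slab
# (illustrative coefficients; not the paper's head model).
MU_A, MU_S = 0.1, 10.0            # absorption / scattering coefficients, 1/mm
MU_T = MU_A + MU_S
THICKNESS = 5.0                   # slab thickness, mm

def simulate(n_photons, rng=random.Random(42)):
    """Return the transmitted fraction of launched photon weight."""
    transmitted = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0  # depth, direction cosine, photon weight
        while 0.0 <= z <= THICKNESS and w > 1e-4:
            # sample an exponentially distributed free path
            step = -math.log(1.0 - rng.random()) / MU_T
            z += uz * step
            if z > THICKNESS:
                transmitted += w          # photon exits the far side
                break
            if z < 0.0:
                break                     # escapes back out the entry surface
            w *= MU_S / MU_T              # deposit the absorbed weight share
            uz = 2.0 * rng.random() - 1.0  # isotropic new direction (1-D projection)
    return transmitted / n_photons

frac = simulate(5000)
```

    Since each photon history is independent, distributing the outer loop across the many hardware threads of a coprocessor such as the Xeon Phi requires little more than a parallel-for and per-thread random streams, which is why the porting effort described can succeed without substantial code modification.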

  15. Component Framework for Loosely Coupled High Performance Integrated Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Elwasif, W. R.; Bernholdt, D. E.; Shet, A. G.; Batchelor, D. B.; Foley, S.

    2010-11-01

    We present the design and implementation of a component-based simulation framework for the execution of coupled time-dependent plasma modeling codes. The Integrated Plasma Simulator (IPS) provides a flexible, lightweight component model that streamlines the integration of stand-alone codes into coupled simulations. Stand-alone codes are adapted to the IPS component interface specification using a thin wrapping layer implemented in the Python programming language. The framework provides services for inter-component method invocation; configuration, task, and data management; asynchronous event management; simulation monitoring; and checkpoint/restart capabilities. Services are invoked, as needed, by the computational components to coordinate the execution of different aspects of coupled simulations on Massively Parallel Processing (MPP) machines. A common plasma state layer serves as the foundation for inter-component, file-based data exchange. The IPS design principles, implementation details, and execution model will be presented, along with an overview of several use cases.
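    The thin-wrapper pattern described above can be sketched as follows: stand-alone physics codes are adapted to a small common interface, and a driver coordinates the coupled time loop through a shared plasma state. The class and method names are invented for illustration and are not the actual IPS API.

```python
# Schematic of a thin-wrapper component model (illustrative names only;
# not the real IPS interface).

class Component:
    """Interface that every wrapped physics code implements."""
    def init(self, state): ...
    def step(self, state, t): ...
    def finalize(self, state): ...

class EquilibriumSolver(Component):
    def step(self, state, t):
        # a real wrapper would launch the stand-alone executable here
        state["equilibrium"] = f"equilibrium at t={t}"

class RFHeating(Component):
    def step(self, state, t):
        # reads the shared plasma state written by the previous component
        state["rf_power"] = f"RF deposition using {state['equilibrium']}"

def run_coupled(components, n_steps):
    state = {}                        # stand-in for the file-based plasma state
    for c in components:
        c.init(state)
    for t in range(n_steps):
        for c in components:          # fixed call order each coupled step
            c.step(state, t)
    for c in components:
        c.finalize(state)
    return state

final = run_coupled([EquilibriumSolver(), RFHeating()], n_steps=3)
```

    The design choice this illustrates is that components never call each other directly: all coupling flows through the shared state, so a new physics code can be added by writing one small wrapper rather than modifying the other codes.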

  16. Monte Carlo simulations in Nuclear Medicine

    NASA Astrophysics Data System (ADS)

    Loudos, George K.

    2007-11-01

    Molecular imaging technologies provide unique abilities to localise signs of disease before symptoms appear, assist in drug testing, optimize and personalize therapy, and assess the efficacy of treatment regimes for different types of cancer. Monte Carlo simulation packages are an important tool for the optimal design of detector systems, and they have also demonstrated potential to improve image quality and acquisition protocols. Many general-purpose (MCNP, Geant4, etc.) and dedicated (SimSET, etc.) codes have been developed, aiming to provide accurate and fast results. Special emphasis will be given to the GATE toolkit. The GATE code, currently under development by the OpenGATE collaboration, is the most accurate and promising code for performing realistic simulations. The purpose of this article is to introduce the non-expert reader to the current status of MC simulations in nuclear medicine, briefly provide examples of currently simulated systems, and present future challenges, which include the simulation of clinical studies and dosimetry applications.

  17. Molecular dynamics and dynamic Monte-Carlo simulation of irradiation damage with focused ion beams

    NASA Astrophysics Data System (ADS)

    Ohya, Kaoru

    2017-03-01

    The focused ion beam (FIB) has become an important tool for micro- and nanostructuring of samples, including milling, deposition, and imaging. However, FIB processing damages the surface on the nanometer scale through implanted projectile ions and recoiled material atoms, so it is important to investigate each kind of damage quantitatively. We present a dynamic Monte-Carlo (MC) simulation code that simulates the morphological and compositional changes of a multilayered sample under ion irradiation, and a molecular dynamics (MD) simulation code that simulates dose-dependent changes in the backscattering-ion (BSI) and secondary-electron (SE) yields of a crystalline sample. Recent progress with these codes is also presented, including simulations of the surface morphology and Mo/Si layer intermixing in an EUV lithography mask irradiated with FIBs, and of the crystalline-orientation effect on BSI and SE yields that underlies channeling contrast in scanning ion microscopes.

  18. OSCAR a Matlab based optical FFT code

    NASA Astrophysics Data System (ADS)

    Degallaix, Jérôme

    2010-05-01

    Optical simulation software is an essential tool for designing and commissioning laser interferometers. This article introduces OSCAR, a Matlab-based FFT code, to the experimentalist community. OSCAR (Optical Simulation Containing Ansys Results) is used to simulate the steady-state electric fields in optical cavities with realistic mirrors. The main advantage of OSCAR over other similar packages is the simplicity of its code, which requires only a short time to master. As a result, even for a beginner it is relatively easy to modify OSCAR to suit other specific purposes. OSCAR includes an extensive manual and numerous detailed examples, such as simulating thermal aberration, calculating cavity eigenmodes and diffraction loss, and simulating flat-beam cavities and three-mirror ring cavities. An example is also provided of how to run OSCAR on the GPU of a modern graphics card instead of the CPU, making the simulation up to 20 times faster.
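    The FFT propagation underlying such codes can be sketched in a few lines via the angular-spectrum method: the field is Fourier transformed, each plane-wave component is multiplied by its propagation phase, and the result is transformed back. The grid size, wavelength, and waist below are arbitrary illustrative choices, and this is a generic sketch, not OSCAR's actual implementation.

```python
import numpy as np

# Angular-spectrum FFT propagation of a Gaussian beam (illustrative
# parameters; not OSCAR's implementation).
N, L = 256, 0.02              # grid points, transverse width (m)
WAVELENGTH = 1064e-9          # Nd:YAG wavelength (assumed)
K0 = 2 * np.pi / WAVELENGTH

x = np.linspace(-L / 2, L / 2, N)
X, Y = np.meshgrid(x, x)
w0 = 2e-3                     # beam waist (m), assumed
field = np.exp(-(X**2 + Y**2) / w0**2)   # Gaussian beam at its waist

def propagate(field, dz):
    """Advance the field a distance dz using the angular-spectrum method."""
    fx = np.fft.fftfreq(N, d=L / N)       # spatial frequencies, cycles/m
    FX, FY = np.meshgrid(fx, fx)
    # longitudinal wavenumber of each plane-wave component
    kz2 = K0**2 - (2 * np.pi * FX) ** 2 - (2 * np.pi * FY) ** 2
    phase = np.exp(1j * np.sqrt(np.maximum(kz2, 0.0)) * dz)
    return np.fft.ifft2(np.fft.fft2(field) * phase)

out = propagate(field, dz=10.0)           # propagate 10 m
```

    Because the propagator only applies unit-magnitude phases, total power is conserved, while the on-axis amplitude drops as the beam diffracts; codes like OSCAR iterate exactly this kind of step between mirror surfaces until the cavity field reaches steady state.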

  19. Sensitivity analysis of observed reflectivity to ice particle surface roughness using MISR satellite observations

    NASA Astrophysics Data System (ADS)

    Bell, A.; Hioki, S.; Wang, Y.; Yang, P.; Di Girolamo, L.

    2016-12-01

    Previous studies found that including ice particle surface roughness in forward light scattering calculations significantly reduces the differences between observed and simulated polarimetric and radiometric observations. While it is suggested that some degree of roughness is desirable, the appropriate degree of surface roughness to assume in operational cloud property retrievals, and the sensitivity of retrieval products to this assumption, remain uncertain. To resolve this ambiguity, we will present a sensitivity analysis of space-borne multi-angle observations of reflectivity to varying degrees of surface roughness. This process is twofold. First, sampling information and statistics from the Multi-angle Imaging SpectroRadiometer (MISR) aboard the Terra platform will be used to define the most common viewing geometries. Using these geometries, reflectivity will be simulated for multiple degrees of roughness using adding-doubling radiative transfer simulations. The sensitivity of simulated reflectivity to surface roughness can then be quantified, yielding a more robust retrieval system. Second, the sensitivity of the inverse problem will be analyzed. Spherical albedo values will be computed by feeding blocks of MISR data comprising cloudy pixels over ocean into the retrieval system, with assumed values of surface roughness. The sensitivity of spherical albedo to the inclusion of surface roughness can then be quantified, and the accuracy of retrieved parameters can be determined.

  20. Improving Subtropical Boundary Layer Cloudiness in the 2011 NCEP GFS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fletcher, J. K.; Bretherton, Christopher S.; Xiao, Heng

    2014-09-23

    The current operational version of the National Centers for Environmental Prediction (NCEP) Global Forecasting System (GFS) shows significant low cloud bias. These biases also appear in the Coupled Forecast System (CFS), which is developed from the GFS. These low cloud biases degrade seasonal and longer climate forecasts, particularly of short-wave cloud radiative forcing, and affect predicted sea surface temperature. Reducing this bias in the GFS will aid the development of future CFS versions and contributes to NCEP's goal of unified weather and climate modelling. Changes are made to the shallow convection and planetary boundary layer parameterisations to make them more consistent with current knowledge of these processes and to reduce the low cloud bias. These changes are tested in a single-column version of GFS and in global simulations with GFS coupled to a dynamical ocean model. In the single-column model, we focus on changing parameters that set the following: the strength of shallow cumulus lateral entrainment, the conversion of updraught liquid water to precipitation and grid-scale condensate, shallow cumulus cloud top, and the effect of shallow convection in stratocumulus environments. Results show that these changes improve the single-column simulations when compared to large eddy simulations, in particular through decreasing the precipitation efficiency of boundary layer clouds. These changes, combined with a few other model improvements, also reduce boundary layer cloud and albedo biases in global coupled simulations.
