Grogan, Brandon R
2010-05-01
This report presents a novel method for removing scattering effects from Nuclear Materials Identification System (NMIS) imaging. The NMIS uses fast neutron radiography to generate images of the internal structure of objects nonintrusively. If the correct attenuation through the object is measured, the positions and macroscopic cross sections of features inside the object can be determined. The cross sections can then be used to identify the materials, and a 3D map of the interior of the object can be reconstructed. Unfortunately, the measured attenuation values are always too low because scattered neutrons contribute to the unattenuated neutron signal. Previous efforts to remove the scatter from NMIS imaging have focused on minimizing the fraction of scattered neutrons that are misidentified as directly transmitted by electronically collimating and time tagging the source neutrons. The parameterized scatter removal algorithm (PSRA) approaches the problem from an entirely new direction by using Monte Carlo simulations to estimate the point scatter functions (PScFs) produced by neutrons scattering in the object. PScFs have been used to remove scattering successfully in other applications, but only with simple 2D detector models. This work represents the first time PScFs have ever been applied to an imaging detector geometry as complicated as the NMIS. By fitting the PScFs using a Gaussian function, they can be parameterized, and the proper scatter for a given problem can be removed without the need for rerunning the simulations each time. In order to model the PScFs, an entirely new method for simulating NMIS measurements was developed for this work. The development of the new models and the codes required to simulate them are presented in detail. The PSRA was used on several simulated and experimental measurements, and chi-squared goodness of fit tests were used to compare the corrected values to the ideal values that would be expected with no scattering. Using the
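The abstract above describes parameterizing Monte Carlo point scatter functions (PScFs) with a Gaussian fit and then subtracting the parameterized scatter from the measured signal. A minimal, hypothetical sketch of that idea (synthetic data, not the actual PSRA or NMIS geometry):

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy sketch, not the actual PSRA: model a point scatter function (PScF)
# as a Gaussian, fit its parameters, and subtract the parameterized
# scatter from a measured transmission profile.

def gaussian(x, amplitude, mu, sigma):
    return amplitude * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Synthetic stand-in for a Monte Carlo PScF: a Gaussian plus small noise.
x = np.linspace(-10.0, 10.0, 201)
rng = np.random.default_rng(0)
pscf_sim = gaussian(x, 0.2, 0.0, 3.0) + rng.normal(0.0, 0.002, x.size)

# Parameterize the simulated PScF with a Gaussian fit.
(amp, mu, sigma), _ = curve_fit(gaussian, x, pscf_sim, p0=(0.1, 0.0, 1.0))

# Remove the parameterized scatter from a "measured" profile, where
# measured = directly transmitted signal + scatter contribution.
direct = np.full_like(x, 1.0)
measured = direct + gaussian(x, amp, mu, sigma)
corrected = measured - gaussian(x, amp, mu, sigma)
```

Once the Gaussian parameters are tabulated for a family of problems, the correction can be applied without rerunning the simulations, which is the stated advantage of the approach.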
Comment on S-matrix parameterizations in NN-scattering
Mulders, P. J.
1981-08-01
The parameterization of the S-matrix used for the elastic part of the NN-scattering matrix in, for example, the Virginia Polytechnic Institute interactive nucleon-nucleon program SAID, is not general enough to parameterize any 2 by 2 submatrix of a unitary matrix.
NASA Astrophysics Data System (ADS)
Yang, Ping; Liou, Kuo-Nan; Bi, Lei; Liu, Chao; Yi, Bingqi; Baum, Bryan A.
2015-01-01
Presented is a review of the radiative properties of ice clouds from three perspectives: light scattering simulations, remote sensing applications, and broadband radiation parameterizations appropriate for numerical models. On the subject of light scattering simulations, several classical computational approaches are reviewed, including the conventional geometric-optics method and its improved forms, the finite-difference time domain technique, the pseudo-spectral time domain technique, the discrete dipole approximation method, and the T-matrix method, with specific applications to the computation of the single-scattering properties of individual ice crystals. The strengths and weaknesses associated with each approach are discussed. With reference to remote sensing, operational retrieval algorithms are reviewed for retrieving cloud optical depth and effective particle size based on solar or thermal infrared (IR) bands. To illustrate the performance of the current solar- and IR-based retrievals, two case studies are presented based on spaceborne observations. The need for a more realistic ice cloud optical model to obtain spectrally consistent retrievals is demonstrated. Furthermore, to complement ice cloud property studies based on passive radiometric measurements, the advantage of incorporating lidar and/or polarimetric measurements is discussed. The performance of ice cloud models based on the use of different ice habits to represent ice particles is illustrated by comparing model results with satellite observations. A summary is provided of a number of parameterization schemes for ice cloud radiative properties that were developed for application to broadband radiative transfer submodels within general circulation models (GCMs). The availability of the single-scattering properties of complex ice habits has led to more accurate radiation parameterizations. In conclusion, the importance of using nonspherical ice particle models in GCM simulations for climate
NASA Astrophysics Data System (ADS)
Alvarado, Matthew J.; Lonsdale, Chantelle R.; Macintyre, Helen L.; Bian, Huisheng; Chin, Mian; Ridley, David A.; Heald, Colette L.; Thornhill, Kenneth L.; Anderson, Bruce E.; Cubison, Michael J.; Jimenez, Jose L.; Kondo, Yutaka; Sahu, Lokesh K.; Dibb, Jack E.; Wang, Chien
2016-07-01
Accurate modeling of the scattering and absorption of ultraviolet and visible radiation by aerosols is essential for accurate simulations of atmospheric chemistry and climate. Closure studies using in situ measurements of aerosol scattering and absorption can be used to evaluate and improve models of aerosol optical properties without interference from model errors in aerosol emissions, transport, chemistry, or deposition rates. Here we evaluate the ability of four externally mixed, fixed size distribution parameterizations used in global models to simulate submicron aerosol scattering and absorption at three wavelengths using in situ data gathered during the 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) campaign. The four models are the NASA Global Modeling Initiative (GMI) Combo model, GEOS-Chem v9-02, the baseline configuration of a version of GEOS-Chem with online radiative transfer calculations (called GC-RT), and the Optical Properties of Aerosol and Clouds (OPAC v3.1) package. We also use the ARCTAS data to perform the first evaluation of the ability of the Aerosol Simulation Program (ASP v2.1) to simulate submicron aerosol scattering and absorption when in situ data on the aerosol size distribution are used, and examine the impact of different mixing rules for black carbon (BC) on the results. We find that the GMI model tends to overestimate submicron scattering and absorption at shorter wavelengths by 10-23 %, and that GMI has smaller absolute mean biases for submicron absorption than OPAC v3.1, GEOS-Chem v9-02, or GC-RT. However, the changes to the density and refractive index of BC in GC-RT improve the simulation of submicron aerosol absorption at all wavelengths relative to GEOS-Chem v9-02. Adding a variable size distribution, as in ASP v2.1, improves model performance for scattering but not for absorption, likely due to the assumption in ASP v2.1 that BC is present at a constant mass fraction
Mang, J.T.; Hjelm, R.P.; Skidmore, C.B.; Howe, P.M.
1996-07-01
High explosive materials used in the nuclear stockpile are composites of crystalline high explosives (HE) with binder materials, such as Estane. In such materials, there are naturally occurring density fluctuations (defects) due to cracks, internal (in the HE) and external (in the binder) voids, and other artifacts of preparation. Changes in such defects due to material aging can affect the response of explosives to shock, impact, and thermal loading. Modeling efforts are attempting to provide quantitative descriptions of explosive response from the lowest ignition thresholds to the development of full-blown detonations and explosions; however, adequate descriptions of these processes require accurate measurements of a number of structural parameters of the HE composite. Since different defects are believed to affect explosive sensitivity in different ways, it is necessary to quantitatively differentiate between defect types. The authors report here preliminary results of SANS measurements on surrogates for HE materials. The objective of these measurements was to develop methodologies using SANS techniques to parameterize internal void size distributions in a surrogate material, sugar, to simulate an HE used in the stockpile, HMX. Sugar is a natural choice as a surrogate material, as it has the same crystal structure as HMX, similar intragranular voids, and similar mechanical properties. It is used extensively as a mock material for explosives. Samples were used with two void size distributions: one with a sufficiently small mean particle size that only small occluded voids are present in significant concentrations, and one where the void sizes could be larger. By using methods in small-angle neutron scattering, they were able to isolate the scattering arising from particle-liquid interfaces and internal voids.
Laser scattering measurement for laser removal of graffiti
NASA Astrophysics Data System (ADS)
Tearasongsawat, Watcharawee; Kittiboonanan, Phumipat; Luengviriya, Chaiya; Ratanavis, Amarin
2015-07-01
In this contribution, a technical development of laser scattering measurement for laser removal of graffiti is reported. This study concentrates on the removal of graffiti from metal surfaces. Four colored graffiti paints were applied to stainless steel samples. Cleaning efficiency was evaluated with the laser scattering system. Laser removal of graffiti at oblique incidence was also attempted to examine the removal process under practical conditions. A Q-switched Nd:YAG laser operating at 1.06 microns with a repetition rate of 1 Hz was used to remove graffiti from the stainless steel samples. Laser fluences from 0.1 J/cm2 to 7 J/cm2 were investigated. The laser parameters required for effective removal were determined using the laser scattering system. These results support further development of online surface inspection for laser removal of graffiti.
NASA Astrophysics Data System (ADS)
Pokhrel, Rudra P.; Wagner, Nick L.; Langridge, Justin M.; Lack, Daniel A.; Jayarathne, Thilina; Stone, Elizabeth A.; Stockwell, Chelsea E.; Yokelson, Robert J.; Murphy, Shane M.
2016-08-01
Single-scattering albedo (SSA) and absorption Ångström exponent (AAE) are two critical parameters in determining the impact of absorbing aerosol on the Earth's radiative balance. Aerosols emitted by biomass burning represent a significant fraction of absorbing aerosol globally, but it remains difficult to accurately predict SSA and AAE for biomass burning aerosol. Black carbon (BC), brown carbon (BrC), and non-absorbing coatings all make substantial contributions to the absorption coefficient of biomass burning aerosol. SSA and AAE cannot be directly predicted based on fuel type because they depend strongly on burn conditions. It has been suggested that SSA can be effectively parameterized via the modified combustion efficiency (MCE) of a biomass burning event and that this would be useful because emission factors for CO and CO2, from which MCE can be calculated, are available for a large number of fuels. Here we demonstrate, with data from the FLAME-4 experiment, that for a wide variety of globally relevant biomass fuels, over a range of combustion conditions, parameterizations of SSA and AAE based on the elemental carbon (EC) to organic carbon (OC) mass ratio are quantitatively superior to parameterizations based on MCE. We show that the EC / OC ratio and the ratio of EC / (EC + OC) both have significantly better correlations with SSA than MCE. Furthermore, the relationship of EC / (EC + OC) with SSA is linear. These improved parameterizations are significant because, similar to MCE, emission factors for EC (or black carbon) and OC are available for a wide range of biomass fuels. Fitting SSA with MCE yields correlation coefficients (Pearson's r) of ~0.65 at the visible wavelengths of 405, 532, and 660 nm, while fitting SSA with EC / OC or EC / (EC + OC) yields a Pearson's r of 0.94-0.97 at these same wavelengths. The strong correlation coefficient at 405 nm (r = 0.97) suggests that parameterizations based on EC / OC or EC / (EC + OC) have good predictive
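The abstract above reports a linear relationship between SSA and EC / (EC + OC) with a high Pearson's r. A minimal sketch of such a fit, using synthetic illustrative numbers (not FLAME-4 data):

```python
import numpy as np

# Hedged sketch: fit SSA = a + b * EC/(EC+OC) by least squares and compute
# Pearson's r, as in the parameterization described above. The values
# below are synthetic illustrations only.
ec_frac = np.array([0.02, 0.05, 0.10, 0.20, 0.35, 0.50])   # EC/(EC+OC)
ssa_405 = np.array([0.96, 0.90, 0.81, 0.62, 0.38, 0.15])   # synthetic SSA

b, a = np.polyfit(ec_frac, ssa_405, 1)           # slope, intercept
r = np.corrcoef(ec_frac, ssa_405)[0, 1]          # Pearson's r

predicted = a + b * ec_frac                      # SSA predicted by the fit
```

A strongly negative slope reflects the physical picture: a higher elemental-carbon fraction means more absorbing soot and hence a lower single-scattering albedo.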
NASA Astrophysics Data System (ADS)
Collard, F.; Ardhuin, F.; Guitton, G.; Dumont, D.; Nicot, P.; Accenti, M.; Girard-Ardhuin, F.
2014-12-01
Sentinel-1A, launched by the European Space Agency in April 2014, will complete its full calibration and validation phase, including Level-2 products, early in 2015, but image quality is already good enough for scientific exploitation of observed wave modulations. The larger frequency bandwidth and new acquisition modes provide a much improved capability for imaging ocean waves in open water and in the ice compared to Envisat. Here we estimate wave spectra in the Arctic assuming a spatially uniform modulation transfer function where the backscatter over ice is homogeneous, matching the wave heights in the open ocean and in the ice at the ice edge. These wave properties are used to estimate attenuation scales for wavelengths longer than twice the radar image resolution. The estimated attenuations are compared to model results based on WAVEWATCH III, where attenuation and scattering use a combination of friction below the ice and scattering adapted from Dumont et al. (2011) and Williams et al. (2013).
Seed removal by scatter-hoarding rodents: the effects of tannin and nutrient concentration.
Wang, Bo; Yang, Xiaolan
2015-04-01
The mutualistic interaction between scatter-hoarding rodents and seed plants has a long co-evolutionary history. Plants are believed to have evolved traits that influence the foraging behavior of rodents, thus increasing the probability of seed removal and caching, which benefits the establishment of seedlings. Tannin and nutrient content in seeds are considered among the most essential factors in this plant-animal interaction. However, most previous studies used different species of plant seeds, rendering it difficult to tease apart the relative effect of each single nutrient on rodent foraging behavior due to confounding combinations of nutrient contents across seed species. Hence, to further explore how tannin and different nutritional traits of seeds affect scatter-hoarding rodent foraging preferences, we manipulated tannin, fat, protein, and starch content levels, and also seed size levels, by using an artificial seed system. Our results showed that both tannin and various nutrients significantly affected rodent foraging preferences, but these effects were also strongly modulated by seed size. In general, rodents preferred to remove seeds with less tannin. Fat addition could counteract the negative effect of tannin on seed removal by rodents, while the effect of protein addition was weaker. Starch by itself had no effect, but it interacted with tannin in a complex way. Our findings shed light on the effects of tannin and nutrient content on seed removal by scatter-hoarding rodents. We therefore believe that these and perhaps other seed traits should interactively influence this important plant-rodent interaction. However, how selection operates on seed traits to counterbalance these competing interests/factors merits further study.
NASA Astrophysics Data System (ADS)
Ryu, Y.; Kobayashi, H.; Welles, J.; Norman, J.
2011-12-01
Correct estimation of gap fraction is essential to quantify canopy architectural variables such as leaf area index and clumping index, which mainly control land-atmosphere interactions. However, gap fraction measurements from optical sensors are contaminated by radiation scattered by the canopy and ground surface. In this study, we propose a simple invertible bidirectional transmission model to remove scattering effects from gap fraction measurements. The model shows that 1) the scattering factor is highest where leaf area index is 1-2 in a non-clumped canopy, 2) the relative scattering factor (scattering factor/measured gap fraction) increases with leaf area index, 3) a bright land surface (e.g., snow or bright soil) can contribute a significant scattering factor, and 4) the scattering factor is not marginal even under highly diffuse sky conditions. By applying the model to LAI-2200 data collected in an open savanna ecosystem, we find that the scattering factor causes significant underestimation of leaf area index (25%) and significant overestimation of clumping index (6%). The results highlight that some LAI-2000-based LAI estimates from around the world may be underestimated, particularly in highly clumped broad-leaf canopies. Fortunately, the importance of scattering can be assessed with software from LI-COR, Inc., which will incorporate the scattering model from this study in a post-processing mode after data have been collected by an LAI-2000 or LAI-2200.
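The mechanism described above, scattered light inflating the measured gap fraction and biasing the inverted LAI low, can be illustrated with a simple Beer-Lambert gap-fraction model (an assumed textbook form, not the paper's full bidirectional model):

```python
import math

# Illustrative sketch: gap fraction P = exp(-G * L / cos(theta)) for a
# random canopy with projection coefficient G and leaf area index L.
# An additive scattering contribution (assumed value) inflates the
# measured gap fraction, so the inverted LAI is underestimated.
def gap_fraction(lai, g=0.5, theta_deg=30.0):
    return math.exp(-g * lai / math.cos(math.radians(theta_deg)))

def invert_lai(p, g=0.5, theta_deg=30.0):
    return -math.log(p) * math.cos(math.radians(theta_deg)) / g

true_lai = 3.0
p_true = gap_fraction(true_lai)
scatter = 0.02                       # assumed additive scattering factor
p_measured = p_true + scatter        # contaminated measurement

lai_biased = invert_lai(p_measured)            # biased low
lai_corrected = invert_lai(p_measured - scatter)  # scatter removed first
```

Subtracting the modeled scattering factor before inversion recovers the true LAI, which is the role the paper's invertible transmission model plays for LAI-2000/LAI-2200 data.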
Parameterizing the Raindrop Size Distribution
NASA Technical Reports Server (NTRS)
Haddad, Ziad S.; Durden, Stephen L.; Im, Eastwood
1996-01-01
This paper addresses the problem of finding a parametric form for the raindrop size distribution (DSD) that (1) is an appropriate model for tropical rainfall, and (2) involves statistically independent parameters. Such a parameterization is derived in this paper. One of the resulting three "canonical" parameters turns out to vary relatively little, thus making the parameterization particularly useful for remote sensing applications. In fact, a new set of drop-size-distribution-based Z-R and k-R relations is obtained. Only slightly more complex than power laws, they are very good approximations to the exact radar relations one would obtain using Mie scattering. The coefficients of the new relations are directly related to the shape parameters of the particular DSD that one starts with. Perhaps most important, since the coefficients are independent of the rain rate itself, the relations are ideally suited for rain retrieval algorithms.
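For context, the simple power-law form that the paper's DSD-based relations slightly generalize looks like this (the coefficients below are the classic Marshall-Palmer values, used purely for illustration, not the paper's derived coefficients):

```python
# Sketch of a power-law Z-R relation and its inversion for rain retrieval.
# a and b are the classic Marshall-Palmer values (Z = 200 R^1.6),
# shown only as an illustration of the functional form.
def z_from_rain_rate(rain_rate_mm_h, a=200.0, b=1.6):
    """Radar reflectivity factor Z (mm^6 m^-3) from rain rate R (mm/h)."""
    return a * rain_rate_mm_h ** b

def rain_rate_from_z(z, a=200.0, b=1.6):
    """Invert the power law to retrieve R from Z."""
    return (z / a) ** (1.0 / b)

z = z_from_rain_rate(10.0)     # reflectivity for a 10 mm/h rain rate
r = rain_rate_from_z(z)        # retrieval recovers the rain rate
```

In the paper's scheme the analogous coefficients are tied to the DSD shape parameters and are independent of rain rate, which is what makes the relations convenient for retrieval algorithms.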
Radiation properties and emissivity parameterization of high level thin clouds
NASA Technical Reports Server (NTRS)
Wu, M.-L. C.
1984-01-01
To parameterize emissivity of clouds at 11 microns, a study has been made in an effort to understand the radiation field of thin clouds. The contributions to the intensity and flux from different sources and through different physical processes are calculated by using the method of successive orders of scattering. The effective emissivity of thin clouds is decomposed into the effective absorption emissivity, effective scattering emissivity, and effective reflection emissivity. The effective absorption emissivity depends on the absorption and emission of the cloud; it is parameterized in terms of optical thickness. The effective scattering emissivity depends on the scattering properties of the cloud; it is parameterized in terms of optical thickness and single scattering albedo. The effective reflection emissivity follows the similarity relation as in the near infrared cases. This is parameterized in terms of the similarity parameter and optical thickness, as well as the temperature difference between the cloud and ground.
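The simplest common form of an emissivity-versus-optical-thickness parameterization can be sketched as follows (an assumed generic form with a diffusivity-factor coefficient; the paper's full scheme adds separate scattering and reflection emissivity terms):

```python
import math

# Minimal sketch of an effective-emissivity parameterization in terms of
# optical thickness tau: epsilon_eff = 1 - exp(-beta * tau). The value
# beta = 1.66 is the standard longwave diffusivity factor, used here as
# an assumed illustration, not the paper's fitted coefficient.
def effective_emissivity(tau, beta=1.66):
    return 1.0 - math.exp(-beta * tau)

eps_thin = effective_emissivity(0.1)    # thin cloud: low emissivity
eps_thick = effective_emissivity(5.0)   # thick cloud: near blackbody
```

Emissivity rises monotonically with optical thickness and saturates toward 1, which is why thin clouds require the additional scattering and reflection terms the paper parameterizes.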
NASA Astrophysics Data System (ADS)
Rana, R.; Jain, A.; Shankar, A.; Bednarek, D. R.; Rudin, S.
2016-03-01
In radiography, one of the best methods to eliminate image-degrading scatter radiation is the use of anti-scatter grids. However, with high-resolution dynamic imaging detectors, stationary anti-scatter grids can leave grid-line shadows and moiré patterns on the image, depending upon the line density of the grid and the sampling frequency of the x-ray detector. Such artifacts degrade the image quality and may mask small but important details such as small vessels and interventional device features. The appearance of these artifacts becomes increasingly severe as the detector spatial resolution is improved. We have previously demonstrated that, to remove these artifacts by dividing out a reference grid image, one must first subtract the residual scatter that penetrates the grid; however, for objects with anatomic structure, scatter varies throughout the FOV and a spatially differing amount of scatter must be subtracted. In this study, a standard stationary Smit-Rontgen x-ray grid (line density: 70 lines/cm; grid ratio: 13:1) was used with a high-resolution CMOS detector, the Dexela 1207 (pixel size: 75 μm), to image anthropomorphic head phantoms. For a 15 × 15 cm FOV, scatter profiles of the anthropomorphic head phantoms were estimated and then iteratively modified to minimize the structured noise due to the varying grid-line artifacts across the FOV. Images of the anthropomorphic head phantoms taken with the grid, before and after the corrections, were compared, demonstrating almost total elimination of the artifact over the full FOV. Hence, with proper computational tools, anti-scatter grid artifacts can be corrected, even during dynamic sequences.
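The correction described above, subtract the residual scatter and then divide out a reference grid image, can be sketched on a one-dimensional toy profile (all arrays synthetic; the real method estimates a spatially varying scatter field):

```python
import numpy as np

# Toy sketch of grid-artifact removal: subtract an estimated scatter
# field, then divide out a flat-field reference grid image to remove
# grid-line shadows. Synthetic 1-D profiles stand in for images.
grid_pattern = 0.8 + 0.2 * np.cos(np.linspace(0.0, 40.0 * np.pi, 512))
primary = np.full(512, 100.0)        # object primary signal (flat toy case)
scatter = np.full(512, 20.0)         # residual scatter penetrating the grid

measured = primary * grid_pattern + scatter   # object image: grid + scatter
reference = 1.0 * grid_pattern                # flat-field grid reference

corrected = (measured - scatter) / reference  # grid shadows divided out
```

If the scatter is not subtracted first, the division leaves residual modulation wherever the scatter level differs from the reference acquisition, which is exactly the failure mode the iterative scatter-profile refinement addresses.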
Andresen, Kurt; Jimenez-Useche, Isabel; Howell, Steven C; Yuan, Chongli; Qiu, Xiangyun
2013-01-01
Using a combination of small-angle X-ray scattering (SAXS) and fluorescence resonance energy transfer (FRET) measurements we have determined the role of the H3 and H4 histone tails, independently, in stabilizing the nucleosome DNA terminal ends from unwrapping from the nucleosome core. We have performed solution scattering experiments on recombinant wild-type, H3 and H4 tail-removed mutants and fit all scattering data with predictions from PDB models and compared these experiments to complementary DNA-end FRET experiments. Based on these combined SAXS and FRET studies, we find that while all nucleosomes exhibited DNA unwrapping, the extent of this unwrapping is increased for nucleosomes with the H3 tails removed but, surprisingly, decreased in nucleosomes with the H4 tails removed. Studies of salt concentration effects show a minimum amount of DNA unwrapping for all complexes around 50-100mM of monovalent ions. These data exhibit opposite roles for the positively-charged nucleosome tails, with the ability to decrease access (in the case of the H3 histone) or increase access (in the case of the H4 histone) to the DNA surrounding the nucleosome. In the range of salt concentrations studied (0-200mM KCl), the data point to the H4 tail-removed mutant at physiological (50-100mM) monovalent salt concentration as the mononucleosome with the least amount of DNA unwrapping. PMID:24265699
Andresen, Kurt; Jimenez-Useche, Isabel; Howell, Steven C.; Yuan, Chongli; Qiu, Xiangyun
2013-01-01
Using a combination of small-angle X-ray scattering (SAXS) and fluorescence resonance energy transfer (FRET) measurements we have determined the role of the H3 and H4 histone tails, independently, in stabilizing the nucleosome DNA terminal ends from unwrapping from the nucleosome core. We have performed solution scattering experiments on recombinant wild-type, H3 and H4 tail-removed mutants and fit all scattering data with predictions from PDB models and compared these experiments to complementary DNA-end FRET experiments. Based on these combined SAXS and FRET studies, we find that while all nucleosomes exhibited DNA unwrapping, the extent of this unwrapping is increased for nucleosomes with the H3 tails removed but, surprisingly, decreased in nucleosomes with the H4 tails removed. Studies of salt concentration effects show a minimum amount of DNA unwrapping for all complexes around 50-100mM of monovalent ions. These data exhibit opposite roles for the positively-charged nucleosome tails, with the ability to decrease access (in the case of the H3 histone) or increase access (in the case of the H4 histone) to the DNA surrounding the nucleosome. In the range of salt concentrations studied (0-200mM KCl), the data point to the H4 tail-removed mutant at physiological (50-100mM) monovalent salt concentration as the mononucleosome with the least amount of DNA unwrapping. PMID:24265699
The Influence of Microphysical Cloud Parameterization on Microwave Brightness Temperatures
NASA Technical Reports Server (NTRS)
Skofronick-Jackson, Gail M.; Gasiewski, Albin J.; Wang, James R.; Zukor, Dorothy J. (Technical Monitor)
2000-01-01
The microphysical parameterization of clouds and rain cells plays a central role in atmospheric forward radiative transfer models used in calculating passive microwave brightness temperatures. The absorption and scattering properties of a hydrometeor-laden atmosphere are governed by particle phase, size distribution, aggregate density, shape, and dielectric constant. This study identifies the sensitivity of brightness temperatures with respect to the microphysical cloud parameterization. Cloud parameterizations for wideband (6-410 GHz) observations of baseline brightness temperatures were studied for four evolutionary stages of an oceanic convective storm using a five-phase hydrometeor model in a planar-stratified scattering-based radiative transfer model. Five other microphysical cloud parameterizations were compared to the baseline calculations to evaluate brightness temperature sensitivity to gross changes in the hydrometeor size distributions and the ice-air-water ratios in the frozen or partly frozen phase. The comparison shows that enlarging the raindrop size or adding water to the partly frozen hydrometeor mix warms brightness temperatures by up to .55 K at 6 GHz. The cooling signature caused by ice scattering intensifies with increasing ice concentrations and at higher frequencies. An additional comparison to measured Convection and Moisture Experiment (CAMEX-3) brightness temperatures shows that, in general, all but two parameterizations produce calculated T(sub B)'s that fall within the observed clear-air minima and maxima. The exceptions are parameterizations that enhance the scattering characteristics of frozen hydrometeors.
Stochastic Convection Parameterizations
NASA Technical Reports Server (NTRS)
Teixeira, Joao; Reynolds, Carolyn; Suselj, Kay; Matheou, Georgios
2012-01-01
Keywords: computational fluid dynamics, radiation, clouds, turbulence, convection, gravity waves, surface interaction, radiation interaction, cloud and aerosol microphysics, complexity (vegetation, biogeochemistry), radiation versus turbulence/convection, stochastic approach, non-linearities, Monte Carlo, high resolutions, large-eddy simulations, cloud structure, plumes, saturation in tropics, forecasting, parameterizations, stochastic, radiation-cloud interaction, hurricane forecasts
Removal of scattering centers in CBO crystals by the vapor transport equilibration process
NASA Astrophysics Data System (ADS)
Rajesh, D.; Eiro, T.; Yoshimura, M.; Mori, Y.; Jayavel, R.; Sasaki, T.
2008-04-01
Large-size cesium triborate (CsB3O5, CBO) crystals were grown from self-flux solutions by top-seeded solution growth. The crystals have numerous scattering centers that were found to depend on the temperature from which the crystals were grown. Weight loss measurements revealed that more weight loss occurred at the growth temperature for the 74 mol% B2O3 composition. During the cooling process (after growth), there is a possibility of the crystal shifting to an off-stoichiometric composition because of cesium out-diffusion. To bring the crystals back to near-stoichiometric composition, the vapor transport equilibration (VTE) process (a post-growth heat treatment) was carried out, and the scattering centers were reduced. The cesium atmosphere used in VTE processing was very important for the diffusion of cesium into the crystal and for bringing the crystals to near-stoichiometric composition.
Parameterization of longwave optical properties for water clouds
NASA Astrophysics Data System (ADS)
Wang, H. Q.; Zhao, G. X.
2002-02-01
Based on relationships between cloud microphysical and optical properties, three different parameterization schemes for narrowband and broadband optical properties of water clouds in the longwave region are presented. The effects of the different parameterization schemes, and of the number of broad bands used, on cloud radiative properties have been investigated. The effect of scattering by cloud drops on longwave radiation fluxes and cooling rates in cloudy atmospheres has also been analyzed.
Optimization of parameterized lightpipes
NASA Astrophysics Data System (ADS)
Koshel, R. John
2007-01-01
Parameterization via the bend locus curve allows optimization of single-spherical-bend lightpipes. It takes into account the bend radii, the bend ratio, allowable volume, thickness, and other terms. Parameterization of the lightpipe allows the inclusion of a constrained optimizer to maximize performance of the lightpipe. The simplex method is used for optimization. The standard and optimal simplex methods are used to maximize the standard Lambertian transmission of the lightpipe. A second case presents analogous results when the ray-sample weighted, peak-to-average irradiance uniformity is included with the static Lambertian transmission. These results are compared to a study of the constrained merit space. Results show that both optimizers can locate the optimal solution, but the optimal simplex method accomplishes such with a reduced number of ray-trace evaluations.
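The optimization loop described above, a simplex (Nelder-Mead) search over a parameterized merit function, can be sketched with SciPy; the quadratic objective below is an assumed stand-in for the lightpipe transmission merit, not an optical model:

```python
import numpy as np
from scipy.optimize import minimize

# Toy sketch of simplex (Nelder-Mead) optimization over lightpipe-like
# parameters (bend radius, bend ratio). The merit function here is a
# hypothetical quadratic with a known optimum at (2.0, 0.5), standing in
# for a ray-trace-based transmission/uniformity merit.
def merit(params):
    bend_radius, bend_ratio = params
    return (bend_radius - 2.0) ** 2 + (bend_ratio - 0.5) ** 2

result = minimize(merit, x0=np.array([1.0, 1.0]), method="Nelder-Mead")
best_radius, best_ratio = result.x
```

In the real application each merit evaluation is a ray trace, so the paper's comparison of standard versus optimal simplex variants is largely about minimizing the number of such evaluations.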
Parameterizing the Deceleration Parameter
NASA Astrophysics Data System (ADS)
Pavón, D.; Duran, I.; Del Campo, S.; Herrera, R.
2015-01-01
We propose and constrain with the latest observational data three parameterizations of the deceleration parameter, valid from the matter era to the far future. They are well behaved and do not diverge at any redshift. On the other hand, they are model independent in the sense that in constructing them the only assumption made was that the Universe is homogeneous and isotropic at large scales.
Yoon, Y; Park, M; Kim, H; Kim, K; Kim, J; Morishita, J
2015-06-15
Purpose: This study aims to assess the feasibility of a novel cesium iodide (CsI)-based flat-panel detector (FPD) for removing scatter radiation in diagnostic radiology. Methods: The indirect FPD comprises three layers: a substrate, a scintillation layer, and a thin-film-transistor (TFT) layer. The TFT layer has a matrix structure with pixels. The TFT layer contains ineffective regions, such as the voltage and data lines; we therefore devised a new FPD system with net-like lead in the substrate layer, matched to the ineffective area, to block scatter radiation so that only primary X-rays reach the effective area. To evaluate the performance of this new FPD system, we conducted a Monte Carlo simulation using MCNPX 2.6.0. Scatter fractions (SFs) were acquired with no grid, with a parallel grid (8:1 grid ratio), and with the new system, and the performances were compared. Two systems with different lead thicknesses in the substrate layer (10 and 20 μm) were simulated. Additionally, we examined the effects of different pixel sizes (153×153 and 163×163 μm) on image quality while keeping the effective pixel area constant (143×143 μm). Results: With 10 μm lead, the SFs of the new system (~11%) were lower than those of the other systems (~27% with no grid, ~16% with the parallel grid) at 40 kV. However, as the tube voltage increased, the SF of the new system (~19%) exceeded that of the parallel grid (~18%) at 120 kV. With 20 μm lead, the SFs of the new system were lower than those of the other systems over the entire tube-voltage range (40-120 kV). Conclusion: The novel CsI-based FPD system for removing scatter radiation is feasible for improving image contrast but must be optimized with respect to lead thickness, considering the system's purpose and the tube-voltage range used in diagnostic radiology. This study was supported by a grant (K1422651) from the Institute of Health Science, Korea University.
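The scatter fraction compared above is the scattered share of detected events, SF = S/(S + P). A toy Monte Carlo sketch (not MCNPX; the interaction probabilities are invented) shows how SF falls when a grid or lead layer rejects more of the scattered photons:

```python
import random

def scatter_fraction(n_photons=100_000, p_scatter=0.3, p_scatter_detected=0.4,
                     seed=42):
    """Toy tally of the scatter fraction SF = S / (S + P).

    p_scatter: probability a photon scatters in the object (hypothetical).
    p_scatter_detected: probability a scattered photon still reaches the
    detector (hypothetical; a grid or net-like lead layer lowers this).
    """
    rng = random.Random(seed)
    primary = scattered = 0
    for _ in range(n_photons):
        if rng.random() < p_scatter:
            if rng.random() < p_scatter_detected:
                scattered += 1
        else:
            primary += 1
    return scattered / (scattered + primary)

sf_no_grid = scatter_fraction(p_scatter_detected=0.4)
sf_with_grid = scatter_fraction(p_scatter_detected=0.1)  # rejects more scatter
```

With these illustrative probabilities the no-rejection SF is about 15% and drops to a few percent when most scatter is blocked, mirroring the qualitative comparison in the abstract.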
Determinants of seed removal distance by scatter-hoarding rodents in deciduous forests.
Moore, Jeffrey E; McEuen, Amy B; Swihart, Robert K; Contreras, Thomas A; Steele, Michael A
2007-10-01
Scatter-hoarding rodents should space food caches to maximize cache recovery rate (to minimize loss to pilferers) relative to the energetic cost of carrying food items greater distances. Optimization models of cache spacing make two predictions. First, spacing of caches should be greater for food items with greater energy content. Second, the mean distance between caches should increase with food abundance. However, the latter prediction fails to account for the effect of food abundance on the behavior of potential pilferers or on the ability of caching individuals to acquire food by means other than recovering their own caches. When these factors are considered, shorter cache distances may be predicted under higher food abundance. We predicted that seed-caching distances would be greater for food items of higher energy content and during lower ambient food abundance, and that the effect of seed type on cache-distance variation would be lower during higher food abundance. We recorded distances moved for 8636 seeds of five seed types at 15 locations in three forested sites in Pennsylvania, USA, and 29 forest fragments in Indiana, USA, across five different years. Seed production was poor in three years and high in two years. Consistent with previous studies, seeds with greater energy content were moved farther than less profitable food items. Seeds were dispersed shorter distances in seed-rich years than in seed-poor years, contrary to predictions of conventional models. Interactions were important, with seed-type effects more evident in seed-poor years. These results suggest that, when food is superabundant, optimal cache distances are more strongly determined by minimizing the energy cost of caching than by minimizing pilfering rates, and that cache loss rates may be more strongly density dependent in times of low seed abundance.
Parameterization of the scavenging coefficient for particle scavenging by drops
NASA Astrophysics Data System (ADS)
Fredericks, Steven; Saylor, J. R.
2014-11-01
The removal of particles by drops occurs in many environmentally relevant scenarios such as particle fallout from rain, as well as in many industrial applications such as sprays for dust control in mines. In applications like these, the ability of a drop to scavenge a particle is quantified by the scavenging coefficient, E, which is the fraction of particles removed. Though the physics controlling particle scavenging by drops suggests that E is controlled by several dimensionless groups, E is typically correlated to just the Stokes number. A survey of published experimental data shows significant scatter in plots of E versus the Stokes number, occasionally exceeding three orders of magnitude. There is also a large discrepancy between the published theories for E. A parameterization study was conducted to ascertain if and how inclusion of other dimensionless groups could better collapse the extant data for E, and the results of that study are presented in this talk. Brief mention will also be made of recent experiments by the authors in which E was measured for a liquid drop suspended in an ultrasonic standing wave field, where the drop diameter and gas velocity can be varied independently, unlike in more typical experiments where these quantities are coupled.
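The Stokes number referred to above can be computed from the particle relaxation time; conventions differ on the flow length scale (drop radius versus diameter), which is itself one source of scatter between published correlations. A sketch with illustrative values:

```python
def stokes_number(d_p, rho_p, U, D_drop, mu_gas=1.8e-5):
    """Impaction Stokes number for particle scavenging by a falling drop.

    Uses the particle relaxation time tau = rho_p * d_p**2 / (18 * mu)
    and the drop radius D_drop/2 as the flow length scale; some
    correlations use the diameter instead, so check which convention a
    given fit assumes before comparing values of E.
    SI units throughout; mu_gas defaults to air at ~20 C.
    """
    tau = rho_p * d_p ** 2 / (18.0 * mu_gas)  # particle relaxation time [s]
    return tau * U / (D_drop / 2.0)

# 1-micron unit-density particle, 2 mm drop falling at ~6.5 m/s:
st = stokes_number(d_p=1e-6, rho_p=1000.0, U=6.5, D_drop=2e-3)
```

A hypothetical power-law fit E = a * St**b could then be regressed against data; the point of the study above is that St alone does not collapse the measurements.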
[Characteristics and Parameterization for Atmospheric Extinction Coefficient in Beijing].
Chen, Yi-na; Zhao, Pu-sheng; He, Di; Dong, Fan; Zhao, Xiu-juan; Zhang, Xiao-ling
2015-10-01
To study the characteristics of the atmospheric extinction coefficient in Beijing, systematic measurements of atmospheric visibility, PM2.5 concentration, scattering coefficient, black carbon, reactive gases, and meteorological parameters were carried out from 2013 to 2014. Based on these data, we compared several published fitting schemes for the aerosol light-scattering enhancement factor f(RH) and discussed the characteristics of, and the key factors influencing, the atmospheric extinction coefficient. A set of parameterization models of the atmospheric extinction coefficient for different seasons and pollution levels was then established. The results showed that aerosol scattering accounted for more than 94% of total light extinction. In summer and autumn, aerosol hygroscopic growth caused by high relative humidity increased the aerosol scattering coefficient by 70 to 80 percent. The parameterization models reflect the influence of aerosol and relative humidity on ambient light extinction and describe the seasonal variation of aerosol light-extinction ability. PMID:26841588
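A parameterization of this type can be sketched as the sum of humidity-enhanced aerosol scattering, aerosol absorption, and gas terms. The enhancement curve and the value of gamma below are illustrative placeholders, not the paper's fitted scheme:

```python
def extinction_coefficient(b_sp_dry, b_ap, rh, gamma=0.6,
                           b_sg=13.0, b_ag=0.0):
    """Ambient extinction [Mm^-1] from dry aerosol scattering b_sp_dry,
    aerosol absorption b_ap, and gas scattering/absorption b_sg, b_ag.

    f(RH) = (1 - RH/100)**(-gamma) is one widely used one-parameter
    enhancement curve; gamma=0.6 is illustrative, not the paper's fit.
    b_sg ~ 13 Mm^-1 approximates Rayleigh scattering near the surface.
    """
    f_rh = (1.0 - rh / 100.0) ** (-gamma)
    return b_sp_dry * f_rh + b_ap + b_sg + b_ag

b_dry = extinction_coefficient(b_sp_dry=300.0, b_ap=30.0, rh=40.0)
b_humid = extinction_coefficient(b_sp_dry=300.0, b_ap=30.0, rh=85.0)
```

The two calls illustrate the abstract's point: raising RH from 40% to 85% roughly doubles the ambient extinction through hygroscopic growth alone.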
Parameterization of solar cells
NASA Astrophysics Data System (ADS)
Appelbaum, J.; Chait, A.; Thompson, D.
1992-10-01
The aggregation (sorting) of the individual solar cells into an array is commonly based on a single operating point on the current-voltage (I-V) characteristic curve. An alternative approach for cell performance prediction and cell screening is provided by modeling the cell using an equivalent electrical circuit, in which the parameters involved are related to the physical phenomena in the device. These analytical models may be represented by a double exponential I-V characteristic with seven parameters, by a double exponential model with five parameters, or by a single exponential equation with four or five parameters. In this article we address issues concerning methodologies for the determination of solar cell parameters based on measured data points of the I-V characteristic, and introduce a procedure for screening of solar cells for arrays. We show that common curve fitting techniques, e.g., least squares, may produce many combinations of parameter values while maintaining a good fit between the fitted and measured I-V characteristics of the cell. Therefore, techniques relying on curve fitting criteria alone cannot be directly used for cell parameterization. We propose a consistent procedure which takes into account the entire set of parameter values for a batch of cells. This procedure is based on a definition of a mean cell representing the batch, and takes into account the relative contribution of each parameter to the overall goodness of fit. The procedure is demonstrated on a batch of 50 silicon cells for Space Station Freedom.
Parameterization of solar cells
NASA Technical Reports Server (NTRS)
Appelbaum, J.; Chait, A.; Thompson, D.
1992-01-01
The aggregation (sorting) of the individual solar cells into an array is commonly based on a single operating point on the current-voltage (I-V) characteristic curve. An alternative approach for cell performance prediction and cell screening is provided by modeling the cell using an equivalent electrical circuit, in which the parameters involved are related to the physical phenomena in the device. These analytical models may be represented by a double exponential I-V characteristic with seven parameters, by a double exponential model with five parameters, or by a single exponential equation with four or five parameters. In this article we address issues concerning methodologies for the determination of solar cell parameters based on measured data points of the I-V characteristic, and introduce a procedure for screening of solar cells for arrays. We show that common curve fitting techniques, e.g., least squares, may produce many combinations of parameter values while maintaining a good fit between the fitted and measured I-V characteristics of the cell. Therefore, techniques relying on curve fitting criteria alone cannot be directly used for cell parameterization. We propose a consistent procedure which takes into account the entire set of parameter values for a batch of cells. This procedure is based on a definition of a mean cell representing the batch, and takes into account the relative contribution of each parameter to the overall goodness of fit. The procedure is demonstrated on a batch of 50 silicon cells for Space Station Freedom.
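The non-uniqueness the authors describe is easy to demonstrate with the single-exponential model: two parameter sets that share the same short-circuit current and open-circuit voltage, but have different ideality factors and saturation currents, produce I-V curves that differ by only a few percent. The numerical values are illustrative only:

```python
import math

VT = 0.02585  # thermal voltage kT/q at ~300 K [V]

def iv_current(v, i_ph, i_0, n):
    """Single-exponential solar cell model (series/shunt resistances
    neglected): I(V) = Iph - I0 * (exp(V / (n*VT)) - 1)."""
    return i_ph - i_0 * (math.exp(v / (n * VT)) - 1.0)

i_ph = 3.0             # photocurrent [A] (illustrative)
i_0a, n_a = 1e-9, 1.0  # parameter set A
n_b = 1.1              # set B: a different ideality factor...
v_oc = n_a * VT * math.log(i_ph / i_0a + 1.0)
# ...with I0 chosen so set B reproduces the same open-circuit voltage.
i_0b = i_ph / (math.exp(v_oc / (n_b * VT)) - 1.0)

# Both curves share Isc and Voc; compare them over the whole range.
volts = [v_oc * k / 50.0 for k in range(51)]
max_gap = max(abs(iv_current(v, i_ph, i_0a, n_a) -
                  iv_current(v, i_ph, i_0b, n_b)) for v in volts)
```

Here max_gap is only a few percent of the short-circuit current, comparable to measurement noise, which is why curve-fitting criteria alone cannot pin down the parameters.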
A Flexible Parameterization for Shortwave Optical Properties of Ice Crystals
NASA Technical Reports Server (NTRS)
VanDiedenhoven, Bastiaan; Ackerman, Andrew S.; Cairns, Brian; Fridlind, Ann M.
2014-01-01
A parameterization is presented that provides the extinction cross section σ_e, single-scattering albedo ω, and asymmetry parameter g of ice crystals for any combination of volume, projected area, aspect ratio, and crystal distortion at any wavelength in the shortwave. Similar to previous parameterizations, the scheme makes use of geometric optics approximations and the observation that the optical properties of complex, aggregated ice crystals can be well approximated by those of single hexagonal crystals with varying size, aspect ratio, and distortion level. In the standard geometric optics implementation used here, σ_e is always twice the particle projected area. It is shown that ω is largely determined by the newly defined absorption size parameter and the particle aspect ratio. These dependences are parameterized using a combination of exponential, lognormal, and polynomial functions. The variation of g with aspect ratio and crystal distortion is parameterized for one reference wavelength using a combination of several polynomials. The dependences of g on refractive index and ω are investigated, and factors are determined to scale the parameterized g to values appropriate for other wavelengths. The parameterization scheme consists of only 88 coefficients. The scheme is tested for a large variety of hexagonal crystals in several wavelength bands from 0.2 to 4 μm, revealing absolute differences from reference calculations of ω and g that are both generally below 0.015. Over a large variety of cloud conditions, the resulting root-mean-squared differences from reference calculations of cloud reflectance, transmittance, and absorptance are 1.4%, 1.1%, and 3.4%, respectively. Some practical applications of the parameterization in atmospheric models are highlighted.
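Two of the geometric relations quoted above, Cauchy's theorem for the orientation-averaged projected area of a convex particle and the geometric-optics limit σ_e = 2⟨A⟩, can be sketched for a hexagonal column (dimensions illustrative):

```python
import math

def hex_column_projected_area(a, L):
    """Orientation-averaged projected area of a hexagonal column.

    For any convex particle in random orientation, Cauchy's theorem gives
    <A> = S / 4, with S the total surface area.  A hexagonal column of
    side a and length L has S = 2 * (3*sqrt(3)/2) * a**2 + 6 * a * L.
    """
    surface = 3.0 * math.sqrt(3.0) * a ** 2 + 6.0 * a * L
    return surface / 4.0

def extinction_cross_section(a, L):
    """Geometric-optics limit used in the parameterization: sigma_e = 2<A>."""
    return 2.0 * hex_column_projected_area(a, L)

# Column of side 25 um and length 50 um (units: microns, areas in um^2):
sigma_e = extinction_cross_section(a=25.0, L=50.0)
```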
Summary of Cumulus Parameterization Workshop
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Starr, David O'C.; Hou, Arthur; Newman, Paul; Sud, Yogesh
2002-01-01
A workshop on cumulus parameterization took place at the NASA Goddard Space Flight Center from December 3-5, 2001. The major objectives of this workshop were (1) to review the problem of representation of moist processes in large-scale models (mesoscale models, Numerical Weather Prediction models and Atmospheric General Circulation Models), (2) to review the state-of-the-art in cumulus parameterization schemes, and (3) to discuss the need for future research and applications. There were a total of 31 presentations and about 100 participants from the United States, Japan, the United Kingdom, France and South Korea. The specific presentations and discussions during the workshop are summarized in this paper.
NASA Astrophysics Data System (ADS)
Su, Jing-Wei; Hsu, Wei-Chen; Tjiu, Jeng-Wei; Chiang, Chun-Pin; Huang, Chao-Wei; Sung, Kung-Bin
2014-07-01
The scattering properties and refractive indices (RI) of tissue are important parameters in tissue optics. These parameters can be determined from quantitative phase images of thin slices of tissue blocks. However, the changes in RI and structure of cells due to fixation and paraffin embedding might result in inaccuracies in the estimation of the scattering properties of tissue. In this study, three-dimensional RI distributions of cells were measured using digital holographic microtomography to obtain total scattering cross sections (TSCS) of the cells based on the first-order Born approximation. We investigated the slight loss of dry mass and drastic shrinkage of cells due to paraformaldehyde fixation and paraffin embedding removal processes. We propose a method to compensate for the correlated changes in volume and RI of cells. The results demonstrate that the TSCS of live cells can be estimated using restored cells. The percentage deviation of the TSCS between restored cells and live cells was only -8%. Spatially resolved RI and scattering coefficients of unprocessed oral epithelium ranged from 1.35 to 1.39 and from 100 to 450 cm-1, respectively, estimated from paraffin-embedded oral epithelial tissue after restoration of RI and volume.
NASA Technical Reports Server (NTRS)
Hong, Byungsik; Maung, Khin Maung; Wilson, John W.; Buck, Warren W.
1989-01-01
The derivations of the Lippmann-Schwinger equation and of the Watson multiple-scattering series are given. A simple optical potential is found to be the first term of that series. Harmonic-well and Woods-Saxon number-density distribution models of the nucleus are used, without a t-matrix taken from scattering experiments. The parameterized two-body inputs, namely the kaon-nucleon total cross sections, elastic slope parameters, and the ratio of the real to the imaginary part of the forward elastic scattering amplitude, are presented. The eikonal approximation was chosen as the solution method to estimate the total and absorptive cross sections for kaon-nucleus scattering.
Parameterized Beyond-Einstein Growth
Linder, Eric; Linder, Eric V.; Cahn, Robert N.
2007-09-17
A single parameter, the gravitational growth index gamma, succeeds in characterizing the growth of density perturbations in the linear regime separately from the effects of the cosmic expansion. The parameter is restricted to a very narrow range for models of dark energy obeying the laws of general relativity but can take on distinctly different values in models of beyond-Einstein gravity. Motivated by the parameterized post-Newtonian (PPN) formalism for testing gravity, we analytically derive and extend the gravitational growth index, or Minimal Modified Gravity, approach to parameterizing beyond-Einstein cosmology. The analytic formalism demonstrates how to apply the growth index parameter to early dark energy, time-varying gravity, DGP braneworld gravity, and some scalar-tensor gravity.
A Novel Shape Parameterization Approach
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1999-01-01
This paper presents a novel parameterization approach for complex shapes suitable for a multidisciplinary design optimization application. The approach consists of two basic concepts: (1) parameterizing the shape perturbations rather than the geometry itself and (2) performing the shape deformation by means of the soft objects animation algorithms used in computer graphics. Because the formulation presented in this paper is independent of grid topology, we can treat computational fluid dynamics and finite element grids in a similar manner. The proposed approach is simple, compact, and efficient. Also, the analytical sensitivity derivatives are easily computed for use in a gradient-based optimization. This algorithm is suitable for low-fidelity (e.g., linear aerodynamics and equivalent laminated plate structures) and high-fidelity analysis tools (e.g., nonlinear computational fluid dynamics and detailed finite element modeling). This paper contains the implementation details of parameterizing for planform, twist, dihedral, thickness, and camber. The results are presented for a multidisciplinary design optimization application consisting of nonlinear computational fluid dynamics, detailed computational structural mechanics, performance, and a simple propulsion module.
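The idea of parameterizing shape perturbations rather than the geometry itself can be sketched with a Gaussian "soft object" displacement field; the paper's actual falloff functions and control-point handling differ from this toy version:

```python
import math

def deform(points, centers, amplitudes, radius=1.0):
    """Add parameterized shape *perturbations* to a baseline grid.

    Each design variable is the amplitude of a smooth Gaussian bump
    centered at a 2D control point; because the displacement depends only
    on the coordinates of each grid point, CFD and FEM grids are treated
    identically (grid-topology independence).  Sketch of the idea only.
    """
    out = []
    for (x, y, z) in points:
        dz = 0.0
        for (cx, cy), amp in zip(centers, amplitudes):
            r2 = (x - cx) ** 2 + (y - cy) ** 2
            dz += amp * math.exp(-r2 / radius ** 2)  # smooth local influence
        out.append((x, y, z + dz))
    return out

baseline = [(x * 0.5, 0.0, 0.0) for x in range(5)]  # toy surface points
bumped = deform(baseline, centers=[(1.0, 0.0)], amplitudes=[0.1])
```

Because the perturbation is analytic in the design variables, sensitivity derivatives for gradient-based optimization follow directly, in the spirit of the approach above.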
Fu, Q.; Sun, W.B.; Yang, P.
1998-09-01
An accurate parameterization is presented for the infrared radiative properties of cirrus clouds. For the single-scattering calculations, a composite scheme is developed for randomly oriented hexagonal ice crystals by comparing results from Mie theory, anomalous diffraction theory (ADT), the geometric optics method (GOM), and the finite-difference time domain technique. This scheme employs a linear combination of single-scattering properties from the Mie theory, ADT, and GOM, which is accurate for a wide range of size parameters. Following the approach of Q. Fu, the extinction coefficient, absorption coefficient, and asymmetry factor are parameterized as functions of the cloud ice water content and generalized effective size (Dge). The present parameterization of the single-scattering properties of cirrus clouds is validated by examining the bulk radiative properties for a wide range of atmospheric conditions. Compared with reference results, the typical relative error in emissivity due to the parameterization is ~2.2%. The accuracy of this parameterization guarantees its reliability in applications to climate models. The present parameterization complements the scheme for the solar radiative properties of cirrus clouds developed by Q. Fu for use in numerical models.
NASA Astrophysics Data System (ADS)
Fu, Qiang; Yang, Ping; Sun, W. B.
1998-09-01
An accurate parameterization is presented for the infrared radiative properties of cirrus clouds. For the single-scattering calculations, a composite scheme is developed for randomly oriented hexagonal ice crystals by comparing results from Mie theory, anomalous diffraction theory (ADT), the geometric optics method (GOM), and the finite-difference time domain technique. This scheme employs a linear combination of single-scattering properties from the Mie theory, ADT, and GOM, which is accurate for a wide range of size parameters. Following the approach of Q. Fu, the extinction coefficient, absorption coefficient, and asymmetry factor are parameterized as functions of the cloud ice water content and generalized effective size (Dge). The present parameterization of the single-scattering properties of cirrus clouds is validated by examining the bulk radiative properties for a wide range of atmospheric conditions. Compared with reference results, the typical relative error in emissivity due to the parameterization is 2.2%. The accuracy of this parameterization guarantees its reliability in applications to climate models. The present parameterization complements the scheme for the solar radiative properties of cirrus clouds developed by Q. Fu for use in numerical models.
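Fu-style parameterizations of this kind express the band extinction (and absorption) coefficient as the ice water content times a polynomial in 1/Dge. The coefficients below are rough placeholders, not the fitted band values from the paper:

```python
def extinction_coefficient_ir(iwc, d_ge, coeffs=(-2.9e-3, 2.52, 0.0)):
    """Fu-style band parameterization: beta = IWC * (a0 + a1/Dge + a2/Dge**2).

    iwc in g m^-3 and d_ge (generalized effective size) in microns, giving
    beta in m^-1 with these illustrative units.  The coefficients are
    placeholders that roughly mimic the ~a1/Dge behavior of extinction;
    the paper's fitted band coefficients should be used in practice.
    """
    a0, a1, a2 = coeffs
    return iwc * (a0 + a1 / d_ge + a2 / d_ge ** 2)

# Thin cirrus: IWC = 0.01 g m^-3, Dge = 50 um.
beta = extinction_coefficient_ir(iwc=0.01, d_ge=50.0)
```

For the illustrative inputs this gives an extinction of order 0.5 km^-1, a plausible magnitude for thin cirrus, though the numbers here are not the paper's.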
A Thermal Infrared Radiation Parameterization for Atmospheric Studies
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Suarez, Max J.; Liang, Xin-Zhong; Yan, Michael M.-H.; Cote, Charles (Technical Monitor)
2001-01-01
This technical memorandum documents the longwave radiation parameterization developed at the Climate and Radiation Branch, NASA Goddard Space Flight Center, for a wide variety of weather and climate applications. Based on the 1996 version of the Air Force Geophysical Laboratory HITRAN database, the parameterization includes absorption by the major gases (water vapor, CO2, O3) and most of the minor trace gases (N2O, CH4, CFCs), as well as by clouds and aerosols. The thermal infrared spectrum is divided into nine bands. To achieve a high degree of accuracy and speed, different approaches to computing the transmission function are applied to different spectral bands and gases. The gaseous transmission function is computed using either the k-distribution method or the table look-up method. To include the effect of scattering by clouds and aerosols, the optical thickness is scaled by the single-scattering albedo and asymmetry factor. The parameterization can accurately compute fluxes to within 1% of high-spectral-resolution line-by-line calculations. The cooling rate can be accurately computed in the region extending from the surface to the 0.01-hPa level.
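The scaling mentioned above, folding scattering into an absorption-only longwave calculation by adjusting the optical thickness with ω and g, can be sketched with a generic similarity scaling; the memo's fitted form may differ in detail:

```python
def scaled_optical_thickness(tau, omega, g):
    """Fold scattering into an absorption-only longwave flux calculation.

    tau' = tau * (1 - omega * (1 + g) / 2) removes the extinction due to
    the forward-scattered fraction, using (1 - g)/2 as the two-stream
    backscatter fraction.  This is a generic similarity scaling shown for
    illustration; the memo's actual scaling may use a different form.
    """
    return tau * (1.0 - omega * (1.0 + g) / 2.0)

# Cirrus-like values: scattering reduces the effective emission thickness.
tau_eff = scaled_optical_thickness(tau=1.0, omega=0.5, g=0.9)
```

The limiting cases behave sensibly: purely absorbing layers (ω = 0) are unchanged, and a purely forward-scattering layer (ω = 1, g = 1) contributes no effective extinction.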
Quantum Consequences of Parameterizing Geometry
NASA Astrophysics Data System (ADS)
Wanas, M. I.
2002-12-01
The marriage between geometrization and quantization has not been successful so far. It is well known that quantization of gravity, using known quantization schemes, is not satisfactory. It may therefore be of interest to look for another approach to this problem. Recently, it has been shown that geometries with torsion admit quantum paths. Such geometries should be parameterized in order to preserve the quantum properties that appear in the paths. The present work explores the consequences of parameterizing such a geometry. It is shown that the quantum properties appearing in the path equations are transferred to other geometric entities.
NASA Technical Reports Server (NTRS)
Hong, Byungsik; Buck, Warren W.; Maung, Khin M.
1989-01-01
Two kinds of number density distributions of the nucleus, harmonic well and Woods-Saxon models, are used with the t-matrix that is taken from the scattering experiments to find a simple optical potential. The parameterized two body inputs, which are kaon-nucleon total cross sections, elastic slope parameters, and the ratio of the real to imaginary part of the forward elastic scattering amplitude, are shown. The eikonal approximation was chosen as the solution method to estimate the total and absorptive cross sections for the kaon-nucleus scattering.
NASA Astrophysics Data System (ADS)
Smith, Helen R.; Baran, Anthony J.; Hesse, Evelyn; Hill, Peter G.; Connolly, Paul J.; Webb, Ann
2016-11-01
A single-habit parameterization for the shortwave optical properties of cirrus is presented. The parameterization utilizes a hollow particle geometry with stepped internal cavities, as identified in laboratory and field studies. This habit was chosen because both experimental and theoretical results show that it exhibits lower asymmetry parameters than solid crystals of the same aspect ratio. The aspect ratio of the particle was varied as a function of maximum dimension, D, in order to adhere to the same physical relationships concerning particle mass, size, and effective density that are assumed in the microphysical scheme of a configuration of the Met Office atmosphere-only global model. Single-scattering properties were then computed using the T-Matrix method, Ray Tracing with Diffraction on Facets (RTDF), and Ray Tracing (RT) for small, medium, and large size parameters, respectively. The scattering properties were integrated over 28 particle size distributions as used in the microphysical scheme. The fits were then parameterized as simple functions of Ice Water Content (IWC) for 6 shortwave bands. The parameterization was implemented into the GA6 configuration of the Met Office Unified Model along with the current operational long-wave parameterization. The GA6 configuration is used to simulate twenty-year annual-mean short-wave (SW) fluxes at top-of-atmosphere (TOA) as well as the temperature and humidity structure of the atmosphere. The parameterization presented here is compared against the current operational model and a more recent habit mixture model.
Infrared radiation parameterizations in numerical climate models
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Kratz, David P.; Ridgway, William
1991-01-01
This study presents various approaches to parameterizing the broadband transmission functions for utilization in numerical climate models. One-parameter scaling is applied to approximate a nonhomogeneous path with an equivalent homogeneous path, and the diffuse transmittances are either interpolated from precomputed tables or fit by analytical functions. Two-parameter scaling is applied to parameterizing the carbon dioxide and ozone transmission functions in both the lower and middle atmosphere. Parameterizations are given for the nitrous oxide and methane diffuse transmission functions.
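One-parameter scaling as described above replaces the inhomogeneous absorber path by an equivalent homogeneous amount, weighting each layer's absorber by a power of its pressure. The exponent below is illustrative; fitted values depend on the gas and spectral band:

```python
def scaled_absorber_amount(layer_amounts, layer_pressures, p_ref=500.0, m=0.8):
    """One-parameter pressure scaling of a nonhomogeneous absorber path.

    w_scaled = sum_k (p_k / p_ref)**m * w_k approximates the inhomogeneous
    atmospheric path by an equivalent homogeneous path at reference
    pressure p_ref (hPa).  The exponent m (here 0.8) is illustrative;
    fitted values, typically ~0.5-1, depend on the gas and band.
    """
    return sum(w * (p / p_ref) ** m
               for w, p in zip(layer_amounts, layer_pressures))

# Three layers (amounts in arbitrary units, pressures in hPa): absorber
# near the surface is weighted more heavily than absorber aloft.
w_eff = scaled_absorber_amount([1.0, 2.0, 3.0], [200.0, 500.0, 900.0])
```

The broadband diffuse transmittance is then evaluated for w_eff from a precomputed table or analytical fit, as described in the abstract.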
Parameterization of precipitating shallow convection
NASA Astrophysics Data System (ADS)
Seifert, Axel
2015-04-01
Shallow convective clouds play a decisive role in many regimes of the atmosphere. They are abundant in the trade wind regions and essential for the radiation budget in the sub-tropics. They are also an integral part of the diurnal cycle of convection over land leading to the formation of deeper modes of convection later on. Errors in the representation of these small and seemingly unimportant clouds can lead to misforecasts in many situations. Especially for high-resolution NWP models at 1-3 km grid spacing which explicitly simulate deeper modes of convection, the parameterization of the sub-grid shallow convection is an important issue. Large-eddy simulations (LES) can provide the data to study shallow convective clouds and their interaction with the boundary layer in great detail. In contrast to observation, simulations provide a complete and consistent dataset, which may not be perfectly realistic due to the necessary simplifications, but nevertheless enables us to study many aspects of those clouds in a self-consistent way. Today's supercomputing capabilities make it possible to use domain sizes that not only span several NWP grid boxes, but also allow for mesoscale self-organization of the cloud field, which is an essential behavior of precipitating shallow convection. By coarse-graining the LES data to the grid of an NWP model, the sub-grid fluctuations caused by shallow convective clouds can be analyzed explicitly. These fluctuations can then be parameterized in terms of a PDF-based closure. The necessary choices for such schemes like the shape of the PDF, the number of predicted moments, etc., will be discussed. For example, it is shown that a universal three-parameter distribution of total water may exist at scales of O(1 km) but not at O(10 km). In a next step the variance budgets of moisture and temperature in the cloud-topped boundary layer are studied. What is the role and magnitude of the microphysical correlation terms in these equations, which
Parameterization of solar flare dose
Lamarche, A.H.; Poston, J.W.
1996-12-31
A critical aspect of missions to the moon or Mars will be the safety and health of the crew. Radiation in space is a hazard for astronauts, especially high-energy radiation following certain types of solar flares. A solar flare event can be very dangerous if astronauts are not adequately shielded because flares can deliver a very high dose in a short period of time. The goal of this research was to parameterize solar flare dose as a function of time to see if it was possible to predict solar flare occurrence, thus providing a warning time. This would allow astronauts to take corrective action and avoid receiving a dose greater than the recommended limit set by the National Council on Radiation Protection and Measurements (NCRP).
New Approaches to Parameterizing Convection
NASA Technical Reports Server (NTRS)
Randall, David A.; Lappen, Cara-Lyn
1999-01-01
Many general circulation models (GCMs) currently use separate schemes for planetary boundary layer (PBL) processes, shallow and deep cumulus (Cu) convection, and stratiform clouds. The conventional distinctions among these processes are somewhat arbitrary. For example, in the stratocumulus-to-cumulus transition region, stratocumulus clouds break up into a combination of shallow cumulus and broken stratocumulus. Shallow cumulus clouds may be considered to reside completely within the PBL, or they may be regarded as starting in the PBL but terminating above it. Deeper cumulus clouds often originate within the PBL but can also originate aloft. To the extent that our models separately parameterize physical processes which interact strongly on small space and time scales, the currently fashionable practice of modularization may be doing more harm than good.
NASA Astrophysics Data System (ADS)
Joseph, Everette David
1997-12-01
An interactive cirrus cloud radiative parameterization for global climate models is developed and applied. Specifically, a parameterization is presented that predicts the solar cloud optical depth, single scattering albedo and asymmetry factor in terms of cloud effective particle diameter and ice water content. A simple parameterization is developed to predict the infrared cloud emissivity in terms of effective particle diameter and ice water content. Both the solar and infrared parameterizations derive from analytical solutions that treat cirrus cloud particles as hexagonal ice crystals. The cloud microphysical properties, cloud ice content and effective particle diameter, are parameterized in terms of cloud temperature. This interactive cirrus cloud radiative parameterization is incorporated into the NCAR/SUNYA GENESIS atmospheric general circulation model and evaluated in model-to-observation comparisons with a comprehensive set of cloud and radiation data derived from space-based and surface-based measurements obtained during the April 1994 Intensive Observation Period of the Atmospheric Radiation Measurement program. It is shown that the model simulates more realistic solar and infrared radiation incident at the surface with the new cirrus parameterization than with the old. In particular, biases in simulated solar direct and diffuse fluxes are reduced by 60% and 30%, respectively, and that in simulated infrared flux is reduced by 40%. The potential climatic impact of the new cirrus parameterization in a full simulation of the general circulation model is evaluated through instantaneous radiative forcing experiments. The new cirrus parameterization reduces the global annual mean forcing of the surface-atmosphere system by 2.26 W m-2. This reduction in forcing occurs mainly in the upper troposphere and is dominated by the loss of solar energy in the high latitudes of the summer hemisphere and loss of infrared energy in the tropics during both winter and summer
An approach for parameterizing mesoscale precipitating systems
Weissbluth, M.J.; Cotton, W.R.
1991-01-01
A cumulus parameterization laboratory has been described which uses a reference numerical model to fabricate, calibrate and verify a cumulus parameterization scheme suitable for use in mesoscale models. Key features of this scheme include resolution independence and the ability to provide hydrometeor source functions to the host model. Thus far, only convective scale drafts have been parameterized, limiting the use of the scheme to those models which can resolve the mesoscale circulations. As it stands, the scheme could probably be incorporated into models having a grid resolution greater than 50 km with results comparable to the existing schemes for the large-scale models. We propose, however, to quantify the mesoscale circulations through the use of the cumulus parameterization laboratory. The inclusion of these mesoscale drafts in the existing scheme will hopefully allow the correct parameterization of the organized mesoscale precipitating systems.
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)
2002-01-01
Based on the single-scattering optical properties pre-computed with an improved geometric optics method, the bulk absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as a function of the effective particle size of a mixture of ice habits, the ice water amount, and spectral band. The parameterization has been applied to computing fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. It is found that flux calculations are not overly sensitive to the assumed particle habits if the definition of the effective particle size is consistent with the particle habits on which the parameterization is based. Otherwise, the error in the flux calculations could reach a magnitude unacceptable for climate studies. Different from many previous studies, the parameterization requires only an effective particle size representing all ice habits in a cloud layer, not the effective size of individual ice habits.
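Parameterizations of this kind are typically evaluated as band-wise low-order polynomials in effective particle size, scaled by the ice water amount. The sketch below illustrates that structure only; the coefficient values are hypothetical placeholders, not the fitted values from this work.

```python
# Sketch of a band-wise ice-optics parameterization: bulk optical properties
# as simple functions of effective size De (micrometers) and ice water
# content IWC (g m^-3). Coefficients below are illustrative placeholders.

def ice_optics(de_um, iwc_gm3, dz_m, band_coeffs):
    """Return layer optical depth, single-scattering albedo and asymmetry
    factor for one spectral band."""
    a0, a1 = band_coeffs["ext"]        # mass extinction: a0 + a1/De (m^2 g^-1)
    b0, b1, b2 = band_coeffs["ssa"]    # co-albedo: b0 + b1*De + b2*De**2
    c0, c1, c2 = band_coeffs["asy"]    # asymmetry: c0 + c1*De + c2*De**2
    k_ext = a0 + a1 / de_um
    tau = k_ext * iwc_gm3 * dz_m       # optical depth of the cloud layer
    ssa = 1.0 - (b0 + b1 * de_um + b2 * de_um**2)
    g = c0 + c1 * de_um + c2 * de_um**2
    return tau, ssa, g

# Illustrative (made-up) coefficients for one near-infrared band
coeffs = {"ext": (-0.006, 3.33),
          "ssa": (0.46, -9.6e-4, 2.0e-6),
          "asy": (0.75, 9.0e-4, -2.0e-6)}
tau, ssa, g = ice_optics(de_um=60.0, iwc_gm3=0.01, dz_m=500.0,
                         band_coeffs=coeffs)
```

The 1/De dependence of the mass extinction coefficient reflects the geometric-optics limit, where extinction scales with projected area per unit mass.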
Parameterized Linear Longitudinal Airship Model
NASA Technical Reports Server (NTRS)
Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph
2010-01-01
A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics.
A Two-Habit Ice Cloud Optical Property Parameterization for GCM Application
NASA Technical Reports Server (NTRS)
Yi, Bingqi; Yang, Ping; Minnis, Patrick; Loeb, Norman; Kato, Seiji
2014-01-01
We present a novel ice cloud optical property parameterization based on a two-habit ice cloud model that has proved optimal for remote sensing applications. The two-habit ice model is developed with state-of-the-art numerical methods for light-scattering property calculations involving individual columns and column aggregates, with the habit fractions constrained by in-situ measurements from various field campaigns. Band-averaged bulk ice cloud optical properties, including the single-scattering albedo, the mass extinction/absorption coefficients, and the asymmetry factor, are parameterized as functions of the effective particle diameter for the spectral bands involved in broadband radiative transfer models. Compared with other parameterization schemes, the two-habit scheme generally has lower asymmetry factor values (around 0.75 at visible wavelengths). The two-habit parameterization scheme was tested extensively with broadband radiative transfer models (e.g., the Rapid Radiative Transfer Model, GCM version) and general circulation models (GCMs; e.g., the Community Atmosphere Model, version 5). Global ice cloud radiative effects at the top of the atmosphere are also analyzed from the GCM simulation using the two-habit parameterization scheme in comparison with CERES satellite observations.
Swept Volume Parameterization for Isogeometric Analysis
NASA Astrophysics Data System (ADS)
Aigner, M.; Heinrich, C.; Jüttler, B.; Pilgerstorfer, E.; Simeon, B.; Vuong, A.-V.
Isogeometric Analysis uses NURBS representations of the domain for performing numerical simulations. The first part of this paper presents a variational framework for generating NURBS parameterizations of swept volumes. The class of these volumes covers a number of interesting free-form shapes, such as blades of turbines and propellers, ship hulls or wings of airplanes. The second part of the paper reports the results of isogeometric analysis which were obtained with the help of the generated NURBS volume parameterizations. In particular we discuss the influence of the chosen parameterization and the incorporation of boundary conditions.
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Lee, Kyu-Tae; Yang, Ping; Lau, William K. M. (Technical Monitor)
2002-01-01
Based on the single-scattering optical properties that are pre-computed using an improved geometric optics method, the bulk mass absorption coefficient, single-scattering albedo, and asymmetry factor of ice particles have been parameterized as a function of the mean effective particle size of a mixture of ice habits. The parameterization has been applied to compute fluxes for sample clouds with various particle size distributions and assumed mixtures of particle habits. Compared to the parameterization for a single habit of hexagonal columns, the solar heating of clouds computed with the parameterization for a mixture of habits is smaller due to a smaller co-single-scattering albedo, whereas the net downward fluxes at the TOA and surface are larger due to a larger asymmetry factor. The maximum difference in the cloud heating rate is approx. 0.2 C per day, which occurs in clouds with an optical thickness greater than 3 and a solar zenith angle less than 45 degrees. The flux difference is less than 10 W per square meter for optical thicknesses ranging from 0.6 to 10 and the entire range of solar zenith angle. The maximum flux difference is approximately 3%, which occurs around an optical thickness of 1 and at high solar zenith angles.
Methods of testing parameterizations: Vertical ocean mixing
NASA Technical Reports Server (NTRS)
Tziperman, Eli
1992-01-01
The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for the vertical mixing in the ocean is of scales of a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In oceanic general circulation models that are typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and practical to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes, and, in fact, mixing is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We try to examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods that are available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section. We then discuss the role of the vertical mixing in the physics of the
Toward a macroscopic parameterization of iceberg calving
NASA Astrophysics Data System (ADS)
Amundson, J. M.
2014-12-01
Parameterization of iceberg calving for prognostic glacier and ice sheet models remains a major challenge due to a poor understanding of the physical processes governing calving. Here, I propose a semi-empirical, macroscopic parameterization of calving that ignores the complex physics of the glacier-ocean interface, can be applied to any calving margin, and is easy to implement with very little computational cost. To test the parameterization, I apply it to a one-dimensional flowline model of an Alaskan-style tidewater glacier and subject the model to various climatic forcings. The model produces results that are roughly consistent with observations, i.e., rapid retreat and flow acceleration through an overdeepening over decades and slow re-advance over millennia. Model results are compared to the previously proposed water depth, height above flotation, and crevasse-depth calving parameterizations to show that they are consistent with the macroscopic parameterization under certain conditions. Although there remains a great deal of uncertainty in the exact form of the macroscopic parameterization, it does appear to be a promising and simple way to model the glacier-ocean boundary.
Optical closure of parameterized bio-optical relationships
NASA Astrophysics Data System (ADS)
He, Shuangyan; Fischer, Jürgen; Schaale, Michael; He, Ming-xia
2014-03-01
An optical closure study of bio-optical relationships was carried out using the matrix operator method radiative transfer model developed by Freie Universität Berlin. As a case study, the optical closure of bio-optical relationships empirically parameterized with in situ data for the East China Sea was examined. Remote-sensing reflectance (Rrs) was computed from the inherent optical properties predicted by these bio-optical relationships and compared with published in situ data. It was found that the simulated Rrs was overestimated for turbid water. To achieve optical closure, the bio-optical relationships for the absorption and scattering coefficients of suspended particulate matter were adjusted. Furthermore, the results show that the Fournier-Forand phase functions obtained from the adjusted relationships perform better than the Petzold phase function. Therefore, before bio-optical relationships are used for a local sea area, their optical closure should be examined.
Parameterization of lattice spacings for lipid multilayers in ionic solutions
NASA Astrophysics Data System (ADS)
Petrache, Horia; Johnson, Merrell; Harries, Daniel; Seifert, Soenke
Lipids, which are molecules found in biological cells, form highly regular layered structures called multilamellar lipid vesicles (MLVs). The repeat lattice spacings of MLVs depend on van der Waals and electrostatic forces between neighboring membranes and are sensitive to the presence of salt. For example, addition of salt ions such as sodium and potassium makes the MLVs swell, primarily due to changes in electrical polarizabilities. However, a more complicated behavior is found in some ionic solutions, such as those containing lithium ions. Using x-ray scattering, we show experimentally how the interactions between membranes depend on the type of monovalent ions and construct parameterizations of MLV swelling curves that can help analyze van der Waals interactions.
Parameterization of the three-dimensional room transfer function in horizontal plane.
Bu, Bing; Abhayapala, Thushara D; Bao, Chang-chun; Zhang, Wen
2015-09-01
This letter proposes an efficient parameterization of the three-dimensional room transfer function (RTF) which is robust for the position variations of source and receiver in respective horizontal planes. Based on azimuth harmonic analysis, the proposed method exploits the underlying properties of the associated Legendre functions to remove a portion of the spherical harmonic coefficients of RTF which have no contribution in the horizontal plane. This reduction leads to a flexible measuring-point structure consisting of practical concentric circular arrays to extract horizontal plane RTF coefficients. The accuracy of the above parameterization is verified through numerical simulations. PMID:26428827
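The property the scheme exploits can be checked directly: the associated Legendre functions P_n^m vanish at polar angle 90° (x = cos 90° = 0) whenever n + m is odd, so the corresponding spherical-harmonic coefficients of the room transfer function contribute nothing in the horizontal plane and can be dropped. A minimal pure-Python check using the standard three-term recurrence:

```python
# Associated Legendre functions P_n^m(x) via the standard recurrence
# (m >= 0, n >= m). At x = 0 (the horizontal plane), P_n^m vanishes
# whenever n + m is odd -- the reduction the RTF parameterization uses.

def assoc_legendre(n, m, x):
    # seed: P_m^m(x) = (-1)^m (2m-1)!! (1 - x^2)^(m/2)
    pmm = 1.0
    fact = 1.0
    for _ in range(m):
        pmm *= -fact * (1.0 - x * x) ** 0.5
        fact += 2.0
    if n == m:
        return pmm
    pmmp1 = x * (2 * m + 1) * pmm          # P_{m+1}^m
    for k in range(m + 2, n + 1):
        # (k - m) P_k^m = x (2k - 1) P_{k-1}^m - (k + m - 1) P_{k-2}^m
        pmm, pmmp1 = pmmp1, (x * (2 * k - 1) * pmmp1
                             - (k + m - 1) * pmm) / (k - m)
    return pmmp1

# In the horizontal plane only the coefficients with even n+m survive:
vanishing = [abs(assoc_legendre(n, m, 0.0)) < 1e-12
             for n in range(6) for m in range(n + 1) if (n + m) % 2 == 1]
surviving = [abs(assoc_legendre(n, m, 0.0)) > 1e-12
             for n in range(6) for m in range(n + 1) if (n + m) % 2 == 0]
```

Every entry of `vanishing` is True and every entry of `surviving` is True, which is why roughly half of the spherical-harmonic coefficients can be removed for sources and receivers confined to horizontal planes.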
A parameterization of cloud droplet nucleation
Ghan, S.J.; Chuang, C.C.; Penner, J.E.
1994-01-01
Droplet nucleation is a fundamental cloud process. The number of aerosols activated to form cloud droplets influences not only the number of aerosols scavenged by clouds but also the size of the cloud droplets. Cloud droplet size influences the cloud albedo and the conversion of cloud water to precipitation. Global aerosol models are presently being developed with the intention of coupling with global atmospheric circulation models to evaluate the influence of aerosols and aerosol-cloud interactions on climate. If these and other coupled models are to address issues of aerosol-cloud interactions, the droplet nucleation process must be adequately represented. Ghan et al. have introduced a droplet nucleation parameterization for a single aerosol type that offers certain advantages over the popular Twomey parameterization. Here we describe the generalization of that parameterization to the case of multiple aerosol types, with estimation of aerosol mass as well as number activated.
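As a rough illustration of counting the number and mass activated across multiple aerosol types, here is a generic multi-mode sketch. The lognormal activated-fraction form and every number below are illustrative assumptions, not the Ghan et al. closure itself.

```python
# Generic multi-type activation counting: each aerosol type is a lognormal
# mode with a median critical supersaturation; the fraction activated is the
# fraction whose critical supersaturation lies below the parcel maximum
# supersaturation s_max. All parameter values are illustrative assumptions.
import math

def activated_fraction(s_max, s_crit_median, ln_sigma):
    if s_max <= 0.0:
        return 0.0
    u = 2.0 * math.log(s_crit_median / s_max) / (3.0 * math.sqrt(2.0) * ln_sigma)
    return 0.5 * math.erfc(u)

# (number cm^-3, mass ug m^-3, median critical supersaturation %, ln sigma)
modes = [(1000.0, 2.0, 0.30, math.log(2.0)),   # e.g. a sulfate-like mode
         (50.0,   5.0, 0.10, math.log(1.8))]   # e.g. a sea-salt-like mode
s_max = 0.2  # percent, from some parcel-model or updraft estimate

n_act = sum(n * activated_fraction(s_max, sc, ls) for n, m, sc, ls in modes)
m_act = sum(m * activated_fraction(s_max, sc, ls) for n, m, sc, ls in modes)
```

The point of the multi-type generalization is visible here: the coarse, easily activated mode contributes little number but much of the activated mass, so number and mass must be tracked per type.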
POET: Parameterized Optimization for Empirical Tuning
Yi, Q; Seymour, K; You, H; Vuduc, R; Quinlan, D
2007-01-29
The excessive complexity of both machine architectures and applications has made it difficult for compilers to statically model and predict application behavior. This observation motivates the recent interest in performance tuning using empirical techniques. We present a new embedded scripting language, POET (Parameterized Optimization for Empirical Tuning), for parameterizing complex code transformations so that they can be empirically tuned. The POET language aims to significantly improve the generality, flexibility, and efficiency of existing empirical tuning systems. We have used the language to parameterize and empirically tune three loop optimizations (interchange, blocking, and unrolling) for two linear algebra kernels. We show experimentally that the time required to tune these optimizations using POET, which does not require any program analysis, is significantly shorter than when using a full compiler-based source-code optimizer that performs sophisticated program analysis and optimizations.
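The tuning loop that POET automates can be shown in miniature: a transformation parameter (here, the tile size of a blocked matrix multiply) is swept, each variant is checked for correctness and timed, and the best-performing value is kept. This is a toy Python analogue of the workflow, not POET itself.

```python
# Empirical tuning in miniature: sweep a blocking (tile-size) parameter,
# verify each variant against a reference, time it, keep the fastest.
import time

def matmul_blocked(A, B, n, tile):
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += a * B[k][j]
    return C

n = 48
# Integer-valued entries so every variant sums exactly the same numbers
A = [[(i * n + j) % 7 - 3.0 for j in range(n)] for i in range(n)]
B = [[(i + 2 * j) % 5 - 2.0 for j in range(n)] for i in range(n)]
reference = matmul_blocked(A, B, n, n)      # one big tile == untiled order

timings = {}
for tile in (4, 8, 16, 48):                 # the tuning parameter space
    t0 = time.perf_counter()
    C = matmul_blocked(A, B, n, tile)
    timings[tile] = time.perf_counter() - t0
    assert C == reference                   # every variant must agree
best = min(timings, key=timings.get)
```

No static performance model is consulted: the "analysis" is simply running the candidates, which is exactly why empirical tuning sidesteps the modeling difficulty described above.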
Approaches for Subgrid Parameterization: Does Scaling Help?
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2016-04-01
Arguably, scaling behavior is a well-established fact in many geophysical systems, and there are already many theoretical studies elucidating this issue. However, the scaling law has been slow to be introduced into "operational" geophysical modelling, notably weather forecast and climate projection models. The main purpose of this presentation is to ask why, and to try to answer this question. As a reference point, the presentation reviews the three major approaches to traditional subgrid parameterization: moment, PDF (probability density function), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows both in the atmosphere and the oceans. The PDF approach is intuitively appealing, as it deals with the distribution of subgrid-scale variables in a more direct manner. The third category, originally proposed by Aubry et al. (1988) in the context of wall boundary-layer turbulence, is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (POD, or empirical orthogonal functions, EOF) as the mode-decomposition basis. However, the methodology can easily be generalized to any decomposition basis. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes as the expansion basis. The mode decomposition can, furthermore, be re-interpreted as a type of Galerkin approach for numerically modelling subgrid-scale processes. Simple extrapolation of this re-interpretation further suggests that the subgrid parameterization problem may be re-interpreted as a type of mesh-refinement problem in numerical modelling. We furthermore see a link between the subgrid parameterization and downscaling problems along this line. The mode
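The mode-decomposition idea can be made concrete in a few lines: the POD/EOF modes are the left singular vectors of a snapshot matrix, and a low-dimensional representation keeps only the leading ones. A synthetic sketch (numpy only; the data are fabricated for illustration):

```python
# POD/EOF via SVD of a snapshot matrix: two coherent modes plus weak noise
# are compressed into a low-dimensional basis capturing 99% of the variance.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64)      # "space"
t = np.linspace(0, 1, 200)             # "time" (snapshots)
snapshots = (np.outer(np.sin(x), np.sin(2 * np.pi * 3 * t))
             + 0.5 * np.outer(np.sin(2 * x), np.cos(2 * np.pi * 5 * t))
             + 0.01 * rng.standard_normal((64, 200)))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1     # modes for 99% of variance
reconstruction = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]
rel_err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
```

The truncated basis `U[:, :r]` plays the role of the low-dimensional dynamical system's coordinates; mass-flux schemes correspond to replacing these empirical modes with segmentally constant ones.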
A uniform parameterization of moment tensors
NASA Astrophysics Data System (ADS)
Tape, C.; Tape, W.
2015-12-01
A moment tensor is a 3 x 3 symmetric matrix that expresses an earthquake source. We construct a parameterization of the five-dimensional space of all moment tensors of unit norm. The coordinates associated with the parameterization are closely related to moment tensor orientations and source types. The parameterization is uniform, in the sense that equal volumes in the coordinate domain of the parameterization correspond to equal volumes of moment tensors. Uniformly distributed points in the coordinate domain therefore give uniformly distributed moment tensors. A Cartesian grid in the coordinate domain can be used to search efficiently over moment tensors. We find that uniformly distributed moment tensors have uniformly distributed orientations (eigenframes), but that their source types (eigenvalue triples) are distributed so as to favor double couples. An appropriate choice of a priori moment tensor probability is a prerequisite for parameter estimation. As a seemingly sensible choice, we consider the homogeneous probability, in which equal volumes of moment tensors are equally likely. We believe that it will lead to improved characterization of source processes.
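The distribution being parameterized can be sampled without the authors' explicit coordinates: an isotropic Gaussian in the 6-dimensional space of symmetric matrices (off-diagonals weighted by sqrt(2) so that the metric is the Frobenius norm), normalized to the unit sphere, is uniform on unit-norm moment tensors. A numpy sketch:

```python
# Uniformly distributed unit-norm moment tensors via an isotropic Gaussian
# on the 6-D space of symmetric 3x3 matrices, projected to the unit sphere
# of the Frobenius norm.
import numpy as np

def random_unit_moment_tensors(n, rng):
    v = rng.standard_normal((n, 6))                  # isotropic in 6-D
    v /= np.linalg.norm(v, axis=1, keepdims=True)    # project to unit sphere
    a, b, c, d, e, f = v.T
    s = 1.0 / np.sqrt(2.0)   # off-diagonal weight for the Frobenius metric
    M = np.empty((n, 3, 3))
    M[:, 0, 0], M[:, 1, 1], M[:, 2, 2] = a, b, c
    M[:, 0, 1] = M[:, 1, 0] = d * s
    M[:, 0, 2] = M[:, 2, 0] = e * s
    M[:, 1, 2] = M[:, 2, 1] = f * s
    return M

rng = np.random.default_rng(1)
M = random_unit_moment_tensors(1000, rng)
frob = np.sqrt(np.einsum('nij,nij->n', M, M))        # Frobenius norms, all 1
```

Eigen-decomposing such samples reproduces the paper's observation: orientations come out uniform while eigenvalue triples cluster toward double couples.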
Parameterizing cloud condensation nuclei concentrations during HOPE
NASA Astrophysics Data System (ADS)
Hande, Luke B.; Engler, Christa; Hoose, Corinna; Tegen, Ina
2016-09-01
An aerosol model was used to simulate the generation and transport of aerosols over Germany during the HD(CP)2 Observational Prototype Experiment (HOPE) field campaign of 2013. The aerosol number concentrations and size distributions were evaluated against observations, which shows satisfactory agreement in the magnitude and temporal variability of the main aerosol contributors to cloud condensation nuclei (CCN) concentrations. From the modelled aerosol number concentrations, number concentrations of CCN were calculated as a function of vertical velocity using a comprehensive aerosol activation scheme which takes into account the influence of aerosol chemical and physical properties on CCN formation. There is a large amount of spatial variability in aerosol concentrations; however, the resulting CCN concentrations vary significantly less over the domain. Temporal variability is large in both aerosols and CCN. A parameterization of the CCN number concentrations is developed for use in models. The technique involves defining a number of best-fit functions to capture the dependence of CCN on vertical velocity at different pressure levels. In this way, aerosol chemical and physical properties as well as thermodynamic conditions are taken into account in the new CCN parameterization. A comparison between the parameterization and the CCN estimates from the model data shows excellent agreement. This parameterization may be used in other regions and time periods with a similar aerosol load; furthermore, the technique demonstrated here may be employed in regions dominated by different aerosol species.
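The fitting technique, best-fit functions of vertical velocity at each pressure level, can be sketched generically. A power-law form N(w) = a*w^b is assumed here purely for illustration; the functional form and values in the actual parameterization differ, and the "model data" below are synthetic.

```python
# Per-pressure-level fit of CCN concentration versus vertical velocity w.
# A power law N(w) = a * w**b is fitted by log-log least squares; both the
# form and the "truth" values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
levels = {850.0: (1500.0, 0.45),     # hPa -> synthetic (a, b) "truth"
          700.0: (900.0, 0.40)}
w = np.linspace(0.1, 5.0, 40)        # vertical velocity, m s^-1

fits = {}
for p, (a_true, b_true) in levels.items():
    # noisy "model-derived" CCN samples at this level
    ccn = a_true * w**b_true * rng.lognormal(0.0, 0.05, w.size)
    b, log_a = np.polyfit(np.log(w), np.log(ccn), 1)
    fits[p] = (np.exp(log_a), b)     # recovered (a, b) per pressure level
```

A model using such a lookup only needs the resolved vertical velocity and pressure to obtain a CCN concentration, which is what makes the scheme cheap online.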
Empirical parameterization of setup, swash, and runup
Stockdon, H.F.; Holman, R.A.; Howd, P.A.; Sallenger, A.H.
2006-01-01
Using shoreline water-level time series collected during 10 dynamically diverse field experiments, an empirical parameterization for extreme runup, defined by the 2% exceedence value, has been developed for use on natural beaches over a wide range of conditions. Runup, the height of discrete water-level maxima, depends on two dynamically different processes: time-averaged wave setup and total swash excursion, each of which is parameterized separately. Setup at the shoreline was best parameterized using a dimensional form of the more common Iribarren-based setup expression that includes foreshore beach slope, offshore wave height, and deep-water wavelength. Significant swash can be decomposed into the incident and infragravity frequency bands. Incident swash is also best parameterized using a dimensional form of the Iribarren-based expression. Infragravity swash is best modeled dimensionally using offshore wave height and wavelength and shows no statistically significant linear dependence on either foreshore or surf-zone slope. On infragravity-dominated dissipative beaches, the magnitudes of both setup and swash, modeling both incident and infragravity frequency components together, are dependent only on offshore wave height and wavelength. Statistics of predicted runup averaged over all sites indicate a -17 cm bias and an rms error of 38 cm; the mean observed runup elevation for all experiments was 144 cm. On intermediate and reflective beaches with complex foreshore topography, the use of an alongshore-averaged beach slope in practical applications of the runup parameterization may result in a relative runup error equal to 51% of the fractional variability between the measured and the averaged slope.
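The resulting parameterization is commonly quoted in the following form, with the setup and swash terms described above combined into the 2% exceedence runup. The coefficients are as usually cited in the literature; verify them against the paper before operational use.

```python
# Stockdon-style 2% exceedence runup from deep-water wave height h0 (m),
# peak period t0 (s) and foreshore slope beta_f, with the separate
# dissipative-beach limit. Coefficients as commonly cited.
import math

def stockdon_r2(h0, t0, beta_f, g=9.81):
    l0 = g * t0**2 / (2.0 * math.pi)      # deep-water wavelength
    hl = h0 * l0
    iribarren = beta_f / math.sqrt(h0 / l0)
    if iribarren < 0.3:                    # infragravity-dominated limit
        return 0.043 * math.sqrt(hl)
    setup = 0.35 * beta_f * math.sqrt(hl)                      # wave setup
    swash = math.sqrt(hl * (0.563 * beta_f**2 + 0.004)) / 2.0  # total swash
    return 1.1 * (setup + swash)

r2 = stockdon_r2(h0=2.0, t0=10.0, beta_f=0.08)   # intermediate beach, ~1.4 m
```

Note how the dissipative branch depends only on offshore height and wavelength, mirroring the finding that slope drops out on infragravity-dominated beaches.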
Parameterization of daily solar global ultraviolet irradiation.
Feister, U; Jäkel, E; Gericke, K
2002-09-01
Daily values of solar global ultraviolet (UV) B and UVA irradiation as well as erythemal irradiation have been parameterized to be estimated from pyranometer measurements of daily global and diffuse irradiation as well as from atmospheric column ozone. Data recorded at the Meteorological Observatory Potsdam (52 degrees N, 107 m asl) in Germany over the time period 1997-2000 have been used to derive sets of regression coefficients. The validation of the method against independent data sets of measured UV irradiation shows that the parameterization provides a gain of information for UVB, UVA and erythemal irradiation referring to their averages. A comparison between parameterized daily UV irradiation and independent values of UV irradiation measured at a mountain station in southern Germany (Meteorological Observatory Hohenpeissenberg at 48 degrees N, 977 m asl) indicates that the parameterization also holds even under completely different climatic conditions. On a long-term average (1953-2000), parameterized annual UV irradiation values are 15% and 21% higher for UVA and UVB, respectively, at Hohenpeissenberg than they are at Potsdam. Daily global and diffuse irradiation measured at 28 weather stations of the Deutscher Wetterdienst German Radiation Network and grid values of column ozone from the EPTOMS satellite experiment served as inputs to calculate the estimates of the spatial distribution of daily and annual values of UV irradiation across Germany. Using daily values of global and diffuse irradiation recorded at Potsdam since 1937 as well as atmospheric column ozone measured since 1964 at the same site, estimates of daily and annual UV irradiation have been derived for this site over the period from 1937 through 2000, which include the effects of changes in cloudiness, in aerosols and, at least for the period of ozone measurements from 1964 to 2000, in atmospheric ozone. It is shown that the extremely low ozone values observed mainly after the eruption of Mt
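The regression machinery involved can be sketched generically: a linear-in-coefficients fit of daily UV irradiation against global irradiation, diffuse fraction, and column ozone. The model form, coefficient values, and data below are synthetic illustrations, not the observatory's fitted regression.

```python
# Generic regression sketch: estimate daily UV irradiation from pyranometer
# global irradiation G, diffuse fraction d and column ozone O3. The linear
# form and all values are synthetic illustrations.
import numpy as np

rng = np.random.default_rng(7)
n = 365
G = rng.uniform(2.0, 30.0, n)        # daily global irradiation, MJ m^-2
d = rng.uniform(0.2, 1.0, n)         # diffuse fraction
O3 = rng.uniform(280.0, 420.0, n)    # column ozone, Dobson units

# Synthetic "measured" UV with noise (made-up coefficients)
uvb = 0.004 * G - 0.01 * d - 1.5e-5 * (O3 - 350.0) + rng.normal(0, 0.002, n)

X = np.column_stack([G, d, O3 - 350.0, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, uvb, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((uvb - pred)**2) / np.sum((uvb - uvb.mean())**2)
```

Once such coefficients are in hand, decades of routine pyranometer and ozone records can be converted into UV estimates, which is the basis of the 1937-2000 reconstruction described above.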
Control of shortwave radiation parameterization on tropical climate SST-forced simulation
NASA Astrophysics Data System (ADS)
Crétat, Julien; Masson, Sébastien; Berthet, Sarah; Samson, Guillaume; Terray, Pascal; Dudhia, Jimy; Pinsard, Françoise; Hourdin, Christophe
2016-09-01
SST-forced tropical-channel simulations are used to quantify the control of shortwave (SW) parameterization on the mean tropical climate compared to other major model settings (convection, boundary layer turbulence, vertical and horizontal resolutions), and to pinpoint the physical mechanisms whereby this control manifests. Analyses focus on the spatial distribution and magnitude of the net SW radiation budget at the surface (SWnet_SFC), latent heat fluxes, and rainfall at the annual timescale. The model skill and sensitivity to the tested settings are quantified relative to observations and using an ensemble approach. Persistent biases include overestimated SWnet_SFC and a too-intense hydrological cycle. However, model skill is mainly controlled by SW parameterization, especially the magnitude of SWnet_SFC and rainfall and both the spatial distribution and magnitude of latent heat fluxes over the ocean. On the other hand, the spatial distribution of continental rainfall (SWnet_SFC) is mainly influenced by convection parameterization and horizontal resolution (boundary layer parameterization and orography). Physical understanding of the control of SW parameterization is addressed by analyzing the thermal structure of the atmosphere and conducting sensitivity experiments on O3 absorption and the SW scattering coefficient. SW parameterization shapes the stability of the atmosphere in two different ways according to whether the surface is coupled to the atmosphere or not, while O3 absorption has minor effects in our simulations. Over SST-prescribed regions, increasing the amount of SW absorption warms the atmosphere only, because surface temperatures are fixed, resulting in increased atmospheric stability. Over land-atmosphere coupled regions, increasing SW absorption warms both atmospheric and surface temperatures, leading to a shift towards a warmer state and a more intense hydrological cycle. This results in reversed model behavior between land and sea points, with the SW scheme that
Parameterization of contrail radiative properties for climate studies
NASA Astrophysics Data System (ADS)
Xie, Yu; Yang, Ping; Liou, Kuo-Nan; Minnis, Patrick; Duda, David P.
2012-12-01
The study of contrails and their impact on global climate change requires a cloud model that statistically represents contrail radiative properties. In this study, the microphysical properties of global contrails are statistically analyzed using collocated Moderate Resolution Imaging Spectroradiometer (MODIS) and Cloud Aerosol Lidar with Orthogonal Polarization (CALIOP) observations. The MODIS contrail pixels are detected using an automated contrail detection algorithm and a manual technique using the brightness temperature differences between the MODIS 11 and 12 μm channels. The scattering and absorption properties of typical contrail ice crystals are used to determine an appropriate contrail model to minimize the uncertainties arising from the assumptions in a particular cloud model. The depolarization ratio is simulated with a variety of ice crystal habit fractions and matched to the collocated MODIS and CALIOP observations. The contrail habit fractions are determined and used to compute the bulk-scattering properties of contrails. A parameterization of shortwave and longwave contrail optical properties is developed for the spectral bands of the Rapid Radiative Transfer Model (RRTM). The contrail forcing at the top of the atmosphere is investigated using the RRTM and compared with spherical and hexagonal ice cloud models. Contrail forcing is overestimated when spherical ice crystals are used to represent contrails, but if a hexagonal ice cloud model is used, the forcing is underestimated for small particles and overestimated for large particles in comparison to the contrail model developed in this study.
A Simple Parameterization of 3 x 3 Magic Squares
ERIC Educational Resources Information Center
Trenkler, Gotz; Schmidt, Karsten; Trenkler, Dietrich
2012-01-01
In this article a new parameterization of magic squares of order three is presented. This parameterization permits an easy computation of their inverses, eigenvalues, eigenvectors and adjoints. Some attention is paid to the Luoshu, one of the oldest magic squares.
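The abstract describes a three-parameter form without reproducing it. A standard parameterization of 3 x 3 magic squares (center value c plus two offsets a and b; illustrative, not necessarily the exact form used by Trenkler et al.) can be sketched as:

```python
import numpy as np

def magic3(c, a, b):
    """Standard 3-parameter form of a 3x3 magic square with magic
    constant 3c (all rows, columns, and diagonals sum to 3c)."""
    return np.array([
        [c + a,     c - a - b, c + b    ],
        [c - a + b, c,         c + a - b],
        [c - b,     c + a + b, c - a    ],
    ])

M = magic3(5, 3, 1)  # yields the classic Luoshu square [[8,1,6],[3,5,7],[4,9,2]]
print(M)
print(M.sum(axis=0), M.sum(axis=1))  # every row/column sums to 15
```

Because each row of M sums to 3c, the vector (1, 1, 1) is always an eigenvector with eigenvalue 3c, which is the kind of property that makes inverses, eigenvalues, and adjoints easy to compute from the parameters.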
European upper mantle tomography: adaptively parameterized models
NASA Astrophysics Data System (ADS)
Schäfer, J.; Boschi, L.
2009-04-01
We have devised a new algorithm for upper-mantle surface-wave tomography based on adaptive parameterization: i.e. the size of each parameterization pixel depends on the local density of seismic data coverage. The advantage in using this kind of parameterization is that a high resolution can be achieved in regions with dense data coverage while a lower (and cheaper) resolution is kept in regions with low coverage. This way, the parameterization is everywhere optimal, both in terms of its computational cost and of model resolution. This is especially important for data sets with inhomogeneous data coverage, as is usually the case for global seismic databases. The data set we use has an especially good coverage around Switzerland and over central Europe. We focus on periods from 35 s to 150 s. The final goal of the project is to determine a new model of seismic velocities for the upper mantle underlying Europe and the Mediterranean Basin, of resolution higher than what is currently found in the literature. Our inversions involve regularization via norm and roughness minimization, and this in turn requires that discrete norm and roughness operators associated with our adaptive grid be precisely defined. The discretization of the roughness damping operator in the case of adaptive parameterizations is not as trivial as it is for uniform ones; important complications arise from the significant lateral variations in the size of pixels. We chose to first define the roughness operator in a spherical harmonic framework, and subsequently translate it to discrete pixels via a linear transformation. Since the smallest pixels we allow in our parameterization have a size of 0.625°, the spherical-harmonic roughness operator has to be defined up to harmonic degree 899, corresponding to 810,000 harmonic coefficients. This results in considerable computational costs: we conduct the harmonic-pixel transformations on a small Beowulf cluster. We validate our implementation of adaptive
Parameterization of cloud effects on the absorption of solar radiation
NASA Technical Reports Server (NTRS)
Davies, R.
1983-01-01
A radiation parameterization for the NASA Goddard climate model was developed, tested, and implemented. Interactive and off-line experiments with the climate model to determine the limitations of the present parameterization scheme are summarized. The parameterization of cloud absorption in terms of solar zenith angle, column water vapor above the cloud top, and cloud liquid water content is discussed.
Cloud parameterization for climate modeling - Status and prospects
NASA Technical Reports Server (NTRS)
Randall, David A.
1989-01-01
The current status of cloud parameterization research is reviewed. It is emphasized that the upper tropospheric stratiform clouds associated with deep convection are both physically important and poorly parameterized in current models. Emerging parameterizations are described in general terms, with emphasis on prognostic cloud water and fractional cloudiness, and how these relate to the problem just mentioned.
Numerical Archetypal Parameterization for Mesoscale Convective Systems
NASA Astrophysics Data System (ADS)
Yano, J. I.
2015-12-01
Vertical shear tends to organize atmospheric moist convection into multiscale coherent structures. Especially, the counter-gradient vertical transport of horizontal momentum by organized convection can enhance the wind shear and transport kinetic energy upscale. However, this process is not represented by traditional parameterizations. The present paper sets the archetypal dynamical models, originally formulated by the second author, into a parameterization context by utilizing a nonhydrostatic anelastic model with segmentally-constant approximation (NAM-SCA). Using a two-dimensional framework as a starting point, NAM-SCA spontaneously generates propagating tropical squall-lines in a sheared environment. A high numerical efficiency is achieved through a novel compression methodology. The numerically-generated archetypes produce vertical profiles of convective momentum transport that are consistent with the analytic archetype.
Aerosol water parameterization: a single parameter framework
NASA Astrophysics Data System (ADS)
Metzger, S.; Steil, B.; Abdelkader, M.; Klingmüller, K.; Xu, L.; Penner, J. E.; Fountoukis, C.; Nenes, A.; Lelieveld, J.
2015-11-01
We introduce a framework to efficiently parameterize the aerosol water uptake for mixtures of semi-volatile and non-volatile compounds, based on the coefficient νi. This solute-specific coefficient was introduced in Metzger et al. (2012) to accurately parameterize the hygroscopic growth of single solutions, considering the Kelvin effect and accounting for the water uptake of concentrated nanometer-sized particles up to dilute solutions, i.e., from the compound's relative humidity of deliquescence (RHD) up to supersaturation (Köhler theory). Here we extend the νi-parameterization from single to mixed solutions. We evaluate our framework at various levels of complexity, by considering the full gas-liquid-solid partitioning for a comprehensive comparison with reference calculations using the E-AIM, EQUISOLV II, and ISORROPIA II models as well as textbook examples. We apply our parameterization in EQSAM4clim, the EQuilibrium Simplified Aerosol Model V4 for climate simulations, implemented in a box model and in the global chemistry-climate model EMAC. Our results show: (i) that the νi-approach makes it possible to solve the entire gas-liquid-solid partitioning and the mixed-solution water uptake analytically with sufficient accuracy, (ii) that, e.g., pure ammonium nitrate and mixed ammonium nitrate - ammonium sulfate mixtures can be solved with a simple method, and (iii) that the aerosol optical depth (AOD) simulations are in close agreement with remote sensing observations for the year 2005. A long-term evaluation of the EMAC results based on EQSAM4clim and ISORROPIA II will be presented separately.
A Survey of Shape Parameterization Techniques
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
1999-01-01
This paper provides a survey of shape parameterization techniques for multidisciplinary optimization and highlights some emerging ideas. The survey focuses on the suitability of available techniques for complex configurations, with suitability criteria based on the efficiency, effectiveness, ease of implementation, and availability of analytical sensitivities for geometry and grids. The paper also contains a section on field grid regeneration, grid deformation, and sensitivity analysis techniques.
Implicit Shape Parameterization for Kansei Design Methodology
NASA Astrophysics Data System (ADS)
Nordgren, Andreas Kjell; Aoyama, Hideki
Implicit shape parameterization for Kansei design is a procedure that uses 3D models, or concepts, to span a shape space for surfaces in the automotive field. A low-dimensional, yet accurate shape descriptor was found by Principal Component Analysis of an ensemble of point clouds, which were extracted from mesh-based surfaces modeled in a CAD program. A theoretical background of the procedure is given along with step-by-step instructions for the required data processing. The results show that complex surfaces can be described very efficiently, and encode design features by an implicit approach that does not rely on error-prone explicit parameterizations. This provides a very intuitive way for a designer to explore shapes, because various design features can simply be introduced by adding new concepts to the ensemble. Complex shapes have been difficult to analyze with Kansei methods due to the large number of parameters involved, but implicit parameterization of design features provides a low-dimensional shape descriptor for efficient data collection, model building and analysis of emotional content in 3D surfaces.
A GCM parameterization for the shortwave radiative properties of water clouds
NASA Technical Reports Server (NTRS)
Slingo, A.
1990-01-01
A new parameterization was developed for predicting the shortwave radiative properties of water clouds, suitable for inclusion in general circulation models (GCMs). The parameterization makes use of the simple relationships found by Slingo and Schrecker, giving the three input parameters required to calculate the cloud radiative properties (the optical depth, single scatter albedo and asymmetry parameter) in terms of the liquid water path and equivalent radius of the drop size distribution. The input parameters are then used to derive the cloud radiative properties, using standard two-stream equations for a single layer. The relationships were originally derived for fairly narrow spectral bands but it was found that it is possible to average the coefficients so as to use a much smaller number of bands, without sacrificing accuracy in calculating the cloud radiative properties. This makes the parameterization fast enough to be included in GCMs. The parameterization was programmed into the radiation scheme used in the U.K. Meteorological Office GCM. This scheme and the 24 band Slingo/Schrecker scheme were compared with each other and with observations, using a variety of published datasets. There is good agreement between the two schemes for both cloud albedo and absorption, even when only four spectral bands are employed in the GCM.
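Per-band relationships of the Slingo/Schrecker type are linear in the reciprocal effective radius and in the effective radius itself. A minimal sketch of this functional form (the coefficients in the example call are made up for illustration, not the published band values):

```python
def cloud_sw_properties(lwp, r_e, a, b, c, d, e, f):
    """Slingo-style band relationships for water-cloud shortwave optics:
    optical depth, single-scatter albedo, and asymmetry parameter from
    liquid water path (g m^-2) and drop effective radius (um).
    Coefficients a..f are band-dependent fit constants."""
    tau   = lwp * (a + b / r_e)      # optical depth
    omega = 1.0 - (c + d * r_e)      # single-scatter albedo (co-albedo fit)
    g     = e + f * r_e              # asymmetry parameter
    return tau, omega, g

# Illustrative call with hypothetical coefficients:
tau, omega, g = cloud_sw_properties(100.0, 10.0, 0.02, 1.3, 1e-7, 1e-6, 0.85, 0.001)
```

Averaging the band coefficients, as the abstract describes, amounts to reusing this same functional form with fewer (a, b, c, d, e, f) sets.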
Luchies, Adam C.; Ghoshal, Goutam; O’Brien, William D.; Oelze, Michael L.
2012-01-01
Quantitative ultrasound (QUS) techniques that parameterize the backscattered power spectrum have demonstrated significant promise for ultrasonic tissue characterization. Some QUS parameters, such as the effective scatterer diameter (ESD), require the assumption that the examined medium contains uniform diffuse scatterers. Structures that invalidate this assumption can significantly affect the estimated QUS parameters and decrease performance when classifying disease. In this work, a method was developed to reduce the effects of echoes that invalidate the assumption of diffuse scattering. To accomplish this task, backscattered signal sections containing non-diffuse echoes were identified and removed from the QUS analysis. Parameters estimated from the generalized spectrum (GS) and the Rayleigh SNR parameter were compared for detecting data blocks with non-diffuse echoes. Simulations and experiments were used to evaluate the effectiveness of the method. Experiments consisted of estimating QUS parameters from spontaneous fibroadenomas in rats and from beef liver samples. Results indicated that the method was able to significantly reduce or eliminate the effects of non-diffuse echoes that might exist in the backscattered signal. For example, the average reductions in the relative standard deviation of ESD estimates from simulations, rat fibroadenomas, and beef liver samples were 13%, 30%, and 51%, respectively. The Rayleigh SNR parameter performed best at detecting non-diffuse echoes for the purpose of removing and reducing ESD bias and variance. The method provides a means to improve the diagnostic capabilities of QUS techniques by allowing separate analysis of diffuse and non-diffuse scatterers. PMID:22622974
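For fully developed diffuse scattering, the echo envelope is Rayleigh distributed, and its mean-to-standard-deviation ratio takes the fixed value sqrt(pi/(4-pi)) ≈ 1.91; blocks whose measured envelope SNR departs from this value are candidates for removal. A minimal sketch of such a detector (the tolerance is a hypothetical tuning choice, not the paper's threshold):

```python
import numpy as np

RAYLEIGH_SNR = np.sqrt(np.pi / (4.0 - np.pi))  # ~1.91 for a Rayleigh envelope

def envelope_snr(envelope):
    """Mean-to-standard-deviation ratio of the echo envelope."""
    env = np.asarray(envelope, dtype=float)
    return env.mean() / env.std()

def is_diffuse(envelope, tol=0.3):
    """Flag a data block as diffuse if its envelope SNR is close to the
    theoretical Rayleigh value."""
    return abs(envelope_snr(envelope) - RAYLEIGH_SNR) < tol

rng = np.random.default_rng(0)
diffuse_block = rng.rayleigh(scale=1.0, size=50_000)  # many weak scatterers
specular_block = diffuse_block + 5.0  # crude stand-in for a strong coherent echo
print(envelope_snr(diffuse_block), envelope_snr(specular_block))
```

The coherent component raises the mean without raising the spread, so the SNR climbs well above 1.91 and the block is flagged as non-diffuse.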
Parameterization Impacts on Linear Uncertainty Calculation
NASA Astrophysics Data System (ADS)
Fienen, M. N.; Doherty, J.; Reeves, H. W.; Hunt, R. J.
2009-12-01
Efficient linear calculation of model prediction uncertainty can be an insightful diagnostic metric for decision-making. Specifically, the contributions of parameter uncertainty or the location and type of data to prediction uncertainty can be used to evaluate which types of information are most valuable. Information that most significantly reduces prediction uncertainty can be considered to have greater worth. Prediction uncertainty is commonly calculated including or excluding specific information and compared to a base scenario. The quantitative difference in uncertainty with or without the information is indicative of that information's worth in the decision-making process. These results can be calculated at many hypothetical locations to guide network design (i.e., where to install new wells, stream gages, etc.) or used to indicate which parameters are the most important to understand, and thus likely candidates for future characterization work. We examine a hypothetical case in which an inset model is created from a large regional model in order to better represent a surface stream network and make predictions of head near, and flux in, a stream due to installation and pumping of a large well near a stream headwater. Parameterization and edge boundary conditions are inherited from the regional model; the simple act of refining discretization and stream geometry shows improvement in the representation of the streams. Even visual inspection of the simulated head field highlights the need to recalibrate and potentially re-parameterize the inset model. A network of potential head observations is evaluated and contoured in the shallowest two layers of the six-layer model to assess their worth in both predicting flux at a specific gage and head at a specific location near the stream. Three hydraulic conductivity parameterization scenarios are evaluated: using a single multiplier on hydraulic conductivity acting on the inherited hydraulic conductivity zonation using; the
Neutron detector resolution for scattering
Kolda, S.A.
1997-03-01
A resolution function has been determined for scattered neutron experiments at Rensselaer Polytechnic Institute (RPI). This function accounts for the shifting and broadening of the resonance peak due to the additional path length traveled by the neutron after scattering and prior to detection, along with the broadening of the resonance peak due to the bounce target. This resolution function has been parameterized both in neutron energy and in the size of the sample disk. Monte Carlo Neutron and Photon (MCNP) modeling has been used to determine the shape of the detector resolution function while assuming that the sample nucleus has an infinite mass. The shape of the function for a monoenergetic neutron point source has been compared to the analytical solution. Additionally, the parameterized detector resolution function has been used to broaden the scatter yield calculated from Evaluated Neutron Data File ENDF/B-VI cross section data for ²³⁸U. The target resolution function has been empirically determined by comparison of the broadened scatter yield and the experimental yield for ²³⁸U. The combined resolution function can be inserted into the SAMMY code to allow resonance analysis for scattering measurements.
Lightning parameterization in a storm electrification model
NASA Technical Reports Server (NTRS)
Helsdon, John H., Jr.; Farley, Richard D.; Wu, Gang
1988-01-01
The parameterization of an intracloud lightning discharge has been implemented in our Storm Electrification Model. The initiation, propagation direction, termination and charge redistribution of the discharge are approximated assuming overall charge neutrality. Various simulations involving differing amounts of charge transferred have been done. The effects of the lightning-produced ions on the hydrometeor charges, electric field components and electrical energy depend strongly on the charge transferred. A comparison between the measured electric field change of an actual intracloud flash and the field change due to the simulated discharge show favorable agreement.
A parameterization of the evaporation of rainfall
NASA Technical Reports Server (NTRS)
Schlesinger, Michael E.; Oh, Jai-Ho; Rosenfeld, Daniel
1988-01-01
A general theoretical expression for the rainfall rate and the total evaporation rate as a function of the distance below cloud base is developed, and is then specialized to the gamma raindrop size distribution. The theoretical framework is used to analyze the data of Rosenfeld and Mintz (1988) on the radar observations of the rainfall rate as a function of the distance below cloud base, for rain falling from continental convective cells in central South Africa, obtaining a parameterization for the evaporation of rainfall.
Born approximation, scattering, and algorithm
NASA Astrophysics Data System (ADS)
Martinez, Alex; Hu, Mengqi; Gu, Haicheng; Qiao, Zhijun
2015-05-01
In the past few decades, many imaging algorithms were designed under the assumption of no multiple scattering. Recently, we discussed an algorithm for removing high-order scattering components from collected data. This paper is a continuation of our previous work. First, we investigate the current state of multiple scattering in SAR. Then, we revise our method and test it. Given an estimate of our target reflectivity, we compute the multiple-scattering effects in the target region for various frequencies. Furthermore, we propagate this energy through free space towards our antenna and remove it from the collected data.
Intercomparison of land-surface parameterizations launched
NASA Astrophysics Data System (ADS)
Henderson-Sellers, A.; Dickinson, R. E.
One of the crucial tasks for climatic and hydrological scientists over the next several years will be validating land surface process parameterizations used in climate models. There is not, necessarily, a unique set of parameters to be used. Different scientists will want to attempt to capture processes through various methods [for example, Avissar and Verstraete, 1990]. Validation of some aspects of the available (and proposed) schemes' performance is clearly required. It would also be valuable to compare the behavior of the existing schemes [for example, Dickinson et al., 1991; Henderson-Sellers, 1992a]. The WMO-CAS Working Group on Numerical Experimentation (WGNE) and the Science Panel of the GEWEX Continental-Scale International Project (GCIP) [for example, Chahine, 1992] have agreed to launch the joint WGNE/GCIP Project for Intercomparison of Land-Surface Parameterization Schemes (PILPS). The principal goal of this project is to achieve greater understanding of the capabilities and potential applications of existing and new land-surface schemes in atmospheric models. It is not anticipated that a single "best" scheme will emerge. Rather, the aim is to explore alternative models in ways compatible with their authors' or exploiters' goals and to increase understanding of the characteristics of these models in the scientific community.
Mixing parameterizations in ocean climate modeling
NASA Astrophysics Data System (ADS)
Moshonkin, S. N.; Gusev, A. V.; Zalesny, V. B.; Byshev, V. I.
2016-03-01
Results of numerical experiments with an eddy-permitting ocean circulation model on the simulation of the climatic variability of the North Atlantic and the Arctic Ocean are analyzed. We compare the quality of the ocean simulations using different subgrid mixing parameterizations. The circulation model is found to be sensitive to the mixing parameterization. The computation of viscosity and diffusivity coefficients by an original splitting algorithm of the evolution equations for turbulence characteristics is found to be as efficient as traditional Monin-Obukhov parameterizations. At the same time, however, the variability of ocean climate characteristics is simulated more adequately. The simulation of salinity fields in the entire study region improves most significantly. Turbulent processes have a large effect on the circulation in the long term through changes in the density fields. The velocity fields in the Gulf Stream and in the entire North Atlantic Subpolar Cyclonic Gyre are reproduced more realistically. The surface level height in the Arctic Basin is simulated more faithfully, marking the Beaufort Gyre better. The use of the Prandtl number as a function of the Richardson number improves the quality of ocean modeling.
A new parameterization of spectral and broadband ocean surface albedo.
Jin, Zhonghai; Qiao, Yanli; Wang, Yingjian; Fang, Yonghua; Yi, Weining
2011-12-19
A simple yet accurate parameterization of spectral and broadband ocean surface albedo has been developed. To facilitate the parameterization and its applications, the albedo is parameterized for direct and diffuse incident radiation separately, and each is further divided into two components: the contributions from the surface and from the water, respectively. The four albedo components are independent of each other; hence, altering one will not affect the others. This design keeps the parameterization scheme flexible for future updates. Users can simply replace any of the adopted empirical formulations (e.g., the relationship between foam reflectance and wind speed) as desired without needing to change the parameterization scheme. The parameterization is validated by in situ measurements and can be easily implemented into a climate or radiative transfer model. PMID:22274228
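The four-component split described above combines naturally through the direct/diffuse partition of the incident irradiance. A minimal sketch (the component values and direct fraction in the example are placeholders; the paper's empirical formulations, such as foam reflectance versus wind speed, are not reproduced here):

```python
def ocean_albedo(f_direct, a_dir_surface, a_dir_water, a_dif_surface, a_dif_water):
    """Combine four independent ocean-albedo components: surface and
    water contributions for direct and diffuse incidence, weighted by
    the direct fraction f_direct of the incident irradiance."""
    a_direct  = a_dir_surface + a_dir_water   # total direct-beam albedo
    a_diffuse = a_dif_surface + a_dif_water   # total diffuse albedo
    return f_direct * a_direct + (1.0 - f_direct) * a_diffuse

# Hypothetical component values for a mostly clear-sky case:
print(ocean_albedo(0.8, 0.03, 0.01, 0.05, 0.015))
```

Because each component enters the sum independently, replacing one empirical formulation (say, the surface term) leaves the other three untouched, which is the flexibility the abstract emphasizes.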
BRAIN SURFACE CONFORMAL PARAMETERIZATION WITH THE RICCI FLOW
Wang, Yalin; Gu, Xianfeng; Chan, Tony F.; Thompson, Paul M.; Yau, Shing-Tung
2013-01-01
In medical imaging, parameterized 3D surface models are of great interest for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. By solving the Yamabe equation with the Ricci flow method, we can conformally parameterize a brain surface via a mapping to a multi-hole disk. The resulting parameterizations do not have any singularities and are intrinsic and stable. To illustrate the technique, we computed parameterizations of cortical surfaces in MRI scans of the brain. We also show the parameterization results are consistent with constraints imposed on the mappings of selected landmark curves, and the resulting surfaces can be matched to each other using constrained harmonic maps. Unlike previous planar conformal parameterization methods, our algorithm does not introduce any singularity points. PMID:21926017
Parameterization of Solar Global Uv Irradiation
NASA Astrophysics Data System (ADS)
Feister, U.; Jaekel, E.; Gericke, K.
Daily doses of solar global UV-B, UV-A, and erythemal irradiation have been parameterized to be calculated from pyranometer data of global and diffuse irradiation as well as from atmospheric column ozone measured at Potsdam (52 N, 107 m asl). The method has been validated against independent data of measured UV irradiation. A gain of information is provided by use of the parameterization for the three UV components (UV-B, UV-A, and erythemal) referring to average values of UV irradiation. Applying the method to UV irradiation measured at the mountain site Hohenpeissenberg (48 N, 977 m asl) shows that the parameterization even holds under completely different climatic conditions. On a long-term average (1953-2000), parameterized annual UV irradiation values are 15% (UV-A) and 21% (UV-B), respectively, higher at Hohenpeissenberg than at Potsdam. Using measured input data from 27 German weather stations, the method has also been applied to estimate the spatial distribution of UV irradiation across Germany. Daily global and diffuse irradiation measured at Potsdam (1937-2000) as well as atmospheric column ozone measured at Potsdam between 1964 and 2000 have been used to derive long-term estimates of daily and annual totals of UV irradiation that include the effects of changes in cloudiness, in aerosols and, at least for the period 1964 to 2000, also in atmospheric ozone. It is shown that the extremely low ozone values observed mainly after the volcanic eruption of Mt. Pinatubo in 1991 substantially enhanced UV-B irradiation in the first half of the 1990s. The non-linear long-term changes between 1968 and 2000 amount to +4% ... +5% for annual global and UV-A irradiation, mainly due to changing cloudiness, and +14% ... +15% for UV-B and erythemal irradiation, due to both changing cloudiness and decreasing column ozone. Estimates of long-term changes in UV irradiation derived from data measured at other German sites are
Optika: a GUI framework for parameterized applications.
Nusbaum, Kurtis L.
2011-06-01
In the field of scientific computing there are many specialized programs designed for specific applications in areas such as biology, chemistry, and physics. These applications are often very powerful and extraordinarily useful in their respective domains. However, some suffer from a common problem: a non-intuitive, poorly designed user interface. The purpose of Optika is to address this problem and provide a simple, viable solution. Using only a list of parameters passed to it, Optika can dynamically generate a GUI. This allows the user to specify parameter values in a fashion that is much more intuitive than the traditional 'input decks' used by some parameterized scientific applications. By leveraging the power of Optika, these scientific applications will become more accessible and thus allow their designers to reach a much wider audience while requiring minimal extra development effort.
Planet temperatures with surface cooling parameterized
NASA Astrophysics Data System (ADS)
Levenson, Barton Paul
2011-06-01
A semigray (shortwave and longwave) surface temperature model is developed from conditions on Venus, Earth and Mars, where the greenhouse effect is mostly due to carbon dioxide and water vapor. In addition to estimating longwave optical depths, parameterizations are developed for surface cooling due to shortwave absorption in the atmosphere, and for convective (sensible and latent) heat transfer. An approximation to the Clausius-Clapeyron relation provides water-vapor feedback. The resulting iterative algorithm is applied to three "super-Earths" in the Gliese 581 system, including the "Goldilocks" planet g (Vogt et al., 2010). Surprisingly, none of the three appear habitable. One cannot accurately locate a star's habitable zone without data or assumptions about a planet's atmosphere.
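A toy version of the kind of iterative algorithm the abstract describes can be built from a gray longwave atmosphere plus a crude Clausius-Clapeyron-style water-vapor term. All coefficients below are hypothetical illustrations, not Levenson's calibrated model:

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temperature(S, albedo, tau_dry, wv_coeff=0.0, n_iter=100):
    """Toy fixed-point iteration: gray-atmosphere surface temperature
    T_s = T_e * (1 + 3*tau/4)**0.25, where the longwave optical depth tau
    grows with temperature through a Clausius-Clapeyron-like water-vapor
    term (the Magnus-style exponent below is a stand-in)."""
    T_e = (S * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25  # effective temperature
    T = T_e
    for _ in range(n_iter):
        tau = tau_dry + wv_coeff * math.exp(17.27 * (T - 273.15) / (T - 35.85))
        T = T_e * (1.0 + 0.75 * tau) ** 0.25
    return T

# Earth-like numbers: S ~ 1361 W/m^2, Bond albedo ~ 0.3
print(surface_temperature(1361.0, 0.3, tau_dry=0.7, wv_coeff=0.02))
```

With the water-vapor term switched on, the converged temperature sits above the dry-atmosphere result, illustrating the positive feedback the abstract invokes; with too large a feedback coefficient the iteration runs away, which is the habitability-limiting behavior the model probes.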
A Genus Oblivious Approach to Cross Parameterization
Bennett, J C; Pascucci, V; Joy, K I
2008-06-16
In this paper we present a robust approach to construct a map between two triangulated meshes, M and M′, of arbitrary and possibly unequal genus. We introduce a novel initial alignment scheme that allows the user to identify 'landmark tunnels' and/or a 'constrained silhouette' in addition to the standard landmark vertices. To describe the evolution of non-landmark tunnels we automatically derive a continuous deformation from M to M′ using a variational implicit approach. Overall, we achieve a cross parameterization scheme that is provably robust in the sense that it can map M to M′ without constraints on their relative genus. We provide a number of examples to demonstrate the practical effectiveness of our scheme between meshes of different genus and shape.
The natural parameterization of cosmic neutrino oscillations
NASA Astrophysics Data System (ADS)
Palladino, Andrea; Vissani, Francesco
2015-09-01
The natural parameterization of vacuum oscillations in three neutrino flavors is studied. Compact and exact relations of its three parameters with the ordinary three mixing angles and CP-violating phase are obtained. Its usefulness is illustrated by considering various applications: the study of the flavor ratio and of its uncertainties, the comparison of expectations and observations in the flavor triangle, and the intensity of the signal due to Glashow resonance. The results in the literature are easily reproduced and in particular the recently obtained agreement of the observations of IceCube with the hypothesis of cosmic neutrino oscillations is confirmed. It is argued that a Gaussian treatment of the errors appropriately describes the effects of the uncertainties on the neutrino oscillation parameters.
Cumulus parameterizations in chemical transport models
NASA Astrophysics Data System (ADS)
Mahowald, Natalie M.; Rasch, Philip J.; Prinn, Ronald G.
1995-12-01
Global three-dimensional chemical transport models (CTMs) are valuable tools for studying processes controlling the distribution of trace constituents in the atmosphere. A major uncertainty in these models is the subgrid-scale parameterization of transport by cumulus convection. This study seeks to define the range of behavior of moist convective schemes and point toward more reliable formulations for inclusion in chemical transport models. The emphasis is on deriving convective transport from meteorological data sets (such as those from the forecast centers) which do not routinely include convective mass fluxes. Seven moist convective parameterizations are compared in a column model to examine the sensitivity of the vertical profile of trace gases to the parameterization used in a global chemical transport model. The moist convective schemes examined are the Emanuel scheme [Emanuel, 1991], the Feichter-Crutzen scheme [Feichter and Crutzen, 1990], the inverse thermodynamic scheme (described in this paper), two versions of a scheme suggested by Hack [Hack, 1994], and two versions of a scheme suggested by Tiedtke (one following the formulation used in the ECMWF (European Centre for Medium-Range Weather Forecasting) and ECHAM3 (European Centre and Hamburg Max-Planck-Institut) models [Tiedtke, 1989], and one formulated as in the TM2 (Transport Model-2) model (M. Heimann, personal communication, 1992)). These convective schemes vary in the closure used to derive the mass fluxes, as well as in the cloud model formulation, giving a broad range of results. In addition, two boundary layer schemes are compared: a state-of-the-art nonlocal boundary layer scheme [Holtslag and Boville, 1993] and a simple adiabatic mixing scheme described in this paper. Three tests are used to compare the moist convective schemes against observations. Although the tests conducted here cannot conclusively show that one parameterization is better than the others, the tests are a good measure of the
Toward parameterization of the stable boundary layer
NASA Technical Reports Server (NTRS)
Wetzel, P. J.
1982-01-01
Wangara data are used to examine the depth of the nocturnal boundary layer (NBL) and the height to which surface-linked turbulence extends. It is noted that the linearity of virtual temperature profiles has been found to extend up through a significant portion of the NBL, and then to diverge where the wind shear rides over the surface-induced turbulence. A series of Richardson numbers is examined for varying degrees of turbulence, and the significant cooling region is observed to be deeper than the layer over which the linear relationship holds. A three-layer parameterization of the thermodynamic structure of the NBL is developed, such that a system of five equations must be solved when the wind velocity profile and the surface temperature are known. A correlation coefficient of 0.89 was found between the bulk Richardson number and the depth of the linear layer.
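The bulk Richardson number used in the correlation above can be sketched numerically. The sketch below is illustrative only: the function name, the simple surface-to-height difference form, and the sample values are our assumptions, not taken from the paper.

```python
def bulk_richardson(theta_v_sfc, theta_v_z, u_z, v_z, z, g=9.81):
    """Bulk Richardson number between the surface and height z:
    Ri_b = g * (theta_v(z) - theta_v(sfc)) * z / (theta_v(z) * (u^2 + v^2))."""
    dtheta = theta_v_z - theta_v_sfc
    shear_sq = u_z ** 2 + v_z ** 2
    return g * dtheta * z / (theta_v_z * shear_sq)

# A stably stratified nocturnal case: air aloft warmer than the surface
ri = bulk_richardson(theta_v_sfc=285.0, theta_v_z=288.0, u_z=4.0, v_z=1.0, z=100.0)
print(ri)  # positive Ri_b indicates stable stratification
```

A positive value signals the stable nocturnal stratification the paper relates to the depth of the linear layer.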
Climate impacts of parameterized Nordic Sea overflows
NASA Astrophysics Data System (ADS)
Danabasoglu, Gokhan; Large, William G.; Briegleb, Bruce P.
2010-11-01
A new overflow parameterization (OFP) of density-driven flows through ocean ridges via narrow, unresolved channels has been developed and implemented in the ocean component of the Community Climate System Model version 4. It represents exchanges from the Nordic Seas and the Antarctic shelves, associated entrainment, and subsequent injection of overflow product waters into the abyssal basins. We investigate the effects of the parameterized Denmark Strait (DS) and Faroe Bank Channel (FBC) overflows on the ocean circulation, showing their impacts on the Atlantic Meridional Overturning Circulation and the North Atlantic climate. The OFP is based on the Marginal Sea Boundary Condition scheme of Price and Yang (1998), but there are significant differences that are described in detail. Two uncoupled (ocean-only) and two fully coupled simulations are analyzed. Each pair consists of one case with the OFP and a control case without this parameterization. In both uncoupled and coupled experiments, the parameterized DS and FBC source volume transports are within the range of observed estimates. The entrainment volume transports remain lower than observational estimates, leading to lower than observed product volume transports. Due to low entrainment, the product and source water properties are too similar. The DS and FBC overflow temperature and salinity properties are in better agreement with observations in the uncoupled case than in the coupled simulation, likely reflecting surface flux differences. The most significant impact of the OFP is the improved North Atlantic Deep Water penetration depth, leading to a much better comparison with the observational data and significantly reducing the chronic, shallow penetration depth bias in level coordinate models. This improvement is due to the deeper penetration of the southward flowing Deep Western Boundary Current. In comparison with control experiments without the OFP, the abyssal ventilation rates increase in the North
Parameterizing Size Distribution in Ice Clouds
DeSlover, Daniel; Mitchell, David L.
2009-09-25
An outstanding problem that contributes considerable uncertainty to Global Climate Model (GCM) predictions of future climate is the characterization of ice particle sizes in cirrus clouds. Recent parameterizations of ice cloud effective diameter differ by a factor of three, which, for overcast conditions, often translates to changes in outgoing longwave radiation (OLR) of 55 W m-2 or more. Much of this uncertainty in cirrus particle sizes is related to the problem of ice particle shattering during in situ sampling of the ice particle size distribution (PSD). Ice particles often shatter into many smaller ice fragments upon collision with the rim of the probe inlet tube. These small ice artifacts are counted as real ice crystals, resulting in anomalously high concentrations of small ice crystals (D < 100 µm) and underestimates of the mean and effective size of the PSD. Half of the cirrus cloud optical depth calculated from these in situ measurements can be due to this shattering phenomenon. Another challenge is the determination of ice and liquid water amounts in mixed-phase clouds. Mixed-phase clouds in the Arctic contain mostly liquid water, and the presence of ice is important for determining their lifecycle. Colder high clouds between -20 and -36 °C may also be mixed phase, but in this case their condensate is mostly ice with low levels of liquid water. Rather than affecting their lifecycle, the presence of liquid dramatically affects the cloud optical properties, which affects cloud-climate feedback processes in GCMs. This project has made advancements in solving both of these problems. Regarding the first problem, PSD in ice clouds are uncertain due to the inability to reliably measure the concentrations of the smallest crystals (D < 100 µm), known as the “small mode”. Rather than using in situ probe measurements aboard aircraft, we employed a treatment of ice
On the factorization and fitting of molecular scattering information
NASA Technical Reports Server (NTRS)
Goldflam, R.; Kouri, D. J.; Green, S.
1977-01-01
The reported analysis is based on the factored IOS T-matrix. It is shown that line shape measurements may be used over a range of temperatures to evaluate inelastic scattering cross sections. Basic factorization or parameterization relations are derived by considering the wavefunction equations. The parameterization of cross sections is considered, taking into account the differential scattering amplitude and cross section, integral cross sections, phenomenological cross sections for general relaxation processes, and viscosity and diffusion cross sections. Thermal averages and rates are discussed, giving attention to integral cross sections and rates, and general phenomenological cross sections. The results of computational studies are also presented.
Parameterized Complexity of Eulerian Deletion Problems.
Cygan, Marek; Marx, Dániel; Pilipczuk, Marcin; Pilipczuk, Michał; Schlotter, Ildikó
2014-01-01
We study a family of problems where the goal is to make a graph Eulerian, i.e., connected and with all the vertices having even degrees, by a minimum number of deletions. We completely classify the parameterized complexity of various versions: undirected or directed graphs, vertex or edge deletions, with or without the requirement of connectivity, etc. The collection of results shows an interesting contrast: while the node-deletion variants remain intractable, i.e., W[1]-hard for all the studied cases, edge-deletion problems are either fixed-parameter tractable or polynomial-time solvable. Of particular interest is a randomized FPT algorithm for making an undirected graph Eulerian by deleting the minimum number of edges, based on a novel application of the color coding technique. For versions that remain NP-complete but fixed-parameter tractable we consider also possibilities of polynomial kernelization; unfortunately, we prove that this is not possible unless NP⊆coNP/poly. PMID:24415818
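The target property in the abstract above, making a graph Eulerian, can be checked directly. The sketch below is our own illustration (function name and test graphs are not from the paper): it verifies the two stated conditions, all vertex degrees even and connectivity over the edge-carrying vertices.

```python
from collections import defaultdict, deque

def is_eulerian(edges):
    """True iff the undirected graph is connected (over vertices that carry
    edges) and every vertex has even degree -- the target property of the
    deletion problems studied in the paper."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    if not adj:
        return True  # the empty graph is trivially Eulerian
    if any(len(nbrs) % 2 for nbrs in adj.values()):
        return False
    start = next(iter(adj))
    seen, queue = {start}, deque([start])
    while queue:  # BFS over the edge-carrying vertices
        x = queue.popleft()
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return seen == set(adj)

print(is_eulerian([(0, 1), (1, 2), (2, 0)]))          # triangle: True
print(is_eulerian([(0, 1), (1, 2), (2, 0), (0, 3)]))  # pendant edge: False
```

The deletion problems classified in the paper ask for the minimum number of vertex or edge deletions that make this predicate true.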
Brain Surface Conformal Parameterization Using Riemann Surface Structure
Wang, Yalin; Lui, Lok Ming; Gu, Xianfeng; Hayashi, Kiralee M.; Chan, Tony F.; Toga, Arthur W.; Thompson, Paul M.; Yau, Shing-Tung
2011-01-01
In medical imaging, parameterized 3-D surface models are useful for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on Riemann surface structure, which uses a special curvilinear net structure (conformal net) to partition the surface into a set of patches that can each be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable (their solutions tend to be smooth functions and the boundary conditions of the Dirichlet problem can be enforced). Conformal parameterization also helps transform partial differential equations (PDEs) that may be defined on 3-D brain surface manifolds to modified PDEs on a two-dimensional parameter domain. Since the Jacobian matrix of a conformal parameterization is diagonal, the modified PDE on the parameter domain is readily solved. To illustrate our techniques, we computed parameterizations for several types of anatomical surfaces in 3-D magnetic resonance imaging scans of the brain, including the cerebral cortex, hippocampi, and lateral ventricles. For surfaces that are topologically homeomorphic to each other and have similar geometrical structures, we show that the parameterization results are consistent and the subdivided surfaces can be matched to each other. Finally, we present an automatic sulcal landmark location algorithm by solving PDEs on cortical surfaces. The landmark detection results are used as constraints for building conformal maps between surfaces that also match explicitly defined landmarks. PMID:17679336
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Höft, J.; Weber, J. K.; Raut, E.; Larson, V. E.; Wang, M.; Rasch, P. J.
2014-06-11
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
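The interface between an assumed subgrid PDF and a nonlinear microphysics process can be illustrated with a one-variable sketch. Everything here is an assumption for illustration only: the single-Gaussian stand-in for the PDF, the hypothetical threshold rate, and all numerical values; the actual scheme uses a more general multivariate PDF and full prognostic microphysics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed subgrid PDF of total-water mixing ratio (kg/kg); a single
# Gaussian is a stand-in for the scheme's multivariate PDF.
qt_mean, qt_std = 0.008, 0.002
samples = rng.normal(qt_mean, qt_std, size=10_000)

def threshold_rate(qt, q_crit=0.009):
    """Hypothetical threshold-type microphysical rate (illustrative only)."""
    return np.maximum(qt - q_crit, 0.0)

# Monte Carlo estimate of the grid-mean rate vs. the rate at the grid mean:
# the nonlinearity makes the two differ, which is why subgrid variability
# must be sampled rather than evaluated at the mean state.
mc_rate = threshold_rate(samples).mean()
rate_at_mean = threshold_rate(np.array([qt_mean]))[0]
print(mc_rate > rate_at_mean)  # True: variability activates the threshold
```

The grid-mean value sits below the threshold, so the rate at the mean is zero, while sampled subgrid variability produces a nonzero grid-mean rate.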
NASA Astrophysics Data System (ADS)
Li, F.; Zeng, X. D.; Levis, S.
2012-03-01
A process-based fire parameterization of intermediate complexity has been developed for global simulations in the framework of a Dynamic Global Vegetation Model (DGVM) in an Earth System Model (ESM). Burned area in a grid cell is estimated as the product of fire counts and the average burned area per fire. The scheme comprises three parts: fire occurrence, fire spread, and fire impact. In the fire occurrence part, fire counts rather than fire occurrence probability are calculated, in order to capture the observed high burned-area fraction in regions where fire occurs frequently. In the fire spread part, the post-fire region of a fire is assumed to be elliptical in shape. Mathematical properties of ellipses and mathematical derivation are applied to remove redundant and unreasonable equations and assumptions in existing fire spread parameterizations. In the fire impact part, trace gas and aerosol emissions due to biomass burning are estimated, which offers an interface with atmospheric chemistry and aerosol models in ESMs. In addition, a flexible time-step length makes the new fire parameterization easily applicable to various DGVMs. Global performance of the new fire parameterization is assessed by using an improved version of the Community Land Model version 3 with the Dynamic Global Vegetation Model (CLM-DGVM). Simulations are compared against the latest satellite-based Global Fire Emission Database version 3 (GFED3) for 1997-2004. Results show that simulated global totals and spatial patterns of burned area and fire carbon emissions, global annual burned-area fractions for various vegetation types, and the interannual variability of burned area are in close agreement with the GFED3, and more accurate than CLM-DGVM simulations with the commonly used Glob-FIRM fire parameterization and the old fire module of CLM-DGVM. Furthermore, the average relative error of simulated trace gas and aerosol emissions due to biomass burning is 7%. Results suggest that the new fire parameterization may
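The burned-area bookkeeping described above (fire counts times an elliptical per-fire area) can be sketched as follows. The rate of spread, duration, length-to-breadth ratio, and the assumption that the ellipse's major axis equals spread rate times duration are all illustrative stand-ins, not the paper's actual derivation.

```python
import math

def burned_area_per_fire(spread_rate, duration, lb_ratio):
    """Area of an elliptical fire scar.

    spread_rate: forward rate of spread (m/s); duration: fire duration (s);
    lb_ratio: length-to-breadth ratio of the ellipse.  The major axis is
    taken as spread_rate * duration, an illustrative simplification of the
    scheme's up/downwind spread treatment."""
    length = spread_rate * duration
    breadth = length / lb_ratio
    return math.pi * (length / 2.0) * (breadth / 2.0)

def gridcell_burned_area(fire_counts, spread_rate, duration, lb_ratio):
    """Burned area = fire counts x average burned area per fire."""
    return fire_counts * burned_area_per_fire(spread_rate, duration, lb_ratio)

area_m2 = gridcell_burned_area(fire_counts=12, spread_rate=0.5,
                               duration=3600.0, lb_ratio=3.0)
print(area_m2 / 1e6)  # km^2
```

Factoring the grid-cell total into counts and per-fire geometry is what lets the scheme capture high burned fractions where fires are frequent.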
Concept of polarization entropy in optical scattering
NASA Astrophysics Data System (ADS)
Cloude, Shane R.; Pottier, Eric
1995-06-01
We consider the application of the general theory of unitary matrices to problems of wave scattering involving polarized waves. Having outlined useful parameterizations of the low-dimensional groups associated with these unitary matrices, we develop a general processing strategy, which we suggest has application in the extraction of physical information from a range of scattering matrices in optics. Examples are presented of applying the unitary matrix structure to problems of single and multiple scattering from a cloud of random particles. The techniques are best suited to the characterization of depolarizing systems, where the scattered waves undergo a change in degree of polarization as well as in polarization state. The degree of disorder of the system is then quantified by a scalar, the polarimetric entropy, defined from the eigenvalues of a scattering matrix, which ranges from 0 for non-depolarizing systems to 1 for perfect depolarizers. Further, we show that the unitary matrix parameterization can be used to extract important system information from the eigenvectors of this matrix.
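The entropy construction described above can be sketched directly from the eigenvalues. The base-3 logarithm (the natural choice for a 3x3 coherency matrix, so that the perfect depolarizer maps to 1) and the function name are our assumptions for illustration.

```python
import numpy as np

def polarimetric_entropy(eigvals):
    """Entropy from the eigenvalues of the coherency/scattering matrix:
    H = -sum p_i log_3 p_i, with p_i the normalized eigenvalues."""
    p = np.asarray(eigvals, dtype=float)
    p = p / p.sum()
    p = p[p > 0]  # convention: 0 * log 0 = 0
    return float(-(p * np.log(p) / np.log(3)).sum())

print(polarimetric_entropy([1.0, 0.0, 0.0]))  # single dominant mechanism: 0
print(polarimetric_entropy([1.0, 1.0, 1.0]))  # perfect depolarizer: 1
```

One dominant eigenvalue gives zero entropy (a deterministic scatterer); equal eigenvalues give entropy 1, the fully random, perfectly depolarizing limit.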
Climate and the equilibrium state of land surface hydrology parameterizations
NASA Technical Reports Server (NTRS)
Entekhabi, Dara; Eagleson, Peter S.
1991-01-01
For given climatic rates of precipitation and potential evaporation, the land surface hydrology parameterizations of atmospheric general circulation models will maintain soil-water storage conditions that balance the moisture input and output. The surface relative soil saturation under such climatic conditions serves as a measure of the land surface parameterization state under a given forcing. The equilibrium value of this variable for alternate parameterizations of land surface hydrology is determined as a function of climate, and the sensitivity of the surface to shifts and changes in climatic forcing is estimated.
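A minimal bucket sketch of the equilibrium described above, assuming the classic linear evaporation parameterization E = s·Ep; this is our illustrative choice, not the paper's schemes, which differ in exactly this closure. At balance, the relative saturation settles at s* = min(P/Ep, 1).

```python
def equilibrium_saturation(precip, pot_evap, n_steps=10_000, dt=0.01):
    """Integrate ds/dt = P - s*Ep, with s clamped to [0, 1], until the
    soil-water store balances moisture input and output."""
    s = 0.5
    for _ in range(n_steps):
        s += dt * (precip - s * pot_evap)
        s = min(max(s, 0.0), 1.0)
    return s

# A dry climate (potential evaporation exceeds precipitation) equilibrates
# at partial saturation; a wet climate saturates the store.
print(round(equilibrium_saturation(precip=1.0, pot_evap=2.5), 3))  # 0.4
print(equilibrium_saturation(precip=3.0, pot_evap=2.0))            # 1.0
```

Different evaporation closures would shift s*, which is precisely the sensitivity across parameterizations that the paper quantifies.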
CORTICAL SURFACE PARAMETERIZATION BY P-HARMONIC ENERGY MINIMIZATION
JOSHI, ANAND A.; SHATTUCK, DAVID W.; THOMPSON, PAUL M.; LEAHY, RICHARD M.
2010-01-01
Cortical surface parameterization has several applications in visualization and analysis of the brain surface. Here we propose a scheme for parameterizing the surface of the cerebral cortex. The parameterization is formulated as the minimization of an energy functional in the pth norm. A numerical method for obtaining the solution is also presented. Brain surfaces from multiple subjects are brought into common parameter space using the scheme. 3D spatial averages of the cortical surfaces are generated by using the correspondences induced by common parameter space. PMID:20721316
Parameterization of cloud glaciation by atmospheric dust
NASA Astrophysics Data System (ADS)
Nickovic, Slobodan; Cvetkovic, Bojan; Madonna, Fabio; Pejanovic, Goran; Petkovic, Slavko
2016-04-01
The exponential growth of research interest in ice nucleation (IN) is motivated, inter alia, by the need to improve the generally unsatisfactory representation of cold cloud formation in atmospheric models, and thereby to increase the accuracy of weather and climate predictions, including better forecasting of precipitation. Research shows that mineral dust contributes significantly to cloud ice nucleation. Samples of residual particles in cloud ice crystals collected by aircraft measurements performed in the upper troposphere of regions distant from desert sources indicate that dust particles dominate over other known ice nuclei such as soot and biological particles. In the nucleation process, dust chemical aging had minor effects. The observational evidence on IN processes has improved substantially over the last decade and clearly shows that there is a significant correlation between IN concentrations and the concentrations of coarser aerosol at a given temperature and moisture. Most recently, owing to recognition of the dominant role of dust as ice nuclei, parameterizations for immersion and deposition icing specifically due to dust have been developed. Based on these achievements, we have developed a real-time coupled atmosphere-dust forecasting system capable of operationally predicting the occurrence of cold clouds generated by dust. We have thoroughly validated the model simulations against available remote sensing observations. We have used the CNR-IMAA Potenza lidar and cloud radar observations to explore the model's capability to represent the vertical features of the cloud and aerosol profiles. We also utilized MSG-SEVIRI and MODIS satellite data to examine the accuracy of the simulated horizontal distribution of cold clouds. Based on the encouraging verification scores obtained, experimental operational prediction of ice clouds nucleated by dust has been introduced at the Serbian Hydrometeorological Service as a publicly available product.
Parameterization of cirrus optical depth and cloud fraction
Soden, B.
1995-09-01
This research illustrates the utility of combining satellite observations and operational analysis for the evaluation of parameterizations. A parameterization based on ice water path (IWP) captures the observed spatial patterns of tropical cirrus optical depth. The strong temperature dependence of cirrus ice water path in both the observations and the parameterization is probably responsible for the good correlation where it exists. Poorer agreement is found in Southern Hemisphere mid-latitudes where the temperature dependence breaks down. Uncertainties in effective radius limit quantitative validation of the parameterization (and its inclusion into GCMs). Also, it is found that monthly mean cloud cover can be predicted within an RMS error of 10% using ECMWF relative humidity corrected by TOVS Upper Troposphere Humidity. 1 ref., 2 figs.
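The IWP-based optical-depth idea can be sketched with the standard geometric-optics relation τ ≈ 3·IWP/(2·ρ_ice·r_e). Both this simple relation and the sample numbers are illustrative stand-ins, not the paper's exact parameterization (which additionally carries the strong temperature dependence of IWP noted above).

```python
def cirrus_optical_depth(iwp, r_e, rho_ice=917.0):
    """Visible optical depth from ice water path iwp (kg/m^2) and effective
    radius r_e (m): tau = 3 * IWP / (2 * rho_ice * r_e)."""
    return 3.0 * iwp / (2.0 * rho_ice * r_e)

# 20 g/m^2 of ice with a 30-micron effective radius
tau = cirrus_optical_depth(iwp=0.020, r_e=30e-6)
print(round(tau, 2))  # ~1.09
```

The 1/r_e dependence makes plain why the effective-radius uncertainties cited above limit quantitative validation of any IWP-based optical depth.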
Some applications of parameterized Picard-Vessiot theory
NASA Astrophysics Data System (ADS)
Mitschi, C.
2016-02-01
This is an expository article describing some applications of parameterized Picard-Vessiot theory. This Galois theory for parameterized linear differential equations was Cassidy and Singer's contribution to an earlier volume dedicated to the memory of Andrey Bolibrukh. The main results we present here were obtained for families of ordinary differential equations with parameterized regular singularities in joint work with Singer. They include parametric versions of Schlesinger's theorem and of the weak Riemann-Hilbert problem as well as an algebraic characterization of a special type of monodromy evolving deformations illustrated by the classical Darboux-Halphen equation. Some of these results have recently been applied by different authors to solve the inverse problem of parameterized Picard-Vessiot theory, and were also generalized to irregular singularities. We sketch some of these results by other authors. The paper includes a brief history of the Darboux-Halphen equation as well as an appendix on differentially closed fields.
Brydegaard, Mikkel
2015-01-01
In recent years, the field of remote sensing of birds and insects in the atmosphere (the aerial fauna) has advanced considerably, and modern electro-optic methods now allow the assessment of the abundance and fluxes of pests and beneficials on a landscape scale. These techniques have the potential to significantly increase our understanding of, and ability to quantify and manage, the ecological environment. This paper presents a concept whereby laser radar observations of atmospheric fauna can be parameterized and table values for absolute cross sections can be catalogued to allow for the study of focal species such as disease vectors and pests. Wing-beat oscillations are parameterized with a discrete set of harmonics and the spherical scatter function is parameterized by a reduced set of symmetrical spherical harmonics. A first order spherical model for insect scatter is presented and supported experimentally, showing angular dependence of wing beat harmonic content. The presented method promises to give insights into the flight heading directions of species in the atmosphere and has the potential to shed light onto the km-range spread of pests and disease vectors. PMID:26295706
NASA Astrophysics Data System (ADS)
Grell, Evelyn; Grell, Georg; Bao, Jian-Wen
2013-04-01
Results from numerical experiments using high-resolution mesoscale models have presented evidence that the use of an explicit microphysics scheme alone at grid spacings from a few hundred meters to a few kilometers is often not sufficient to neutralize moist instability within the grid box. A consequence of this problem is that artificial grid-point storms may occur, which in tropical cyclone simulations can lead to an erroneous representation of tropical cyclone development. The use of conventional sub-grid convection parameterization schemes to alleviate artificial grid-point storms is not appropriate in this situation, since these schemes assume that the updraft area is much smaller than the model grid spacing, and this assumption becomes invalid when the grid size is a few kilometers or smaller. A sub-grid convection scheme suitable for high-resolution mesoscale models has been developed by Grell and Freitas (2013) to remove the aforementioned assumption used in conventional sub-grid convection parameterization schemes. This scheme can be used for grid spacings equal to or smaller than a few kilometers to help sufficiently remove moist instability for the entire grid point. The scheme behaves similarly to conventional schemes when the updraft area is much smaller than the grid size. As the updraft area in a grid box approaches the grid size, the parameterized sub-grid convection gradually diminishes. This presentation highlights major results from experimenting with this newly developed scheme in the Advanced Research WRF (ARW) model with an idealized tropical cyclone intensification case. We will demonstrate that the scheme converges (i.e., that the parameterized convection diminishes as the updraft area in a grid box approaches the grid size) by examining how the intensity of the parameterized sub-grid convection changes as the grid size decreases. We will also discuss the issues and challenges in refining this scheme for its application in operational models.
A Gaussian-product stochastic Gent-McWilliams parameterization
NASA Astrophysics Data System (ADS)
Grooms, Ian
2016-10-01
The locally-averaged horizontal buoyancy flux by mesoscale eddies is computed from eddy-resolving quasigeostrophic simulations of ocean-mesoscale eddy dynamics. This flux has a very non-Gaussian distribution peaked at zero, not at the mean value. This non-Gaussian flux distribution arises because the flux is a product of zero-mean random variables: the eddy velocity and buoyancy. A framework for stochastic Gent-McWilliams (GM) parameterization is presented. Gaussian random field models for subgrid-scale velocity and buoyancy are developed. The product of these Gaussian random fields is used to construct a non-Gaussian stochastic parameterization of the horizontal subgrid-scale density flux, which leads to a non-Gaussian stochastic GM parameterization. This new non-Gaussian stochastic GM parameterization is tested in an idealized box ocean model, and compared to a Gaussian approach that simply multiplies the deterministic GM parameterization by a Gaussian random field. The non-Gaussian approach has a significant impact on both the mean and variability of the simulations, more so than the Gaussian approach; for example, the non-Gaussian simulation has a much larger net kinetic energy and a stronger overturning circulation than a comparable Gaussian simulation. Future directions for development of the stochastic GM parameterization and extensions of the Gaussian-product approach are discussed.
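The key distributional fact above, that a product of two independent zero-mean Gaussians is sharply peaked at zero with heavy tails, is easy to verify numerically. The fields, sample sizes, and seed below are our own choices for illustration, not quantities from the quasigeostrophic simulations.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for subgrid eddy velocity u' and buoyancy b': independent,
# zero-mean Gaussian samples whose product plays the role of the flux u'b'.
u = rng.normal(0.0, 1.0, size=200_000)
b = rng.normal(0.0, 1.0, size=200_000)
flux = u * b

# Excess kurtosis: ~0 for a Gaussian, 6 in theory for this product,
# reflecting the peaked, heavy-tailed flux distribution described above.
m = flux.mean()
excess_kurtosis = ((flux - m) ** 4).mean() / flux.var() ** 2 - 3.0
print(excess_kurtosis > 3.0)  # clearly non-Gaussian
```

This is why multiplying two Gaussian random fields, rather than multiplying the deterministic GM flux by a single Gaussian field, reproduces the observed flux statistics.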
Parameter Estimation and Parameterization Uncertainty Using Bayesian Model Averaging
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Li, X.
2007-12-01
This study proposes Bayesian model averaging (BMA) to address parameter estimation uncertainty arising from non-uniqueness in parameterization methods. BMA provides a means of incorporating multiple parameterization methods for prediction through the law of total probability, with which an ensemble average of the hydraulic conductivity distribution is obtained. Estimation uncertainty is described by the BMA variances, which contain variances within and between parameterization methods. BMA shows that considering more parameterization methods tends to increase estimation uncertainty, and that estimation uncertainty is always underestimated when a single parameterization method is used. Two major problems in applying BMA to hydraulic conductivity estimation using a groundwater inverse method are discussed in the study. The first problem is the use of posterior probabilities in BMA, which tends to single out one best method and discard other good methods. This problem arises from Occam's window, which only accepts models in a very narrow range. We propose a variance window to replace Occam's window to cope with this problem. The second problem is the use of the Kashyap information criterion (KIC), which makes BMA tend to prefer highly uncertain parameterization methods because it considers the Fisher information matrix. We found that the Bayesian information criterion (BIC) is a good approximation to KIC and is able to avoid controversial results. We applied BMA to hydraulic conductivity estimation in the 1,500-foot sand aquifer in East Baton Rouge Parish, Louisiana.
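The BMA mean and the within/between variance decomposition mentioned above can be sketched directly. The weights and per-method statistics below are hypothetical numbers for illustration, not values from the Baton Rouge application.

```python
import numpy as np

def bma_mean_variance(weights, means, variances):
    """Bayesian model averaging over parameterization methods.

    Returns the BMA mean and the total variance, where the total is the
    within-method variance plus the between-method variance, matching the
    decomposition described in the abstract."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # posterior model weights must sum to 1
    mu = np.asarray(means, dtype=float)
    var = np.asarray(variances, dtype=float)
    bma_mean = (w * mu).sum()
    within = (w * var).sum()
    between = (w * (mu - bma_mean) ** 2).sum()
    return bma_mean, within + between

# Three hypothetical parameterization methods for log-conductivity
mean, total_var = bma_mean_variance(weights=[0.5, 0.3, 0.2],
                                    means=[-4.0, -3.5, -5.0],
                                    variances=[0.2, 0.3, 0.1])
print(mean, total_var)
```

The between-method term is exactly what a single-method analysis omits, which is why the abstract notes that a single parameterization always underestimates uncertainty.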
Faster Parameterized Algorithms for Minor Containment
NASA Astrophysics Data System (ADS)
Adler, Isolde; Dorn, Frederic; Fomin, Fedor V.; Sau, Ignasi; Thilikos, Dimitrios M.
The theory of Graph Minors by Robertson and Seymour is one of the deepest and most significant theories in modern Combinatorics. This theory also has a strong impact on the recent development of Algorithms, and several areas, like Parameterized Complexity, have roots in Graph Minors. Until very recently it was a common belief that Graph Minors Theory was mainly of theoretical importance. However, it appears that many deep results from Robertson and Seymour's theory can also be used in the design of practical algorithms. Minor containment testing is one of the algorithmically most important and technical parts of the theory, and minor containment in graphs of bounded branchwidth is a basic ingredient of this algorithm. In order to implement minor containment testing on graphs of bounded branchwidth, Hicks [NETWORKS 04] described an algorithm that, in time O(3^(k^2) · (h+k-1)! · m), decides if a graph G with m edges and branchwidth k contains a fixed graph H on h vertices as a minor. That algorithm follows the ideas introduced by Robertson and Seymour in [J'CTSB 95]. In this work we improve the dependence on k of Hicks' result by showing that checking if H is a minor of G can be done in time O(2^((2k+1) · log k) · h^(2k) · 2^(2h^2) · m). Our approach is based on a combinatorial object called a rooted packing, which captures the properties of the potential models of subgraphs of H that we seek in our dynamic programming algorithm. This formulation with rooted packings allows us to speed up the algorithm when G is embedded in a fixed surface, obtaining the first single-exponential algorithm for minor containment testing. Namely, it runs in time 2^(O(k)) · h^(2k) · 2^(O(h)) · n, with n = |V(G)|. Finally, we show that slight modifications of our algorithm allow us to solve some related problems within the same time bounds, such as induced minor or contraction minor containment.
NASA Astrophysics Data System (ADS)
Ouwersloot, H. G.; van Stratum, B. J.; Vila-Guerau Arellano, J.; Sikma, M.; Krol, M. C.; Lelieveld, J.
2013-12-01
We investigate the vertical transport of moisture and atmospheric chemical reactants from the sub-cloud layer to the cumulus cloud layer related to the kinematic mass flux that is driven by shallow convection over land. The dynamical and chemical assumptions needed for mesoscale and global chemistry-transport model parameterizations are systematically analysed using numerical experiments performed by a Large-Eddy Simulation (LES) model. First, we identify and discuss the four primary feedback mechanisms between sub-cloud layer dynamics and mass-flux transport by shallow cumulus clouds for typical mid-latitude conditions. These feedbacks involve mixed-layer drying and heating, changing the moisture variability at the sub-cloud layer top and adjusting entrainment. Based on this analysis and LES experiments, we design parameterizations for cloud properties and mass-flux transport of air and moisture that can be applied to large-scale models. As an intermediate step, we incorporate the parameterizations in a conceptual mixed-layer model, which enables us to study these interplays in more detail. By comparing the results of this model with LES case studies, we show for a wide range of conditions that the new parameterizations enable the model to reproduce the sub-cloud layer dynamics and the four aforementioned feedbacks. However, by considering heterogeneous sensible and latent heat fluxes at the surface, we demonstrate that the parameterizations are sensitive to specific boundary conditions due to changes in the boundary-layer dynamics. Second, we extend the investigation to determine whether the parameterizations are suitable for tropical conditions and to represent the transport of reactants. The numerical experiments in this analysis are inspired by observations over the Amazon during the dry season. Isoprene, a key atmospheric compound over the tropical rain forest, decreases by 8.5 % hr-1 on average and 15 % hr-1 at maximum due to mass-flux induced removal. The
Paluszkiewicz, T.; Hibler, L.F.; Romea, R.D.
1995-01-01
The current generation of ocean general circulation models (OGCMs) uses a convective adjustment scheme to remove static instabilities and to parameterize shallow and deep convection. In simulations used to examine climate-related scenarios, investigators found that in the Arctic regions, the OGCM simulations did not produce a realistic vertical density structure, did not create the correct quantity of deep water, and did not use a time-scale of adjustment that is in agreement with tracer ages or observations. A possible weakness of the models is that the convective adjustment scheme does not represent the process of deep convection adequately. Consequently, a penetrative plume mixing scheme has been developed to parameterize the process of deep open-ocean convection in OGCMs. This new deep convection parameterization was incorporated into the Semtner and Chervin (1988) OGCM. The modified model (with the new parameterization) was run in a simplified Nordic Seas test basin: under a cyclonic wind stress and cooling, stratification of the basin-scale gyre is eroded and deep mixing occurs in the center of the gyre. In contrast, in the OGCM experiment that uses the standard convective adjustment algorithm, mixing is delayed and is widespread over the gyre.
How certain are the process parameterizations in our models?
NASA Astrophysics Data System (ADS)
Gharari, Shervan; Hrachowitz, Markus; Fenicia, Fabrizio; Matgen, Patrick; Razavi, Saman; Savenije, Hubert; Gupta, Hoshin; Wheater, Howard
2016-04-01
Environmental models are abstract simplifications of real systems. As a result, the elements of these models, including system architecture (structure), process parameterization and parameters, inherit a high level of approximation and simplification. In a conventional model-building exercise, the parameter values are the only elements of a model which can vary, while the rest of the modeling elements are often fixed a priori and therefore not subjected to change. Once chosen, the process parameterization and model structure usually remain the same throughout the modeling process. The only flexibility comes from the changing parameter values, thereby enabling these models to reproduce the desired observation. This part of modeling practice, parameter identification and uncertainty, has attracted significant attention in the literature in recent years. However, what remains unexplored, in our view, is to what extent the process parameterization and system architecture (model structure) can support each other. In other words: "Does a specific form of process parameterization emerge for a specific model given its system architecture and data, while little or no assumption has been made about the process parameterization itself?" In this study we relax the assumption of a specific pre-determined form for the process parameterizations of a rainfall/runoff model and examine how varying the complexity of the system architecture can lead to different, or possibly contradictory, parameterization forms than what would have been decided otherwise. This comparison implicitly and explicitly provides us with an assessment of how uncertain our perception of model process parameterization is with respect to the extent to which the data can support it.
How uncertain are the process parameterizations in our models?
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Gharari, S.; Gupta, H. V.; Fenicia, F.; Matgen, P.; Savenije, H.
2015-12-01
Environmental models are abstract simplifications of real systems. As a result, the elements of these models, including system architecture (structure), process parameterization and parameters, inherit a high level of approximation and simplification. In a conventional model-building exercise, the parameter values are the only elements of a model which can vary, while the rest of the modeling elements are often fixed a priori and therefore not subjected to change. Once chosen, the process parameterization and model structure usually remain the same throughout the modeling process. The only flexibility comes from the changing parameter values, thereby enabling these models to reproduce the desired observation. This part of modeling practice, parameter identification and uncertainty, has attracted significant attention in the literature in recent years. However, what remains unexplored, in our view, is to what extent the process parameterization and system architecture (model structure) can support each other. In other words: "Does a specific form of process parameterization emerge for a specific model given its system architecture and data, while little or no assumption has been made about the process parameterization itself?" In this study we relax the assumption of a specific pre-determined form for the process parameterizations of a rainfall/runoff model and examine how varying the complexity of the system architecture can lead to different, or possibly contradictory, parameterization forms than what would have been decided otherwise. This comparison implicitly and explicitly provides us with an assessment of how uncertain our perception of model process parameterization is with respect to the extent to which the data can support it.
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Dembek, Scott R.
2009-01-01
Simulation of CloudSat reflectivity is performed by adopting the discrete-dipole parameterizations and databases provided in the literature, and demonstrates an improved capability in simulating radar reflectivity at W-band versus Mie scattering assumptions.
Meshless thin-shell simulation based on global conformal parameterization.
Guo, Xiaohu; Li, Xin; Bao, Yunfan; Gu, Xianfeng; Qin, Hong
2006-01-01
This paper presents a new approach to the physically-based thin-shell simulation of point-sampled geometry via explicit, global conformal point-surface parameterization and meshless dynamics. The point-based global parameterization is founded upon the rigorous mathematics of Riemann surface theory and Hodge theory. The parameterization is globally conformal everywhere except for a minimum number of zero points. Within our parameterization framework, any well-sampled point surface is functionally equivalent to a manifold, enabling popular and powerful surface-based modeling and physically-based simulation tools to be readily adapted for point geometry processing and animation. In addition, we propose a meshless surface computational paradigm in which the partial differential equations (for dynamic physical simulation) can be applied and solved directly over point samples via Moving Least Squares (MLS) shape functions defined on the global parametric domain without explicit connectivity information. The global conformal parameterization provides a common domain to facilitate accurate meshless simulation and efficient discontinuity modeling for complex branching cracks. Through our experiments on thin-shell elastic deformation and fracture simulation, we demonstrate that our integrative method is very natural, and that it has great potential to further broaden the application scope of point-sampled geometry in graphics and relevant fields.
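The Moving Least Squares shape functions the abstract relies on can be sketched compactly. The following is an illustrative 1D version (the paper works on a 2D conformal parametric domain): with a linear basis p(x) = [1, x] and a compactly supported weight w_i, the shape functions are φ_i(x) = p(x)ᵀ A⁻¹ w_i p(x_i), where A is the moment matrix. The quartic-spline weight and node layout here are assumptions for the demo, not taken from the paper.

```python
import numpy as np

def mls_shape_functions(x, nodes, radius):
    """1D MLS shape functions: linear basis p = [1, x], quartic spline weight.
    Returns phi_i(x) = p(x)^T A^{-1} w_i p(x_i) for every node i."""
    p_x = np.array([1.0, x])
    r = np.abs(x - nodes) / radius
    w = np.where(r < 1.0, 1.0 - 6*r**2 + 8*r**3 - 3*r**4, 0.0)  # compact support
    P = np.vstack([np.ones_like(nodes), nodes])                 # 2 x n basis at nodes
    A = (P * w) @ P.T                                           # 2 x 2 moment matrix
    return p_x @ np.linalg.solve(A, P * w)

nodes = np.linspace(0.0, 1.0, 6)
phi = mls_shape_functions(0.37, nodes, radius=0.45)
print(phi.sum())       # partition of unity: ~1.0
print(phi @ nodes)     # linear reproduction: ~0.37
```

The two printed checks (partition of unity and exact reproduction of linear fields) are the properties that make MLS usable as a meshless discretization of PDEs over point samples.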
Functional parameterization for hydraulic conductivity inversion with uncertainty quantification
NASA Astrophysics Data System (ADS)
Jiao, Jianying; Zhang, Ye
2015-05-01
Functional inversion based on local approximate solutions (LAS) is developed for steady-state flow in heterogeneous aquifers. The method employs a set of LAS of flow to impose spatial continuity of hydraulic head and Darcy fluxes in the solution domain, which are conditioned to limited measurements. Hydraulic conductivity is first parameterized as piecewise continuous, which requires the addition of a smoothness constraint to reduce inversion artifacts. Alternatively, it is formulated as piecewise constant, for which the smoothness constraint is not required, but the data requirement is much higher. Success of the inversion with both parameterizations is demonstrated for both one-dimensional synthetic examples and an oil-field permeability profile. When measurement errors are increased, estimation becomes less accurate but the solution is stable, i.e., estimation errors remain bounded. Compared to piecewise constant parameterization, piecewise continuous parameterization leads to more stable and accurate inversion. Moreover, conductivity variation can also be captured at two spatial scales reflecting sub-facies smooth-varying heterogeneity as well as abrupt changes at facies boundaries. By combining inversion with geostatistical simulation, uncertainty in the estimated conductivity and the hydraulic head field can be quantified. For a given measurement dataset, inversion accuracy and estimation uncertainty with the piecewise continuous parameterization is not sensitive to increasing conductivity contrast.
NASA Astrophysics Data System (ADS)
Gladish, James C.; Duncan, Donald D.
2016-05-01
Liquid crystal variable retarders (LCVRs) are computer-controlled birefringent devices that contain nanometer-sized birefringent liquid crystals (LCs). These devices impart retardance effects through a global, uniform orientation change of the LCs, which is based on a user-defined drive voltage input. In other words, the LC structural organization dictates the device functionality. The LC structural organization also produces a spectral scatter component which exhibits an inverse power law dependence. We investigate LC structural organization by measuring the voltage-dependent LC spectral scattering signature with an integrating sphere and then relate this observable to a fractal-Born model based on the Born approximation and a Von Kármán spectrum. We obtain LCVR light scattering spectra at various drive voltages (i.e., different LC orientations) and then parameterize LCVR structural organization with voltage-dependent correlation lengths. The results can aid in determining performance characteristics of systems using LCVRs and can provide insight into interpreting structural organization measurements.
Parameterized reduced-order models using hyper-dual numbers.
Fike, Jeffrey A.; Brake, Matthew Robert
2013-10-01
The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
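Hyper-dual numbers give exact first and second derivatives (no truncation or subtractive-cancellation error, unlike finite differences), which is what makes them attractive for computing the sensitivities a parameterized ROM needs. A minimal sketch (illustrative only, not the report's implementation; transcendental functions would be added the same way via their truncated Taylor expansions):

```python
class HyperDual:
    """Minimal hyper-dual number a + b*e1 + c*e2 + d*e1*e2, with e1^2 = e2^2 = 0.
    Seeding e1 = e2 = 1 on the input makes f(x).e1 the exact first derivative
    and f(x).e1e2 the exact second derivative."""
    def __init__(self, real, e1=0.0, e2=0.0, e1e2=0.0):
        self.real, self.e1, self.e2, self.e1e2 = real, e1, e2, e1e2
    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.real + o.real, self.e1 + o.e1,
                         self.e2 + o.e2, self.e1e2 + o.e1e2)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.real * o.real,
                         self.real * o.e1 + self.e1 * o.real,
                         self.real * o.e2 + self.e2 * o.real,
                         self.real * o.e1e2 + self.e1 * o.e2
                         + self.e2 * o.e1 + self.e1e2 * o.real)
    __rmul__ = __mul__

x = HyperDual(2.0, 1.0, 1.0, 0.0)   # seed the two perturbation directions
f = x * x * x                        # f(x) = x^3
print(f.real, f.e1, f.e1e2)          # 8.0 12.0 12.0  (f, f', f'' at x = 2)
```

Because the nilpotent parts propagate derivative information through ordinary arithmetic, the same technique applied to a Craig-Bampton assembly yields the derivatives of the reduced matrices with respect to the design parameters.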
Baroclinic adjustment. [parameterization of eddy flux effects on atmospheric temperature
NASA Technical Reports Server (NTRS)
Stone, P. H.
1978-01-01
A detailed comparison is presented of the actual shear in the atmosphere with the critical shear given by the two-layer model of Phillips (1954), in which there is a critical temperature gradient separating stable conditions from baroclinically unstable ones. A very simple parameterization of the effect of eddy fluxes on atmospheric temperature is suggested, where the parameterization includes beta effects. The parameterization is illustrated by applying it in a one-dimensional heat-balance climate model. Enhancement of the eddy flux in a continuous atmosphere under supercritical conditions is stressed. This enhancement leads to a negative feedback between the meridional eddy flux of heat and the meridional temperature gradient. The feedback restricts gradients to values near the critical value, a process referred to as baroclinic adjustment. This should facilitate the development of simple climate models involving feedbacks associated with both the meridional and vertical temperature structure.
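For orientation, in the equal-depth two-layer Phillips model the flow is baroclinically unstable only when the vertical shear exceeds a critical value of order β λ², with λ the baroclinic deformation radius. A back-of-envelope check with illustrative mid-latitude numbers (not values from the paper):

```python
import math

OMEGA = 7.292e-5    # Earth's rotation rate [1/s]
A_EARTH = 6.371e6   # Earth's radius [m]

def critical_shear(lat_deg, lam):
    """Phillips-type critical shear U_c ~ beta * lambda**2 [m/s],
    lam = deformation radius [m]."""
    beta = 2.0 * OMEGA * math.cos(math.radians(lat_deg)) / A_EARTH
    return beta * lam**2

u_c = critical_shear(45.0, 1.0e6)   # lambda ~ 1000 km, a rough mid-latitude value
print(f"critical shear ~ {u_c:.1f} m/s")
```

The result, of order 10 m/s at 45° latitude, is comparable to the observed tropospheric shear, which is why observed gradients can sit near the critical value as the baroclinic-adjustment argument requires.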
Cloud-radiation interactions and their parameterization in climate models
NASA Technical Reports Server (NTRS)
1994-01-01
This report contains papers from the International Workshop on Cloud-Radiation Interactions and Their Parameterization in Climate Models, held on 18-20 October 1993 in Camp Springs, Maryland, USA. It was organized by the Joint Working Group on Clouds and Radiation of the International Association of Meteorology and Atmospheric Sciences. Recommendations were grouped into three broad areas: (1) general circulation models (GCMs), (2) satellite studies, and (3) process studies. Each of the panels developed recommendations on the themes of the workshop. Explicitly or implicitly, each panel independently recommended observations of basic cloud microphysical properties (water content, phase, size) on the scales resolved by GCMs. Such observations are necessary to validate cloud parameterizations in GCMs, to use satellite data to infer radiative forcing in the atmosphere and at the earth's surface, and to refine the process models which are used to develop advanced cloud parameterizations.
Cloud-radiation interactions and their parameterization in climate models
1994-11-01
This report contains papers from the International Workshop on Cloud-Radiation Interactions and Their Parameterization in Climate Models, held on 18-20 October 1993 in Camp Springs, Maryland, USA. It was organized by the Joint Working Group on Clouds and Radiation of the International Association of Meteorology and Atmospheric Sciences. Recommendations were grouped into three broad areas: (1) general circulation models (GCMs), (2) satellite studies, and (3) process studies. Each of the panels developed recommendations on the themes of the workshop. Explicitly or implicitly, each panel independently recommended observations of basic cloud microphysical properties (water content, phase, size) on the scales resolved by GCMs. Such observations are necessary to validate cloud parameterizations in GCMs, to use satellite data to infer radiative forcing in the atmosphere and at the earth's surface, and to refine the process models which are used to develop advanced cloud parameterizations.
Parameterization of and Brine Storage in MOR Hydrothermal Systems
NASA Astrophysics Data System (ADS)
Hoover, J.; Lowell, R. P.; Cummings, K. B.
2009-12-01
Single-pass parameterized models of high-temperature hydrothermal systems at oceanic spreading centers use observational constraints such as vent temperature, heat output, vent field area, and the area of heat extraction from the sub-axial magma chamber to deduce fundamental hydrothermal parameters such as total mass flux Q, bulk permeability k, and the thickness of the conductive boundary layer at the base of the system, δ. Of the more than 300 known systems, constraining data are available for less than 10%. Here we use the single-pass model to estimate Q, k, and δ for all the seafloor hydrothermal systems for which the constraining data are available. Mean values of Q, k, and δ are 170 kg/s, 5.0×10⁻¹³ m², and 20 m, respectively, which is similar to results obtained from the generic model. There is no apparent correlation with spreading rate. Using observed vent field lifetimes, the rate of magma replenishment can also be calculated. Essentially all high-temperature hydrothermal systems at oceanic spreading centers undergo phase separation, yielding a low chlorinity vapor and a high salinity brine. Some systems such as the Main Endeavour Field on the Juan de Fuca Ridge and the 9°50'N sites on the East Pacific Rise vent low chlorinity vapor for many years, while the high density brine remains sequestered beneath the seafloor. In an attempt to further understand the brine storage at the EPR, we used the mass flux Q determined above, time series of vent salinity and temperature, and the depth of the magma chamber to determine the rate of brine production at depth. We found thicknesses ranging from 0.32 meters to ~57 meters over a 1 km² area from 1994-2002. These calculations suggest that brine may be stored within the conductive boundary layer without a need for lateral transport or removal by other means. We plan to use the numerical code FISHES to further test this idea.
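The core energy balance behind the single-pass estimate of Q can be sketched in a few lines: the measured heat output H must be carried by the circulating fluid, H = Q c_p (T_vent − T_0). The numbers below are illustrative assumptions (a 250 MW field venting at 350 °C), not values from the abstract.

```python
def single_pass_mass_flux(heat_output_W, T_vent, T_0=2.0, c_p=4000.0):
    """Total hydrothermal mass flux Q [kg/s] from the single-pass energy
    balance H = Q * c_p * (T_vent - T_0).

    heat_output_W : vent field heat output [W]
    T_vent, T_0   : vent and bottom-seawater temperatures [deg C]
    c_p           : fluid specific heat [J kg-1 K-1] (rough high-T seawater value)
    """
    return heat_output_W / (c_p * (T_vent - T_0))

Q = single_pass_mass_flux(heat_output_W=250e6, T_vent=350.0)
print(f"Q ~ {Q:.0f} kg/s")
```

For these assumed inputs Q comes out near 180 kg/s, the same order as the 170 kg/s mean quoted above; k and δ then follow from Darcy's law and the conductive heat flux across the basal boundary layer.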
Berg, Larry K.; Shrivastava, ManishKumar B.; Easter, Richard C.; Fast, Jerome D.; Chapman, Elaine G.; Liu, Ying
2015-01-01
A new treatment of cloud-aerosol interactions within parameterized shallow and deep convection has been implemented in WRF-Chem that can be used to better understand the aerosol lifecycle over regional to synoptic scales. The modifications to the model to represent cloud-aerosol interactions include treatment of the cloud droplet number mixing ratio; key cloud microphysical and macrophysical parameters (including the updraft fractional area, updraft and downdraft mass fluxes, and entrainment) averaged over the population of shallow clouds, or a single deep convective cloud; and vertical transport, activation/resuspension, aqueous chemistry, and wet removal of aerosol and trace gases in warm clouds. These changes have been implemented in both the WRF-Chem chemistry packages as well as the Kain-Fritsch cumulus parameterization that has been modified to better represent shallow convective clouds. Preliminary testing of the modified WRF-Chem has been completed using observations from the Cumulus Humilis Aerosol Processing Study (CHAPS) as well as a high-resolution simulation that does not include parameterized convection. The simulation results are used to investigate the impact of cloud-aerosol interactions on the regional scale transport of black carbon (BC), organic aerosol (OA), and sulfate aerosol. Based on the simulations presented here, changes in the column integrated BC can be as large as -50% when cloud-aerosol interactions are considered (due largely to wet removal), or as large as +35% for sulfate in non-precipitating conditions due to the sulfate production in the parameterized clouds. The modifications to WRF-Chem version 3.2.1 are found to account for changes in the cloud drop number concentration (CDNC) and changes in the chemical composition of cloud-drop residuals in a way that is consistent with observations collected during CHAPS. Efforts are currently underway to port the changes described here to WRF-Chem version 3.5, and it is anticipated that they
Electromagnetic scattering: applications to atmospheric sciences and beyond
NASA Astrophysics Data System (ADS)
Yang, P.
2015-12-01
Atmospheric particles (cloud droplets, ice crystals and aerosol particles) scatter and absorb solar radiation and thermal infrared emission, and play an important role in the radiation budget in the earth-atmosphere coupled system, and hence are essential to the earth's climate. In this talk I will briefly review electromagnetic scattering research with a focus on applications to atmospheric radiation parameterization and remote sensing. Specifically, I will review state-of-the-art modeling capabilities in computing the single-scattering properties of dielectric particles. Furthermore, I will illustrate some examples of relevant applications.
Constraining the Parameterization of Polar Inertia Gravity Waves in WACCM with Observations
NASA Astrophysics Data System (ADS)
Smith, A. K.; Murphy, D. J.; Garcia, R. R.; Kinnison, D. E.
2014-12-01
A discrepancy that has been seen in a number of climate models is that simulated temperatures in the Antarctic lower stratosphere during winter and spring are much lower than observed; this is referred to as the "cold pole" problem. Recent simulations with the NCAR Whole Atmosphere Community Climate Model have shown that polar stratospheric temperatures are much improved by including a parameterization of gravity waves, which have inertial periods, longer horizontal wavelengths and shorter vertical wavelengths than the mesoscale gravity waves already parameterized in this and most other middle atmosphere models. Improvements include a more realistic seasonal development of the ozone hole and somewhat better timing for the winter to summer transition in the zonal winds and Brewer-Dobson Circulation. Although the availability and quality of observations of gravity waves in the middle atmosphere has been increasing, there are still not sufficient observations to validate the inertial gravity wave morphology and distribution in the model. Here, we use constraints from new analyses of radiosonde observations to provide guidance for the horizontal and vertical wavelengths of the waves, their seasonal variability, and their potential sources such as fronts or flow imbalance. Tighter observational constraints remove an element of arbitrary "tuning" and tie the model simulations of the middle atmosphere more closely to the simulated climate.
Cloud Parameterizations, Cloud Physics, and Their Connections: An Overview
Liu, Y.; Daum, P. H.; Chai, S. K.; Liu, F.
2002-02-12
This paper consists of three parts. The first part is concerned with the parameterization of cloud microphysics in climate models. We demonstrate the crucial importance of spectral dispersion of the cloud droplet size distribution in determining radiative properties of clouds (e.g., effective radius), and underline the necessity of specifying spectral dispersion in the parameterization of cloud microphysics. It is argued that the inclusion of spectral dispersion makes the issue of cloud parameterization essentially equivalent to that of the droplet size distribution function, bringing cloud parameterization to the forefront of cloud physics. The second part is concerned with theoretical investigations into the spectral shape of droplet size distributions in cloud physics. After briefly reviewing the mainstream theories (including entrainment and mixing theories, and stochastic theories), we discuss their deficiencies and the need for a paradigm shift from reductionist approaches to systems approaches. A systems theory that has recently been formulated by utilizing ideas from statistical physics and information theory is discussed, along with the major results derived from it. It is shown that the systems formalism not only easily explains many puzzles that have been frustrating the mainstream theories, but also reveals such new phenomena as scale-dependence of cloud droplet size distributions. The third part is concerned with the potential applications of the systems theory to the specification of spectral dispersion in terms of predictable variables and scale-dependence under different fluctuating environments.
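The role of spectral dispersion described in the first part can be made concrete. In the Liu-Daum-style parameterization, the effective radius takes the form r_eff = β (3L / 4πρ_w N)^(1/3), where the prefactor β depends on the relative dispersion ε of the droplet size distribution. The β(ε) expression and the numbers below are a hedged sketch for illustration, not necessarily the exact form used in the paper.

```python
import math

def effective_radius(lwc, n_d, eps):
    """Effective radius [m] from liquid water content lwc [kg m-3], droplet
    number concentration n_d [m-3], and relative dispersion eps, using
    r_eff = beta * (3*lwc / (4*pi*rho_w*n_d))**(1/3) with a
    dispersion-dependent beta (assumed form for illustration)."""
    rho_w = 1000.0  # liquid water density [kg m-3]
    beta = (1.0 + 2.0 * eps**2)**(2.0 / 3.0) / (1.0 + eps**2)**(1.0 / 3.0)
    return beta * (3.0 * lwc / (4.0 * math.pi * rho_w * n_d))**(1.0 / 3.0)

# Same water content and droplet number, different spectral dispersion:
r_narrow = effective_radius(lwc=0.3e-3, n_d=100e6, eps=0.2)
r_broad = effective_radius(lwc=0.3e-3, n_d=100e6, eps=0.8)
print(f"{r_narrow * 1e6:.2f} um (narrow) vs {r_broad * 1e6:.2f} um (broad)")
```

Holding water content and droplet number fixed, broadening the spectrum increases r_eff by tens of percent, which is why the abstract insists that dispersion must be specified, not just L and N.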
Parameterization of HONO sources in Mega-Cities
NASA Astrophysics Data System (ADS)
Li, Guohui; Zhang, Renyi; Tie, Xuexi; Molina, Luisa
2013-04-01
Nitrous acid (HONO) plays an important role in the photochemistry of the troposphere because the photolysis of HONO is a primary source of the hydroxyl radical (OH) in the early morning. However, the formation or sources of HONO are still poorly understood in the troposphere; hence the representation of HONO sources in chemical transport models (CTMs) lacks comprehensive consideration. In the present study, the observed HONO, NOx, and aerosols at an urban supersite T0 during the MCMA-2006 field campaign in Mexico City are used to interpret the HONO formation in association with the suggested HONO sources from literature. The HONO source parameterizations are proposed and incorporated into the WRF-CHEM model. Homogeneous sources of HONO include the reactions of NO with OH and of excited NO2 with H2O. Four heterogeneous HONO sources are considered: NO2 reaction with semivolatile organics, NO2 reaction with freshly emitted soot, and NO2 reactions on aerosol surfaces and on ground surfaces. Four cases are used in the present study to evaluate the proposed HONO parameterizations during four field campaigns in which HONO measurements are available, including MCMA-2003 and MCMA-2006 (Mexico City Metropolitan Area, Mexico), MIRAGE-2009 (Shanghai, China), and SHARP (Houston, USA). The WRF-CHEM model with the proposed HONO parameterizations performs moderately well in reproducing the observed diurnal variation of HONO concentrations, showing that the HONO parameterizations in the study are reasonable and potentially useful in improving the HONO simulation in CTMs.
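Heterogeneous HONO sources of the kind listed above are commonly parameterized as a first-order NO2-to-HONO conversion, k = (γ v̄ / 4)(S/V), with γ an uptake coefficient and S/V the surface-to-volume ratio. A hedged sketch with illustrative values (γ and S/V are assumptions, not numbers from this study):

```python
import math

def hono_het_rate(no2_ppb, gamma, s_over_v, T=298.0):
    """HONO production rate [ppb/s] from first-order NO2 uptake on surfaces,
    k = (gamma * v_mean / 4) * (S/V).

    gamma    : uptake coefficient (dimensionless, typically ~1e-6 .. 1e-5)
    s_over_v : surface-to-volume ratio [m-1]
    """
    M = 0.046                                        # NO2 molar mass [kg/mol]
    v = math.sqrt(8.0 * 8.314 * T / (math.pi * M))   # mean molecular speed [m/s]
    k = 0.25 * gamma * v * s_over_v                  # first-order rate [1/s]
    return k * no2_ppb

# Illustrative urban values: 40 ppb NO2, gamma = 1e-6, S/V = 0.2 m-1
rate = hono_het_rate(no2_ppb=40.0, gamma=1e-6, s_over_v=0.2)
print(f"{rate * 3600.0:.2f} ppb HONO per hour")
```

With these assumed inputs the source is a few ppb per hour, the order needed to sustain observed early-morning urban HONO against photolysis.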
Parameterization of HONO sources in Mega-Cities
NASA Astrophysics Data System (ADS)
Li, G.; Zhang, R.; Tie, X.; Molina, L. T.
2013-05-01
Nitrous acid (HONO) plays an important role in the photochemistry of the troposphere because the photolysis of HONO is a primary source of the hydroxyl radical (OH) in the early morning. However, the formation or sources of HONO are still poorly understood in the troposphere, and thus the representation of HONO sources in chemical transport models (CTMs) lacks comprehensive consideration. In the present study, the observed HONO, NOx, and aerosols at an urban supersite T0 during the MCMA-2006 field campaign in Mexico City are used to interpret the HONO formation in association with the suggested HONO sources from literature. The HONO source parameterizations are proposed and incorporated into the WRF-CHEM model. Homogeneous sources of HONO include the reactions of NO with OH and of excited NO2 with H2O. Four heterogeneous HONO sources are considered: NO2 reaction with semivolatile organics, NO2 reaction with freshly emitted soot, and NO2 reactions on aerosol surfaces and on ground surfaces. Four cases are used in the present study to evaluate the proposed HONO parameterizations during four field campaigns in which HONO measurements are available, including MCMA-2003 and MCMA-2006 (Mexico City Metropolitan Area, Mexico), MIRAGE-2009 (Shanghai, China), and SHARP (Houston, USA). The WRF-CHEM model with the proposed HONO parameterizations performs moderately well in reproducing the observed diurnal variation of HONO concentrations, showing that the HONO parameterizations in the study are reasonable and potentially useful in improving the HONO simulation in CTMs.
NASA Astrophysics Data System (ADS)
Yano, J.-I.
2010-06-01
Geophysical models in general, and atmospheric models more specifically, are always limited in spatial resolution. Due to this limitation, we face two different needs. The first is a need for knowing (or "downscaling" to) more spatial detail (e.g., the precipitation distribution) than the model simulations provide, for practical applications such as hydrological modelling. The second is a need for "parameterizing" the subgrid-scale physical processes in order to represent the feedbacks of these processes on to the resolved scales (e.g., the convective heating rate). The present article begins by remarking that it is essential to consider downscaling and parameterization as inverses of each other: downscaling seeks the details of the subgrid-scale processes, whereas parameterization seeks the integrated effect of those details on the resolved scales. Considering why these two closely-related operations are traditionally treated separately gives insight into the fundamental limitations of current downscalings and parameterizations. Multiresolution analysis (such as that based on wavelets) provides an important conceptual framework for developing a unified formulation for downscaling and parameterization. In the vocabulary of multiresolution analysis, these two operations may be considered as types of decompression and compression. A new type of subgrid-scale representation scheme, NAM-SCA (nonhydrostatic anelastic model with segmentally-constant approximation), is introduced under this framework.
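The compression/decompression analogy can be made concrete with the simplest multiresolution transform, a one-level Haar step: the coarse averages play the role of the resolved (parameterized) field, and the stored details are exactly what a downscaling must reconstruct. This toy sketch illustrates the analogy only; it is not the NAM-SCA scheme.

```python
import numpy as np

def haar_analysis(signal):
    """One Haar step: split a 1D field into coarse pair-means ('compression',
    the parameterization direction) and details (what 'decompression', i.e.
    downscaling, must restore). Signal length must be even."""
    pairs = signal.reshape(-1, 2)
    coarse = pairs.mean(axis=1)
    detail = (pairs[:, 0] - pairs[:, 1]) / 2.0
    return coarse, detail

def haar_synthesis(coarse, detail):
    """Exact inverse of haar_analysis: coarse field + details -> fine field."""
    out = np.empty(coarse.size * 2)
    out[0::2] = coarse + detail
    out[1::2] = coarse - detail
    return out

field = np.array([1.0, 3.0, 2.0, 2.0, 5.0, 1.0, 0.0, 4.0])
coarse, detail = haar_analysis(field)
print(coarse)                                                # resolved-scale field
print(np.allclose(haar_synthesis(coarse, detail), field))    # True: lossless pair
```

The point of the analogy is that analysis and synthesis are exact inverses; traditional parameterizations keep only `coarse` and discard `detail`, while traditional downscalings try to regenerate `detail` without a consistent `coarse` budget.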
Parameterizing Subgrid Orographic Precipitation and Surface Cover in Climate Models
Leung, Lai R.; Ghan, Steven J.
1998-10-01
Previous development of the Pacific Northwest National Laboratory's regional climate model has focused on representing orographic precipitation using a subgrid parameterization where subgrid variations of surface elevation are aggregated to a limited number of elevation classes. An airflow model and a thermodynamic model are used to parameterize the orographic uplift/descent as air parcels cross over mountain barriers or valleys. This paper describes further testing and evaluation of this subgrid parameterization. Building upon this modeling framework, a subgrid vegetation scheme has been developed based on statistical relationships between surface elevation and vegetation. By analyzing high-resolution elevation and vegetation data, a dominant land cover is defined for each elevation band of each model grid cell to account for the subgrid heterogeneity in vegetation. When larger lakes are present, they are distinguished from land within elevation bands and a lake model is used to simulate the thermodynamic properties. The use of the high-resolution vegetation data and the subgrid vegetation scheme has resulted in an improvement in the model's representation of surface cover over the western United States. Simulation using the new vegetation scheme yields a 1 °C cooling when compared with a simulation where vegetation was derived from a 30-min global vegetation dataset without subgrid vegetation treatment; this cooling helps to reduce the warm bias previously found in the regional climate model. A 3-yr simulation with the subgrid parameterization in the climate model is compared with observations.
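The aggregation step described above (bin high-resolution pixels into elevation bands, assign each band an area fraction and a dominant land cover) can be sketched directly. This is a simplified illustration with synthetic data, not the model's code; band count and the elevation-tied cover rule are assumptions.

```python
import numpy as np

def subgrid_classes(elev, veg, n_bands):
    """Aggregate the high-resolution pixels of one grid cell into elevation
    bands; each band gets (lo, hi, area fraction, dominant land cover)."""
    edges = np.linspace(elev.min(), elev.max() + 1e-9, n_bands + 1)
    band = np.digitize(elev, edges) - 1
    out = []
    for b in range(n_bands):
        mask = band == b
        if not mask.any():
            continue
        types, counts = np.unique(veg[mask], return_counts=True)
        out.append((edges[b], edges[b + 1],
                    mask.mean(),                 # band's area fraction of the cell
                    types[np.argmax(counts)]))   # dominant cover in the band
    return out

rng = np.random.default_rng(0)
elev = rng.uniform(200.0, 2600.0, size=10000)        # synthetic 1-km pixels [m]
veg = np.where(elev > 1800.0, "conifer", "grass")    # assumed elevation-tied cover
for lo, hi, frac, cover in subgrid_classes(elev, veg, 4):
    print(f"{lo:6.0f}-{hi:6.0f} m  frac={frac:.2f}  cover={cover}")
```

Each (band, fraction, cover) tuple is what the climate model's surface scheme would then run on, in place of a single grid-cell-mean elevation and land cover.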
IMPLEMENTATION OF AN URBAN CANOPY PARAMETERIZATION IN MM5
The Pennsylvania State University/National Center for Atmospheric Research Mesoscale Model (MM5) (Grell et al. 1994) has been modified to include an urban canopy parameterization (UCP) for fine-scale urban simulations (~1-km horizontal grid spacing). The UCP accounts for drag ...
The Project for Intercomparison of Land-surface Parameterization Schemes
NASA Technical Reports Server (NTRS)
Henderson-Sellers, A.; Yang, Z.-L.; Dickinson, R. E.
1993-01-01
The Project for Intercomparison of Land-surface Parameterization Schemes (PILPS) is described and the first stage science plan outlined. PILPS is a project designed to improve the parameterization of the continental surface, especially the hydrological, energy, momentum, and carbon exchanges with the atmosphere. The PILPS Science Plan incorporates enhanced documentation, comparison, and validation of continental surface parameterization schemes by community participation. Potential participants include code developers, code users, and those who can provide datasets for validation and who have expertise of value in this exercise. PILPS is an important activity because existing intercomparisons, although piecemeal, demonstrate that there are significant differences in the formulation of individual processes in the available land surface schemes. These differences are comparable to other recognized differences among current global climate models such as cloud and convection parameterizations. It is also clear that too few sensitivity studies have been undertaken with the result that there is not yet enough information to indicate which simplifications or omissions are important for the near-surface continental climate, hydrology, and biogeochemistry. PILPS emphasizes sensitivity studies with and intercomparisons of existing land surface codes and the development of areally extensive datasets for their testing and validation.
Overview of an Urban Canopy Parameterization in COAMPS
Leach, M J; Chin, H S
2006-02-09
The Coupled Atmosphere/Ocean Mesoscale Prediction System (COAMPS) model (Hodur, 1997) was developed at the Naval Research Laboratory. COAMPS has been used at resolutions as small as 2 km to study the role of complex topography in generating mesoscale circulation (Doyle, 1997). The model has been adapted for use in the Atmospheric Science Division at LLNL for both research and operational use. The model is a fully non-hydrostatic model with several options for turbulence parameterization, cloud processes and radiative transfer. We have recently modified the COAMPS code to include building and other urban surface effects in the mesoscale model by incorporating an urban canopy parameterization (UCP) (Chin et al., 2005). This UCP is a modification of the original parameterization of Brown and Williams (1998), based on Yamada's (1982) forest canopy parameterization, and includes modification of the TKE and mean momentum equations, modification of radiative transfer, and an anthropogenic heat source. COAMPS is parallelized for both shared memory (OpenMP) and distributed memory (MPI) architectures.
Parameterizations in high resolution isopycnal wind-driven ocean models
Jensen, T.G.; Randall, D.A.
1993-01-01
For the CHAMMP project, we proposed to implement and test new numerical schemes and parameterizations of boundary-layer flow, and to develop and implement mixed-layer physics in an existing isopycnal model. The objectives of the proposed research were to: implement the Arakawa and Hsu scheme in an existing isopycnal model of the Indian Ocean; recode the new model for a highly parallel architecture; determine the effects of various parameterizations of islands; determine the correct lateral boundary condition for boundary-layer currents such as the Gulf Stream and other western boundary currents; and incorporate an oceanic mixed layer on top of the isopycnal deep layers. This is primarily a model development project, with emphasis on determining the influence and parameterization of narrow flows along continents and through chains of small islands on the large-scale oceanic circulation resolved by climate models. The new model is based on the multi-layer FSU Indian Ocean model. Our research strategy is to: recode a one-layer version of the Indian Ocean model for a highly parallel computer; add thermodynamics to a rectangular-domain version of the new model; implement the irregular domain from the Indian Ocean model into the box model; change the numerical scheme for the continuity equation to the Arakawa and Hsu scheme; and perform parameterization experiments with various coastline and island geometries. This report discusses project progress for the period August 1, 1992 through December 31, 1992.
Validation of an Urban Parameterization in a Mesoscale Model
Leach, M.J.; Chin, H.
2001-07-19
The Atmospheric Science Division at Lawrence Livermore National Laboratory uses the Naval Research Laboratory's Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) for both operations and research. COAMPS is a non-hydrostatic model designed as a multi-scale simulation system ranging from synoptic scales down to meso, storm, and local terrain scales. As model resolution increases, the forcing due to small-scale complex terrain features, including urban structures and surfaces, intensifies. An urban parameterization has therefore been added to COAMPS. The parameterization attempts to incorporate the effects of buildings and urban surfaces without explicitly resolving them, and includes modeling of the mean-flow-to-turbulence energy exchange, radiative transfer, the surface energy budget, and the addition of anthropogenic heat. The Chemical and Biological National Security Program's (CBNP) URBAN field experiment was designed to collect data to validate numerical models over a range of length and time scales. The experiment was conducted in Salt Lake City in October 2000, with scales ranging from circulation around single buildings to flow in the entire Salt Lake basin. Data from the field experiment include tracer data as well as observations of mean and turbulent atmospheric parameters. Wind and turbulence predictions from COAMPS are used to drive a Lagrangian particle model, the Livermore Operational Dispersion Integrator (LODI). Simulations with COAMPS and LODI are used to test the sensitivity to the urban parameterization. Data from the field experiment, including the tracer data and the atmospheric parameters, are also used to validate the urban parameterization.
The role of dataset selection in cloud microphysics parameterization development
NASA Astrophysics Data System (ADS)
Kogan, Y. L.
2009-12-01
A number of cloud microphysical parameterizations have been developed during the last decade using various datasets of cloud drop spectra. These datasets can be obtained from observations, produced artificially by a drop size spectra generator (e.g., by solving the coagulation equation under different input conditions), or obtained as the output of an LES model that predicts cloud drop spectra explicitly. Each method has its deficiencies: for example, in-situ aircraft observations are constrained to the flight path, and coagulation-equation solutions depend on the input conditions. The ultimate aim is to create a cloud drop spectra dataset that realistically mimics drop parameters in real clouds. These parameters are closely related to the distribution of thermodynamical conditions, which are difficult, if not impossible, to obtain a priori. Using an LES model with explicit microphysics (SAMEX), we have demonstrated the high sensitivity of cloud parameterizations to the choice of dataset. We emphasize that the development of accurate parameterizations requires the use of a dynamically balanced cloud drop spectra dataset. The accuracy of conversion rates can be increased by scaling them with precipitation intensity. We also demonstrate that the accuracy of the saturation adjustment scheme employed in calculations of latent heat release can be increased by accounting for the aerosol load. Finally, we show how to formulate the new saturation adjustment in the framework of a two-moment cloud physics parameterization.
Scattering in Quantum Lattice Gases
NASA Astrophysics Data System (ADS)
O'Hara, Andrew; Love, Peter
2009-03-01
Quantum Lattice Gas Automata (QLGA) are of interest for their use in simulating quantum mechanics on both classical and quantum computers. QLGAs are an extension of classical Lattice Gas Automata where the constraint of unitary evolution is added. In the late 1990s, David A. Meyer as well as Bruce Boghosian and Washington Taylor produced similar models of QLGAs. We start by presenting a unified version of these models and study them from the point of view of the physics of wave-packet scattering. We show that the Meyer and Boghosian-Taylor models are actually the same basic model with slightly different parameterizations and limits. We then implement these models computationally using the Python programming language and show that QLGAs are able to replicate the analytic results of quantum mechanics (for example reflected and transmitted amplitudes for step potentials and the Klein paradox).
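The unitary-evolution constraint that distinguishes QLGAs from classical lattice gas automata can be sketched in a few lines. The collision angle, lattice size, and initial condition below are illustrative choices, not the exact parameterizations of the Meyer or Boghosian-Taylor models (which the authors implemented, as here, in Python):

```python
import numpy as np

# One-particle 1D quantum lattice gas: a local 2x2 unitary "collision" mixes
# left- and right-moving amplitudes at each site, then each component streams
# one site. All parameter values are illustrative.
N = 64
psi = np.zeros((2, N), dtype=complex)   # psi[0]: left-movers, psi[1]: right-movers
psi[1, N // 2] = 1.0                    # particle starts at the center, moving right

theta = np.pi / 8                       # collision angle (free model parameter)
U = np.array([[np.cos(theta), 1j * np.sin(theta)],
              [1j * np.sin(theta), np.cos(theta)]])

def step(psi):
    psi = U @ psi                       # unitary collision at every site
    out = np.empty_like(psi)
    out[0] = np.roll(psi[0], -1)        # stream left-movers one site left
    out[1] = np.roll(psi[1], +1)        # stream right-movers one site right
    return out

for _ in range(100):
    psi = step(psi)

total_prob = np.sum(np.abs(psi) ** 2)   # conserved exactly by unitarity
print(total_prob)
```

Because the collision matrix is unitary and streaming is a permutation, the total probability is conserved at every step, which is the defining constraint the abstract describes.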
NASA Astrophysics Data System (ADS)
Khairoutdinov, Marat
2013-04-01
A multiscale modeling framework (MMF) is a class of general circulation models (GCMs) in which the effects of cloud processes unresolved by the GCM grid are explicitly represented by a cloud-resolving model (CRM), also known as a super-parameterization (SP), inserted into each column of the GCM grid. Traditionally, due to high computational cost, the SP in MMFs has usually been configured to run with grid spacings that are barely sufficient to represent deep and extensive convective systems. As a result, the effects of small shallow clouds and, to a lesser extent, mid-level clouds in MMFs have generally been underestimated. The situation is particularly problematic because shallow low clouds are believed to have particularly important feedbacks in the Earth's climate system. Simply decreasing the horizontal grid spacing from a few kilometers to a few hundred meters while keeping the domain size unchanged is prohibitive, as it would increase the already high computational cost of running the MMF by about a factor of a hundred. One solution, currently being explored by various modeling groups, is to use a sophisticated higher-order parameterization of shallow clouds; however, the whole premise of super-parameterization has been to minimize the parameterization of cloud dynamics as much as possible, under the assumption that cloud feedbacks are better represented by dynamically and physically consistent CRMs than by parameterizations based, for example, on the entraining-plume model. In this study, several global climate simulations are performed using the super-parameterized Community Atmosphere Model (SP-CAM), which employs an additional super-parameterization nicknamed (perhaps misleadingly) MiniLES to better represent low-level shallow clouds with a horizontal grid spacing of a few hundred meters. In particular, the SP-CAM/MiniLES MMF seems to significantly improve the simulation of the observed low-cloud global
NASA Astrophysics Data System (ADS)
Liu, J.; Chen, Z.; Horowitz, L. W.; Carlton, A. M. G.; Fan, S.; Cheng, Y.; Ervens, B.; Fu, T. M.; He, C.; Tao, S.
2014-12-01
Secondary organic aerosols (SOA) have a profound influence on air quality and climate, but large uncertainties exist in modeling SOA on the global scale. In this study, five SOA parameterization schemes, including a two-product model (TPM), the volatility basis set (VBS), and three cloud SOA schemes (Ervens et al. (2008, 2014), Fu et al. (2008), and He et al. (2013)), are implemented in the global chemical transport model MOZART-4. For each scheme, model simulations are conducted with identical boundary and initial conditions. The VBS scheme produces the highest global annual SOA production (close to 35 Tg y^-1), followed by the three cloud schemes (26-30 Tg y^-1) and TPM (23 Tg y^-1). Though it shares a similar partitioning theory with the TPM scheme, the VBS approach simulates the chemical aging of multiple generations of VOC oxidation products, resulting in a much larger SOA source, particularly from aromatic species, over Europe, the Middle East, and eastern America. The formation of SOA in VBS, which represents the net partitioning of semi-volatile organic compounds from the vapor to the condensed phase, is highly sensitive to the aging and wet removal processes of vapor-phase organic compounds. The production of SOA from cloud processes (SOAcld) is constrained by the coincidence of liquid cloud water and water-soluble organic compounds; therefore, all cloud schemes resolve a fairly similar spatial pattern over the tropical and mid-latitude continents. The spatiotemporal diversity among SOA parameterizations is largely driven by differences in precursor inputs. Therefore, a deeper understanding of the evolution, wet removal, and phase partitioning of semi-volatile organic compounds, particularly above remote land and oceanic areas, is critical to better constrain the global-scale distribution and related climate forcing of secondary organic aerosols.
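Both the two-product and VBS schemes rest on the same equilibrium absorptive-partitioning step. A minimal sketch, with illustrative volatility bins and concentrations rather than values from the study, solves the implicit equation for the total organic aerosol mass C_OA by fixed-point iteration:

```python
import numpy as np

# Illustrative volatility bins (saturation concentrations C*) and total
# (gas + aerosol) organic mass per bin -- placeholder values, in ug/m3.
C_star = np.array([1.0, 10.0, 100.0, 1000.0])
C_tot = np.array([2.0, 4.0, 8.0, 10.0])

def partition(C_tot, C_star, n_iter=200):
    """Solve C_OA = sum_i C_tot_i / (1 + C*_i / C_OA) by fixed-point iteration."""
    C_OA = 1.0                       # initial guess, ug/m3
    for _ in range(n_iter):
        C_OA = np.sum(C_tot / (1.0 + C_star / C_OA))
    return C_OA

C_OA = partition(C_tot, C_star)
xi = 1.0 / (1.0 + C_star / C_OA)     # aerosol-phase fraction of each bin
print(C_OA, xi)
```

Bins with C* well below C_OA partition almost entirely to the particle phase; the VBS extends this picture by shifting mass toward lower-volatility bins as aging proceeds, which is why it yields a larger SOA source than the TPM.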
Zhang, Yang; Zhang, Xin; Wang, Kai; He, Jian; Leung, Lai-Yung R.; Fan, Jiwen; Nenes, Athanasios
2015-07-22
Aerosol activation into cloud droplets is an important process that governs aerosol indirect effects. The advanced treatment of aerosol activation by Fountoukis and Nenes (2005) and its recent updates, collectively called the FN series, have been incorporated into a newly developed regional coupled climate-air quality model based on the Weather Research and Forecasting model with the physics package of the Community Atmosphere Model version 5 (WRF-CAM5) to simulate aerosol-cloud interactions in both resolved and convective clouds. The model is applied to East Asia for two full years of 2005 and 2010. A comprehensive model evaluation is performed for model predictions of meteorological, radiative, and cloud variables, chemical concentrations, and column mass abundances against satellite data and surface observations from air quality monitoring sites across East Asia. The model performs well overall for major meteorological variables including near-surface temperature, specific humidity, wind speed, precipitation, cloud fraction, precipitable water, downward shortwave and longwave radiation, and column mass abundances of CO, SO2, NO2, HCHO, and O3 in terms of both magnitudes and spatial distributions. Larger biases exist in the predictions of surface concentrations of CO and NOx at all sites and SO2, O3, PM2.5, and PM10 concentrations at some sites, aerosol optical depth, cloud condensation nuclei over ocean, cloud droplet number concentration (CDNC), cloud liquid and ice water path, and cloud optical thickness. Compared with the default Abdul-Razzak and Ghan (2000) parameterization, simulations with the FN series produce ~107-113% higher CDNC, with half of the difference attributable to the higher aerosol activation fraction by the FN series and the remaining half due to feedbacks in subsequent cloud microphysical processes. With the higher CDNC, the FN series are more skillful in simulating cloud water path, cloud optical thickness, downward shortwave radiation
NASA Astrophysics Data System (ADS)
Liou, K. N.; Takano, Y.; He, C.; Yang, P.; Leung, L. R.; Gu, Y.; Lee, W. L.
2014-06-01
A stochastic approach has been developed to model the positions of BC (black carbon)/dust internally mixed with two snow grain types: hexagonal plate/column (convex) and Koch snowflake (concave). Light absorption and scattering analysis is then performed by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine the BC/dust single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), internal mixing absorbs substantially more light than external mixing. The snow grain shape effect on absorption is relatively small, but its effect on the asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit larger absorption than an equal-volume single inclusion. The spectral absorption (0.2-5 µm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 µm, beyond which ice absorption predominates. Based on the single-scattering properties determined from the stochastic and light-absorption parameterizations, and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo substantially more than external mixing and that the snow grain shape plays a critical role in snow albedo calculations through its forward scattering strength. Also, multiple inclusions of BC/dust significantly reduce snow albedo as compared to an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization involving contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountain/snow topography.
NASA Astrophysics Data System (ADS)
Fox-Kemper, B.; Danabasoglu, G.; Ferrari, R.; Griffies, S. M.; Hallberg, R. W.; Holland, M. M.; Maltrud, M. E.; Peacock, S.; Samuels, B. L.
A parameterization for the restratification by finite-amplitude, submesoscale, mixed layer eddies, formulated as an overturning streamfunction, has been recently proposed to approximate eddy fluxes of density and other tracers. Here, the technicalities of implementing the parameterization in the coarse-resolution ocean component of global climate models are made explicit, and the primary impacts on model solutions of implementing the parameterization are discussed. Three global ocean general circulation models including this parameterization are contrasted with control simulations lacking the parameterization. The MLE parameterization behaves as expected and fairly consistently in models differing in discretization, boundary layer mixing, resolution, and other parameterizations. The primary impact of the parameterization is a shoaling of the mixed layer, with the largest effect in polar winter regions. Secondary impacts include strengthening the Atlantic meridional overturning while reducing its variability, reducing CFC and tracer ventilation, modest changes to sea surface temperature and air-sea fluxes, and an apparent reduction of sea ice basal melting.
Sensitivity analysis of volume scattering phase functions.
Tuchow, Noah; Broughton, Jennifer; Kudela, Raphael
2016-08-01
To solve the radiative transfer equation and relate inherent optical properties (IOPs) to apparent optical properties (AOPs), knowledge of the volume scattering phase function is required. Due to the difficulty of measuring the phase function, it is frequently approximated. We explore the sensitivity of derived AOPs to the phase function parameterization, and compare measured and modeled values of both the AOPs and estimated phase functions using data from Monterey Bay, California during an extreme "red tide" bloom event. Using in situ measurements of absorption and attenuation coefficients, as well as two sets of measurements of the volume scattering function (VSF), we compared output from the Hydrolight radiative transfer model to direct measurements. We found that several common assumptions used in parameterizing the radiative transfer model consistently introduced overestimates of modeled versus measured remote-sensing reflectance values. Phase functions derived from VSF measurements at multiple wavelengths and a single scattering angle significantly overestimated reflectances when using the manufacturer-supplied corrections, but were substantially improved using newly published corrections; phase functions calculated from VSF measurements at three angles and three wavelengths and processed using manufacturer-supplied corrections were comparable, demonstrating that reasonable predictions can be made using two commercially available instruments. While other studies have reached similar conclusions, our work extends the analysis to coastal waters dominated by an extreme algal bloom with surface chlorophyll concentrations in excess of 100 mg m^{-3}. PMID:27505819
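As an illustration of how a phase-function parameterization fixes the full angular shape from a single parameter, the sketch below evaluates the one-parameter Henyey-Greenstein function and checks its normalization and asymmetry parameter numerically. This is a generic textbook form used here only as a sketch; it is not the Fournier-Forand or instrument-derived phase functions used in the study:

```python
import numpy as np

def hg_phase(cos_theta, g):
    """Henyey-Greenstein phase function, normalized over the unit sphere."""
    return (1.0 - g**2) / (4.0 * np.pi *
                           (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5)

g = 0.924                                  # asymmetry parameter; a typical oceanic value
edges = np.linspace(-1.0, 1.0, 200001)     # bin edges in cos(scattering angle)
mu = 0.5 * (edges[1:] + edges[:-1])        # midpoints for the quadrature
dmu = edges[1] - edges[0]
p = hg_phase(mu, g)

# Integrating over the sphere (azimuthal symmetry supplies the 2*pi factor)
# should return 1; the first angular moment recovers g.
norm = 2.0 * np.pi * np.sum(p) * dmu
g_est = 2.0 * np.pi * np.sum(mu * p) * dmu
print(norm, g_est)
```

The strong forward peak at cos(theta) near 1 is exactly the feature that makes the choice of phase function matter for modeled reflectances: small changes in the parameterized forward/backward shape propagate directly into the AOPs.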
Parameterizing Coefficients of a POD-Based Dynamical System
NASA Technical Reports Server (NTRS)
Kalb, Virginia L.
2010-01-01
A method of parameterizing the coefficients of a dynamical system based on a proper orthogonal decomposition (POD) representing the flow dynamics of a viscous fluid has been introduced. (A brief description of POD is presented in the immediately preceding article.) The present parameterization method is intended to enable construction of the dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers. The need for this or a similar method arises as follows: a procedure that includes direct numerical simulation, followed by POD, followed by Galerkin projection to a dynamical system has been proven to enable representation of flow dynamics by a low-dimensional model at the Reynolds number of the simulation. However, a more difficult task is to obtain models that are valid over a range of Reynolds numbers. Extrapolation of low-dimensional models by use of straightforward Reynolds-number-based parameter continuation has proven to be inadequate for successful prediction of flows. A key part of the problem of constructing a dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers is understanding and providing for the variation of the coefficients of the dynamical system with the Reynolds number. Prior methods do not enable capture of temporal dynamics over ranges of Reynolds numbers in low-dimensional models, and are not even satisfactory when large numbers of modes are used. The basic idea of the present method is to solve the problem through a suitable parameterization of the coefficients of the dynamical system. The parameterization computations involve utilization of the transfer of kinetic energy between modes as a function of Reynolds number. The thus-parameterized dynamical system accurately predicts the flow dynamics and is applicable to a range of flow problems in the dynamical regime around the Hopf bifurcation. Parameter
Parameterization of oceanic whitecap fraction based on satellite observations
NASA Astrophysics Data System (ADS)
Albert, M. F. M. A.; Anguelova, M. D.; Manders, A. M. M.; Schaap, M.; de Leeuw, G.
2015-08-01
In this study the utility of satellite-based whitecap fraction (W) values for the prediction of sea spray aerosol (SSA) emission rates is explored. More specifically, the study is aimed at improving the accuracy of the sea spray source function (SSSF) derived using the whitecap method by reducing the uncertainties in the parameterization of W through better accounting for its natural variability. The starting point is a dataset containing W data, together with matching environmental and statistical data, for 2006. Whitecap fraction W was estimated from observations of the ocean surface brightness temperature TB by satellite-borne radiometers at two frequencies (10 and 37 GHz). A global-scale assessment of the dataset to evaluate the wind speed dependence of W revealed a quadratic correlation between W and U10, as well as a relatively larger spread in the 37 GHz dataset. The latter could be attributed to secondary factors affecting W in addition to U10. To better visualize these secondary factors, a regional-scale assessment over different seasons was performed. This assessment indicates that the influence of secondary factors on W is for the largest part embedded in the exponent of the wind speed dependence. Hence no further improvement can be expected by looking at the effects of other factors on the variation in W explicitly. From the regional analysis, a new globally applicable quadratic W(U10) parameterization was derived. An intrinsic correlation between W and U10 that could have been introduced while estimating W from TB was determined, evaluated, and presumed to lie within the error margins of the newly derived W(U10) parameterization. The satellite-based parameterization was compared to parameterizations from other studies and was applied in an SSSF to estimate the global SSA emission rate. The thus obtained SSA production for 2006 of 4.1 × 10^12 kg is within previously reported estimates. While recent studies that account for parameters other than U
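The fitting step behind a W(U10) parameterization can be sketched as a least-squares estimate of the coefficient in W = a·U10², here on synthetic data with a hypothetical coefficient; the paper's actual regional analysis and derived coefficients are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)
u10 = rng.uniform(3.0, 20.0, 500)          # 10-m wind speed samples, m/s
a_true = 3.8e-6                            # hypothetical coefficient, not the paper's
w_obs = a_true * u10**2 * rng.lognormal(0.0, 0.3, u10.size)  # scattered "observed" W

# Least-squares estimate of a in W = a * U10^2, with the exponent fixed at 2
# as in the quadratic parameterization derived in the study.
a_fit = np.sum(w_obs * u10**2) / np.sum(u10**4)

def whitecap_fraction(u10_ms):
    """Whitecap fraction from the fitted quadratic parameterization."""
    return a_fit * u10_ms**2

print(a_fit, whitecap_fraction(10.0))
```

The lognormal scatter stands in for the secondary factors the abstract describes; in the study those are absorbed largely into the wind-speed exponent rather than modeled explicitly.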
Budget Comparison of Parameterized Microphysical Processes in Tropical Cyclone Simulations
NASA Astrophysics Data System (ADS)
Michelson, Sara A.; Bao, Jian-Wen; Grell, Evelyn D.
2015-04-01
Although microphysics parameterization schemes used in numerical models for tropical cyclone (TC) prediction can be complex enough to resolve the evolution of hydrometeor size spectra, operational centers still cannot computationally afford to run TC prediction models with spectrum-resolving schemes. To strike an optimal balance between computational cost and physical fidelity, there is a need to understand what minimal complexity of microphysics parameterization is required in operational TC prediction models run at affordable resolutions. To address this need, we have been investigating whether the microphysics schemes currently used in NOAA's operational TC models are complex enough for high-resolution prediction of tropical cyclones. In this study, we used the Weather Research and Forecasting (WRF) model to investigate the impact of parameterized warm-rain processes in four widely used bulk microphysics parameterization schemes on model-simulated TC development. The schemes investigated, ranging from a simple single-moment 3-category scheme to a complex double-moment 6-category scheme, produce different TC intensification rates and average vertical hydrometeor distributions, as well as different accumulated precipitation. By diagnosing the source and sink terms of the hydrometeor budget equations, we found that differences in the warm-rain production rate, particularly by conversion of cloud water to rain water, contribute significantly to the variations in frozen hydrometeor production and in the overall latent heat release above the freezing level. These differences in parameterized warm-rain production reflect the differences among the four schemes in the definition of the rain droplet size distribution and consequently in spectrum-dependent microphysical processes, such as accretion growth of frozen hydrometeors and their
Lievens, Hans; Vernieuwe, Hilde; Alvarez-Mozos, Jesús; De Baets, Bernard; Verhoest, Niko E C
2009-01-01
In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top-layer soil moisture content and observed backscatter coefficients, which has mainly been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetic surface profiles, which investigates how errors in roughness parameters are introduced by standard measurement techniques, and how they propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error in the roughness parameterization, and consequently in soil moisture retrieval, are: the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements, and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with a C-band configuration is generally less sensitive to inaccuracies in roughness parameterization than retrieval with an L-band configuration.
An intracloud lightning parameterization scheme for a storm electrification model
NASA Technical Reports Server (NTRS)
Helsdon, John H., Jr.; Wu, Gang; Farley, Richard D.
1992-01-01
The parameterization of an intracloud lightning discharge has been implemented in the present storm electrification model. The initiation, propagation direction, and termination of the discharge are computed using the magnitude and direction of the electric field vector as the determining criteria. The charge redistribution due to the lightning is approximated by assuming the channel to be an isolated conductor with zero net charge over its entire length. Various simulations involving differing amounts of charge transferred and differing distributions of charge have been performed. Values of charge transfer, dipole moment change, and electrical energy dissipation computed in the model are consistent with observations. The effects of the lightning-produced ions on the hydrometeor charges and electric field components depend strongly on the amount of charge transferred. A comparison between the measured electric field change of an actual intracloud flash and the field change due to the simulated discharge shows favorable agreement. Limitations of the parameterization scheme are discussed.
A parameterization of effective soil temperature for microwave emission
NASA Technical Reports Server (NTRS)
Choudhury, B. J.; Schmugge, T. J.; Mo, T. (Principal Investigator)
1981-01-01
A parameterization of effective soil temperature is discussed which, when multiplied by the emissivity, gives the brightness temperature in terms of the surface (T_0) and deep (T_inf) soil temperatures as T = T_inf + C(T_0 - T_inf). A coherent radiative transfer model and a large database of observed soil moisture and temperature profiles are used to calculate the best-fit value of the parameter C at wavelengths of 2.8, 6.0, 11.0, 21.0, and 49.0 cm; the C values are respectively 0.802 ± 0.006, 0.667 ± 0.008, 0.480 ± 0.010, 0.246 ± 0.009, and 0.084 ± 0.005. The parameterized equation gives results which are generally within one or two percent of the exact values.
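The parameterized equation and the reported best-fit C values translate directly into code. A minimal sketch (the function and dictionary names are our own):

```python
# Effective soil temperature from the parameterized form in the abstract:
# T_eff = T_inf + C * (T_0 - T_inf), using the reported best-fit C per wavelength.
C_BY_WAVELENGTH_CM = {2.8: 0.802, 6.0: 0.667, 11.0: 0.480, 21.0: 0.246, 49.0: 0.084}

def effective_temperature(t_surface_k, t_deep_k, wavelength_cm):
    """Effective soil temperature (K); brightness temp = emissivity * T_eff."""
    c = C_BY_WAVELENGTH_CM[wavelength_cm]
    return t_deep_k + c * (t_surface_k - t_deep_k)

# Example: warm surface (300 K) over cooler deep soil (285 K) at 21 cm (L-band).
print(effective_temperature(300.0, 285.0, 21.0))  # 285 + 0.246 * 15 = 288.69
```

Note how C decreases with wavelength: longer wavelengths sense deeper, so the effective temperature is weighted increasingly toward T_inf.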
Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
2000-01-01
Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.
Parameterized neural networks for high-energy physics
NASA Astrophysics Data System (ADS)
Baldi, Pierre; Cranmer, Kyle; Faucett, Taylor; Sadowski, Peter; Whiteson, Daniel
2016-05-01
We investigate a new structure for machine learning classifiers built with neural networks and applied to problems in high-energy physics by expanding the inputs to include not only measured features but also physics parameters. The physics parameters represent a smoothly varying learning task, and the resulting parameterized classifier can smoothly interpolate between them and replace sets of classifiers trained at individual values. This simplifies the training process and gives improved performance at intermediate values, even for complex problems requiring deep learning. Applications include tools parameterized in terms of theoretical model parameters, such as the mass of a particle, which allow for a single network to provide improved discrimination across a range of masses. This concept is simple to implement and allows for optimized interpolatable results.
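The idea of appending the physics parameter to the input features can be sketched with a toy classifier: signal events peak at a location set by a "mass" parameter, the model is trained at three mass points, and it can then be queried at unseen intermediate masses. A hand-rolled numpy logistic regression stands in for the deep networks of the paper; all data and feature choices are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(mass, n):
    """Signal feature peaks at `mass`; background is mass-independent."""
    sig = rng.normal(mass, 1.0, n)
    bkg = rng.normal(0.0, 2.0, n)
    x = np.concatenate([sig, bkg])
    m = np.full(2 * n, mass)                  # physics parameter as an extra input
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return np.column_stack([x, m]), y

Xy = [make_data(m, 2000) for m in (-2.0, 0.0, 2.0)]   # three training masses
X = np.vstack([xy[0] for xy in Xy])
y = np.concatenate([xy[1] for xy in Xy])

# Features (x, m, x*m, x^2, 1) let one linear logistic model express a
# mass-dependent decision boundary.
def features(x, m):
    return np.column_stack([x, m, x * m, x**2, np.ones(np.shape(x))])

Phi = features(X[:, 0], X[:, 1])
w = np.zeros(Phi.shape[1])
for _ in range(500):                          # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-Phi @ w))
    w -= 0.05 * Phi.T @ (p - y) / len(y)

def score(x, mass):
    """Signal probability for feature value x at an arbitrary mass."""
    phi = features(np.atleast_1d(float(x)), np.atleast_1d(float(mass)))
    return float(1.0 / (1.0 + np.exp(-phi @ w)))
```

A single trained model can now be evaluated at a mass between the training points, e.g. comparing `score(2.0, 2.0)` against `score(-2.0, 2.0)`, which is the interpolation property the paper exploits to replace sets of per-mass classifiers.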
Parameterization of an Active Thermal Erosion Site, Caribou Creek, Alaska
NASA Astrophysics Data System (ADS)
Busey, R.; Bolton, W. R.; Cherry, J. E.; Hinzman, L. D.
2012-12-01
Thermokarst features are thought to be an important mechanism for landscape change in permafrost-dominated cold regions, but few such features have been incorporated into full featured landscape models. The root of this shortcoming is that historic observations are not detailed enough to parameterize a model, and the models typically do not include the relevant processes for thermal erosion. A new, dynamic thermokarst feature has been identified at the Caribou-Poker Creek Research Watershed (CPCRW) in the boreal forest of Interior Alaska. Located adjacent to a traditional use trail, this feature terminates directly in Caribou Creek. Erosion within the feature is driven predominantly by fluvial interflow. CPCRW is a Long-Term Ecological Research site underlain by varying degrees of relatively warm, discontinuous permafrost. This poster will describe the suite of measurements that have been undertaken to parameterize the ERODE model for this site, including thorough surveys, time lapse- and aerial photography, and 3-D structure from motion algorithms.
IR OPTICS MEASUREMENT WITH LINEAR COUPLING'S ACTION-ANGLE PARAMETERIZATION.
LUO, Y.; BAI, M.; PILAT, R.; SATOGATA, T.; TRBOJEVIC, D.
2005-05-16
A parameterization of linear coupling in action-angle coordinates is convenient for analytical calculations and interpretation of turn-by-turn (TBT) beam position monitor (BPM) data. We demonstrate how to use this parameterization to extract the twiss and coupling parameters in interaction regions (IRs), using BPMs on each side of the long IR drift region. The example of TBT BPM analysis was acquired at the Relativistic Heavy Ion Collider (RHIC), using an AC dipole to excite a single eigenmode. Besides the full treatment, a fast estimate of beta*, the beta function at the interaction point (IP), is provided, along with the phase advance between these BPMs. We also calculate and measure the waist of the beta function and the local optics.
On Parameterization of the Global Electric Circuit Generators
NASA Astrophysics Data System (ADS)
Slyunyaev, N. N.; Zhidkov, A. A.
2016-08-01
We consider the problem of generator parameterization in the global electric circuit (GEC) models. The relationship between the charge density and external current density distributions inside a thundercloud is studied using a one-dimensional description and a three-dimensional GEC model. It is shown that drastic conductivity variations in the vicinity of the cloud boundaries have a significant impact on the structure of the charge distribution inside the cloud. Certain restrictions on the charge density distribution in a realistic thunderstorm are found. The possibility to allow for conductivity inhomogeneities in the thunderstorm regions by introducing an effective external current density is demonstrated. Replacement of realistic thunderstorms with equivalent current dipoles in the GEC models is substantiated, an equation for the equivalent current is obtained, and the applicability range of this equation is analyzed. Relationships between the main GEC characteristics under variable parameterization of GEC generators are discussed.
Improved CART Data Products and 6cmm Parameterization for Clouds
Kenneth Sassen
2004-08-23
Reviewed here is the history of the participation in the Atmospheric Radiation Measurement (ARM) Program, with particular emphasis on research performed between 1999 and 2002, before the PI moved from the University of Utah to the University of Alaska, Fairbanks. The research results are divided into the following areas: IOP research, remote sensing algorithm development using datasets and models, cirrus cloud and SCM/GCM parameterizations, student training, and publications.
Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies
NASA Astrophysics Data System (ADS)
Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj
2016-04-01
In climate simulations, the impacts of the sub-grid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the sub-grid variability in a computationally inexpensive manner. This presentation shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition, by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a non-zero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference PD Williams, NJ Howe, JM Gregory, RS Smith, and MM Joshi (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, under revision.
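The stochastic ingredient described above — zero-mean noise with a prescribed amplitude and decorrelation time added to the ocean temperature tendency — can be sketched as a red-noise (AR(1)) field. The amplitude, decorrelation time, grid size, and time step below are illustrative placeholders, not the values used in the paper's model.

```python
import numpy as np

def ar1_noise(n_steps, shape, sigma, tau, dt, rng):
    """Zero-mean AR(1) ('red') noise field with standard deviation sigma
    and decorrelation time tau, suitable for perturbing a tendency."""
    phi = np.exp(-dt / tau)                  # lag-1 autocorrelation
    eps = np.zeros((n_steps,) + shape)
    for t in range(1, n_steps):
        eps[t] = (phi * eps[t - 1]
                  + sigma * np.sqrt(1.0 - phi**2) * rng.standard_normal(shape))
    return eps

rng = np.random.default_rng(1)
dt = 3600.0                                  # model time step [s], assumed
noise = ar1_noise(5000, (4, 4), sigma=0.1, tau=10 * dt, dt=dt, rng=rng)
# The perturbed tendency would then be:
#   dT/dt = deterministic_tendency + noise[t]
mean_abs = abs(noise.mean())
```

Varying `sigma` and `tau` within reasonable limits corresponds to the sensitivity suite of four stochastic experiments described in the abstract.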
Parameterization of wind farms in COSMO-LM
NASA Astrophysics Data System (ADS)
Stuetz, E.; Steinfeld, G.; Heinemann, D.; Peinke, J.
2012-04-01
In order to examine the impact of wind farms at the mesoscale using numerical simulations, parameterizations of wind farms were implemented in a mesoscale model. In 2008/2009 the first wind farm in the German exclusive economic zone, Alpha Ventus, was built. Since then, more wind farms have been erected in the German exclusive economic zone. Wind farms with up to 80 wind turbines, covering areas of up to 66 square kilometers, are planned, partly only a few kilometers apart from one another. Such large wind farms influence the properties of the atmospheric boundary layer at the mesoscale through a reduction of the wind speed, an enhancement of the turbulent kinetic energy, and an alteration of the wind direction. Results of models for the calculation of wakes (wake models), idealized mesoscale studies, and observations show that wind farms of this size produce wakes which can extend up to several tens of kilometers downstream. Mesoscale models provide the possibility to investigate the impact of such large wind farms on the atmospheric flow over a larger area and to examine the effect of wind farms under different weather conditions. For the numerical simulations the mesoscale model COSMO-LM is used. Because the wind turbines of a wind farm cannot be resolved individually at the large grid spacing, their effects have to be described in the numerical model with the help of a parameterization. Different parameterizations, interpreting a wind farm either as an enhanced surface roughness or as a momentum deficit and turbulence source, were implemented in COSMO. The impact of the different wind farm parameterizations on the simulation of the atmospheric boundary layer is presented, together with first idealized simulations of wind farms; for this purpose idealized runs as well as a case study were performed.
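The "momentum deficit and turbulence source" view of a wind farm can be sketched as an actuator-disk-style grid-cell tendency. This is a generic illustration, not the COSMO implementation: the thrust coefficient, TKE fraction, turbine dimensions, and grid-cell size below are all assumed values.

```python
import math

def wind_farm_tendencies(u, rho, n_turbines, rotor_area, cell_volume,
                         ct=0.8, c_tke=0.25):
    """Grid-cell tendencies from a wind farm treated as a momentum sink
    and a turbulence source (actuator-disk style sketch).
    Returns (du/dt [m s^-2], dTKE/dt [m^2 s^-3])."""
    drag_force = 0.5 * rho * ct * rotor_area * u**2 * n_turbines  # [N]
    du_dt = -drag_force / (rho * cell_volume)
    # assume a fraction c_tke of the extracted power feeds the TKE
    dtke_dt = c_tke * drag_force * u / (rho * cell_volume)
    return du_dt, dtke_dt

# Hypothetical farm: 80 turbines with 60 m rotor radius in one
# 2.8 km x 2.8 km x 150 m grid cell, 10 m/s hub-height wind.
du, dtke = wind_farm_tendencies(u=10.0, rho=1.2, n_turbines=80,
                                rotor_area=math.pi * 60.0**2,
                                cell_volume=2800.0**2 * 150.0)
```

The alternative "enhanced surface roughness" approach would instead raise the roughness length in the cell's surface scheme rather than adding explicit tendencies.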
Physically admissible parameterization for differential Mueller matrix of uniform media.
Devlaminck, Vincent; Terrier, Patrick; Charbois, Jean-Michel
2013-05-01
In this Letter, we address the question of physical validity of differential Mueller matrix. A parameterization of entries of this differential matrix is proposed. It ensures that the generators associated with depolarization terms lead to physical Mueller matrices as for the nondepolarizing terms. A general expression for the depolarizing part of the differential matrix is found and a way to compute the nonlinear relations between the parameters is proposed.
A parameterization of the depth of the entrainment zone
NASA Technical Reports Server (NTRS)
Boers, Reinout
1989-01-01
A theory of the parameterization of the entrainment zone depth has been developed based on conservation of energy. This theory suggests that the normalized entrainment zone depth is proportional to the inverse square root of the Richardson number. A comparison of this theory with atmospheric observations indicates excellent agreement. It does not adequately predict the laboratory data, although it improves on parcel theory, which is based on a momentum balance.
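The stated scaling — normalized entrainment-zone depth proportional to the inverse square root of the Richardson number — can be written down directly. The proportionality constant, the specific convective Richardson-number definition, and the example numbers below are illustrative assumptions, not taken from the paper.

```python
import math

def entrainment_zone_depth(zi, w_star, delta_b, c=1.0):
    """Normalized entrainment-zone depth dh/zi = c * Ri**(-1/2),
    using an assumed convective Richardson number
    Ri = zi * delta_b / w_star**2 (zi: boundary-layer depth [m],
    w_star: convective velocity [m/s], delta_b: inversion buoyancy
    jump [m/s^2]).  The constant c is an illustrative choice."""
    ri = zi * delta_b / w_star**2
    return c / math.sqrt(ri)

# Example: 1 km boundary layer, w* = 1.5 m/s, buoyancy jump 0.05 m/s^2
dh_over_zi = entrainment_zone_depth(zi=1000.0, w_star=1.5, delta_b=0.05)
```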
Data-driven RBE parameterization for helium ion beams.
Mairani, A; Magro, G; Dokic, I; Valle, S M; Tessonnier, T; Galm, R; Ciocca, M; Parodi, K; Ferrari, A; Jäkel, O; Haberer, T; Pedroni, P; Böhlen, T T
2016-01-21
Helium ion beams are expected to be available again in the near future for clinical use. A suitable formalism to obtain relative biological effectiveness (RBE) values for treatment planning (TP) studies is needed. In this work we developed a data-driven RBE parameterization based on published in vitro experimental values. The RBE parameterization has been developed within the framework of the linear-quadratic (LQ) model as a function of the helium linear energy transfer (LET), dose and the tissue specific parameter (α/β)ph of the LQ model for the reference radiation. Analytic expressions are provided, derived from the collected database, describing the RBEα = αHe/αph and Rβ = βHe/βph ratios as a function of LET. Calculated RBE values at 2 Gy photon dose and at 10% survival (RBE10) are compared with the experimental ones. Pearson's correlation coefficients were, respectively, 0.85 and 0.84 confirming the soundness of the introduced approach. Moreover, due to the lack of experimental data at low LET, clonogenic experiments have been performed irradiating A549 cell line with (α/β)ph = 5.4 Gy at the entrance of a 56.4 MeV u(-1)He beam at the Heidelberg Ion Beam Therapy Center. The proposed parameterization reproduces the measured cell survival within the experimental uncertainties. A RBE formula, which depends only on dose, LET and (α/β)ph as input parameters is proposed, allowing a straightforward implementation in a TP system.
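Given the LQ-model ratios RBEα = αHe/αph and Rβ = βHe/βph, the RBE at a fixed survival level follows from inverting the LQ survival curve for each radiation. The sketch below shows that arithmetic only; the paper's LET-dependent analytic forms for RBEα and Rβ are not reproduced here, so the ratio values and photon parameters passed in are illustrative placeholders.

```python
import math

def lq_dose(alpha, beta, surv):
    """Dose giving survival fraction `surv` in the LQ model
    S = exp(-(alpha*D + beta*D**2)); solve the quadratic for D."""
    e = -math.log(surv)
    return (-alpha + math.sqrt(alpha**2 + 4.0 * beta * e)) / (2.0 * beta)

def rbe_at_survival(alpha_ph, beta_ph, rbe_alpha, r_beta, surv=0.1):
    """RBE at a given survival level (RBE10 for surv=0.1):
    ratio of photon dose to helium dose producing equal survival."""
    alpha_he = rbe_alpha * alpha_ph
    beta_he = r_beta * beta_ph
    return lq_dose(alpha_ph, beta_ph, surv) / lq_dose(alpha_he, beta_he, surv)

# Photon reference with (alpha/beta)_ph = 5.4 Gy (e.g. alpha=0.27, beta=0.05);
# rbe_alpha and r_beta here are plausible stand-ins, not the fitted curves.
rbe10 = rbe_at_survival(alpha_ph=0.27, beta_ph=0.05,
                        rbe_alpha=1.5, r_beta=1.0)
```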
New parameterizations and sensitivities for simple climate models
NASA Technical Reports Server (NTRS)
Graves, Charles E.; Lee, Wan-Ho; North, Gerald R.
1993-01-01
This paper presents a reexamination of the earth radiation budget parameterization of energy balance climate models in light of data collected over the last 12 years. The study consists of three parts: (1) an examination of the infrared terrestrial radiation to space and its relationship to the surface temperature field on time scales from 1 month to 10 years; (2) an examination of the albedo of the earth with special attention to the seasonal cycle of snow and clouds; (3) solutions for the seasonal cycle using the new parameterizations with special attention to changes in sensitivity. While the infrared parameterization is not dramatically different from that used in the past, the new albedo data suggest that a stronger latitude dependence be employed. After retuning the diffusion coefficient, the simulation results for the present climate generally show only a slight dependence on the new parameters. Also, the sensitivity parameter for the model is still about the same (1.25 C for a 1 percent increase of solar constant) for the linear models and for the nonlinear models that include a seasonal snow line albedo feedback (1.34 C). One interesting feature is that a clear-sky planet with a snow line albedo feedback has a significantly higher sensitivity (2.57 C) due to the absence of smoothing normally occurring in the presence of average cloud cover.
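The quoted sensitivity of roughly 1.25 C per 1 percent solar-constant increase follows directly from the linear infrared parameterization I = A + B*T used in such energy balance models. The sketch below uses typical textbook values for A and B, not the coefficients fitted in the paper.

```python
def ebm_equilibrium_temp(Q=340.0, albedo=0.30, A=203.0, B=1.90):
    """Global-mean energy balance Q*(1-a) = A + B*T, with the linear
    IR parameterization I = A + B*T (A in W/m^2, B in W/m^2/C;
    values here are typical choices, not the paper's fits)."""
    return (Q * (1.0 - albedo) - A) / B

def sensitivity_to_1pct_solar(Q=340.0, albedo=0.30, B=1.90):
    """Equilibrium warming for a 1% solar-constant increase at fixed
    albedo: dT = 0.01 * Q * (1 - a) / B."""
    return 0.01 * Q * (1.0 - albedo) / B

dT = sensitivity_to_1pct_solar()   # comes out near 1.25 C with these values
```

Adding a temperature-dependent (snow line) albedo feedback effectively reduces the restoring coefficient B, which is why the nonlinear models quote a somewhat larger sensitivity.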
UQ-Guided Selection of Physical Parameterizations in Climate Models
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Debusschere, B.; Ghan, S.; Rosa, D.; Bulaevskaya, V.; Anderson, G. J.; Chowdhary, K.; Qian, Y.; Lin, G.; Larson, V. E.; Zhang, G. J.; Randall, D. A.
2015-12-01
Given two or more parameterizations that represent the same physical process in a climate model, scientists are sometimes faced with difficult decisions about which scheme to choose for their simulations and analysis. These decisions are often based on subjective criteria, such as "which scheme is easier to use, is computationally less expensive, or produces results that look better?" Uncertainty quantification (UQ) and model selection methods can be used to objectively rank the performance of different physical parameterizations by increasing the preference for schemes that fit observational data better, while at the same time penalizing schemes that are overly complex or have excessive degrees-of-freedom. Following these principles, we are developing a perturbed-parameter UQ framework to assist in the selection of parameterizations for a climate model. Preliminary results will be presented on the application of the framework to assess the performance of two alternate schemes for simulating tropical deep convection (CLUBB-SILHS and ZM-trigmem) in the U.S. Dept. of Energy's ACME climate model. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, is supported by the DOE Office of Science through the Scientific Discovery Through Advanced Computing (SciDAC), and is released as LLNL-ABS-675799.
Synthesis of Entrainment and Detrainment formulations for Convection Parameterizations
NASA Astrophysics Data System (ADS)
Siebesma, P.
2015-12-01
Mixing between convective clouds and their environment, usually parameterized in terms of entrainment and detrainment, is among the most important processes determining the strength of the climate model sensitivity. This notion has led to a renaissance of research exploring the mechanisms of these mixing processes and, as a result, to a wide range of seemingly different parameterized formulations. In this study we aim to synthesize these results so as to offer a solid framework for use in parameterized formulations of convection. Detailed LES analyses in which clouds are subsampled according to their size show that entrainment rates are inversely proportional to the typical cloud radius, in accordance with the original entraining plume models. These results can be shown analytically to be consistent with entrainment rate formulations of cloud ensembles that decrease inversely with height, by making only mild assumptions on the shape of the associated cloud size distribution. There are additional dependencies of the entrainment rates on the environmental thermodynamics, such as the relative humidity and stability, but these are of second order. In contrast, detrainment rates do depend to first order on the environmental thermodynamics such as relative humidity and stability. This can be understood by realizing that i) the details of the cloud size distribution do depend on these environmental factors and ii) detrainment rates have a much stronger dependency on the shape of the cloud size distribution than entrainment rates.
A satellite observation test bed for cloud parameterization development
NASA Astrophysics Data System (ADS)
Lebsock, M. D.; Suselj, K.
2015-12-01
We present an observational test-bed of cloud and precipitation properties derived from CloudSat, CALIPSO, and the A-Train. The focus of the test-bed is on marine boundary layer clouds, including stratocumulus and cumulus and the transition between these cloud regimes. Test-bed properties include the cloud cover and three-dimensional cloud fraction, along with the cloud water path, precipitation water content, and associated radiative fluxes. We also include the subgrid-scale distribution of cloud, precipitation, and radiative quantities, which must be diagnosed by a model parameterization. The test-bed further includes meteorological variables from the Modern Era Retrospective-analysis for Research and Applications (MERRA). MERRA variables provide the initialization and forcing datasets to run a parameterization in Single Column Model (SCM) mode. We show comparisons of an Eddy-Diffusivity/Mass-Flux (EDMF) parameterization coupled to microphysics and macrophysics packages run in SCM mode with observed clouds. Comparisons are performed regionally in areas of climatological subsidence as well as stratified by dynamical and thermodynamical variables. Comparisons demonstrate the ability of the EDMF model to capture the observed transitions between subtropical stratocumulus and cumulus cloud regimes.
A parameterization method and application in breast tomosynthesis dosimetry
Li, Xinhua; Zhang, Da; Liu, Bob
2013-09-15
Purpose: To present a parameterization method based on singular value decomposition (SVD), and to provide analytical parameterization of the mean glandular dose (MGD) conversion factors from eight references for evaluating breast tomosynthesis dose in the Mammography Quality Standards Act (MQSA) protocol and in the UK, European, and IAEA dosimetry protocols. Methods: The MGD conversion factor is usually listed in lookup tables for factors such as beam quality, breast thickness, breast glandularity, and projection angle. The authors analyzed multiple sets of MGD conversion factors from the Hologic Selenia Dimensions quality control manual and seven previous papers. Each data set was parameterized using a one- to three-dimensional polynomial function of 2–16 terms. Variable substitution was used to improve accuracy. A least-squares fit was conducted using the SVD. Results: The differences between the originally tabulated MGD conversion factors and the results computed using the parameterization algorithms were (a) 0.08%–0.18% on average and 1.31% maximum for the Selenia Dimensions quality control manual, (b) 0.09%–0.66% on average and 2.97% maximum for the published data by Dance et al. [Phys. Med. Biol. 35, 1211–1219 (1990); ibid. 45, 3225–3240 (2000); ibid. 54, 4361–4372 (2009); ibid. 56, 453–471 (2011)], (c) 0.74%–0.99% on average and 3.94% maximum for the published data by Sechopoulos et al. [Med. Phys. 34, 221–232 (2007); J. Appl. Clin. Med. Phys. 9, 161–171 (2008)], and (d) 0.66%–1.33% on average and 2.72% maximum for the published data by Feng and Sechopoulos [Radiology 263, 35–42 (2012)], excluding one sample in (d) that does not follow the trends in the published data table. Conclusions: A flexible parameterization method is presented in this paper, and was applied to breast tomosynthesis dosimetry. The resultant data offer easy and accurate computations of MGD conversion factors for evaluating mean glandular breast dose in the MQSA
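The core numerical step — fitting a tabulated conversion-factor grid with a low-order multivariate polynomial by SVD-based least squares — can be sketched as follows. The grid, the polynomial degree, and the coefficient values are synthetic stand-ins, not the published MGD tables.

```python
import numpy as np

def poly2d_design(x, y, deg=2):
    """Design matrix with monomial terms x**i * y**j for i + j <= deg."""
    cols = [x**i * y**j for i in range(deg + 1)
                        for j in range(deg + 1 - i)]
    return np.column_stack(cols)

def svd_lstsq(A, b, rcond=1e-12):
    """Least-squares solution via the SVD pseudo-inverse."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > rcond * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

# Synthetic stand-in for a tabulated conversion-factor grid indexed by
# breast thickness t [cm] and tube voltage kv [kV] (NOT published values).
t, kv = np.meshgrid(np.linspace(2, 8, 7), np.linspace(25, 35, 6))
t, kv = t.ravel(), kv.ravel()
table = 0.3 - 0.02 * t + 0.004 * kv + 0.001 * t * kv - 0.0005 * t**2

A = poly2d_design(t, kv, deg=2)
coef = svd_lstsq(A, table)
max_err = np.max(np.abs(A @ coef - table))  # exact model -> tiny residual
```

Variable substitution, as mentioned in the abstract, would correspond to transforming `t` or `kv` (e.g. to a reciprocal or logarithm) before building the design matrix.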
A stochastic parameterization for deep convection using cellular automata
NASA Astrophysics Data System (ADS)
Bengtsson, L.; Steinheimer, M.; Bechtold, P.; Geleyn, J.
2012-12-01
Cumulus parameterizations used in most operational weather and climate models today are based on the mass-flux concept, which took form in the early 1970s. In such schemes it is assumed that a unique relationship exists between the ensemble average of the sub-grid convection and the instantaneous state of the atmosphere in a vertical grid-box column. However, such a relationship is unlikely to be described by a simple deterministic function (Palmer, 2011). Thus, because of the statistical nature of the parameterization challenge, it has been recognized by the community that it is important to introduce stochastic elements into the parameterizations (for instance: Plant and Craig, 2008; Khouider et al., 2010; Frenkel et al., 2011; Bengtsson et al., 2011; but the list is far from exhaustive). There are undoubtedly many ways in which stochasticity can enter new developments. In this study we use a two-way interacting cellular automaton (CA), as its intrinsic nature possesses many qualities interesting for deep convection parameterization. In the one-dimensional entraining plume approach, there is no parameterization of horizontal transport of heat, moisture or momentum due to cumulus convection. In reality, mass transport due to gravity waves that propagate in the horizontal can trigger new convection, important for the organization of deep convection (Huang, 1988). The self-organizational characteristics of the CA allow for lateral communication between adjacent NWP model grid boxes, and for temporal memory. Thus the CA scheme used in this study contains three interesting components for the representation of cumulus convection which are not present in the traditional one-dimensional bulk entraining plume method: horizontal communication, memory and stochasticity. The scheme is implemented in the high-resolution regional NWP model ALARO, and simulations show enhanced organization of convective activity along squall lines. Probabilistic evaluation demonstrates an enhanced spread in
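The lateral communication and temporal memory of a cellular automaton can be illustrated with a minimal 2-D CA on a periodic grid. The birth/survival rule below is a generic neighbour-count rule chosen for illustration; it is not the rule used in the ALARO scheme.

```python
import numpy as np

def ca_step(grid, birth=3, survive=(2, 3)):
    """One update of a 2-D cellular automaton on a periodic domain.
    A cell switches on when it has `birth` active neighbours and stays
    on when its neighbour count is in `survive` (an illustrative rule).
    Neighbour counts are gathered with periodic shifts, which is the
    'lateral communication' between adjacent cells (grid boxes)."""
    n = sum(np.roll(np.roll(grid, di, 0), dj, 1)
            for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di, dj) != (0, 0))
    return ((grid == 0) & (n == birth)) | ((grid == 1) & np.isin(n, survive))

rng = np.random.default_rng(2)
grid = (rng.random((32, 32)) < 0.3).astype(int)   # random initial state
for _ in range(10):         # the evolving state carries temporal memory
    grid = ca_step(grid).astype(int)
```

In a two-way interacting scheme, the CA state would additionally be seeded by the convection scheme and fed back to it, which this standalone sketch omits.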
Liou, K. N.; Takano, Y.; He, Cenlin; Yang, P.; Leung, Lai-Yung R.; Gu, Y.; Lee, W- L.
2014-06-27
A stochastic approach to model the positions of BC/dust internally mixed with two snow-grain types has been developed, including hexagonal plate/column (convex) and Koch snowflake (concave). Subsequently, light absorption and scattering analysis can be carried out by means of an improved geometric-optics approach coupled with Monte Carlo photon tracing to determine their single-scattering properties. For a given shape (plate, Koch snowflake, spheroid, or sphere), internal mixing absorbs more light than external mixing. The snow-grain shape effect on absorption is relatively small, but its effect on the asymmetry factor is substantial. Due to a greater probability of intercepting photons, multiple inclusions of BC/dust exhibit a larger absorption than an equal-volume single inclusion. The spectral absorption (0.2–5 μm) for snow grains internally mixed with BC/dust is confined to wavelengths shorter than about 1.4 μm, beyond which ice absorption predominates. Based on the single-scattering properties determined from the stochastic and light absorption parameterizations and using the adding/doubling method for spectral radiative transfer, we find that internal mixing reduces snow albedo more than external mixing and that the snow-grain shape plays a critical role in snow albedo calculations through the asymmetry factor. Also, snow albedo is reduced more in the case of multiple inclusions of BC/dust than for an equal-volume single sphere. For application to land/snow models, we propose a two-layer spectral snow parameterization containing contaminated fresh snow on top of old snow for investigating and understanding the climatic impact of multiple BC/dust internal mixing associated with snow grain metamorphism, particularly over mountains/snow topography.
Modeling the clouds on Venus: model development and improvement of a nucleation parameterization
NASA Astrophysics Data System (ADS)
Määttänen, Anni; Bekki, Slimane; Vehkamäki, Hanna; Julin, Jan; Montmessin, Franck; Ortega, Ismael K.; Lebonnois, Sébastien
2014-05-01
As both the clouds of Venus and aerosols in the Earth's stratosphere are composed of sulfuric acid droplets, we use the 1-D version of a model [1,4] developed for stratospheric aerosols and clouds to study the clouds on Venus. We have removed processes and compounds related to the stratospheric clouds so that the only species remaining are water and sulfuric acid, corresponding to the stratospheric sulfate aerosols, and we have added some key processes. The model describes microphysical processes including condensation/evaporation and sedimentation. Coagulation, turbulent diffusion, and a parameterization for two-component nucleation [8] of water and sulfuric acid have been added to the model. Since the model describes the size distribution explicitly with a large number of size bins (50-500), it can handle multiple particle modes. The validity ranges of the existing nucleation parameterization [7] have been extended to cover a larger temperature range, and the very low relative humidity (RH) and high sulfuric acid concentrations found in the atmosphere of Venus. We have made several modifications to improve the 2002 nucleation parameterization [7], most notably ensuring that the two-component nucleation model behaves as predicted by the analytical studies at the one-component limit reached at extremely low RH. We have also chosen to use a self-consistent cluster distribution [9], constrained by scaling it to recent quantum chemistry calculations [3]. First tests of the cloud model have been carried out with temperature profiles from VIRA [2] and from the LMD Venus GCM [5], and with a compilation of water vapor and sulfuric acid profiles, as in [6]. The temperature and pressure profiles do not evolve with time, but the vapor profiles naturally change with the cloud. However, no chemistry is included for the moment, so the vapor concentrations depend only on the microphysical processes. The model has been run for several hundreds of Earth days to reach a
Roupakias, S; Mitsakou, P; Nimer, A Al
2011-03-01
Ticks are blood-feeding external parasites which can cause local and systemic complications in the human body. Many tick-borne human diseases, including Lyme disease and viral encephalitis, can be transmitted by a tick bite. Secondary bacterial skin infection, reactive manifestations against tick allergens, and granuloma formation can also occur. Tick paralysis is a relatively rare complication, but it can be fatal. Beyond the general rules for tick-bite prevention, any tick found should be immediately and completely removed alive. Furthermore, the tick removal technique should not allow or provoke the escape of infective body fluids from the tick into the wound site, and should disclose any local complication. Many methods of tick removal (several of them unsatisfactory and/or dangerous) have been reported in the literature, but there is very limited experimental evidence to support these methods. No technique will completely remove every tick, so there is no single appropriate and absolutely effective and/or safe tick removal technique. Regardless of the technique used, clinicians should be aware of the clinical signs of tick-transmitted diseases, the public should be informed about the risks and prevention of tick-borne diseases, and persons who have undergone tick removal should be monitored for up to 30 days for signs and symptoms. PMID:21710824
NASA Astrophysics Data System (ADS)
Biggs, Nicholas R. T.; Willmott, Andrew J.
This paper develops a time-dependent, two-dimensional model for the opening of a coastal polynya. The model incorporates a parameterization for the collection thickness of frazil ice at the polynya edge that is given in terms of (a) the depth of frazil ice arriving at the polynya edge, (b) the component normal to the polynya edge of the frazil ice velocity relative to the consolidated new ice velocity, and (c) a constant depth term (h_w) associated with wave radiation stress. The last term depends upon the wavelength of surface waves that are most readily generated by the wind stress, and for coastal polynyas is shown to be of the order of 5 cm. The inclusion of h_w also removes possible cases of the parameterization being non-robust in the unsteady problem. Polynya opening solutions are calculated adjacent to a straight coastal barrier of finite length D, by numerically integrating Charpit's equations, a generalisation of the method of characteristics. Polynya opening times are compared with those in a constant collection depth model, when both models open to polynyas with identical steady-state area. For "long" islands (D ≫ the alongshore adjustment length scale L_a), the opening time T obeys T > T_c, where T_c is the constant collection depth opening time; when D ≪ L_a, the inequality is reversed. Finally, month-by-month simulations of the opening of the St. Lawrence Island Polynya (SLIP) are presented, for which satellite-derived steady-state areas are available. In most simulations, the simulated steady-state area falls within the 90% confidence limits of the observed area.
eblur/dust: a modular python approach for dust extinction and scattering
NASA Astrophysics Data System (ADS)
Corrales, Lia
2016-03-01
I will present a library of python codes -- github.com/eblur/dust -- which calculate dust scattering and extinction properties from the IR to the X-ray. The modular interface allows for custom defined dust grain size distributions, optical constants, and scattering physics. These codes are currently undergoing a major overhaul to include multiple scattering effects, parallel processing, parameterized grain size distributions beyond power law, and optical constants for different grain compositions. I use eblur/dust primarily to study dust scattering images in the X-ray, but they may be extended to applications at other wavelengths.
Mechanistic Parameterization of the Kinomic Signal in Peptide Arrays
Dussaq, Alex; Anderson, Joshua C; Willey, Christopher D; Almeida, Jonas S
2016-01-01
Kinases play a role in every cellular process involved in tumorigenesis, ranging from proliferation, migration, and protein synthesis to DNA repair. While genetic sequencing has identified most kinases in the human genome, it does not describe the 'kinome' at the level of activity of kinases against their substrate targets. An attempt to address that limitation and give researchers a more direct view of cellular kinase activity is found in the PamGene PamChip® system, which records and compares the phosphorylation of 144 tyrosine or serine/threonine peptides as they are phosphorylated by cellular kinases. Accordingly, the kinetics of this time-dependent kinomic signal need to be well understood in order to transduce a parameter set into an accurate and meaningful mathematical model. Here we report the analysis and mathematical modeling of kinomic time series, which achieves a more accurate description of the accumulation of phosphorylated product than the current model, which assumes first-order enzyme-substrate kinetics. Reproducibility of the proposed solution received particular attention. Specifically, the non-linear parameterization procedure is delivered as a public open-source web application where kinomic time series can be accurately decomposed into the model's two parameter values, measuring phosphorylation rate and capacity. The ability to deliver model parameterization entirely as a client-side web application is an important result in its own right, given increasing scientific preoccupation with reproducibility. There is also no need for a potentially transitory and opaque server-side component maintained by the authors, nor for exchanging potentially sensitive data as part of the model parameterization process, since the code is transferred to the browser client, where it can be inspected and executed.
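A two-parameter rate/capacity decomposition of an accumulation curve can be sketched as follows. The exponential-saturation functional form and the coarse grid-search fit are illustrative assumptions for this sketch; the published model's exact functional form is not reproduced here.

```python
import numpy as np

def saturation_model(t, capacity, rate):
    """Two-parameter accumulation curve: capacity * (1 - exp(-rate * t)).
    An illustrative assumed form with a phosphorylation 'rate' and a
    saturation 'capacity', not necessarily the published model."""
    return capacity * (1.0 - np.exp(-rate * t))

def fit_by_grid(t, signal, cap_grid, rate_grid):
    """Coarse least-squares grid search for (capacity, rate)."""
    best, best_err = None, np.inf
    for c in cap_grid:
        for r in rate_grid:
            err = np.sum((saturation_model(t, c, r) - signal) ** 2)
            if err < best_err:
                best, best_err = (c, r), err
    return best

t = np.linspace(0, 60, 30)                       # minutes (hypothetical)
truth = saturation_model(t, capacity=100.0, rate=0.08)
cap, rate = fit_by_grid(t, truth,
                        np.linspace(50, 150, 101),
                        np.linspace(0.01, 0.2, 96))
```

In a browser-delivered implementation like the one described, the same decomposition would run client-side in JavaScript, so no measured data leaves the user's machine.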
CCPP-ARM Parameterization Testbed Model Forecast Data
Klein, Stephen
2008-01-15
Dataset contains the NCAR CAM3 (Collins et al., 2004) and GFDL AM2 (GFDL GAMDT, 2004) forecast data at locations close to the ARM research sites. These data are generated from a series of multi-day forecasts in which both CAM3 and AM2 are initialized at 00Z every day with the ECMWF reanalysis data (ERA-40), for the year 1997 and 2000 and initialized with both the NASA DAO Reanalyses and the NCEP GDAS data for the year 2004. The DOE CCPP-ARM Parameterization Testbed (CAPT) project assesses climate models using numerical weather prediction techniques in conjunction with high quality field measurements (e.g. ARM data).
The causal structure of spacetime is a parameterized Randers geometry
NASA Astrophysics Data System (ADS)
Skakala, Jozef; Visser, Matt
2011-03-01
There is a well-established isomorphism between stationary four-dimensional spacetimes and three-dimensional purely spatial Randers geometries—these Randers geometries being a particular case of the more general class of three-dimensional Finsler geometries. We point out that in stably causal spacetimes, by using the (time-dependent) ADM decomposition, this result can be extended to general non-stationary spacetimes—the causal structure (conformal structure) of the full spacetime is completely encoded in a parameterized (t-dependent) class of Randers spaces, which can then be used to define a Fermat principle, and also to reconstruct the null cones and causal structure.
Modeling and parameterization of horizontally inhomogeneous cloud radiative properties
NASA Technical Reports Server (NTRS)
Welch, R. M.
1995-01-01
One of the fundamental difficulties in modeling cloud fields is the large variability of cloud optical properties (liquid water content, reflectance, emissivity). The stratocumulus and cirrus clouds, under special consideration for FIRE, exhibit spatial variability on scales of 1 km or less. While it is impractical to model individual cloud elements, the research direction is to model statistical ensembles of cloud elements with mean cloud properties specified. The major areas of this investigation are: (1) analysis of cloud field properties; (2) intercomparison of cloud radiative model results with satellite observations; (3) radiative parameterization of cloud fields; and (4) development of improved cloud classification algorithms.
Parameterization of interatomic potential by genetic algorithms: A case study
Ghosh, Partha S.; Arya, A.; Dey, G. K.; Ranawat, Y. S.
2015-06-24
A framework for a Genetic Algorithm (GA) based methodology is developed to systematically obtain and optimize parameters for interatomic force-field functions for MD simulations by fitting to a reference database. This methodology is applied to the fitting of ThO2 (CaF2 prototype), a representative ceramic-based potential fuel for nuclear applications. The resulting GA-optimized parameterization of ThO2 is able to capture basic structural, mechanical, and thermo-physical properties and also describes defect structures within the permissible range.
A Parameterized Web-Based Testing Model for Project Management
NASA Astrophysics Data System (ADS)
Bodea, Constanta-Nicoleta; Dascalu, Maria
This paper proposes a web-based testing model for project management. The model is based on ontology for encoding project management knowledge, so it is able to facilitate resource extraction in the web-based testware environment. It also allows generation of parameterized tests, according to the targeted difficulty level. The authors present the theoretical approaches that led to the model: semantic nets and concept space graphs have an important role in model designing. The development of the ontology model is made with SemanticWorks software. The test ontology has applicability in project management certification, especially in those systems with different levels, as the IPMA four-level certification system.
Improving Bulk Microphysics Parameterizations in Simulations of Aerosol Effects
Wang, Yuan; Fan, Jiwen; Zhang, Renyi; Leung, Lai-Yung R.; Franklin, Charmaine N.
2013-06-05
To improve the microphysical parameterizations for simulations of the aerosol indirect effect (AIE) in regional and global climate models, a double-moment bulk microphysical scheme presently implemented in the Weather Research and Forecasting (WRF) model is modified and the results are compared against atmospheric observations and simulations produced by a spectral bin microphysical scheme (SBM). Rather than using prescribed aerosols as in the original bulk scheme (Bulk-OR), a prognostic double-moment aerosol representation is introduced to predict both the aerosol number concentration and mass mixing ratio (Bulk-2M). The impacts of the parameterizations of diffusional growth and autoconversion and the selection of the embryonic raindrop radius on the performance of the bulk microphysical scheme are also evaluated. Sensitivity modeling experiments are performed for two distinct cloud regimes: maritime warm stratocumulus clouds (SC) over the southeast Pacific Ocean from the VOCALS project, and continental deep convective clouds (DCC) in the southeast of China from the Department of Energy/ARM Mobile Facility (DOE/AMF)-China field campaign. In both the SC and DCC cases, the cloud number concentrations and effective droplet radii from Bulk-2M agree much better with the SBM results and field measurements than those from Bulk-OR. In the SC case particularly, Bulk-2M reproduces the observed drizzle precipitation, which is largely inhibited in Bulk-OR. Bulk-2M predicts enhanced precipitation and invigorated convection with increased aerosol loading in the DCC case, consistent with the SBM simulation, while Bulk-OR predicts the opposite behaviors. Sensitivity experiments using four different types of autoconversion schemes reveal that the autoconversion parameterization is crucial in determining the raindrop number, mass concentration, and drizzle formation for warm stratocumulus clouds. An embryonic raindrop size of 40 μm is determined as a more
Adatto, Maurice A; Halachmi, Shlomit; Lapidoth, Moshe
2011-01-01
Over 50,000 new tattoos are placed each year in the United States. Studies estimate that 24% of American college students have tattoos and 10% of male American adults have a tattoo. The rising popularity of tattoos has spurred a corresponding increase in tattoo removal. Not all tattoos are placed intentionally or for aesthetic reasons though. Traumatic tattoos due to unintentional penetration of exogenous pigments can also occur, as well as the placement of medical tattoos to mark treatment boundaries, for example in radiation therapy. Protocols for tattoo removal have evolved over history. The first evidence of tattoo removal attempts was found in Egyptian mummies, dated to have lived 4,000 years BC. Ancient Greek writings describe tattoo removal with salt abrasion or with a paste containing cloves of white garlic mixed with Alexandrian cantharidin. With the advent of Q-switched lasers in the late 1960s, the outcomes of tattoo removal changed radically. In addition to their selective absorption by the pigment, the extremely short pulse duration of Q-switched lasers has made them the gold standard for tattoo removal.
An updated subgrid orographic parameterization for global atmospheric forecast models
NASA Astrophysics Data System (ADS)
Choi, Hyun-Joo; Hong, Song-You
2015-12-01
A subgrid orographic parameterization (SOP) is updated by including the effects of orographic anisotropy and flow-blocking drag (FBD). The impact of the updated SOP on short-range forecasts is investigated using a global atmospheric forecast model applied to a heavy snowfall event over Korea on 4 January 2010. When the SOP is updated, the orographic drag in the lower troposphere noticeably increases owing to the additional FBD over mountainous regions. The enhanced drag directly weakens the excessive wind speed in the low troposphere and indirectly improves the temperature and mass fields over East Asia. In addition, the snowfall overestimation over Korea is improved by the reduced heat fluxes from the surface. The forecast improvements are robust regardless of the horizontal resolution of the model between T126 and T510. The parameterization is statistically evaluated based on the skill of the medium-range forecasts for February 2014. For the medium-range forecasts, the skill improvements of the wind speed and temperature in the low troposphere are observed globally and for East Asia while both positive and negative effects appear indirectly in the middle-upper troposphere. The statistical skill for the precipitation is mostly improved due to the improvements in the synoptic fields. The improvements are also found for seasonal simulation throughout the troposphere and stratosphere during boreal winter.
Evaluation of an Urban Canopy Parameterization in a Mesoscale Model
Chin, H S; Leach, M J; Sugiyama, G A; Leone, Jr., J M; Walker, H; Nasstrom, J; Brown, M J
2004-03-18
A modified urban canopy parameterization (UCP) is developed and evaluated in a three-dimensional mesoscale model to assess the urban impact on surface and lower atmospheric properties. This parameterization accounts for the effects of building drag, turbulent production, radiation balance, anthropogenic heating, and building rooftop heating/cooling. USGS land-use data are also utilized to derive urban infrastructure and urban surface properties needed for driving the UCP. An intensive observational period with clear-sky, strong ambient wind and drainage flow, and the absence of land-lake breeze over the Salt Lake Valley, occurring on 25-26 October 2000, is selected for this study. A series of sensitivity experiments are performed to gain understanding of the urban impact in the mesoscale model. Results indicate that within the selected urban environment, urban surface characteristics and anthropogenic heating play little role in the formation of the modeled nocturnal urban boundary layer. The rooftop effect appears to be the main contributor to this urban boundary layer. Sensitivity experiments also show that for this weak urban heat island case, the model horizontal grid resolution is important in simulating the elevated inversion layer. The root mean square errors of the predicted wind and temperature with respect to surface station measurements exhibit substantially larger discrepancies at the urban locations than the rural counterparts. However, the close agreement of modeled tracer concentration with observations fairly justifies the modeled urban impact on the wind direction shift and wind drag effects.
Comparison of parameterizations for homogeneous and heterogeneous ice nucleation
NASA Astrophysics Data System (ADS)
Koop, T.; Zobrist, B.
2009-04-01
The formation of ice particles from liquid aqueous aerosols is of central importance for the physics and chemistry of high altitude clouds. In this paper, we present new laboratory data on ice nucleation and compare them with two different parameterizations for homogeneous as well as heterogeneous ice nucleation. In particular, we discuss and evaluate the effect of solutes and ice nuclei. One parameterization is the λ-approach, which correlates the depression of the freezing temperature of aqueous droplets relative to pure water droplets, ΔTf, with the corresponding depression, ΔTm, of the equilibrium ice melting point: ΔTf = λ × ΔTm. Here, λ is independent of concentration and is a constant specific to a particular solute or solute/ice-nucleus combination. The other approach is water-activity-based ice nucleation theory, which describes the effects of solutes on the freezing temperature Tf via their effect on water activity: aw(Tf) = aw,i(Tf) + Δaw. Here, aw,i is the water activity of ice and Δaw is a constant that depends on the ice nucleus but is independent of the type of solute. We present new data on both homogeneous and heterogeneous ice nucleation with varying types of solutes and ice nuclei. We evaluate and discuss the advantages and limitations of the two approaches for the prediction of ice nucleation in laboratory experiments and atmospheric cloud models.
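The λ-approach reduces to simple arithmetic once λ and the melting-point depression are known. A minimal sketch (the reference freezing temperature and the solute values below are hypothetical illustrations, not data from the paper):

```python
T_HOM_PURE = 236.0  # K, approximate homogeneous freezing T of pure water droplets

def freezing_temperature(melting_depression, lam, t_pure=T_HOM_PURE):
    """Lambda-approach: the freezing-point depression of a solution droplet is
    proportional to its melting-point depression, dTf = lam * dTm, so the
    droplet freezes at t_pure - lam * dTm."""
    return t_pure - lam * melting_depression

# Hypothetical solute with lambda = 1.7 and a 5 K melting-point depression.
print(freezing_temperature(5.0, 1.7))
```

The water-activity-based alternative would instead locate Tf where the solution's water activity equals the ice water activity shifted by the constant Δaw, which requires an aw(T) model for the solution and for ice.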
A Parameterization for the Triggering of Landscape Generated Moist Convection
NASA Technical Reports Server (NTRS)
Lynn, Barry H.; Tao, Wei-Kuo; Abramopoulos, Frank
1998-01-01
A set of relatively high resolution three-dimensional (3D) simulations were produced to investigate the triggering of moist convection by landscape generated mesoscale circulations. The local accumulated rainfall varied monotonically (linearly) with the size of individual landscape patches, demonstrating the need to develop a trigger function that is sensitive to the size of individual patches. A new triggering function that includes the effect of landscape generated mesoscale circulations over patches of different sizes consists of a parcel's perturbation in vertical velocity (ν₀), temperature (θ₀), and moisture (q₀). Each variable in the triggering function was also sensitive to soil moisture gradients, atmospheric initial conditions, and moist processes. The parcel's vertical velocity, temperature, and moisture perturbation were partitioned into mesoscale and turbulent components. Budget equations were derived for θ₀ and q₀. Of the many terms in this set of budget equations, the turbulent, vertical flux of the mesoscale temperature and moisture contributed most to the triggering of moist convection through the impact of these fluxes on the parcel's temperature and moisture profile. These fluxes needed to be parameterized to obtain θ₀ and q₀. The mesoscale vertical velocity also affected the profile of ν₀. We used similarity theory to parameterize these fluxes as well as the parcel's mesoscale vertical velocity.
Parameterization of Vegetation Aerodynamic Roughness of Natural Regions Using Satellite Imagery
NASA Technical Reports Server (NTRS)
Jasinski, Michael F.; Crago, Richard; Stewart, Pamela
1998-01-01
Parameterizations of the frontal area index and canopy area index of natural or randomly distributed plants are developed, and applied to the estimation of local aerodynamic roughness using satellite imagery. The formulas are expressed in terms of the subpixel fractional vegetation cover and one non-dimensional geometric parameter that characterizes the plant's shape. Geometrically similar plants and Poisson distributed plant centers are assumed. An appropriate averaging technique to extend satellite pixel-scale estimates to larger scales is provided. The parameterization is applied to the estimation of aerodynamic roughness using satellite imagery for a 2.3 sq km coniferous portion of the Landes Forest near Lubbon, France, during the 1986 HAPEX-Mobilhy Experiment. The canopy area index is estimated first for each pixel in the scene based on previous estimates of fractional cover obtained using Landsat Thematic Mapper imagery. Next, the results are incorporated into Raupach's (1992, 1994) analytical formulas for momentum roughness and zero-plane displacement height. The estimates compare reasonably well to reference values determined from measurements taken during the experiment and to published literature values. The approach offers the potential for estimating regionally variable, vegetation aerodynamic roughness lengths over natural regions using satellite imagery when there exists only limited knowledge of the vegetated surface.
On Parameterizing Turbulence in the Stably Stratified Atmospheric Boundary Layer
NASA Astrophysics Data System (ADS)
Wilson, Jordan M.; Venayagamoorthy, Subhas K.
2014-11-01
Parameterizing turbulent mixing in the stably stratified atmospheric boundary layer remains an active area of research connecting available field measurements with appropriate model parameters. The research presented studies the pertinent mixing lengths for shear- and buoyancy-dominated (or weakly stable and very stable) regimes in the stable atmospheric boundary layer (SABL). Incorporating shear and buoyancy effects, two length scales can be constructed, L_kS = k^(1/2)/S and L_kN = k^(1/2)/N, respectively. Extending the conceptual framework of Mater & Venayagamoorthy (2014), L_kS and L_kN are shown to be accurate representations of large-scale motions from which relevant model parameters are developed using observations from three field campaigns. An a priori analysis of large-eddy simulation (LES) data evaluates the efficacy of parameterizations applied to the vertical structure of the SABL. The results of this study provide a thorough evaluation of the pertinent mixing lengths in stably stratified turbulence through applications to atmospheric observations and numerical models for the boundary layer extendable to larger-scale weather prediction or global circulation models. S.K.V. gratefully acknowledges the support of the National Science Foundation under Grant No. OCE-1151838.
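The two length scales are direct functions of the turbulent kinetic energy k, the mean shear S, and the buoyancy frequency N. A minimal sketch with hypothetical SABL values (the regime interpretation in the comment is the standard reading, not a result from this study):

```python
import math

def mixing_lengths(k, shear, buoyancy_freq):
    """Shear and buoyancy length scales: L_kS = sqrt(k)/S, L_kN = sqrt(k)/N."""
    root_k = math.sqrt(k)
    return root_k / shear, root_k / buoyancy_freq

# Hypothetical values: TKE k = 0.04 m^2/s^2, S = 0.02 s^-1, N = 0.05 s^-1.
l_ks, l_kn = mixing_lengths(0.04, 0.02, 0.05)
# The smaller of the two limits the eddies; here L_kN < L_kS, so buoyancy
# (strong stratification) is the limiting effect.
print(round(l_ks, 2), round(l_kn, 2))
```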
Transient Storage Parameterization of Wetland-dominated Stream Reaches
NASA Astrophysics Data System (ADS)
Wilderotter, S. M.; Lightbody, A.; Kalnejais, L. H.; Wollheim, W. M.
2014-12-01
Current understanding of the importance of transient storage in fluvial wetlands is limited. Wetlands that have higher connectivity to the main stream channel are important because they have the potential to retain more nitrogen within the river system than wetlands that receive little direct stream discharge. In this study, we investigated how stream water accesses adjacent fluvial wetlands in New England coastal watersheds to improve parameterization in network-scale models. Breakthrough curves of Rhodamine WT were collected for eight wetlands in the Ipswich and Parker (MA) and Lamprey River (NH) watersheds, USA. The curves were inverse modeled using STAMMT-L to optimize the connectivity and size parameters for each reach. Two approaches were tested: a single dominant storage zone, and a range of storage zones represented using a power-law distribution of storage zone connectivity. Multiple linear regression analyses were conducted to relate transient storage parameters to stream discharge, area, length-to-width ratio, and reach slope. Resulting regressions will enable more accurate parameterization of surface water transient storage in network-scale models.
The Reduced RUM as a Logit Model: Parameterization and Constraints.
Chiu, Chia-Yi; Köhn, Hans-Friedrich
2016-06-01
Cognitive diagnosis models (CDMs) for educational assessment are constrained latent class models. Examinees are assigned to classes of intellectual proficiency defined in terms of cognitive skills called attributes, which an examinee may or may not have mastered. The Reduced Reparameterized Unified Model (Reduced RUM) has received considerable attention among psychometricians. Markov Chain Monte Carlo (MCMC) or Expectation Maximization (EM) are typically used for estimating the Reduced RUM. Commercial implementations of the EM algorithm are available in the latent class analysis (LCA) routines of Latent GOLD and Mplus, for example. Fitting the Reduced RUM with an LCA routine requires that it be reparameterized as a logit model, with constraints imposed on the parameters. For models involving two attributes, these have been worked out. However, for models involving more than two attributes, the parameterization and the constraints are nontrivial and currently unknown. In this article, the general parameterization of the Reduced RUM as a logit model involving any number of attributes and the associated parameter constraints are derived. As a practical illustration, the LCA routine in Mplus is used for fitting the Reduced RUM to two synthetic data sets and to a real-world data set; for comparison, the results obtained by using the MCMC implementation in OpenBUGS are also provided.
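The logit reparameterization described here can be illustrated for the two-attribute case, where the item response function carries an intercept, two main effects, and an interaction term. The parameter values below are hypothetical, and the non-negativity constraints noted in the comment are the kind of monotonicity restriction imposed when fitting such a model with an LCA routine (the paper's contribution is the general form and constraints for more than two attributes):

```python
import math

def logit_item_prob(a1, a2, b0, b1, b2, b12):
    """Two-attribute logit IRF: logit P = b0 + b1*a1 + b2*a2 + b12*a1*a2.

    a1, a2 in {0, 1} indicate attribute mastery. Monotonicity constraints
    (e.g. b1, b2, b12 >= 0) ensure mastery never lowers success probability."""
    z = b0 + b1 * a1 + b2 * a2 + b12 * a1 * a2
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical item: low baseline, each mastered attribute raises success prob.
for a1 in (0, 1):
    for a2 in (0, 1):
        print(a1, a2, round(logit_item_prob(a1, a2, -1.4, 0.8, 0.8, 1.2), 2))
```

With more attributes the number of interaction terms, and hence of constraints, grows rapidly, which is why the general parameterization is nontrivial.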
High-precision positioning of radar scatterers
NASA Astrophysics Data System (ADS)
Dheenathayalan, Prabu; Small, David; Schubert, Adrian; Hanssen, Ramon F.
2016-05-01
Remote sensing radar satellites cover wide areas and provide spatially dense measurements, with millions of scatterers. Knowledge of the precise position of each radar scatterer is essential to identify the corresponding object and interpret the estimated deformation. The absolute position accuracy of synthetic aperture radar (SAR) scatterers in a 2D radar coordinate system, after compensating for atmosphere and tidal effects, is in the order of centimeters for TerraSAR-X (TSX) spotlight images. However, the absolute positioning in 3D and its quality description are not well known. Here, we exploit time-series interferometric SAR to enhance the positioning capability in three dimensions. The 3D positioning precision is parameterized by a variance-covariance matrix and visualized as an error ellipsoid centered at the estimated position. The intersection of the error ellipsoid with objects in the field is exploited to link radar scatterers to real-world objects. We demonstrate the estimation of scatterer position and its quality using 20 months of TSX stripmap acquisitions over Delft, the Netherlands. Using trihedral corner reflectors (CR) for validation, the accuracy of absolute positioning in 2D is about 7 cm. In 3D, an absolute accuracy of up to ˜ 66 cm is realized, with a cigar-shaped error ellipsoid having centimeter precision in azimuth and range dimensions, and elongated in cross-range dimension with a precision in the order of meters (the ratio of the ellipsoid axis lengths is 1/3/213, respectively). The CR absolute 3D position, along with the associated error ellipsoid, is found to be accurate and agree with the ground truth position at a 99 % confidence level. For other non-CR coherent scatterers, the error ellipsoid concept is validated using 3D building models. In both cases, the error ellipsoid not only serves as a quality descriptor, but can also help to associate radar scatterers to real-world objects.
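The error ellipsoid described above follows from the variance-covariance matrix of the estimated position: its semi-axes are the square roots of the eigenvalues scaled by a chi-square quantile for the chosen confidence level. The sketch below assumes, for simplicity, a diagonal covariance in the (azimuth, range, cross-range) frame, so the eigenvalues are just the variances; the precision values are hypothetical numbers chosen to echo the 1/3/213 axis ratio reported in the abstract, not the paper's data:

```python
import math

# 99% quantile of the chi-square distribution with 3 degrees of freedom.
CHI2_3DOF_99 = 11.345

def ellipsoid_semi_axes(var_az, var_rg, var_xr, chi2=CHI2_3DOF_99):
    """Semi-axes of the 99% confidence ellipsoid for a diagonal 3x3
    variance-covariance matrix (general case: eigendecompose first)."""
    scale = math.sqrt(chi2)
    return tuple(scale * math.sqrt(v) for v in (var_az, var_rg, var_xr))

# Hypothetical 1-sigma precisions: 1 cm azimuth, 3 cm range, 2.13 m cross-range.
a, b, c = ellipsoid_semi_axes(0.01 ** 2, 0.03 ** 2, 2.13 ** 2)
print(round(c / a))  # elongation of the cross-range axis relative to azimuth
```

A scatterer is then linked to a real-world object by intersecting this ellipsoid with a 3D model of the scene.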
A Coordinated Effort to Improve Parameterization of High-Latitude Cloud and Radiation Processes
Pinto, J. O.; Lynch, A. H.
2005-12-14
The goal of this project is the development and evaluation of improved parameterization of arctic cloud and radiation processes and implementation of the parameterizations into a climate model. Our research focuses specifically on the following issues: (1) continued development and evaluation of cloud microphysical parameterizations, focusing on issues of particular relevance for mixed phase clouds; and (2) evaluation of the mesoscale simulation of arctic cloud system life cycles.
Total Cross Section Parameterizations for Pion Production in Nucleon-Nucleon Collisions
NASA Technical Reports Server (NTRS)
Norbury, John W.
2008-01-01
Total cross section parameterizations for neutral and charged pion production in nucleon-nucleon collisions are compared to an extensive set of experimental data over the projectile momentum range from threshold to 300 GeV. Both proton-proton and proton-neutron reactions are considered. Good agreement between parameterizations and experiment is found, and therefore the parameterizations will be useful for applications such as transport codes.
Survey of background scattering from materials found in small-angle neutron scattering
Barker, J. G.; Mildner, D. F. R.
2015-01-01
Measurements and calculations of beam attenuation and background scattering for common materials placed in a neutron beam are presented over the temperature range of 300–700 K. Time-of-flight (TOF) measurements have also been made, to determine the fraction of the background that is either inelastic or quasi-elastic scattering as measured with a 3He detector. Other background sources considered include double Bragg diffraction from windows or samples, scattering from gases, and phonon scattering from solids. Background from the residual air in detector vacuum vessels and scattering from the 3He detector dome are presented. The thickness dependence of the multiple scattering correction for forward scattering from water is calculated. Inelastic phonon background scattering at small angles for crystalline solids is both modeled and compared with measurements. Methods of maximizing the signal-to-noise ratio by material selection, choice of sample thickness and wavelength, removal of inelastic background by TOF or Be filters, and removal of spin-flip scattering with polarized beam analysis are discussed. PMID:26306088
Scatter corrections for cone beam optical CT
NASA Astrophysics Data System (ADS)
Olding, Tim; Holmes, Oliver; Schreiner, L. John
2009-05-01
Cone beam optical computed tomography (OptCT) employing the VISTA scanner (Modus Medical, London, ON) has been shown to have significant promise for fast, three dimensional imaging of polymer gel dosimeters. One distinct challenge with this approach arises from the combination of the cone beam geometry, a diffuse light source, and the scattering polymer gel media, which all contribute scatter signal that perturbs the accuracy of the scanner. Beam stop array (BSA), beam pass array (BPA) and anti-scatter polarizer correction methodologies have been employed to remove scatter signal from OptCT data. These approaches are investigated through the use of well-characterized phantom scattering solutions and irradiated polymer gel dosimeters. BSA corrected scatter solutions show good agreement in attenuation coefficient with the optically absorbing dye solutions, with considerable reduction of scatter-induced cupping artifact at high scattering concentrations. The application of BSA scatter corrections to a polymer gel dosimeter led to an overall reduction in the fraction of pixels failing the (3%, 3 mm) gamma criterion from 7.8% to 0.15%.
SU-E-T-597: Parameterization of the Photon Beam Dosimetry for a Commercial Linear Accelerator
Lebron, S; Lu, B; Yan, G; Kahler, D; Li, J; Barraclough, B; Liu, C
2015-06-15
Purpose: In radiation therapy, accurate data acquisition of photon beam dosimetric quantities is important for (1) beam modeling data input into a treatment planning system (TPS), (2) comparing measured and TPS modelled data, (3) a linear accelerator's (linac) beam characteristics quality assurance process, and (4) establishing a standard data set for data comparison, etc. Parameterization of the photon beam dosimetry creates a portable data set that is easy to implement for different applications such as those previously mentioned. The aim of this study is to develop methods to parameterize photon percentage depth doses (PDD), profiles, and total scatter output factors (Scp). Methods: Scp, PDDs and profiles for different field sizes (from 2×2 to 40×40 cm²), depths and energies were measured in a linac using a three-dimensional water tank. All data were smoothed and profile data were also centered, symmetrized and geometrically scaled. The Scp and PDD data were analyzed using exponential functions. For modelling of open and wedge field profiles, each side was divided into three regions described by exponential, sigmoid and Gaussian equations. The model's equations were chosen based on the physical principles described by these dosimetric quantities. The equations' parameters were determined using a least-squares optimization method with the minimal amount of measured data necessary. The model's accuracy was then evaluated via the calculation of absolute differences and distance-to-agreement analysis in low gradient and high gradient regions, respectively. Results: All differences in the PDDs' buildup and the profiles' penumbra regions were less than 2 mm and 0.5 mm, respectively. Differences in the low gradient regions were 0.20 ± 0.20% and 0.50 ± 0.35% for PDDs and profiles, respectively. For Scp data, all differences were less than 0.5%. Conclusion: This novel analytical model with minimum measurement requirements proved to accurately
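The exponential PDD analysis mentioned in the Methods can be illustrated with a minimal sketch: beyond the build-up region a photon PDD falls off roughly exponentially, PDD(d) ≈ A·exp(−μ·d), so an effective attenuation coefficient μ can be recovered by log-linear least squares. The depth-dose values below are synthetic numbers generated for illustration, not linac measurements, and the abstract's full model uses more terms than this single exponential:

```python
import math

# Synthetic PDD samples (depth in cm, dose in % of maximum), generated
# from an assumed ~0.05 per cm exponential fall-off past the build-up region.
depths = [5.0, 10.0, 15.0, 20.0]
pdd = [86.0, 67.0, 52.2, 40.7]

# Ordinary least squares on ln(PDD) vs depth; the slope is -mu.
logs = [math.log(v) for v in pdd]
n = len(depths)
mean_d = sum(depths) / n
mean_y = sum(logs) / n
mu = -sum((d - mean_d) * (y - mean_y) for d, y in zip(depths, logs)) \
     / sum((d - mean_d) ** 2 for d in depths)
print(round(mu, 3))  # effective attenuation coefficient, per cm
```

The same least-squares machinery, applied region by region with exponential, sigmoid and Gaussian basis functions, is the kind of fitting the abstract describes for profiles.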
FSP (Full Space Parameterization), Version 2.0
Fries, G.A.; Hacker, C.J.; Pin, F.G.
1995-10-01
This paper describes the modifications made to FSPv1.0 for the Full Space Parameterization (FSP) method, a new analytical method used to resolve underspecified systems of algebraic equations. The optimized code recursively searches for the number of linearly independent vectors necessary to form the solution space. While doing this, it ensures that all possible combinations of solutions are checked, if needed, and handles complications which arise in particular cases. In addition, two particular cases which cause failure of the FSP algorithm were discovered during testing of this new code. These cases are described in the context of how they are recognized and how they are handled by the new code. Finally, testing was performed on the new code using both isolated movements and complex trajectories for various mobile manipulators.
Parameterization of Aerosol Sinks in Chemical Transport Models
NASA Technical Reports Server (NTRS)
Colarco, Peter
2012-01-01
The modeler's point of view is that the aerosol problem is one of sources, evolution, and sinks. Relative to evolution and sink processes, enormous attention is given to the problem of aerosol sources, whether inventory-based (e.g., fossil fuel emissions) or dynamic (e.g., dust, sea salt, biomass burning). On the other hand, aerosol losses in models are a major factor in controlling the aerosol distribution and lifetime. Here we shine some light on how aerosol sinks are treated in modern chemical transport models. We discuss the mechanisms of dry and wet loss processes and the parameterizations for those processes in a single model (GEOS-5). We survey the literature of other modeling studies. We additionally compare the budgets of aerosol losses in several of the ICAP models.
A simple parameterization of aerosol emissions in RAMS
NASA Astrophysics Data System (ADS)
Letcher, Theodore
Throughout the past decade, a high degree of attention has been focused on determining the microphysical impact of anthropogenically enhanced concentrations of Cloud Condensation Nuclei (CCN) on orographic snowfall in the mountains of the western United States. This area has garnered attention due to the implications this effect may have on local water resource distribution within the region. Recent advances in computing power and the development of highly advanced microphysical schemes within numerical models have provided an estimation of the sensitivity that orographic snowfall has to changes in atmospheric CCN concentrations. However, what is still lacking is a coupling between these advanced microphysical schemes and a real-world representation of CCN sources. Previously, an attempt to represent the heterogeneous evolution of aerosol was made by coupling three-dimensional aerosol output from the WRF Chemistry model to the Colorado State University (CSU) Regional Atmospheric Modeling System (RAMS) (Ward et al. 2011). The biggest problem associated with this scheme was the computational expense. In fact, the computational expense was so high that it was prohibitive for simulations with fine enough resolution to accurately represent microphysical processes. To improve upon this method, a new parameterization for aerosol emission was developed in such a way that it is fully contained within RAMS. Several assumptions went into generating a computationally efficient aerosol emissions parameterization in RAMS. The most notable was the decision to neglect the chemical processes involved in the formation of Secondary Aerosol (SA) and instead treat SA as primary aerosol via short-term WRF-CHEM simulations. While SA makes up a substantial portion of the total aerosol burden (much of which is made up of organic material), the representation of this process is highly complex and highly expensive within a numerical
Numerical-parameterized relativistic optimized effective potential for atoms
NASA Astrophysics Data System (ADS)
Buendía, E.; Gálvez, F. J.; Maldonado, P.; Sarsa, A.
2007-08-01
A numerical-parameterized solution of the relativistic optimized effective potential equations for atoms is proposed. The analytic continuation method is used to solve the single-particle Dirac equation. This method provides an accurate solution and allows for a straightforward use of the logarithmic transformation. The equations are solved within both a single and a multi-configurational framework. The single-configuration results for the ground state of the noble gases from Ne to Rn are compared with those obtained from a fully numerical solution of the relativistic optimized effective potential equations as well as with the Dirac-Hartree-Fock results. The performance of the multi-configuration version of the method is illustrated by studying a number of excited states of the carbon and iron atoms.
Parameterized Facial Expression Synthesis Based on MPEG-4
NASA Astrophysics Data System (ADS)
Raouzaiou, Amaryllis; Tsapatsoulis, Nicolas; Karpouzis, Kostas; Kollias, Stefanos
2002-12-01
In the framework of MPEG-4, one can include applications where virtual agents, utilizing both textual and multisensory data, including facial expressions and nonverbal speech, help systems become attuned to the actual feelings of the user. Applications of this technology are expected in educational environments, virtual collaborative workplaces, communities, and interactive entertainment. Facial animation has gained much interest within the MPEG-4 framework, with implementation details being an open research area (Tekalp, 1999). In this paper, we describe a method for enriching human-computer interaction, focusing on analysis and synthesis of primary and intermediate facial expressions (Ekman and Friesen, 1978). To achieve this goal, we utilize facial animation parameters (FAPs) to model primary expressions and describe a rule-based technique for handling intermediate ones. A relation between FAPs and the activation parameter proposed in classical psychological studies is established, leading to parameterized facial expression analysis and synthesis notions compatible with the MPEG-4 standard.
Criteria and algorithms for spectrum parameterization of MST radar signals
NASA Technical Reports Server (NTRS)
Rastogi, P. K.
1984-01-01
The power spectra S(f) of MST radar signals contain useful information about the variance of refractivity fluctuations, the mean radial velocity, and the radial velocity variance in the atmosphere. When noise and other contaminating signals are absent, these quantities can be obtained directly from the zeroth, first, and second order moments of the spectra. A step-by-step procedure is outlined that can be used effectively to reduce large amounts of MST radar data (averaged periodograms measured in range and time) to a parameterized form. The parameters to which a periodogram can be reduced are outlined, and the steps in the procedure, which may be followed selectively to arrive at the final set of reduced parameters, are given. Examples of the performance of the procedure are given, and its use with other radars is commented on.
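The three moments named above reduce a noise-free spectrum to total power, mean Doppler shift, and spectral width. A minimal sketch on a synthetic Gaussian line (the frequency axis and line parameters are invented for illustration):

```python
import numpy as np

# Synthetic Doppler spectrum: a Gaussian line centered at f0 with width w,
# sampled across the band; noise is assumed already removed.
f = np.linspace(-50.0, 50.0, 2001)   # frequency axis (Hz)
f0, w, p_total = 5.0, 3.0, 2.0
s = p_total * np.exp(-(f - f0)**2 / (2 * w**2)) / (w * np.sqrt(2 * np.pi))

df = f[1] - f[0]
m0 = np.sum(s) * df                      # zeroth moment: total power
m1 = np.sum(f * s) * df / m0             # first moment: mean shift
m2 = np.sum((f - m1)**2 * s) * df / m0   # second central moment: variance

print(m0, m1, np.sqrt(m2))  # recovers (2.0, 5.0, 3.0)
```

The mean shift maps to radial velocity and the square root of the second central moment to velocity spread, which is exactly the parameterized form the procedure produces.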
Use of Cloud Computing to Calibrate a Highly Parameterized Model
NASA Astrophysics Data System (ADS)
Hayley, K. H.; Schumacher, J.; MacMillan, G.; Boutin, L.
2012-12-01
We present a case study using cloud computing to facilitate the calibration of a complex and highly parameterized model of regional groundwater flow. The calibration dataset consisted of many (~1500) measurements or estimates of static hydraulic head, a high resolution time series of groundwater extraction and disposal rates at 42 locations and pressure monitoring at 147 locations with a total of more than one million raw measurements collected over a ten year pumping history, and base flow estimates at 5 surface water monitoring locations. This modeling project was undertaken to assess the sustainability of groundwater withdrawal and disposal plans for in situ heavy oil extraction in Northeast Alberta, Canada. The geological interpretations used for model construction were based on more than 5,000 wireline logs collected throughout the 30,865 km² regional study area (RSA), and resulted in a model with 28 slices and 28 hydrostratigraphic units (average model thickness of 700 m, with aquifers ranging from a depth of 50 to 500 m below ground surface). The finite element FEFLOW model constructed on this geological interpretation had 331,408 nodes and required 265 time steps to simulate the ten year transient calibration period. This numerical model of groundwater flow required 3 hours to run on a server with two 2.8 GHz processors and 16 GB of RAM. Calibration was completed using PEST. Horizontal and vertical hydraulic conductivity as well as specific storage for each unit were independent parameters. For the recharge and the horizontal hydraulic conductivity in the three aquifers with the most transient groundwater use, a pilot point parameterization was adopted. A 7×7 grid of pilot points defined over the RSA described a spatially variable horizontal hydraulic conductivity or recharge field. A 7×7 grid of multiplier pilot points that perturbed the more regional field was then superimposed over the 3,600 km² local study area (LSA). The pilot point
Daily evaporation from drying soil: Universal parameterization with similarity
NASA Astrophysics Data System (ADS)
Brutsaert, Wilfried
2014-04-01
With supporting experimental evidence from three separate field studies of daily mean evaporation from bare soil with vastly different physical characteristics, it is shown that the process can be described as isothermal linear diffusion in a finite depth domain. The resulting solution leads directly to similarity variables and thus a universal parameterization, which should in principle be applicable to most field soils. In addition, a closed form expression is presented to estimate the weighted mean diffusivity for exponential-type soil water diffusivities. In this solution, the widely used inverse-square-root-of-time proportionality of this phenomenon is its short time version, whereas the exponential decay proportionality, proposed by several authors for vegetated surfaces, is its long time version. It appears that in many situations the soil layer contributing to evaporation is fairly shallow, only a few tens of centimeters thick.
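Both limiting behaviors named above follow from the series solution for linear diffusion in a finite slab with a zero-flux bottom. The sketch below checks the two regimes numerically; the diffusivity D, depth L, and moisture scale theta0 are invented placeholders, not values from the study.

```python
import numpy as np

def evap_rate(t, D=1e-8, L=0.2, theta0=0.25, nterms=200):
    """Surface flux for isothermal linear diffusion in a slab of depth L
    with a no-flux bottom (classical series solution). Parameter values
    here are illustrative placeholders only."""
    n = np.arange(nterms)
    lam = ((2 * n + 1) * np.pi / (2 * L)) ** 2 * D   # modal decay rates (1/s)
    return (2 * D * theta0 / L) * np.exp(-np.outer(t, lam)).sum(axis=1)

# Short times: e(t) ~ t**-0.5, so quadrupling t halves the rate.
e_short = evap_rate(np.array([1.0e3, 4.0e3]))
ratio_short = e_short[0] / e_short[1]
print(ratio_short)   # ~2.0

# Long times: single-exponential decay at the slowest mode's rate.
t_long = np.array([2.0e8, 4.0e8])
e_long = evap_rate(t_long)
lam1 = (np.pi / (2 * 0.2)) ** 2 * 1e-8
decay = np.log(e_long[0] / e_long[1]) / (t_long[1] - t_long[0])
print(decay / lam1)  # ~1.0
```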
Parameterization of ion channeling half-angles and minimum yields
NASA Astrophysics Data System (ADS)
Doyle, Barney L.
2016-03-01
An MS Excel program has been written that calculates ion channeling half-angles and minimum yields in cubic bcc, fcc, and diamond lattice crystals. All of the tables and graphs in the three Ion Beam Analysis Handbooks, which previously had to be looked up and read manually, were programmed into Excel as handy lookup tables or, in the case of the graphs, parameterized using rather simple exponential functions with different power functions of the arguments. The program then offers an extremely convenient way to calculate axial and planar half-angles and minimum yields, as well as the effects of amorphous overlayers on half-angles and minimum yields. The program can calculate these half-angles and minimum yields for axes and [h k l] planes up to (5 5 5). The program is open source and available at
Parameterized modeling and estimation of spatially varying optical blur
NASA Astrophysics Data System (ADS)
Simpkins, Jonathan D.; Stevenson, Robert L.
2015-02-01
Optical blur can display significant spatial variation across the image plane, even for constant camera settings and object depth. Existing solutions to represent this spatially varying blur require a dense sampling of blur kernels across the image, where each kernel is defined independently of the neighboring kernels. This approach requires a large amount of data collection, and the estimation of the kernels is not as robust as it would be if it were possible to incorporate knowledge of the relationship between adjacent kernels. A novel parameterized model is presented which relates the blur kernels at different locations across the image plane. The model is motivated by well-established optical models, including the Seidel aberration model. It is demonstrated that the proposed model can unify a set of hundreds of blur kernel observations across the image plane under a single 10-parameter model, and the accuracy of the model is demonstrated with simulations and measurement data collected by two separate research groups.
A stratiform cloud parameterization for General Circulation Models
Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.
1994-05-01
The crude treatment of clouds in General Circulation Models (GCMs) is widely recognized as a major limitation in the application of these models to predictions of global climate change. The purpose of this project is to develop a parameterization for stratiform clouds in GCMs that expresses stratiform clouds in terms of bulk microphysical properties and their subgrid variability. In this parameterization, precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species.
Shi, Xiangjun; Liu, Xiaohong; Zhang, Kai
2015-01-01
In order to improve the treatment of ice nucleation in a more realistic manner in the Community Atmospheric Model version 5.3 (CAM5.3), the effects of preexisting ice crystals on ice nucleation in cirrus clouds are considered. In addition, by considering the in-cloud variability in ice saturation ratio, homogeneous nucleation takes place spatially only in a portion of the cirrus cloud rather than in the whole area of the cirrus cloud. With these improvements, the two unphysical limiters used in the representation of ice nucleation are removed. Compared to observations, the ice number concentrations and the probability distributions of ice number concentration are both improved with the updated treatment. The preexisting ice crystals significantly reduce ice number concentrations in cirrus clouds, especially at mid- to high latitudes in the upper troposphere (by a factor of ~10). Furthermore, the contribution of heterogeneous ice nucleation to cirrus ice crystal number increases considerably. Besides the default ice nucleation parameterization of Liu and Penner (2005, hereafter LP) in CAM5.3, two other ice nucleation parameterizations, of Barahona and Nenes (2009, hereafter BN) and Kärcher et al. (2006, hereafter KL), are implemented in CAM5.3 for comparison. In-cloud ice crystal number concentration, percentage contribution from heterogeneous ice nucleation to total ice crystal number, and preexisting ice effects simulated by the three ice nucleation parameterizations have similar patterns in the simulations with present-day aerosol emissions. However, the change (present-day minus pre-industrial times) in global annual mean column ice number concentration from the KL parameterization (3.24×10⁶ m⁻²) is obviously less than that from the LP (8.46×10⁶ m⁻²) and BN (5.62×10⁶ m⁻²) parameterizations. As a result, the experiment using the KL parameterization predicts a much smaller anthropogenic aerosol longwave indirect forcing (0.24 W m⁻²) than that using the LP (0.46 W m⁻²
Parameterization of Infrared Absorption in Midlatitude Cirrus Clouds
Sassen, Kenneth; Wang, Zhien; Platt, C.M.R.; Comstock, Jennifer M.
2003-01-01
Employing a new approach based on combined Raman lidar and millimeter-wave radar measurements and a parameterization of the infrared absorption coefficient σa (km⁻¹) in terms of retrieved cloud microphysics, we derive a statistical relation between σa and cirrus cloud temperature. The relations σa = 0.3949 + 5.3886×10⁻³ T + 1.526×10⁻⁵ T² for ambient temperature (T, °C), and σa = 0.2896 + 3.409×10⁻³ Tm for midcloud temperature (Tm, °C), are found using a second-order polynomial fit. Comparison with two σa versus Tm relations obtained primarily from midlatitude cirrus using the combined lidar/infrared radiometer (LIRAD) approach reveals significant differences. However, we show that this reflects both the previous convention used in curve fitting (i.e., σa → 0 at ~-80 °C) and the types of clouds included in the datasets. Without such constraints, convergence is found in the three independent remote sensing datasets within the range of conditions considered valid for cirrus (i.e., cloud optical depth ~3.0 and Tm < ~-20 °C). Hence for completeness we also provide reanalyzed parameterizations for a visible extinction coefficient σa versus Tm relation for midlatitude cirrus, and a data sample involving cirrus that evolved into midlevel altostratus clouds with higher optical depths.
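The fitted relations quoted in the abstract are directly usable as they stand. A minimal sketch evaluating both at an example temperature (the -40 °C input is chosen for illustration; coefficients are the abstract's):

```python
# Reported fits: second-order polynomial in ambient temperature T (deg C)
# and a linear fit in midcloud temperature Tm (deg C); output in 1/km.
def sigma_a_ambient(T):
    return 0.3949 + 5.3886e-3 * T + 1.526e-5 * T**2

def sigma_a_midcloud(Tm):
    return 0.2896 + 3.409e-3 * Tm

print(round(sigma_a_ambient(-40.0), 4))   # 0.2038
print(round(sigma_a_midcloud(-40.0), 4))  # 0.1532
```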
Modeling Jupiter's Quasi Quadrennial Oscillation (QQO) with Wave Drag Parameterizations
NASA Astrophysics Data System (ADS)
Cosentino, Rick; Morales-Juberias, Raul; Greathouse, Thomas K.; Orton, Glenn S.
2016-10-01
The QQO in Jupiter's atmosphere was first discovered after 7.8 micron infrared observations spanning the 1980s and 1990s detected a temperature oscillation near 10 hPa (Orton et al. 1991, Science 252, 537; Leovy et al. 1991, Nature 354, 380; Friedson 1999, Icarus 137, 34). New observations using the Texas Echelon cross-dispersed Echelle Spectrograph (TEXES), mounted on the NASA Infrared Telescope Facility (IRTF), have been used to characterize a complete cycle of the QQO between January 2012 and January 2016 (Greathouse et al. 2016, DPS). These new observations not only show the thermal oscillation at 10 hPa, but also show that the QQO extends upward in Jupiter's atmosphere to altitudes as high as the 0.4 hPa level. We incorporated three different wave-drag parameterizations into the EPIC General Circulation Model (Dowling et al. 1998, Icarus 132, 221) to simulate the observed Jovian QQO temperature signatures as a function of latitude, pressure, and time, using results from the TEXES datasets as new constraints. Each parameterization produces unique results and offers insight into the spectrum of waves that likely exists in Jupiter's atmosphere to force the QQO. High-frequency gravity waves produced by convection are extremely difficult to observe directly but likely contribute a significant portion of the QQO momentum budget. We use different models to simulate the effects of waves such as these, to indirectly explore their spectrum in Jupiter's atmosphere by varying their properties. The model temperature outputs show strong correlations with equatorial and mid-latitude temperature fields retrieved from the TEXES datasets at different epochs. Our results suggest the QQO phenomenon could be more than one alternating zonal jet that descends over time in response to Jovian atmospheric forcing (e.g. gravity waves from convection). Research funding provided by the NRAO Grote Reber Pre-Doctoral Fellowship. Computing resources include the NMT PELICAN cluster and the CISL
Mesoscale Eddy Parameterization in an Idealized Primitive Equations Model
NASA Astrophysics Data System (ADS)
Anstey, J.; Zanna, L.
2014-12-01
Large-scale ocean currents such as the Gulf Stream and Kuroshio Extension are strongly influenced by mesoscale eddies, which have spatial scales of order 10-100 km. The effects of these eddies are poorly represented in many state-of-the-art ocean general circulation models (GCMs) due to the inadequate spatial resolution of these models. In this study we examine the response of the large-scale ocean circulation to the rectified effects of eddy forcing - i.e., the role played by surface-intensified mesoscale eddies in sustaining and modulating an eastward jet that separates from an intense western boundary current (WBC). For this purpose a primitive equations ocean model (the MITgcm) in an idealized wind-forced double-gyre configuration is integrated at eddy-resolving resolution to reach a forced-dissipative equilibrium state that captures the essential dynamics of WBC-extension jets. The rectified eddy forcing is diagnosed as a stochastic function of the large-scale state, this being characterized by the manner in which potential vorticity (PV) contours become deformed. Specifically, a stochastic function based on the Laplacian of the material rate of change of PV is examined in order to compare the primitive equations results with those of a quasi-geostrophic model in which this function has shown some utility as a parameterization of eddy effects (Porta Mana and Zanna, 2014). The key question is whether an eddy parameterization based on quasi-geostrophic scaling is able to carry over to a system in which this scaling is not imposed (i.e. the primitive equations), in which unbalanced motions occur.
Ameriflux data used for verification of surface layer parameterizations
NASA Astrophysics Data System (ADS)
Tassone, Caterina; Ek, Mike
2015-04-01
The atmospheric surface-layer parameterization is an important component in a coupled model, as its output, the surface exchange coefficients for momentum, heat and humidity, are used to determine the fluxes of these quantities between the land-surface and the atmosphere. An accurate prediction of these fluxes is therefore required in order to provide a correct forecast of the surface temperature, humidity and ultimately also the precipitation in a model. At the NOAA/NCEP Environmental Modeling Center, a one-dimensional Surface Layer Simulator (SLS) has been developed for simulating the surface layer and its interface. Two different configurations of the SLS exist, replicating in essence the way in which the surface layer is simulated in the GFS and the NAM, respectively. Input data for the SLS are the basic atmospheric quantities of winds, temperature, humidity and pressure evaluated at a specific height above the ground, surface values of temperature and humidity, and the momentum roughness length z0. The output values of the SLS are the surface exchange coefficients for heat and momentum. The exchange coefficients computed by the SLS are then compared with independent estimates derived from measured surface heat fluxes. The SLS is driven by a set of Ameriflux data acquired at 22 stations over a period of several years. This provides a large number of different vegetation characteristics and helps ensure statistical significance. Even though there are differences in the respective surface layer formulations between the GFS and the NAM, they are both based on similarity theory, and therefore lower boundary conditions, i.e. roughness lengths for momentum and heat, and profile functions are among the main components of the surface layer that need to be evaluated. The SLS is a very powerful tool for this type of evaluation. We present the results of the Ameriflux comparison and discuss the implications of our results for the surface layer parameterizations of the NAM
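The exchange coefficients the SLS outputs enter a model through bulk aerodynamic flux formulas. The sketch below shows that step for sensible heat with representative values; the numbers (and the specific coefficient value) are illustrative placeholders, not NCEP SLS output.

```python
# Bulk aerodynamic formula: H = rho * cp * Ch * U * (Ts - Ta).
rho, cp = 1.2, 1004.0        # air density (kg/m^3), heat capacity (J/kg/K)
ch = 1.5e-3                  # heat exchange coefficient (dimensionless)
u_ref = 5.0                  # wind speed at the reference height (m/s)
t_sfc, t_air = 290.0, 288.0  # surface and air temperature (K)

h = rho * cp * ch * u_ref * (t_sfc - t_air)  # sensible heat flux (W/m^2)
print(round(h, 1))  # 18.1
```

Comparing Ch inferred from measured fluxes (as with the Ameriflux data) against the model's similarity-theory value is exactly the inversion of this formula.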
Systematic Parameterization of Monovalent Ions Employing the Nonbonded Model.
Li, Pengfei; Song, Lin Frank; Merz, Kenneth M
2015-04-14
Monovalent ions play fundamental roles in many biological processes in organisms. Modeling these ions in molecular simulations continues to be a challenging problem. The 12-6 Lennard-Jones (LJ) nonbonded model is widely used to model monovalent ions in classical molecular dynamics simulations. Many parameterization efforts have been reported for these ions against a number of experimental end points. However, some reported parameter sets do not have a good balance between the two Lennard-Jones parameters (the van der Waals (VDW) radius and potential well depth), which affects their transferability. In the present work, via the use of a noble gas curve we fitted in former work (J. Chem. Theory Comput. 2013, 9, 2733), we reoptimized the 12-6 LJ parameters for 15 monovalent ions (11 positive and 4 negative ions) for three extensively used water models (TIP3P, SPC/E, and TIP4P(EW)). Since the 12-6 LJ nonbonded model performs poorly in some instances for these ions, we have also parameterized the 12-6-4 LJ-type nonbonded model (J. Chem. Theory Comput. 2014, 10, 289) using the same three water models. The three derived parameter sets, which focus on reproducing the hydration free energies (the HFE set) and the ion-oxygen distance (the IOD set) with the 12-6 LJ nonbonded model, and on the 12-6-4 LJ-type nonbonded model (the 12-6-4 set), overall give improved results. In particular, the final parameter sets showed better agreement with quantum mechanically calculated VDW radii and improved transferability to ion-pair solutions when compared to previous parameter sets. PMID:26574374
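The 12-6-4 form adds an attractive -C4/r⁴ term (charge-induced-dipole) to the usual 12-6 pair energy. A sketch in the Rmin/epsilon convention; the parameter values below are invented for illustration and are not from the paper's tables.

```python
import numpy as np

def lj_12_6_4(r, rmin, eps, c4):
    """12-6-4 LJ-type pair energy: the standard 12-6 term written so the
    minimum of the 12-6 part sits at r = rmin with depth eps, plus an
    added -C4/r^4 term. Values used below are illustrative only."""
    x = rmin / r
    return eps * (x**12 - 2.0 * x**6) - c4 / r**4

r = np.linspace(1.0, 6.0, 2001)
u = lj_12_6_4(r, rmin=1.8, eps=0.1, c4=0.0)   # plain 12-6 when C4 = 0
r_at_min = r[np.argmin(u)]
print(r_at_min)  # minimum at r = Rmin = 1.8 for the pure 12-6 case

# A positive C4 deepens the well, mimicking ion-induced polarization.
u4 = lj_12_6_4(r, rmin=1.8, eps=0.1, c4=10.0)
print(u4.min() < u.min())  # True
```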
A parameterization of the ice-ocean drag coefficient
NASA Astrophysics Data System (ADS)
Lu, Peng; Li, Zhijun; Cheng, Bin; LeppäRanta, Matti
2011-07-01
A parameterization of the ice-ocean drag coefficient (Cw) was developed by partitioning the oceanic drag force into three components: (1) form drag on the floe edge, (2) form drag on the ridge keel, and (3) skin friction on the ice bottom. Through these quantities, Cw was expressed as a function of observable sea ice geometric parameters. Sensitivity studies were carried out to investigate the influence of varying sea ice conditions on Cw. The results revealed that Cw first increases and then decreases with increasing ice concentration (A), similar to observations of the air-ice drag coefficient; this is mainly attributed to the nonmonotonic variation of the form drag on the floe edge with ice concentration. Moreover, the form drag on the floe edge is always the dominant component, with a proportion of more than 60% in sea ice with a large aspect ratio (draft/length ≥ 1/100), indicating the necessity of including this term in sea ice dynamic models, particularly for the marginal ice zone (MIZ). The form drag on the ridge keel becomes dominant only when the ridging intensity is extremely high (depth/spacing ≥ 1/20). Additionally, a large value of Cw can be caused not only by the inclusion of form drag terms but also by large skin friction over rough ice bottoms. Finally, for typical situations in the MIZ with moderate ridging intensity, the parameterization will underestimate Cw by approximately 30% for a rough ice bottom and by over 80% for a smooth ice bottom if no form drags are considered.
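The three-way partition can be sketched schematically. The functional forms and coefficients below are invented placeholders chosen only to reproduce the qualitative behavior the abstract describes (edge drag vanishing at A = 0 and A = 1 gives the nonmonotonic Cw); they are not the published parameterization.

```python
import numpy as np

def cw_components(A, c_skin=1.5e-3, c_edge=6.0e-3, c_keel=1.0e-3):
    """Schematic partition of the ice-ocean drag coefficient Cw into
    floe-edge form drag, keel form drag, and skin friction. All shapes
    and coefficients here are illustrative placeholders."""
    edge = 4.0 * c_edge * A * (1.0 - A)  # zero at A=0 and A=1 -> nonmonotonic
    keel = c_keel * A
    skin = c_skin * A
    return edge + keel + skin, edge

A = np.linspace(0.0, 1.0, 101)
cw, edge = cw_components(A)
A_peak = A[np.argmax(cw)]
edge_frac = edge[np.argmax(cw)] / cw.max()
print(A_peak)     # Cw peaks at an intermediate concentration
print(edge_frac)  # floe-edge form drag dominates at the peak
```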
Sensitivity of Antarctic sea ice to form drag parameterization
NASA Astrophysics Data System (ADS)
Barbic, Gaia; Tsamados, Michel; Petty, Alek; Schroeder, David; Holland, Paul; Feltham, Daniel
2014-05-01
A new drag parameterization accounting explicitly for form drag has recently been formulated and applied to Arctic sea ice (Lüpkes et al., 2012; Tsamados et al., 2014). We summarize here the fundamental elements of this formulation and then adapt it to Antarctic sea ice. Considering the general expression of the momentum balance of sea ice, we analyze the total (neutral) drag coefficients by studying separately air-ice and ocean-ice momentum fluxes, and by introducing the parameterization for both the atmospheric neutral drag coefficient (ANDC) and the oceanic neutral drag coefficient (ONDC). The two coefficients are calculated as a sum of their skin frictional contribution and form drag contribution, which comes from ridges and floe edges for the ANDC and from keels and floe edges for the ONDC. Due to the contrasting geography of the two polar regions, there are important differences, both dynamic and thermodynamic, between Arctic and Antarctic sea ice. In the Antarctic, sea ice is younger and less ridged (hence thinner and smoother). Due to the intense snowfalls, the snow cover is generally thicker than in the Arctic, with values that vary significantly both seasonally and regionally; it can affect the roughness of the surface and can lead to flooding of the ice. At the outer boundary of the Southern Ocean, the ice is unconstrained by land, divergent, and subject to meridional advection, which leads to a much faster ice drift than in the Arctic. We show here how the new parameterization accounting for form drag influences the Antarctic sea ice characteristics.
Precisely parameterized experimental and computational models of tissue organization.
Molitoris, Jared M; Paliwal, Saurabh; Sekar, Rajesh B; Blake, Robert; Park, JinSeok; Trayanova, Natalia A; Tung, Leslie; Levchenko, Andre
2016-02-01
Patterns of cellular organization in diverse tissues frequently display a complex geometry and topology tightly related to the tissue function. Progressive disorganization of tissue morphology can lead to pathologic remodeling, necessitating the development of experimental and theoretical methods for analyzing the tolerance of normal tissue function to structural alterations. A systematic way to investigate the relationship of diverse cell organization to tissue function is to engineer two-dimensional cell monolayers replicating key aspects of the in vivo tissue architecture. However, it is still not clear how this can be accomplished on a tissue-level scale in a parameterized fashion that allows a mathematically precise definition of model tissue organization and properties down to the cellular scale, with a parameter-dependent gradual change in model tissue organization. Here, we describe and use a method of designing precisely parameterized, geometrically complex patterns that are then used to control cell alignment and communication of model tissues. We demonstrate direct application of this method to guiding the growth of cardiac cell cultures and developing mathematical models of cell function that correspond to the underlying experimental patterns. Several anisotropic patterned cultures spanning a broad range of multicellular organization, mimicking the cardiac tissue organization of different regions of the heart, were found to be similar to each other and to isotropic cell monolayers in terms of local cell-cell interactions, reflected in similar confluency, morphology, and connexin-43 expression. However, in agreement with the model predictions, different anisotropic patterns of cell organization, paralleling in vivo alterations of cardiac tissue morphology, resulted in variable and novel functional responses with important implications for the initiation and maintenance of cardiac arrhythmias. We conclude that variations of tissue geometry and topology
Synthesizing 3D Surfaces from Parameterized Strip Charts
NASA Technical Reports Server (NTRS)
Robinson, Peter I.; Gomez, Julian; Morehouse, Michael; Gawdiak, Yuri
2004-01-01
We believe 3D information visualization has the power to unlock new levels of productivity in the monitoring and control of complex processes. Our goal is to provide visual methods to allow for rapid human insight into systems consisting of thousands to millions of parameters. We explore this hypothesis in two complex domains: NASA program management and NASA International Space Station (ISS) spacecraft computer operations. We seek to extend a common form of visualization called the strip chart from 2D to 3D. A strip chart can display the time series progression of a parameter and allows for trends and events to be identified. Strip charts can be overlaid when multiple parameters need to be visualized in order to correlate their events. When many parameters are involved, the direct overlaying of strip charts can become confusing and may not fully utilize the graphing area to convey the relationships between the parameters. We provide a solution to this problem by generating 3D surfaces from parameterized strip charts. The 3D surface utilizes significantly more screen area to illustrate the differences between the parameters and the overlaid strip charts, and it can rapidly be scanned by humans to gain insight. The selection of the third dimension must be a parallel or parameterized homogeneous resource in the target domain, defined using a finite, ordered, enumerated type, and not a heterogeneous type. We demonstrate our concepts with examples from the NASA program management domain (assessing the state of many plans) and the computers of the ISS (assessing the state of many computers). We identify 2D strip charts in each domain and show how to construct the corresponding 3D surfaces. The user can navigate the surface, zooming in on regions of interest, setting a mark and drilling down to source documents from which the data points have been derived. We close by discussing design issues, related work, and implementation challenges.
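The surface-construction step can be sketched as stacking N strip charts into a 2D array z(parameter index, time), the structure a 3D renderer would display. The series here are synthetic stand-ins; the third axis is an ordered, enumerated parameter (e.g. computer #0..#4), as the abstract requires.

```python
import numpy as np

# Five synthetic strip charts sharing one time axis, stacked into a surface.
t = np.linspace(0.0, 10.0, 200)
n_params = 5
surface = np.vstack([np.sin(t + 0.5 * k) + 0.1 * k for k in range(n_params)])

print(surface.shape)  # (5, 200): one row per strip chart

# Events can then be located per parameter, e.g. each series' maximum:
peaks = surface.argmax(axis=1)
print(peaks.shape)    # (5,)
```

Handing `surface` to any 3D plotting routine (with the row index as the enumerated axis) yields the kind of scannable surface the paper describes.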
Nejadgholi, Isar; Caytak, Herschel; Bolic, Miodrag; Batkin, Izmail; Shirmohammadi, Shervin
2015-05-01
In several applications of bioimpedance spectroscopy, the measured spectrum is parameterized by fitting it to the Cole equation. However, the extracted Cole parameters tend to be inconsistent from one measurement session to another, which leads to a high standard deviation of the extracted parameters. This inconsistency is modeled with a source of random variations added to the voltage measurement carried out in the time domain. These random variations may originate from biological variations that are irrelevant to the phenomenon under investigation, yet they affect the voltage measured by the bioimpedance device, from which the magnitude and phase of the impedance are calculated. By means of simulated data, we showed that Cole parameters are highly affected by this type of variation. We further showed that singular value decomposition (SVD) is an effective tool for parameterizing bioimpedance measurements, yielding more consistent parameters than the Cole parameters. We propose applying SVD as a preprocessing method to reconstruct denoised bioimpedance measurements. To evaluate the method, we calculated the relative difference between parameters extracted from noisy and clean simulated bioimpedance spectra. Both the mean and the standard deviation of this relative difference are shown to decrease effectively when Cole parameters are extracted from preprocessed data rather than from raw measurements. We evaluated the performance of the proposed method in distinguishing three arm positions in a set of experiments including eight subjects. It is shown that the Cole parameters of different positions are not distinguishable when extracted from raw measurements. However, one arm position can be distinguished based on SVD scores. Moreover, all three positions are shown to be distinguishable by two parameters, R0/R∞ and Fc, when Cole parameters are extracted from preprocessed measurements. These results suggest that SVD could be considered as an
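The SVD preprocessing idea, reconstructing measurements from the dominant singular components so that session-to-session random variation is suppressed, can be sketched as follows (a minimal illustration with synthetic magnitude spectra; the frequency grid, noise level, and retained rank are assumptions, not values from the paper):

```python
import numpy as np

def svd_denoise(X, k=1):
    """Reconstruct a matrix of repeated bioimpedance spectra (rows = sessions,
    cols = frequencies) from its k largest singular components, discarding
    small singular values assumed to carry random session-to-session
    variation. Illustrative sketch, not the authors' exact pipeline."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[k:] = 0.0                       # truncate the singular spectrum
    return U @ np.diag(s) @ Vt

rng = np.random.default_rng(1)
freqs = np.logspace(3, 6, 50)                     # hypothetical 1 kHz-1 MHz grid
clean = 1.0 / np.sqrt(1.0 + (freqs / 5e4) ** 2)   # idealized magnitude curve
X = np.tile(clean, (8, 1)) + 0.05 * rng.normal(size=(8, 50))  # 8 noisy sessions
Xd = svd_denoise(X, k=1)
err_raw = np.abs(X - clean).mean()
err_den = np.abs(Xd - clean).mean()
print(err_den < err_raw)  # denoised spectra lie closer to the clean curve
```

Cole-parameter extraction would then be run on the rows of the denoised matrix instead of the raw measurements.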
Parameterization of tree-ring growth in Siberia
NASA Astrophysics Data System (ADS)
Tychkov, Ivan; Popkova, Margarita; Shishov, Vladimir; Vaganov, Eugene
2016-04-01
Without doubt, the climate-tree growth relationship is one of the most useful and interesting subjects of study in dendrochronology. It provides information on the dependence of tree growth on the climatic environment, and also on growth conditions and the whole tree-ring growth process over long-term periods. A new parameterization approach for the Vaganov-Shashkin process-based model (VS-model) has been developed to describe the critical processes linking climate variables with tree-ring formation. The approach (so-called VS-Oscilloscope) is presented as computer software with a graphical interface. As with most process-based tree-ring models, the VS-model's initial purpose is to describe the variability of tree-ring radial growth due to variability of climatic factors, and also to determine the principal factors limiting tree-ring growth. The principal factors affecting the growth rate of cambial cells in the VS-model are temperature, daylight and soil moisture. Detailed testing of VS-Oscilloscope was done for a semi-arid area of southern Siberia (the Khakassian region). Significant correlations between initial tree-ring chronologies and simulated tree-ring growth curves were obtained. Direct natural observations confirm the simulation results, including unique growth characteristics for semi-arid habitats. New results concerning the formation of wide and narrow rings under different climate conditions are considered. In itself the new parameterization approach (VS-Oscilloscope) is a useful instrument for better understanding of various processes in tree-ring formation. The work was supported by the Russian Science Foundation (RSF # 14-14-00219).
Research on aerosol profiles and parameterization scheme in Southeast China
NASA Astrophysics Data System (ADS)
Wang, Gang; Deng, Tao; Tan, Haobo; Liu, Xiantong; Yang, Honglong
2016-09-01
The vertical distribution of the aerosol extinction coefficient serves as a basis for evaluating aerosol radiative forcing and for air quality modeling. In this study, MODIS AOD data and ground-based lidar extinction coefficients were employed to verify six years (2009-2014) of aerosol extinction data obtained via CALIOP for Southeast China. The objective was mainly to provide a parameterization scheme for annual and seasonal aerosol extinction profiles. The results showed that the horizontal and vertical distributions of the CALIOP extinction data were highly accurate in Southeast China. The annual average AOD below 2 km accounted for 64% of the total column, with larger proportions observed in winter (80%) and autumn (80%) and lower proportions in summer (70%) and spring (59%). The AOD reached its maximum in spring (0.58), followed by autumn and winter (0.44), and its minimum in summer (0.40). The near-surface extinction coefficient increased in the order summer, spring, autumn, winter. The Elterman profile is clearly lower than the profiles observed by CALIOP in Southeast China. The annual average and seasonal aerosol profiles showed an exponential distribution and could be divided into two sections, so a two-section exponential fit was used in the parameterization scheme. In the first section, the aerosol scale height reached 2,200 m, with a maximum (3,500 m) in summer and a minimum (1,230 m) in winter, meaning that the aerosol extinction decreases with height more slowly in summer and more rapidly in winter. In the second section, the aerosol scale height was largest in spring, indicating that aerosol diffused to higher altitudes in spring.
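The scale heights quoted above come from exponential fits of the form σ(z) = σ₀ exp(−z/H). One section of such a fit can be sketched by linear regression on log σ (an illustrative reconstruction; the paper's actual two-section scheme and its altitude breakpoints are not reproduced here):

```python
import numpy as np

def fit_scale_height(z, sigma):
    """Fit sigma(z) = sigma0 * exp(-z / H) by linear regression on
    log(sigma), returning the aerosol scale height H (metres) and sigma0.
    Sketch of a one-section exponential fit; the paper's scheme applies
    such fits separately to two altitude sections."""
    slope, intercept = np.polyfit(z, np.log(sigma), 1)
    return -1.0 / slope, np.exp(intercept)

# Synthetic profile with a 2,200 m scale height (a value from the abstract).
z = np.linspace(0.0, 2000.0, 40)
sigma = 0.5 * np.exp(-z / 2200.0)
H, sigma0 = fit_scale_height(z, sigma)
print(round(H))  # 2200
```

For real lidar or CALIOP profiles, the two sections would be fitted independently above and below the breakpoint altitude.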
Parameterization of wind turbine impacts on hydrodynamics and sediment transport
NASA Astrophysics Data System (ADS)
Rivier, Aurélie; Bennis, Anne-Claire; Pinon, Grégory; Magar, Vanesa; Gross, Markus
2016-10-01
Monopile foundations of offshore wind turbines modify the hydrodynamics and sediment transport at local and regional scales. The aim of this work is to assess these modifications and to parameterize them in a regional model. In the present study, this is achieved through a regional circulation model, coupled with a sediment transport module, using two approaches. One approach is to explicitly model the monopiles in the mesh as dry cells, and the other is to parameterize them by adding a drag force term to the momentum and turbulence equations. Idealised cases are run using hydrodynamic conditions and sediment grain sizes typical of the area off Courseulles-sur-Mer (Normandy, France), where an offshore windfarm is under planning, to assess the capacity of the model to reproduce the effect of the monopile on the environment. The model is then applied to a real configuration of an area including the future offshore windfarm of Courseulles-sur-Mer. Four monopiles are represented in the model using both approaches, and modifications of the hydrodynamics and sediment transport are assessed over a tidal cycle. In relation to local hydrodynamic effects, currents increase at the sides of the monopile and decrease in front of and downstream of it. In relation to sediment transport effects, the results show that resuspension and erosion occur around the monopile where the current speed increases due to the monopile's presence, and sediments deposit downstream where the bed shear stress is lower. During the tidal cycle, wakes downstream of one monopile reach the following monopile and modify the velocity magnitude and suspended sediment concentration patterns around the second monopile.
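The drag-force parameterization of a monopile can be illustrated with a standard quadratic drag term added to the momentum equations (a generic sketch; the drag coefficient, pile diameter, and grid-cell dimensions are assumed values, not those of the Courseulles-sur-Mer configuration):

```python
import numpy as np

def monopile_drag(u, v, Cd=1.0, D=6.0, dz=1.0, cell_area=2500.0):
    """Quadratic drag acceleration exerted by a monopile on the flow in a
    model grid cell: F = -0.5 * Cd * D * dz * |U| * U / V_cell, applied
    per horizontal velocity component. All parameter values are
    illustrative assumptions for this sketch."""
    speed = np.hypot(u, v)          # |U|
    vol = cell_area * dz            # wetted cell volume
    fx = -0.5 * Cd * D * dz * speed * u / vol
    fy = -0.5 * Cd * D * dz * speed * v / vol
    return fx, fy

fx, fy = monopile_drag(u=1.0, v=0.0)
print(fx < 0.0)  # drag opposes the flow
```

In a circulation model this term would be added to the momentum tendency in cells containing a pile, with an analogous production term in the turbulence equations.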
Parameterization of wind turbine impacts on hydrodynamics and sediment transport
NASA Astrophysics Data System (ADS)
Rivier, Aurélie; Bennis, Anne-Claire; Pinon, Grégory; Magar, Vanesa; Gross, Markus
2016-09-01
Monopile foundations of offshore wind turbines modify the hydrodynamics and sediment transport at local and regional scales. The aim of this work is to assess these modifications and to parameterize them in a regional model. In the present study, this is achieved through a regional circulation model, coupled with a sediment transport module, using two approaches. One approach is to explicitly model the monopiles in the mesh as dry cells, and the other is to parameterize them by adding a drag force term to the momentum and turbulence equations. Idealised cases are run using hydrodynamic conditions and sediment grain sizes typical of the area off Courseulles-sur-Mer (Normandy, France), where an offshore windfarm is under planning, to assess the capacity of the model to reproduce the effect of the monopile on the environment. The model is then applied to a real configuration of an area including the future offshore windfarm of Courseulles-sur-Mer. Four monopiles are represented in the model using both approaches, and modifications of the hydrodynamics and sediment transport are assessed over a tidal cycle. In relation to local hydrodynamic effects, currents increase at the sides of the monopile and decrease in front of and downstream of it. In relation to sediment transport effects, the results show that resuspension and erosion occur around the monopile where the current speed increases due to the monopile's presence, and sediments deposit downstream where the bed shear stress is lower. During the tidal cycle, wakes downstream of one monopile reach the following monopile and modify the velocity magnitude and suspended sediment concentration patterns around the second monopile.
Precisely parameterized experimental and computational models of tissue organization
Sekar, Rajesh B.; Blake, Robert; Park, JinSeok; Trayanova, Natalia A.; Tung, Leslie; Levchenko, Andre
2016-01-01
Patterns of cellular organization in diverse tissues frequently display a complex geometry and topology tightly related to the tissue function. Progressive disorganization of tissue morphology can lead to pathologic remodeling, necessitating the development of experimental and theoretical methods for analyzing the tolerance of normal tissue function to structural alterations. A systematic way to investigate the relationship of diverse cell organization to tissue function is to engineer two-dimensional cell monolayers replicating key aspects of the in vivo tissue architecture. However, it is still not clear how this can be accomplished on a tissue-level scale in a parameterized fashion that allows a mathematically precise definition of the model tissue organization and properties down to the cellular scale, with a parameter-dependent gradual change in model tissue organization. Here, we describe and use a method of designing precisely parameterized, geometrically complex patterns that are then used to control cell alignment and communication in model tissues. We demonstrate direct application of this method to guiding the growth of cardiac cell cultures and to developing mathematical models of cell function that correspond to the underlying experimental patterns. Several anisotropic patterned cultures spanning a broad range of multicellular organization, mimicking the cardiac tissue organization of different regions of the heart, were found to be similar to each other and to isotropic cell monolayers in terms of local cell–cell interactions, as reflected in similar confluency, morphology and connexin-43 expression. However, in agreement with the model predictions, different anisotropic patterns of cell organization, paralleling in vivo alterations of cardiac tissue morphology, resulted in variable and novel functional responses with important implications for the initiation and maintenance of cardiac arrhythmias. We conclude that variations of tissue geometry and
NASA Astrophysics Data System (ADS)
Scarpa, Riccardo; Thiene, Mara; Hensher, David A.
2012-01-01
Preferences for attributes of complex goods may differ substantially among members of households. Some of these goods, such as tap water, are jointly supplied at the household level. This issue of jointness poses a series of theoretical and empirical challenges to economists engaged in empirical nonmarket valuation studies. While a series of results have already been obtained in the literature, the issue of how to empirically measure these differences, and how sensitive the results are to the choice of model specification from the same data, is yet to be clearly understood. In this paper we use data from a widely employed form of stated preference survey for multiattribute goods, namely choice experiments. The salient feature of the data collection is that the same choice experiment was applied to both partners of established couples. The analysis focuses on models that simultaneously handle scale as well as preference heterogeneity in marginal rates of substitution (MRS), thereby isolating true differences between members of couples in their MRS by removing interpersonal variation in scale. The models employed are different parameterizations of the mixed logit model, including the willingness to pay (WTP)-space model and the generalized multinomial logit model. We find that in this sample there is some evidence of significant statistical differences in values between women and men, but these are of small magnitude and apply only to a few attributes.
Xia, Xiangao
2015-01-01
Aerosols impact clear-sky surface irradiance through the effects of scattering and absorption. Linear or nonlinear relationships between aerosol optical depth (τa) and the irradiance have been established to describe the aerosol direct radiative effect (ADRE). However, considerable uncertainties remain associated with ADRE due to the incorrect estimation of the irradiance in the absence of aerosols (τa = 0). Based on data from the Aerosol Robotic Network, the effects of τa, water vapor content (w) and the cosine of the solar zenith angle (μ) on the irradiance are thoroughly considered, leading to an effective parameterization of the clear-sky surface irradiance as a nonlinear function of these three quantities. The parameterization is proven able to estimate the irradiance with a mean bias error of 0.32 W m−2, which is one order of magnitude smaller than that derived using earlier linear or nonlinear functions. Applications of this new parameterization to estimate τa from the irradiance, or vice versa, show that the root-mean-square errors were 0.08 and 10.0 W m−2, respectively. Therefore, this study establishes a straightforward method to derive the irradiance from τa, or to estimate τa from irradiance measurements if water vapor measurements are available. PMID:26395310
NASA Astrophysics Data System (ADS)
Brown, Steven S.; Dubé, William P.; Fuchs, Hendrik; Ryerson, Thomas B.; Wollny, Adam G.; Brock, Charles A.; Bahreini, Roya; Middlebrook, Ann M.; Neuman, J. Andrew; Atlas, Elliot; Roberts, James M.; Osthoff, Hans D.; Trainer, Michael; Fehsenfeld, Frederick C.; Ravishankara, A. R.
2009-04-01
This paper presents determinations of reactive uptake coefficients for N2O5, γ(N2O5), on aerosols from nighttime aircraft measurements of ozone, nitrogen oxides, and aerosol surface area on the NOAA P-3 during the Second Texas Air Quality Study (TexAQS II). Determinations based on both the steady-state approximation for NO3 and N2O5 and a plume modeling approach yielded γ(N2O5) substantially smaller than current parameterizations used for atmospheric modeling, generally in the range 0.5-6 × 10-3. Dependence of γ(N2O5) on variables such as relative humidity and aerosol composition was not apparent in the determinations, although there was considerable scatter in the data. The determinations were also inconsistent with current parameterizations of the rate coefficient for homogeneous hydrolysis of N2O5 by water vapor, which may be as much as a factor of 10 too large. Nocturnal halogen activation via conversion of N2O5 to ClNO2 on chloride aerosol was not determinable from these data, although limits based on laboratory parameterizations and maximum nonrefractory aerosol chloride content showed that this chemistry could have been comparable to direct production of HNO3 in some cases.
NASA Astrophysics Data System (ADS)
Xia, Xiangao
2015-09-01
Aerosols impact clear-sky surface irradiance through the effects of scattering and absorption. Linear or nonlinear relationships between aerosol optical depth (τa) and the irradiance have been established to describe the aerosol direct radiative effect (ADRE). However, considerable uncertainties remain associated with ADRE due to the incorrect estimation of the irradiance in the absence of aerosols (τa = 0). Based on data from the Aerosol Robotic Network, the effects of τa, water vapor content (w) and the cosine of the solar zenith angle (μ) on the irradiance are thoroughly considered, leading to an effective parameterization of the clear-sky surface irradiance as a nonlinear function of these three quantities. The parameterization is proven able to estimate the irradiance with a mean bias error of 0.32 W m-2, which is one order of magnitude smaller than that derived using earlier linear or nonlinear functions. Applications of this new parameterization to estimate τa from the irradiance, or vice versa, show that the root-mean-square errors were 0.08 and 10.0 W m-2, respectively. Therefore, this study establishes a straightforward method to derive the irradiance from τa, or to estimate τa from irradiance measurements if water vapor measurements are available.
Impact of Apex Model parameterization strategy on estimated benefit of conservation practices
Technology Transfer Automated Retrieval System (TEKTRAN)
Three parameterized Agriculture Policy Environmental eXtender (APEX) models for corn-soybean rotation on clay pan soils were developed with the objectives, 1. Evaluate model performance of three parameterization strategies on a validation watershed; and 2. Compare predictions of water quality benefi...
A shallow convection parameterization for the non-hydrostatic MM5 mesoscale model
Seaman, N.L.; Kain, J.S.; Deng, A.
1996-04-01
A shallow convection parameterization suitable for the Pennsylvania State University (PSU)/National Center for Atmospheric Research nonhydrostatic mesoscale model (MM5) is being developed at PSU. The parameterization is based on parcel perturbation theory developed in conjunction with a 1-D Mellor-Yamada 1.5-order planetary boundary layer scheme and the Kain-Fritsch deep convection model.
Improved parameterization for the vertical flux of dust aerosols emitted by an eroding soil
Technology Transfer Automated Retrieval System (TEKTRAN)
The representation of the dust cycle in atmospheric circulation models hinges on an accurate parameterization of the vertical dust flux at emission. However, existing parameterizations of the vertical dust flux vary substantially in their scaling with wind friction velocity, require input parameters...
Parameterization of spectral distributions for pion and kaon production in proton-proton collisions
NASA Technical Reports Server (NTRS)
Schneider, John P.; Norbury, John W.; Cucinotta, Frank A.
1995-01-01
Accurate semi-empirical parameterizations of the energy-differential cross sections for charged pion and kaon production in proton-proton collisions are presented at energies relevant to cosmic rays. The parameterizations depend on the outgoing meson momentum as well as the proton energy, and can be reduced to very simple analytical formulas suitable for cosmic-ray transport.
The CCPP-ARM Parameterization Testbed (CAPT): Where Climate Simulation Meets Weather Prediction
Phillips, T J; Potter, G L; Williamson, D L; Cederwall, R T; Boyle, J S; Fiorino, M; Hnilo, J J; Olson, J G; Xie, S; Yio, J J
2003-11-21
To significantly improve the simulation of climate by general circulation models (GCMs), systematic errors in representations of relevant processes must first be identified, and then reduced. This endeavor demands, in particular, that the GCM parameterizations of unresolved processes be tested over a wide range of time scales, not just in climate simulations. Thus, a numerical weather prediction (NWP) methodology for evaluating model parameterizations and gaining insights into their behavior may prove useful, provided that suitable adaptations are made for implementation in climate GCMs. This method entails the generation of short-range weather forecasts by a realistically initialized climate GCM, and the application of six-hourly NWP analyses and observations of parameterized variables to evaluate these forecasts. The behavior of the parameterizations in such a weather-forecasting framework can provide insights into how these schemes might be improved, and modified parameterizations can then be similarly tested. To further this method for evaluating and analyzing parameterizations in climate GCMs, the USDOE is funding a joint venture of its Climate Change Prediction Program (CCPP) and Atmospheric Radiation Measurement (ARM) Program: the CCPP-ARM Parameterization Testbed (CAPT). This article elaborates the scientific rationale for CAPT, discusses technical aspects of its methodology, and presents examples of its implementation in a representative climate GCM. Numerical weather prediction methods show promise for improving parameterizations in climate GCMs.
NASA Astrophysics Data System (ADS)
Xie, Shi-Peng; Luo, Li-Min
2012-06-01
The authors propose a combined scatter reduction and correction method to improve image quality in cone beam computed tomography (CBCT). The scatter kernel superposition (SKS) method has been used occasionally in previous studies; however, the present method differs in that a scatter detecting blocker (SDB) is placed between the X-ray source and the tested object to model a self-adaptive scatter kernel. This study first estimates the scatter kernel parameters using the SDB, and then isolates the scatter distribution based on SKS. Image quality can be improved by removing the scatter distribution. The results show that the method effectively reduces scatter artifacts and increases image quality. Our approach increases image contrast and reduces the magnitude of cupping. The accuracy of the SKS technique is significantly improved in our method through the use of a self-adaptive scatter kernel. The method is computationally efficient, easy to implement, and provides scatter correction from a single scan acquisition.
NASA Astrophysics Data System (ADS)
Hammonds, Kevin Don
Through the analysis of scanning polarimetric W-band cloud radar data collected during STORMVEX, an algorithm has been developed to both identify and parameterize various ice crystal habits present within mixed-phase clouds. Armed with a unique dataset, the development of the algorithm took advantage of a slant 45° linear depolarization ratio (SLDR) measurement that was made as a function of the radar elevation angle when in range height indicator (RHI) scanning mode. This measurement technique proved invaluable in that it limited the influence of the particle's maximum dimension on the measured depolarization, which instead became more a function of the ice particle's shape. Validated through in situ measurements, pristine dendrites, lightly rimed dendrites, rimed stellar crystals, aggregates of dendrites, columns, and graupel particles were identified and matched with specific SLDR signatures. With a known ice particle habit and SLDR signature, the ice particle habit identification segment of the newly developed algorithm was then applied to the entire dataset, consisting of 38,190 individual scans, in order to identify ice particle habits at a combined 849,745 range heights and scanning angles. Through this analysis and the use of a chi-square test statistic, the predominant ice particle habit could be determined. Of primary interest in this study were the parameterizations of the ice particle mass and radar backscatter cross section. Through the modeling of the chosen ice particle habit as an oblate spheroid, these parameterizations were carried out in part by relying on previously published empirical studies as well as T-matrix scattering calculations of oblate spheroids composed of an ice/air mixture. Due to the computational expense of T-matrix calculations, however, a new T-matrix scaling factor was derived from the Clausius-Mossotti relation, which relates the refractive index of a material to its polarizability. With this scaling factor, new T
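As background for the scaling-factor idea, the Clausius-Mossotti form links the complex refractive index m to the dielectric factor K = (m² − 1)/(m² + 2), whose squared modulus scales the radar backscatter cross section of small particles (a textbook relation sketched here; the thesis derives its own T-matrix scaling from it, which is not reproduced):

```python
def dielectric_factor(m):
    """Clausius-Mossotti form K = (m^2 - 1) / (m^2 + 2) for a complex
    refractive index m; |K|^2 scales the radar backscatter cross section
    of small (Rayleigh-regime) particles."""
    m2 = m * m
    return (m2 - 1.0) / (m2 + 2.0)

m_ice = 1.78 + 0.003j          # approximate refractive index of solid ice
K = dielectric_factor(m_ice)
print(round(abs(K) ** 2, 3))   # 0.176, the familiar dielectric factor of ice
```

For an ice/air mixture, m (and hence |K|²) would first be computed from the mixture's effective permittivity before entering the backscatter parameterization.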
An Empirical Cumulus Parameterization Scheme for a Global Spectral Model
NASA Technical Reports Server (NTRS)
Rajendran, K.; Krishnamurti, T. N.; Misra, V.; Tao, W.-K.
2004-01-01
Realistic vertical heating and drying profiles in a cumulus scheme are important for obtaining accurate weather forecasts. A new empirical cumulus parameterization scheme based on a procedure to improve the vertical distribution of heating and moistening over the tropics is developed. The empirical cumulus parameterization scheme (ECPS) utilizes profiles of Tropical Rainfall Measuring Mission (TRMM) based heating and moistening derived from the European Centre for Medium-Range Weather Forecasts (ECMWF) analysis. A dimension reduction technique through rotated principal component analysis (RPCA) is performed on the vertical profiles of heating (Q1) and drying (Q2) over the convective regions of the tropics, to obtain the dominant modes of variability. Analysis suggests that most of the variance associated with the observed profiles can be explained by retaining the first three modes. The ECPS then applies a statistical approach in which Q1 and Q2 are expressed as a linear combination of the first three dominant principal components, which distinctly explain variance in the troposphere as a function of the prevalent large-scale dynamics. The principal component (PC) score, which quantifies the contribution of each PC to the corresponding loading profile, is estimated through a multiple screening regression method which yields the PC score as a function of the large-scale variables. The profiles of Q1 and Q2 thus obtained are found to match the observed profiles well. The impact of the ECPS is investigated in a series of short-range (1-3 day) prediction experiments using the Florida State University global spectral model (FSUGSM, T126L14). Comparisons between short-range ECPS forecasts and those with the modified Kuo scheme show a very marked improvement in skill in the ECPS forecasts. This improvement in forecast skill with the ECPS emphasizes the importance of incorporating realistic vertical distributions of heating and drying in the model cumulus scheme. This
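The dimension-reduction step, expressing each heating profile as a mean plus a linear combination of three leading modes, can be sketched with plain PCA (an approximation for illustration; the scheme uses *rotated* PCA and a screening regression, neither of which is reproduced here, and the synthetic profiles are ours):

```python
import numpy as np

def leading_pcs(profiles, k=3):
    """Extract the k leading principal components of a set of vertical
    heating profiles and the scores that reconstruct each profile as
    their linear combination. Plain (unrotated) PCA via SVD, as a sketch
    of the ECPS dimension-reduction step."""
    mean = profiles.mean(axis=0)
    U, s, Vt = np.linalg.svd(profiles - mean, full_matrices=False)
    pcs = Vt[:k]                        # (k, n_levels) loading profiles
    scores = (profiles - mean) @ pcs.T  # (n_samples, k) PC scores
    return mean, pcs, scores

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 20)                     # hypothetical vertical grid
basis = np.stack([np.sin(np.pi * i * x) for i in (1, 2, 3)])
profiles = rng.normal(size=(100, 3)) @ basis      # rank-3 synthetic Q1 field
mean, pcs, scores = leading_pcs(profiles, k=3)
recon = mean + scores @ pcs
print(np.allclose(recon, profiles))  # True: three modes explain this field
```

In the ECPS itself, the scores are not taken from the data but predicted from large-scale variables via the screening regression, and the reconstructed Q1 and Q2 drive the model's heating and drying.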
Accuracy of cuticular resistance parameterizations in ammonia dry deposition models
NASA Astrophysics Data System (ADS)
Schrader, Frederik; Brümmer, Christian; Richter, Undine; Fléchard, Chris; Wichink Kruit, Roy; Erisman, Jan Willem
2016-04-01
Accurate representation of total reactive nitrogen (Nr) exchange between ecosystems and the atmosphere is a crucial part of modern air quality models. However, bi-directional exchange of ammonia (NH3), the dominant Nr species in agricultural landscapes, still poses a major source of uncertainty in these models, where especially the treatment of non-stomatal pathways (e.g. exchange with wet leaf surfaces or the ground layer) can be challenging. While complex dynamic leaf surface chemistry models have been shown to successfully reproduce measured ammonia fluxes on the field scale, computational constraints and the lack of necessary input data have so far limited their application in larger-scale simulations. A variety of approaches to modelling dry deposition to leaf surfaces with simplified steady-state parameterizations have therefore arisen in the recent literature. We present a performance assessment of selected cuticular resistance parameterizations by comparing them with ammonia deposition measurements made by eddy covariance (EC) and the aerodynamic gradient method (AGM) at a number of semi-natural and grassland sites in Europe. First results indicate that a state-of-the-art uni-directional approach tends to overestimate, and a bi-directional cuticular compensation point approach tends to underestimate, cuticular resistance in some cases, consequently leading to systematic errors in the resulting flux estimates. In the uni-directional model, situations with low ratios of total atmospheric acids to NH3 concentration lead to fairly high minimum cuticular resistances, limiting predicted downward fluxes in conditions usually favouring deposition. On the other hand, the bi-directional model used here features a seasonal cycle of external leaf surface emission potentials that can lead to comparably low effective resistance estimates under warm and wet conditions, when in practice an expected increase in the compensation point due to
Physically-Based Parameterization of Frozen Ground Processes in Watershed Runoff Modeling
NASA Astrophysics Data System (ADS)
Koren, V. I.
2004-05-01
parameters were used at all sites. Solid and liquid soil moisture contents, and soil temperature at five layers, were simulated for 3-5 years. Test results suggest that a conceptual representation of soil moisture fluxes combined with a physically-based heat transfer model provides reasonable simulations of soil temperature for the entire soil profile. Ignoring soil moisture phase transitions can lead to significant biases in soil temperature. Simulated soil moisture states also agree well with measurements for the research watershed over an 18-year period. A second set of tests was performed for a few river basins where only outlet hydrographs were evaluated. A priori water balance model parameters were adjusted using automatic or manual calibration. Simulated and observed hydrographs agree better when the frozen ground parameterization is added, particularly during transition periods from spring to summer. More importantly, the un-calibrated model with the frozen ground component outperforms the un-calibrated model with no frozen ground component for all tested basins. Analysis of spring floods also suggests that it is impossible to remove runoff biases without modification of frozen ground hydraulic properties.
Frozen soil parameterization in a distributed biosphere hydrological model
NASA Astrophysics Data System (ADS)
Wang, L.; Koike, T.; Yang, K.; Jin, R.; Li, H.
2009-11-01
In this study, a frozen soil parameterization has been modified and incorporated into a distributed biosphere hydrological model (WEB-DHM). The WEB-DHM with the frozen scheme was then rigorously evaluated in a small cold area, the Binngou watershed, against in-situ observations from WATER (Watershed Allied Telemetry Experimental Research). In the summer of 2008, land surface parameters were optimized using the observed surface radiation fluxes and the soil temperature profile at the Dadongshu-Yakou (DY) station in July; soil hydraulic parameters were then obtained by calibration of the July soil moisture profile at the DY station and of the discharges at the basin outlet in July and August, which cover the largest annual flood peak of 2008. The calibrated WEB-DHM with the frozen scheme was then used for a yearlong simulation from 21 November 2007 to 20 November 2008, to check its performance in cold seasons. Results showed that the WEB-DHM with the frozen scheme gave much better performance than the WEB-DHM without the frozen scheme in the simulations of the soil moisture profile at the DY station and the discharges at the basin outlet over the yearlong simulation.
Frozen soil parameterization in a distributed biosphere hydrological model
NASA Astrophysics Data System (ADS)
Wang, L.; Koike, T.; Yang, K.; Jin, R.; Li, H.
2010-03-01
In this study, a frozen soil parameterization has been modified and incorporated into a distributed biosphere hydrological model (WEB-DHM). The WEB-DHM with the frozen scheme was then rigorously evaluated in a small cold area, the Binngou watershed, against in-situ observations from WATER (Watershed Allied Telemetry Experimental Research). First, using the original WEB-DHM without the frozen scheme, the land surface parameters and two van Genuchten parameters were optimized using the observed surface radiation fluxes and the soil moistures at the upper layers (5, 10 and 20 cm depths) at the DY station in July. Second, using the WEB-DHM with the frozen scheme, two frozen soil parameters were calibrated using the observed soil temperature at 5 cm depth at the DY station from 21 November 2007 to 20 April 2008, while the other soil hydraulic parameters were optimized by calibration of the discharges at the basin outlet in July and August, which cover the largest annual flood peak in 2008. With these calibrated parameters, the WEB-DHM with the frozen scheme was then used for a yearlong validation from 21 November 2007 to 20 November 2008. Results showed that the WEB-DHM with the frozen scheme gave much better performance than the WEB-DHM without the frozen scheme in the simulations of the soil moisture profile in the cold-region catchment and the discharges at the basin outlet over the yearlong simulation.
Effective parameterizations of three nonwetting phase relative permeability models
NASA Astrophysics Data System (ADS)
Yang, Zhenlei; Mohanty, Binayak P.
2015-08-01
Describing convective nonwetting phase flow in unsaturated porous media requires knowledge of the nonwetting phase relative permeability. This study was conducted to formulate and derive a generalized expression for the nonwetting phase relative permeability by combining it with the Kosugi water retention function. The generalized formulation is then used to investigate the prediction accuracy of the Burdine, Mualem, and Alexander and Skaggs models for nonwetting phase relative permeability. Comparisons between the models and experimental data show that these three permeability models, if used in their original form but applied to the nonwetting phase, do not predict the data well. Optimum pore tortuosity and connectivity values were therefore obtained for improved prediction of nonwetting phase relative permeability. As a result, the effective parameterizations of the (α, β, η) parameters in the modified Burdine, modified Mualem, and modified Alexander and Skaggs permeability models were found to be (2.5, 2, 1), (2, 1, 2), and (2.5, 1, 1), respectively. These three suggested models display the highest accuracy among the nine relative permeability models investigated in this study. However, the discontinuity of the nonwetting phase and liquid film flow should be accounted for in the future to improve predictions at very high and very low water saturations, respectively.
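A generalized expression of this kind can be sketched numerically: take the Kosugi lognormal retention curve for h(Se) and evaluate a Burdine/Mualem-style nonwetting integral with free exponents (α, β, η). This is an illustrative stdlib-only reconstruction under assumed parameter values, not the authors' formulation or code:

```python
import math
from statistics import NormalDist

_ND = NormalDist()

def kosugi_head(Se, hm, sigma):
    """Capillary head from the Kosugi lognormal retention model,
    Se = Phi(ln(hm/h)/sigma), inverted for h."""
    return hm * math.exp(sigma * _ND.inv_cdf(1.0 - Se))

def krn(Se, hm=1.0, sigma=1.0, alpha=2.0, beta=1.0, eta=2.0, n=2000):
    """Generalized nonwetting relative permeability sketch:
    krn = (1 - Se)^eta * [ I(Se, 1) / I(0, 1) ]^alpha,  I = integral of
    h^(-beta) dS.  (alpha, beta, eta) plays the same role as the paper's
    effective triplets; values here are placeholders."""
    eps = 1e-6                     # keep clear of the diverging tails
    def integral(a, b):
        a, b = max(a, eps), min(b, 1.0 - eps)
        if b <= a:
            return 0.0
        step = (b - a) / n
        total = 0.0
        for i in range(n):        # trapezoidal rule over saturation
            f0 = kosugi_head(a + i * step, hm, sigma) ** (-beta)
            f1 = kosugi_head(a + (i + 1) * step, hm, sigma) ** (-beta)
            total += 0.5 * (f0 + f1) * step
        return total
    full = integral(0.0, 1.0)
    return (1.0 - Se) ** eta * (integral(Se, 1.0) / full) ** alpha
```

The nonwetting permeability falls from 1 toward 0 as water saturation rises, since the integral runs over the pore space still occupied by the nonwetting phase.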
Specialized Knowledge Representation and the Parameterization of Context.
Faber, Pamela; León-Araúz, Pilar
2016-01-01
Though instrumental in numerous disciplines, context has no universally accepted definition. In specialized knowledge resources it is timely and necessary to parameterize context with a view to more effectively facilitating knowledge representation, understanding, and acquisition, the main aims of terminological knowledge bases. This entails distinguishing different types of context as well as how they interact with each other. This is not a simple objective to achieve despite the fact that specialized discourse does not have as many contextual variables as general language (e.g., figurative meaning, irony). Even in specialized text, context is an extremely complex concept. In fact, contextual information can be specified in terms of scope or according to the type of information conveyed. It can be a textual excerpt or a whole document; a pragmatic convention or a whole culture; a concrete situation or a prototypical scenario. Although these versions of context are useful for the users of terminological resources, such resources rarely support context modeling. In this paper, we propose a taxonomy of context primarily based on scope (local and global) and further divided into syntactic, semantic, and pragmatic facets. These facets cover the specification of different types of terminological information, such as predicate-argument structure, collocations, semantic relations, term variants, grammatical and lexical cohesion, communicative situations, subject fields, and cultures. PMID:26941674
Sensitivity and Uncertainty in Detonation Shock Dynamics Parameterization
NASA Astrophysics Data System (ADS)
Chiquete, Carlos; Short, Mark; Jackson, Scott
2013-06-01
Detonation shock dynamics (DSD) is the timing component of an advanced programmed-burn model of detonation propagation in high explosives (HE). In DSD theory, the detonation-driving zone is replaced with a propagating surface whose normal velocity is a function of the local surface curvature, the so-called Dn - κ relation for the HE. This relation is calibrated by assuming a functional form relating Dn and κ, and then fitting the function parameters by minimizing a weighted error function of residuals based on shock-shape curves and a diameter-effect curve. In general, for a given HE, the greater the available shock-shape data at different rate-stick radii, the less the uncertainty in the DSD fit. For a wide range of HEs, however, no shock-shape data are available, and DSD calibrations must be based on diameter-effect data alone. With such limited data, potentially large variations in the DSD parameters can occur that still fit the diameter-effect curve to within a given residual error. We explore uncertainty issues in DSD parameterization when limited calibration data are available, and the implications of the resulting sensitivities in timing, highlighting differences between ideal, insensitive, and non-ideal HEs such as Cyclotol, IMX-104 and ANFO.
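The calibration step described above, fitting an assumed Dn(κ) form by minimizing a weighted residual error, can be sketched with synthetic data. The functional form, units, and parameter values below are all hypothetical; real calibrations recover the diameter-effect curve through the full DSD eigenvalue problem:

```python
def dn_of_kappa(kappa, d_cj, a, b):
    """Assumed Dn(kappa) form: Dn = D_CJ * (1 - a*kappa / (1 + b*kappa)).
    Illustrative only; production DSD calibrations use HE-specific forms."""
    return d_cj * (1.0 - a * kappa / (1.0 + b * kappa))

# Synthetic "calibration data" generated from known parameters, then refit
# by brute-force minimization of a weighted sum of squared residuals.
true = dict(d_cj=8.8, a=0.5, b=2.0)          # hypothetical values
kappas = [0.01 * i for i in range(1, 40)]
data = [dn_of_kappa(k, **true) for k in kappas]
weights = [1.0] * len(kappas)                # uniform weighting here

best, best_err = None, float("inf")
for a_try in [0.30 + 0.02 * i for i in range(21)]:     # 0.30 .. 0.70
    for b_try in [1.0 + 0.1 * j for j in range(21)]:   # 1.0 .. 3.0
        err = sum(w * (dn_of_kappa(k, true["d_cj"], a_try, b_try) - d) ** 2
                  for k, d, w in zip(kappas, data, weights))
        if err < best_err:
            best, best_err = (a_try, b_try), err
```

With noise-free synthetic data the grid search recovers the generating parameters; with sparse diameter-effect data alone, many (a, b) pairs can reach comparably small residuals, which is exactly the uncertainty the abstract discusses.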
Fire Detection and Parameterization with Msg-Seviri Sensor
NASA Astrophysics Data System (ADS)
Calle, A.; Casanova, J. L.; Moclán, C.; Romo, A.; Fraile, S.
2006-08-01
The detection of forest fires and the determination of their parameters have traditionally been carried out by polar-orbiting sensors, mainly AVHRR, (A)ATSR, BIRD and MODIS. However, their temporal resolution prevents them from operating in real time. The new geostationary sensors, by contrast, have very appropriate capacities for observing the Earth and monitoring forest fires, as is being demonstrated. GOES, MSG and MTSAT are already operational with revisit times under 30 minutes (15 minutes for MSG, the sensor considered in this paper), and they have led the international community to consider that a global real-time observation network may become a reality. The implementation of this network is the aim of the Global Observations of Forest Cover and Land Cover Dynamics (GOFC/GOLD) Fire Mapping and Monitoring program, focused internationally on decision-making for Global Change research. In this paper, real-time operation of the MSG-SEVIRI sensor over the Iberian Peninsula is carried out. Its capacity to detect hot forest fires smaller than 0.3 ha at Mediterranean latitudes has been analysed. Concerning fire parameterization, two topics are also analysed: the possibility of using the SWIR spectral channel to replace the MIR channel in saturation situations, and the dependence of the derived fire parameters on the problem of resampled pixels.
Population models for passerine birds: structure, parameterization, and analysis
Noon, B.R.; Sauer, J.R.; McCullough, D.R.; Barrett, R.H.
1992-01-01
Population models have great potential as management tools, as they use information about the life history of a species to summarize estimates of fecundity and survival into a description of population change. Models provide a framework for projecting future populations, determining the effects of management decisions on future population dynamics, evaluating extinction probabilities, and addressing a variety of questions of ecological and evolutionary interest. Even when insufficient information exists to allow complete identification of the model, the modelling procedure is useful because it forces the investigator to consider the life history of the species when determining what parameters should be estimated from field studies, and it provides a context for evaluating the relative importance of demographic parameters. Models have been little used in the study of the population dynamics of passerine birds because of: (1) widespread misunderstandings of the model structures and parameterizations, (2) a lack of knowledge of the life histories of many species, (3) difficulties in obtaining statistically reliable estimates of demographic parameters for most passerine species, and (4) confusion about functional relationships among demographic parameters. As a result, studies of passerine demography are often designed inappropriately and fail to provide essential data. We review appropriate models for passerine bird populations and illustrate their possible uses in evaluating the effects of management or other environmental influences on population dynamics. We identify parameters that must be estimated from field data, briefly review existing statistical methods for obtaining valid estimates, and evaluate the present status of knowledge of these parameters.
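The class of model reviewed here can be made concrete with a two-stage (first-year/adult) projection matrix. The vital rates below are hypothetical passerine-like values chosen only to show how fecundity and survival estimates combine into an asymptotic growth rate:

```python
# Two-stage projection: n_{t+1} = A n_t with A = [[F1, FA], [S1, SA]].
# Hypothetical vital rates (female fledglings per female, annual survival):
F1, FA = 1.2, 2.0    # fecundity of first-year and adult females
S1, SA = 0.3, 0.55   # first-year and adult survival probabilities

def step(first_year, adult):
    """One year of the matrix model."""
    return (F1 * first_year + FA * adult,   # new first-year birds
            S1 * first_year + SA * adult)   # birds surviving to adulthood

# Asymptotic growth rate lambda = dominant eigenvalue, via power iteration:
n = (1.0, 1.0)
for _ in range(100):
    j, a = step(*n)
    n = (j / (j + a), a / (j + a))          # renormalize each year
lam = sum(step(*n))                         # sum(n) == 1, so this is lambda
```

With these illustrative rates lambda is about 1.72, a growing population; lambda < 1 would flag decline, which is the kind of management-relevant summary the abstract describes.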
Parameterizations in high resolution isopycnal wind-driven ocean models
NASA Astrophysics Data System (ADS)
Jensen, T. G.; Randall, D. A.
1994-01-01
For the Computer Hardware, Advanced Mathematics and Model Physics (CHAMMP) project, the development of a new multilayer ocean model based on the hydrodynamic FSU Indian Ocean model was proposed. The new model will include prognostic temperature and salinity and will be coded for massively parallel machines. Other specific objectives for the proposed research were to: incorporate an oceanic mixed layer on top of the isopycnal deep layers; implement a positive-definite advection scheme; determine the effects of islands on large-scale flow; and investigate lateral boundary conditions for boundary-layer currents. The mixed layer model is proposed to be of a bulk type with prognostic equations for temperature and salinity. Development of parallel code will be done in cooperation with other CHAMMP participants, mainly the ocean modelling group at LANL. The main objective is model development, while the application is to determine the influence and parameterization of narrow flows along continents and through chains of small islands on the large-scale oceanic circulation. Test runs with artificial wind stress and heat flux will be used to determine model stability, performance, and optimization for the new model configuration. Tests will include western boundary currents, coastal upwelling, and equatorial dynamics. This report discusses project progress for the period January 1, 1993 through December 31, 1993.
Systematic multiscale parameterization of heterogeneous elastic network models of proteins.
Lyman, Edward; Pfaendtner, Jim; Voth, Gregory A
2008-11-01
We present a method to parameterize heterogeneous elastic network models (heteroENMs) of proteins to reproduce the fluctuations observed in atomistic simulations. Because it is based on atomistic simulation, our method allows the development of elastic coarse-grained models of proteins under different conditions or in different environments. The method is simple and applicable to models at any level of coarse-graining. We validated the method in three systems. First, we computed the persistence length of ADP-bound F-actin, using a heteroENM model. The value of 6.1 +/- 1.6 μm is consistent with the experimentally measured value of 9.0 +/- 0.5 μm. We then compared our method to a uniform elastic network model and a realistic extension algorithm via covariance Hessian (REACH) model of carboxy myoglobin, and found that the heteroENM method more accurately predicted mean-square fluctuations of alpha-carbon atoms. Finally, we showed that the method captures critical differences in effective harmonic interactions for coarse-grained models of the N-terminal Bin/amphiphysin/Rvs (N-BAR) domain of amphiphysin, by building models of N-BAR both bound to a membrane and free in solution.
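The core idea, extracting heterogeneous effective spring constants from fluctuations observed in an atomistic trajectory, can be sketched in its simplest per-bond form: equipartition gives k = kT / var(r). The full heteroENM instead matches the whole fluctuation pattern self-consistently; the "trajectory" below is a synthetic Gaussian surrogate so the recovered constants can be checked:

```python
import math
import random

random.seed(7)
kT = 1.0   # energy units absorbed into the spring constants

# Surrogate "atomistic" data: samples of three coarse-grained bond lengths
# drawn around a mean of 1.5 with known, heterogeneous stiffnesses.
true_k = [5.0, 2.0, 8.0]
bonds = [[random.gauss(1.5, math.sqrt(kT / k)) for _ in range(20000)]
         for k in true_k]

def fit_spring(samples):
    """Effective harmonic constant from the equipartition relation
    k = kT / var(r), the per-bond essence of fluctuation matching."""
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    return kT / var

fitted = [fit_spring(b) for b in bonds]   # should recover true_k closely
```

Recovering different constants for different contacts is what makes the network "heterogeneous"; a uniform ENM would assign all three bonds the same stiffness.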
Factors influencing the parameterization of tropical anvils within GCMs
Bradley, M.M.; Chin, H.N.S.
1994-03-01
The overall goal of this project is to improve the representation of anvil clouds and their effects in general circulation models (GCMs). We have concentrated on an important portion of the overall goal: the evolution of cumulus-generated anvil clouds and their effects on the large-scale environment. Because of the large range of spatial and temporal scales involved, we have been using a multi-scale approach. For the early-time generation and development of the cirrus anvil we are using a cloud-scale model with a horizontal resolution of 1-2 kilometers, while for the transport of anvils by the large-scale flow we are using a mesoscale model with a horizontal resolution of 10-40 kilometers. The eventual goal is to use the information obtained from these simulations, together with available observations, to develop an improved cloud parameterization for use in GCMs. The cloud-scale simulation of a midlatitude squall line case and the mesoscale study of a tropical anvil using an anvil generator were presented at the last ARM science team meeting. This paper concentrates on the cloud-scale study of a tropical squall line. Results are compared with those of its midlatitude counterparts to further our understanding of the formation mechanism of anvil clouds and the sensitivity of radiation to their optical properties.
Kuo-Nan Liou
2003-12-29
OAK-B135 (a) We developed a 3D radiative transfer model to simulate the transfer of solar and thermal infrared radiation in inhomogeneous cirrus clouds. The model utilized a diffusion approximation approach (four-term expansion in the intensity) employing Cartesian coordinates. The required single-scattering parameters, including the extinction coefficient, single-scattering albedo, and asymmetry factor, for input to the model, were parameterized in terms of the ice water content and mean effective ice crystal size. The incorporation of gaseous absorption in multiple scattering atmospheres was accomplished by means of the correlated k-distribution approach. In addition, the strong forward diffraction nature in the phase function was accounted for in each predivided spatial grid based on a delta-function adjustment. The radiation parameterization developed herein is applied to potential cloud configurations generated from GCMs to investigate broken clouds and cloud-overlapping effects on the domain-averaged heating rate. Cloud inhomogeneity plays an important role in the determination of flux and heating rate distributions. Clouds with maximum overlap tend to produce less heating than those with random overlap. Broken clouds show more solar heating as well as more IR cooling as compared to a continuous cloud field (Gu and Liou, 2001). (b) We incorporated a contemporary radiation parameterization scheme in the UCLA atmospheric GCM in collaboration with the UCLA GCM group. In conjunction with the cloud/radiation process studies, we developed a physically-based cloud cover formation scheme in association with radiation calculations. The model clouds were first vertically grouped in terms of low, middle, and high types. Maximum overlap was then used for each cloud type, followed by random overlap among the three cloud types. Fu and Liou's 1D radiation code with modification was subsequently employed for pixel-by-pixel radiation calculations in the UCLA GCM. We showed
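Part (a) above parameterizes single-scattering properties in terms of ice water content and mean effective crystal size. A common form of such a fit expresses ice-cloud optical depth through the ice water path and De; the coefficients below are Fu-Liou-style illustrative values, not the fits used in this project:

```python
def ice_optical_depth(iwp, d_e, a=-6.656e-3, b=3.686):
    """Shortwave ice-cloud optical depth parameterized as
    tau = IWP * (a + b / De), with IWP in g m^-2 and De in micrometers.
    Coefficients are illustrative Fu-Liou-style values (assumption)."""
    return iwp * (a + b / d_e)

# At fixed ice water path, smaller crystals give a larger optical depth,
# the key radiative sensitivity such parameterizations capture:
tau_small = ice_optical_depth(20.0, 25.0)    # 20 g/m^2, 25 um crystals
tau_large = ice_optical_depth(20.0, 100.0)   # same IWP, 100 um crystals
```

The same two inputs (IWC or IWP, and De) also drive fitted forms for single-scattering albedo and asymmetry factor in schemes of this class.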
Winter QPF Sensitivities to Snow Parameterizations and Comparisons to NASA CloudSat Observations
NASA Technical Reports Server (NTRS)
Molthan, Andrew; Haynes, John M.; Jedlovec, Gary J.; Lapenta, William M.
2009-01-01
Steady increases in computing power have allowed for numerical weather prediction models to be initialized and run at high spatial resolution, permitting a transition from larger scale parameterizations of the effects of clouds and precipitation to the simulation of specific microphysical processes and hydrometeor size distributions. Although still relatively coarse in comparison to true cloud resolving models, these high resolution forecasts (on the order of 4 km or less) have demonstrated value in the prediction of severe storm mode and evolution and are being explored for use in winter weather events. Several single-moment bulk water microphysics schemes are available within the latest release of the Weather Research and Forecast (WRF) model suite, including the NASA Goddard Cumulus Ensemble, which incorporate some assumptions in the size distribution of a small number of hydrometeor classes in order to predict their evolution, advection and precipitation within the forecast domain. Although many of these schemes produce similar forecasts of events on the synoptic scale, there are often significant differences in the details of precipitation and cloud cover, as well as the distribution of water mass among the constituent hydrometeor classes. Unfortunately, validating data for cloud resolving model simulations are sparse. Field campaigns require in-cloud measurements of hydrometeors from aircraft in coordination with extensive and coincident ground-based measurements. Radar remote sensing is utilized to detect the spatial coverage and structure of precipitation. Here, two radar systems characterize the structure of winter precipitation for comparison to equivalent features within a forecast model: a 3 GHz Weather Surveillance Radar-1988 Doppler (WSR-88D) based in Omaha, Nebraska, and the 94 GHz NASA CloudSat Cloud Profiling Radar, a spaceborne instrument and member of the afternoon or "A-Train" of polar orbiting satellites tasked with cataloguing global cloud
Kao, C.Y.J.; Bossert, J.E.; Winterkamp, J.; Lai, C.C.
1993-10-01
One of the objectives of the DOE ARM Program is to improve the parameterization of clouds in general circulation models (GCMs). The approach taken in this research is twofold. We first examine the behavior of cumulus parameterization schemes by comparing their performance against the results from explicit cloud simulations with state-of-the-art microphysics. This is conducted in a two-dimensional (2-D) configuration of an idealized convective system. We then apply the cumulus parameterization schemes to realistic three-dimensional (3-D) simulations over the western US for a case with an enormous amount of convection over an extended period of five days. In the 2-D idealized tests, cloud effects are parameterized in the "parameterization cases" with a coarse resolution, whereas each cloud is explicitly resolved in the "microphysics cases" with a much finer resolution. Thus, the capability of the parameterization schemes to reproduce the growth and life cycle of a convective system can be evaluated. These 2-D tests will form the basis for further 3-D realistic simulations with a model resolution equivalent to that of the next generation of GCMs. Two cumulus parameterizations are used in this research: the Arakawa-Schubert (A-S) scheme (Arakawa and Schubert, 1974) as used in Kao and Ogura (1987), and the Kuo scheme (Kuo, 1974) as used in Tremback (1990). The numerical model used in this research is the Regional Atmospheric Modeling System (RAMS) developed at Colorado State University (CSU).
Evaluation of Warm-Rain Microphysical Parameterizations in Cloudy Boundary Layer Transitions
NASA Astrophysics Data System (ADS)
Nelson, K.; Mechem, D. B.
2014-12-01
Common warm-rain microphysical parameterizations used for marine boundary layer (MBL) clouds are either tuned for specific cloud types (e.g., the Khairoutdinov and Kogan 2000 parameterization, "KK2000") or are altogether ill-posed (Kessler 1969). An ideal microphysical parameterization should be "unified" in the sense of being suitable across MBL cloud regimes that include stratocumulus, cumulus rising into stratocumulus, and shallow trade cumulus. The recent parameterization of Kogan (2013, "K2013") was formulated for shallow cumulus but has been shown in a large-eddy simulation environment to work quite well for stratocumulus as well. We report on our efforts to implement this parameterization in a regional forecast model (NRL COAMPS) and test it there. Results from K2013 and KK2000 are compared with the operational Kessler parameterization for a 5-day period of the VOCALS-REx field campaign, which took place over the southeast Pacific. We focus both on the relative performance of the three parameterizations and on how they compare to the VOCALS-REx observations from the NOAA R/V Ronald H. Brown, in particular estimates of boundary-layer depth, liquid water path (LWP), cloud base, and area-mean precipitation rate obtained from C-band radar.
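The KK2000 scheme referenced here parameterizes warm-rain process rates as power laws in cloud water and drop number; its commonly quoted autoconversion and accretion rates can be written directly:

```python
def kk2000_autoconversion(qc, nc):
    """Khairoutdinov & Kogan (2000) bulk autoconversion rate
    (kg kg^-1 s^-1): dqr/dt = 1350 * qc^2.47 * Nc^-1.79,
    with cloud water qc in kg/kg and drop number Nc in cm^-3."""
    return 1350.0 * qc ** 2.47 * nc ** -1.79

def kk2000_accretion(qc, qr):
    """KK2000 accretion of cloud water by rain: dqr/dt = 67 * (qc*qr)^1.15,
    with qc and qr in kg/kg."""
    return 67.0 * (qc * qr) ** 1.15

# Typical stratocumulus values: qc = 0.5 g/kg, Nc = 100 cm^-3.
rate = kk2000_autoconversion(5.0e-4, 100.0)
```

The strong (negative) exponent on Nc is what makes the scheme aerosol-aware, and the steep qc exponent is why subgrid variability in cloud water matters so much when such rates are embedded in coarser models.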
Thayer-Calder, K.; Gettelman, A.; Craig, C.; Goldhaber, S.; Bogenschutz, P. A.; Chen, C.-C.; Morrison, H.; Höft, J.; Raut, E.; Griffin, B. M.; et al
2015-12-01
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Model computational expense is estimated, and sensitivity to the number of subcolumns is investigated. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in shortwave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation.
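The Monte Carlo interface described above can be sketched: draw subcolumn samples from an assumed subgrid PDF, feed each sample to the microphysics, and average the rates back to the grid box. The lognormal PDF and KK2000-style rate below are stand-ins for the actual sampled distributions and scheme:

```python
import math
import random

random.seed(0)

def autoconversion(qc, nc=100.0):
    """KK2000-style warm-rain rate used here as the 'microphysics scheme'
    that each subcolumn sample is fed into (illustrative stand-in)."""
    return 1350.0 * qc ** 2.47 * nc ** -1.79 if qc > 0 else 0.0

def grid_mean_rate(qc_mean, rel_std, n_sub=512):
    """Monte Carlo subcolumn estimate of the grid-mean process rate.
    Subgrid cloud water is drawn from a lognormal whose mean matches the
    grid-box value (assumed PDF; a unified scheme supplies the real one)."""
    sigma = math.sqrt(math.log(1.0 + rel_std ** 2))
    mu = math.log(qc_mean) - 0.5 * sigma ** 2      # preserves the mean
    samples = [math.exp(random.gauss(mu, sigma)) for _ in range(n_sub)]
    return sum(autoconversion(q) for q in samples) / n_sub

mean_qc = 3.0e-4
sampled = grid_mean_rate(mean_qc, rel_std=0.8)  # PDF-aware estimate
naive = autoconversion(mean_qc)                 # grid-mean-only estimate
```

Because the process rate is a convex function of qc, the subcolumn average exceeds the rate computed from the grid-mean value alone (Jensen's inequality), which is precisely why such an interface changes the simulated climate.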
Thayer-Calder, K.; Gettelman, A.; Craig, C.; Goldhaber, S.; Bogenschutz, P. A.; Chen, C.-C.; Morrison, H.; Höft, J.; Raut, E.; Griffin, B. M.; et al
2015-06-30
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and an investigation of sensitivity to the number of subcolumns.
NASA Astrophysics Data System (ADS)
Khan, T.; Agnan, Y.; Obrist, D.; Selin, N. E.; Urban, N. R.; Wu, S.; Perlinger, J. A.
2015-12-01
Inadequate representation of process-based mechanisms of exchange behavior of elemental mercury (Hg0) and decoupled treatment of deposition and emission are two major limitations of parameterizations of atmosphere-surface exchange flux commonly incorporated into chemical transport models (CTMs). Of the nineteen CTMs we reviewed for Hg0 exchange (ten global, nine regional), eight global and seven regional models treat Hg0 deposition and emission separately, two global models include no parameterization to account for emission, and the remaining two regional models include coupled deposition and emission parameterizations (i.e., net atmosphere-surface exchange). The performance of atmosphere-surface exchange parameterizations in CTMs depends on parameterization uncertainty (in terms of both accuracy and precision) and feasibility of implementation. We provide a comparison of the performance of three available parameterizations of net atmosphere-surface exchange. To evaluate parameterization accuracy, we compare predicted exchange fluxes to field measurements conducted over a variety of surfaces compiled in a recently developed global database of terrestrial Hg0 surface-atmosphere exchange flux measurements. To assess precision, we estimate the sensitivity of predicted fluxes to the imprecision in parameter input values, and compare this sensitivity to that derived from analysis of the global Hg0 flux database. Feasibility of implementation is evaluated according to the availability of input parameters, computational requirements, and the adequacy of uncertainty representation. Based on this assessment, we provide suggestions for improved treatment of Hg0 net exchange processes in CTMs.
[Formula: see text] regularity properties of singular parameterizations in isogeometric analysis.
Takacs, T; Jüttler, B
2012-11-01
Isogeometric analysis (IGA) is a numerical simulation method which is directly based on the NURBS-based representation of CAD models. It exploits the tensor-product structure of 2- or 3-dimensional NURBS objects to parameterize the physical domain. Hence the physical domain is parameterized with respect to a rectangle or to a cube. Consequently, singularly parameterized NURBS surfaces and NURBS volumes are needed in order to represent non-quadrangular or non-hexahedral domains without splitting, thereby producing a very compact and convenient representation. The Galerkin projection introduces finite-dimensional spaces of test functions in the weak formulation of partial differential equations. In particular, the test functions used in isogeometric analysis are obtained by composing the inverse of the domain parameterization with the NURBS basis functions. In the case of singular parameterizations, however, some of the resulting test functions do not necessarily fulfill the required regularity properties. Consequently, numerical methods for the solution of partial differential equations cannot be applied properly. We discuss the regularity properties of the test functions. For one- and two-dimensional domains we consider several important classes of singularities of NURBS parameterizations. For specific cases we derive additional conditions which guarantee the regularity of the test functions. In addition we present a modification scheme for the discretized function space in case of insufficient regularity. It is also shown how these results can be applied for computational domains in higher dimensions that can be parameterized via sweeping.
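The singular parameterizations in question can be illustrated with the simplest case: a bilinear patch whose top edge is collapsed to a point, mapping the unit square onto a triangle. The Jacobian determinant vanishes at the collapsed edge, which is exactly where test functions composed with the inverse map can lose regularity. A small numerical check (illustrative polygonal geometry, not NURBS):

```python
def collapsed_patch(u, v):
    """Bilinear patch with the v = 1 edge collapsed to a single apex,
    parameterizing a triangle over the unit square, i.e. the simplest
    singular parameterization of a non-quadrangular domain."""
    p00, p10, apex = (0.0, 0.0), (1.0, 0.0), (0.5, 1.0)
    x = (1 - v) * ((1 - u) * p00[0] + u * p10[0]) + v * apex[0]
    y = (1 - v) * ((1 - u) * p00[1] + u * p10[1]) + v * apex[1]
    return x, y

def jacobian_det(u, v, h=1e-6):
    """Central-difference Jacobian determinant of the parameterization."""
    xu = [(a - b) / (2 * h) for a, b in zip(collapsed_patch(u + h, v),
                                            collapsed_patch(u - h, v))]
    xv = [(a - b) / (2 * h) for a, b in zip(collapsed_patch(u, v + h),
                                            collapsed_patch(u, v - h))]
    return xu[0] * xv[1] - xu[1] * xv[0]
```

For this patch the determinant is 1 - v analytically, so it degenerates linearly toward the apex; gradients of the composed test functions pick up a corresponding 1/(1 - v) factor, which is the source of the regularity conditions the paper derives.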
Phillips, T J; Potter, G L; Williamson, D L; Cederwall, R T; Boyle, J S; Fiorino, M; Hnilo, J J; Olson, J G; Xie, S; Yio, J J
2004-05-06
To significantly improve the simulation of climate by general circulation models (GCMs), systematic errors in representations of relevant processes must first be identified, and then reduced. This endeavor demands that the GCM parameterizations of unresolved processes, in particular, should be tested over a wide range of time scales, not just in climate simulations. Thus, a numerical weather prediction (NWP) methodology for evaluating model parameterizations and gaining insights into their behavior may prove useful, provided that suitable adaptations are made for implementation in climate GCMs. This method entails the generation of short-range weather forecasts by a realistically initialized climate GCM, and the application of six-hourly NWP analyses and observations of parameterized variables to evaluate these forecasts. The behavior of the parameterizations in such a weather-forecasting framework can provide insights on how these schemes might be improved, and modified parameterizations then can be tested in the same framework. In order to further this method for evaluating and analyzing parameterizations in climate GCMs, the U.S. Department of Energy is funding a joint venture of its Climate Change Prediction Program (CCPP) and Atmospheric Radiation Measurement (ARM) Program: the CCPP-ARM Parameterization Testbed (CAPT). This article elaborates the scientific rationale for CAPT, discusses technical aspects of its methodology, and presents examples of its implementation in a representative climate GCM.
NASA Astrophysics Data System (ADS)
Thayer-Calder, K.; Gettelman, A.; Craig, C.; Goldhaber, S.; Bogenschutz, P. A.; Chen, C.-C.; Morrison, H.; Höft, J.; Raut, E.; Griffin, B. M.; Weber, J. K.; Larson, V. E.; Wyant, M. C.; Wang, M.; Guo, Z.; Ghan, S. J.
2015-12-01
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Model computational expense is estimated, and sensitivity to the number of subcolumns is investigated. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in shortwave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation.
NASA Astrophysics Data System (ADS)
Thayer-Calder, K.; Gettelman, A.; Craig, C.; Goldhaber, S.; Bogenschutz, P. A.; Chen, C.-C.; Morrison, H.; Höft, J.; Raut, E.; Griffin, B. M.; Weber, J. K.; Larson, V. E.; Wyant, M. C.; Wang, M.; Guo, Z.; Ghan, S. J.
2015-06-01
Most global climate models parameterize separate cloud types using separate parameterizations. This approach has several disadvantages, including obscure interactions between parameterizations and inaccurate triggering of cumulus parameterizations. Alternatively, a unified cloud parameterization uses one equation set to represent all cloud types. Such cloud types include stratiform liquid and ice cloud, shallow convective cloud, and deep convective cloud. Vital to the success of a unified parameterization is a general interface between clouds and microphysics. One such interface involves drawing Monte Carlo samples of subgrid variability of temperature, water vapor, cloud liquid, and cloud ice, and feeding the sample points into a microphysics scheme. This study evaluates a unified cloud parameterization and a Monte Carlo microphysics interface that has been implemented in the Community Atmosphere Model (CAM) version 5.3. Results describing the mean climate and tropical variability from global simulations are presented. The new model shows a degradation in precipitation skill but improvements in short-wave cloud forcing, liquid water path, long-wave cloud forcing, precipitable water, and tropical wave simulation. Also presented are estimations of computational expense and investigation of sensitivity to number of subcolumns.
Sorg, T.J.
1991-01-01
The U.S. Environmental Protection Agency proposed new and revised regulations on radionuclide contaminants in drinking water in June 1991. During the 1980s, the Drinking Water Research Division, USEPA, conducted a research program to evaluate various technologies to remove radium, uranium, and radon from drinking water. The research consisted of laboratory and field studies conducted by the USEPA, universities, and consultants. This paper summarizes the results of the most significant completed projects and also presents general information on the chemistry of the three radionuclides. The information presented indicates that the most practical treatment methods for radium are ion exchange, lime-soda softening, and reverse osmosis; the methods tested for radon are aeration and granular activated carbon; and the methods for uranium are anion exchange and reverse osmosis.
Cross-Section Parameterizations for Pion and Nucleon Production From Negative Pion-Proton Collisions
NASA Technical Reports Server (NTRS)
Norbury, John W.; Blattnig, Steve R.; Norman, Ryan; Tripathi, R. K.
2002-01-01
Ranft has provided parameterizations of Lorentz-invariant differential cross sections for pion and nucleon production in pion-proton collisions, which are compared here to some recent data. The Ranft parameterizations are then numerically integrated to form spectral and total cross sections. These numerical integrations are further parameterized to provide formulas for spectral and total cross sections suitable for use in radiation transport codes. The reactions analyzed have charged pions in the initial state and both charged and neutral pions in the final state.
Improvement of the GEOS-5 AGCM upon Updating the Air-Sea Roughness Parameterization
NASA Technical Reports Server (NTRS)
Garfinkel, C. I.; Molod, A.; Oman, L. D.; Song, I.-S.
2011-01-01
The impact of an air-sea roughness parameterization over the ocean that more closely matches recent observations of air-sea exchange is examined in the NASA Goddard Earth Observing System, version 5 (GEOS-5) atmospheric general circulation model. Surface wind biases in the GEOS-5 AGCM are decreased by up to 1.2m/s. The new parameterization also has implications aloft as improvements extend into the stratosphere. Many other GCMs (both for operational weather forecasting and climate) use a similar class of parameterization for their air-sea roughness scheme. We therefore expect that results from GEOS-5 are relevant to other models as well.
Towards a parameterization of convective wind gusts in Sahel
NASA Astrophysics Data System (ADS)
Largeron, Yann; Guichard, Françoise; Bouniol, Dominique; Couvreux, Fleur; Birch, Cathryn; Beucher, Florent
2014-05-01
] who focused on the wet tropical Pacific region and linked wind gusts to convective precipitation rates alone, here we also analyse the subgrid wind distribution during convective events and quantify the statistical moments (variance, skewness, and kurtosis) in terms of mean wind speed and convective indices such as DCAPE. The next step of this work will be to formulate a parameterization of the cold-pool convective gust from those probability density functions and from analytical formulae obtained from basic energy budget models. References: [Carslaw et al., 2010] A review of natural aerosol interactions and feedbacks within the Earth system. Atmospheric Chemistry and Physics, 10(4):1701-1737. [Engelstaedter et al., 2006] North African dust emissions and transport. Earth-Science Reviews, 79(1):73-100. [Knippertz and Todd, 2012] Mineral dust aerosols over the Sahara: Meteorological controls on emission and transport and implications for modeling. Reviews of Geophysics, 50(1). [Marsham et al., 2011] The importance of the representation of deep convection for modeled dust-generating winds over West Africa during summer. Geophysical Research Letters, 38(16). [Marticorena and Bergametti, 1995] Modeling the atmospheric dust cycle: 1. Design of a soil-derived dust emission scheme. Journal of Geophysical Research, 100(D8):16415-16430. [Menut, 2008] Sensitivity of hourly Saharan dust emissions to NCEP and ECMWF modeled wind speed. Journal of Geophysical Research: Atmospheres (1984-2012), 113(D16). [Pierre et al., 2012] Impact of vegetation and soil moisture seasonal dynamics on dust emissions over the Sahel. Journal of Geophysical Research: Atmospheres (1984-2012), 117(D6). [Redelsperger et al., 2000] A parameterization of mesoscale enhancement of surface fluxes for large-scale models. Journal of Climate, 13(2):402-421.
NASA Technical Reports Server (NTRS)
Ricks, Douglas W.
1993-01-01
There are a number of sources of scattering in binary optics: etch depth errors, line edge errors, quantization errors, roughness, and the binary approximation to the ideal surface. These sources of scattering can be systematic (deterministic) or random. In this paper, scattering formulas for both systematic and random errors are derived using Fourier optics. These formulas can be used to explain the results of scattering measurements and computer simulations.
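Among the error sources listed, quantization error has a compact closed-form scalar result that is easy to sanity-check numerically. The sinc-squared efficiency below is the standard textbook formula for a linear (blazed) phase profile quantized to N levels, not this paper's own Fourier-optics derivation; the scattered fraction is 1 − η.

```python
import math

def quantization_efficiency(n_levels):
    """First-order diffraction efficiency of an N-level quantized profile.

    Standard scalar-theory result for a blazed (linear) phase profile
    approximated by N discrete levels: eta = sinc^2(1/N), with
    sinc(x) = sin(pi*x)/(pi*x). The missing power 1 - eta is scattered
    into spurious diffraction orders.
    """
    x = math.pi / n_levels
    return (math.sin(x) / x) ** 2

# Doubling the number of levels rapidly reduces quantization scatter:
# 2 levels -> ~40.5% efficiency, 8 levels -> ~95.0%, 16 levels -> ~98.7%.
```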
Approximations for photoelectron scattering
NASA Astrophysics Data System (ADS)
Fritzsche, V.
1989-04-01
The errors of several approximations in the theoretical approach of photoelectron scattering are systematically studied, in tungsten, for electron energies ranging from 10 to 1000 eV. The large inaccuracies of the plane-wave approximation (PWA) are substantially reduced by means of effective scattering amplitudes in the modified small-scattering-centre approximation (MSSCA). The reduced angular momentum expansion (RAME) is so accurate that it allows reliable calculations of multiple-scattering contributions for all the energies considered.
Cirrus cloud model parameterizations: Incorporating realistic ice particle generation
NASA Technical Reports Server (NTRS)
Sassen, Kenneth; Dodd, G. C.; Starr, David OC.
1990-01-01
Recent cirrus cloud modeling studies have involved the application of a time-dependent, two-dimensional Eulerian model with generalized cloud microphysical parameterizations drawn from experimental findings. For computing the ice versus vapor phase changes, the ice mass content is linked to the maintenance of a relative humidity with respect to ice (RHI) of 105 percent; ice growth occurs both through the introduction of new particles and through the growth of existing particles. In a simplified cloud model designed to investigate the basic role of various physical processes in the growth and maintenance of cirrus clouds, these parametric relations are justifiable. In comparison, the one-dimensional cloud microphysical model recently applied to evaluating the nucleation and growth of ice crystals in cirrus clouds explicitly treated populations of haze droplets, cloud droplets, and ice crystals. Although these two modeling approaches are clearly incompatible, the goal of the present numerical study is to develop a parametric treatment of new ice particle generation, on the basis of detailed microphysical model findings, for incorporation into improved cirrus growth models. One example is the relation between temperature and the relative humidity required to generate ice crystals from ammonium sulfate haze droplets, whose probability of freezing through the homogeneous nucleation mode is a combined function of time and droplet molality, volume, and temperature. As an example of this approach, the results of cloud microphysical simulations are presented showing the rather narrow domain in the temperature/humidity field where new ice crystals can be generated. The microphysical simulations not only point out the need for detailed CCN studies at cirrus altitudes and haze droplet measurements within cirrus clouds, but also suggest that a relatively simple treatment of ice particle generation, one that includes cloud chemistry, can be incorporated into cirrus cloud growth models.
Parameterization of a ruminant model of phosphorus digestion and metabolism.
Feng, X; Knowlton, K F; Hanigan, M D
2015-10-01
The objective of the current work was to parameterize the digestive elements of the model of Hill et al. (2008) using data collected from animals that were ruminally, duodenally, and ileally cannulated, thereby providing a better understanding of the digestion and metabolism of P fractions in growing and lactating cattle. The model of Hill et al. (2008) was fitted and evaluated for adequacy using the data from 6 animal studies. We hypothesized that sufficient data would be available to estimate P digestion and metabolism parameters and that these parameters would be sufficient to derive P bioavailabilities of a range of feed ingredients. Inputs to the model were dry matter intake; total feed P concentration (fPtFd); phytate (Pp), organic (Po), and inorganic (Pi) P as fractions of total P (fPpPt, fPoPt, fPiPt); microbial growth; amount of Pi and Pp infused into the omasum or ileum; milk yield; and BW. The available data were sufficient to derive all model parameters of interest. The final model predicted that given 75 g/d of total P input, the total-tract digestibility of P was 40.8%, Pp digestibility in the rumen was 92.4%, and in the total-tract was 94.7%. Blood P recycling to the rumen was a major source of Pi flow into the small intestine, and the primary route of excretion. A large proportion of Pi flowing to the small intestine was absorbed; however, additional Pi was absorbed from the large intestine (3.15%). Absorption of Pi from the small intestine was regulated, and given the large flux of salivary P recycling, the effective fractional small intestine absorption of available P derived from the diet was 41.6% at requirements. Milk synthesis used 16% of total absorbed P, and less than 1% was excreted in urine. The resulting model could be used to derive P bioavailabilities of commonly used feedstuffs in cattle production.
Evapotranspiration parameterizations at a grass site in Florida, USA
Rizou, M.; Sumner, David M.; Nnadi, F.
2007-01-01
Although grasslands account for about 40% of the ice-free global terrestrial land cover, their contribution to the surface exchanges of energy and water at local and regional scales remains uncertain. In this study, the sensitivity of evapotranspiration (ET) and other energy fluxes to wetness variables, namely the volumetric Soil Water Content (SWC) and the Antecedent Precipitation Index (API), was investigated over a non-irrigated grass site in Central Florida, USA (28.049 N, 81.400 W). Eddy correlation and soil water content measurements were taken by the USGS (U.S. Geological Survey) at the grass study site, within 100 m of a SFWMD (South Florida Water Management District) weather station. The soil is composed of fine sands and is mainly covered by Paspalum notatum (bahia grass). Variable soil wetness conditions, with API bounds of about 2 to 160 mm and water table levels of 0.03 to 1.22 m below ground surface, were observed throughout the year 2004. The Bowen ratio averaged about 1, with values larger than 2 during a few dry days. The daytime average ET was classified into two stages, a first stage (energy-limited) and a second stage (water-limited), based on water availability. The critical values of API and SWC were found to be about 56 mm and 0.17, respectively, the latter being approximately 33% of the SWC at saturation. The ET values estimated by the simple Priestley-Taylor (PT) method were compared to the actual values. The PT coefficient varied from a low bound of approximately 0.4 to a peak of 1.21. Simple relationships for the PT empirical factor were employed in terms of SWC and API to improve the accuracy of the second-stage estimates. The results of the ET parameterizations closely match eddy-covariance flux values on daily and longer time steps.
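The Priestley-Taylor estimate discussed above is simple enough to state as code. This sketch uses the FAO-56 form of the saturation vapor pressure slope and conventional constants; the default alpha of 1.26 is the classic wet-surface coefficient, whereas the study found site values between about 0.4 and 1.21, so in practice alpha would be supplied from a fitted SWC/API relationship.

```python
import math

def priestley_taylor_et(rn, g, t_air, alpha=1.26):
    """Daily evapotranspiration (mm/day) from the Priestley-Taylor method.

    rn, g : net radiation and soil heat flux (MJ m-2 day-1)
    t_air : mean air temperature (deg C)
    alpha : PT coefficient (dimensionless)
    """
    # slope of the saturation vapor pressure curve (kPa / deg C), FAO-56 form
    es = 0.6108 * math.exp(17.27 * t_air / (t_air + 237.3))
    delta = 4098.0 * es / (t_air + 237.3) ** 2
    gamma = 0.066  # psychrometric constant (kPa / deg C), approximate
    lam = 2.45     # latent heat of vaporization (MJ / kg)
    le = alpha * delta / (delta + gamma) * (rn - g)  # latent heat flux
    return le / lam  # mm/day, since 1 mm of water = 1 kg m-2
```

Lowering alpha toward the dry-season values reported in the study scales the estimate down proportionally, which is how the SWC- and API-dependent corrections act.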
Parameterization of small intestinal water volume using PBPK modeling.
Maharaj, Anil; Fotaki, Nikoletta; Edginton, Andrea
2015-01-25
To facilitate accurate predictions of oral drug disposition, mechanistic absorption models require optimal parameterization. Furthermore, parameters should maintain a biological basis to establish confidence in model predictions. This study will serve to calculate an optimal parameter value for small intestinal water volume (SIWV) using a model-based approach. To evaluate physiologic fidelity, derived volume estimates will be compared to experimentally-based SIWV determinations. A compartmental absorption and transit (CAT) model, created in Matlab-Simulink®, was integrated with a whole-body PBPK model, developed in PK-SIM 5.2®, to provide predictions of systemic drug disposition. SIWV within the CAT model was varied between 52.5mL and 420mL. Simulations incorporating specific SIWV values were compared to pharmacokinetic data from compounds exhibiting solubility induced non-proportional changes in absorption using absolute average fold-error. Correspondingly, data pertaining to oral administration of acyclovir and chlorothiazide were utilized to derive estimates of SIWV. At 400mg, a SIWV of 116mL provided the best estimates of acyclovir plasma concentrations. A similar SIWV was found to best depict the urinary excretion pattern of chlorothiazide at a dose of 100mg. In comparison, experimentally-based estimates of SIWV within adults denote a central tendency between 86 and 167mL. The derived SIWV (116mL) represents the optimal parameter value within the context of the developed CAT model. This result demonstrates the biological basis of the widely utilized CAT model as in vivo SIWV determinations correspond with model-based estimates.
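The CAT skeleton underlying the study can be sketched as a minimal transit chain. The solubility/SIWV limitation that the study actually calibrates is omitted here; this is only the classic n-compartment first-order transit-and-absorption structure (with the usual 7 compartments and a small-intestinal transit time of about 3.32 h), integrated by forward Euler.

```python
def cat_fraction_absorbed(ka, t_transit=3.32, n=7, dt=1e-3, t_end=40.0):
    """Fraction of dose absorbed from an n-compartment CAT chain.

    Drug moves through n sequential small-intestine compartments with
    first-order transit rate kt = n / t_transit (1/h) and is absorbed
    from every compartment with first-order rate ka (1/h). Whatever
    leaves the last compartment unabsorbed passes to the colon.
    """
    kt = n / t_transit
    y = [1.0] + [0.0] * (n - 1)  # dose starts in the first compartment
    absorbed = 0.0
    for _ in range(int(t_end / dt)):
        absorbed += ka * sum(y) * dt          # absorption from all compartments
        dydt = [(kt * y[i - 1] if i > 0 else 0.0) - (kt + ka) * y[i]
                for i in range(n)]
        for i in range(n):
            y[i] += dydt[i] * dt              # forward-Euler update
    return absorbed
```

For this linear chain the numerical result can be checked against the closed form Fa = 1 − (1 + ka/kt)^(−n).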
A parameterization of nuclear track profiles in CR-39 detector
NASA Astrophysics Data System (ADS)
Azooz, A. A.; Al-Nia'emi, S. H.; Al-Jubbori, M. A.
2012-11-01
In this work, the empirical parameterization describing the alpha particles’ track depth in CR-39 detectors is extended to describe longitudinal track profiles against etching time for protons and alpha particles. MATLAB-based software is developed for this purpose. The software calculates and plots the depth, diameter, range, residual range, saturation time, and etch rate versus etching time. The software predictions are compared with other experimental data and with results of calculations using the original software, TRACK_TEST, developed for alpha track calculations. The software related to this work is freely downloadable and performs calculations for protons in addition to alpha particles. Program summary Program title: CR39 Catalog identifier: AENA_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENA_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Copyright (c) 2011, Aasim Azooz; BSD 2-clause license (redistribution in source and binary forms permitted with retention of the copyright notice and disclaimer; the software is provided “as is” without warranty).
Terahertz scattering by granular composite materials: An effective medium theory
NASA Astrophysics Data System (ADS)
Kaushik, Mayank; Ng, Brian W.-H.; Fischer, Bernd M.; Abbott, Derek
2012-01-01
Terahertz (THz) spectroscopy and imaging have emerged as important tools for the identification and classification of various substances, which exhibit absorption characteristics at distinct frequencies in the THz range. These spectral fingerprints can potentially be distorted or obscured by electromagnetic scattering caused by the granular nature of some substances. In this paper, we present THz time-domain transmission measurements of granular polyethylene powders in order to investigate an effective medium theory that yields a parameterized model, which reproduces the empirical measurements to good accuracy.
Parameterizing Aggregation Rates: Results of cold temperature ice-ash hydrometeor experiments
NASA Astrophysics Data System (ADS)
Courtland, L. M.; Dufek, J.; Mendez, J. S.; McAdams, J.
2014-12-01
Recent advances in the study of tephra aggregation have indicated that (i) far-field effects of tephra sedimentation are not adequately resolved without accounting for aggregation processes that preferentially remove the fine ash fraction of volcanic ejecta from the atmosphere as constituent pieces of larger particles, and (ii) the environmental conditions (e.g. humidity, temperature) prevalent in volcanic plumes may significantly alter the types of aggregation processes at work in different regions of the volcanic plume. The current research extends these findings to explore the role of ice-ash hydrometeor aggregation in various plume environments. Laboratory experiments utilizing an ice nucleation chamber allow us to parameterize tephra aggregation rates under the cold (0 to -50 °C) conditions prevalent in the upper regions of volcanic plumes. We consider the interaction of ice-coated tephra of variable ice thickness grown in a controlled environment. The ice-ash hydrometeors interact collisionally, and the interaction is recorded by a number of instruments, including high-speed video, to determine whether aggregation occurs. The electric charge on individual particles is examined before and after collision to examine the role of electrostatics in the aggregation process and to examine the charge exchange process. We are able to examine how sticking efficiency is related both to the relative abundance of ice on a particle and to the magnitude of the charge carried by the hydrometeor. We here present preliminary results of these experiments, the first to constrain the aggregation efficiency of ice-ash hydrometeors, a parameter that will allow tephra dispersion models to use near-real-time meteorological data to better forecast particle residence time in the atmosphere.
NASA Astrophysics Data System (ADS)
Maloney, Eric D.; Hartmann, Dennis L.
2001-05-01
The National Center for Atmospheric Research (NCAR) Community Climate Model, version 3.6 (CCM3) simulation of tropical intraseasonal variability in zonal winds and precipitation can be improved by implementing the microphysics of clouds with relaxed Arakawa-Schubert (McRAS) convection scheme of Sud and Walker. The default CCM3 convection scheme of Zhang and McFarlane produces intraseasonal variability in both zonal winds and precipitation that is much lower than is observed. The convection scheme of Hack produces high tropical intraseasonal zonal wind variability but no coherent convective variability at intraseasonal timescales and low wavenumbers. The McRAS convection scheme produces realistic variability in tropical intraseasonal zonal winds and improved intraseasonal variability in tropical precipitation, although the variability in precipitation is somewhat less than is observed. Intraseasonal variability in CCM3 with the McRAS scheme is highly sensitive to the parameterization of convective precipitation evaporation in unsaturated environmental air and unsaturated downdrafts. Removing these effects greatly reduces intraseasonal variability in the model. Convective evaporation processes in McRAS affect intraseasonal variability mainly through their time-mean effects and not through their variations. Convective rain evaporation and unsaturated downdrafts improve the modeled specific humidity and temperature climates of the Tropics and increase convection on the equator. Intraseasonal variability in CCM3 with McRAS is not improved by increasing the boundary layer relative humidity threshold for initiation of convection, contrary to the results of Wang and Schlesinger. In fact, intraseasonal variability is reduced for higher thresholds. The largest intraseasonal moisture variations during a model Madden-Julian oscillation life cycle occur above the boundary layer, and humidity variations within the boundary layer are small.
Nitrous Oxide Emissions from Biofuel Crops and Parameterization in the EPIC Biogeochemical Model
This presentation describes year 1 field measurements of N2O fluxes and crop yields, which are used to parameterize the EPIC biogeochemical model for the corresponding field site. Initial model simulations are also presented.
The response of the National Oceanic and Atmospheric Administration multilayer inferential dry deposition velocity model (NOAA-MLM) to error in meteorological inputs and model parameterization is reported. Monte Carlo simulations were performed to assess the uncertainty in NOA...
NASA Technical Reports Server (NTRS)
Genthon, Christophe; Le Treut, Herve; Sadourny, Robert; Jouzel, Jean
1990-01-01
A Charney-Branscome-based parameterization has been tested as a way of representing the eddy sensible heat transports missing in a zonally averaged dynamic model (ZADM) of the atmosphere. The ZADM used is a zonally averaged version of a general circulation model (GCM). The parameterized transports in the ZADM are gauged against the corresponding fluxes explicitly simulated in the GCM, using the same zonally averaged boundary conditions in both models. The Charney-Branscome approach neglects stationary eddies and transient barotropic disturbances and relies on a set of simplifying assumptions, including the linear approximation, to describe growing transient baroclinic eddies. Nevertheless, fairly satisfactory results are obtained when the parameterization is performed interactively with the model. Compared with noninteractive tests, a very efficient restoring feedback effect between the modeled zonal-mean climate and the parameterized meridional eddy transport is identified.
Following the examination and evaluation of 12 nucleation parameterizations presented in part 1, 11 of them representing binary, ternary, kinetic, and cluster‐activated nucleation theories are evaluated in the U.S. Environmental Protection Agency Community Multiscale Air Quality ...
Parameterized Cross Sections for Pion Production in Proton-Proton Collisions
NASA Technical Reports Server (NTRS)
Blattnig, Steve R.; Swaminathan, Sudha R.; Kruger, Adam T.; Ngom, Moussa; Norbury, John W.; Tripathi, R. K.
2000-01-01
An accurate knowledge of cross sections for pion production in proton-proton collisions finds wide application in particle physics, astrophysics, cosmic ray physics, and space radiation problems, especially in situations where an incident proton is transported through some medium and the output particle spectrum must be determined from a given input spectrum. In these cases, accurate parameterizations of the cross sections are desired. In this paper much of the experimental data are reviewed and compared with a wide variety of different cross section parameterizations. Parameterizations of neutral and charged pion cross sections are then provided that give a very accurate description of the experimental data. Lorentz-invariant differential cross sections, spectral distributions, and total cross section parameterizations are presented.
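The workflow this abstract describes, fitting a differential cross section and then integrating it numerically to obtain total cross sections, can be sketched generically. The exponential spectral shape and its constants below are hypothetical placeholders, not the paper's fits; only the integration step is illustrated.

```python
import math

def total_cross_section(spectral, e_min, e_max, n=4000):
    """Total cross section from a spectral one, by trapezoidal integration.

    spectral : callable giving d(sigma)/dE at energy E
    Returns the integral of spectral over [e_min, e_max].
    """
    h = (e_max - e_min) / n
    s = 0.5 * (spectral(e_min) + spectral(e_max))
    for i in range(1, n):
        s += spectral(e_min + i * h)
    return s * h

# Hypothetical spectral shape: d(sigma)/dE = A * exp(-E / E0) (mb/GeV),
# whose integral over [0, 10] GeV is A * E0 * (1 - exp(-20)), about 5.0 mb.
A, E0 = 10.0, 0.5
sigma = total_cross_section(lambda e: A * math.exp(-e / E0), 0.0, 10.0)
```

In practice the resulting totals would themselves be fit to simple formulas, since transport codes need fast closed-form evaluations rather than repeated quadrature.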
Impact of Multiple Scattering on Infrared Radiative Transfer involving Ice Clouds
NASA Astrophysics Data System (ADS)
Kuo, C. P.; Yang, P.; Huang, X.; Feldman, D.; Flanner, M.
2015-12-01
General circulation models (GCMs) are a major tool for investigating climate on the global scale. Since solar and terrestrial radiation control the energy budget of the global climate, developing an accurate yet computationally efficient radiative transfer model for GCMs is important. However, in most GCMs, absorption by ice clouds is the only mechanism considered in the longwave radiative transfer process. Implementation of longwave scattering in GCMs requires parameterizations of ice cloud optical properties. This study utilizes the spectrally consistent ice particle model of MODIS collection 6 and more than 14,000 particle size distributions from aircraft in-situ observations to parameterize ice cloud longwave optical properties. The new parameterizations are compared with the Fu-Liou parameterization implemented in RRTM_LW (the Longwave Rapid Radiative Transfer Model). Because an accurate and computationally efficient radiative transfer model is important in GCMs, a comparison of different radiative transfer methods is performed. Specifically, RRTMG_LW (the GCM version of RRTM_LW), one of the most widely utilized radiative transfer schemes in GCMs, will be modified to include different scattering approximation methods. To evaluate the accuracy, DISORT (Discrete Ordinates Radiative Transfer Program for a Multi-Layered Plane-Parallel Medium) is implemented and compared with the other methods in terms of cloud radiative effect and heating rate.
Scattering in optical materials
Musikant, S.
1983-01-01
Topics discussed include internal scattering and surface scattering, environmental effects, and various applications. Papers are presented on scattering in ZnSe laser windows, the far-infrared reflectance spectra of optical black coatings, the effects of standard optical shop practices on scattering, and the damage susceptibility of ring laser gyro class optics. Attention is also given to the infrared laser stimulated desorption of pyridine from silver surfaces, to electrically conductive black optical paint, to light scattering from an interface bubble, and to the role of diagnostic testing in identifying and resolving dimensional stability problems in electroplated laser mirrors.
A scheme for parameterizing ice cloud water content in general circulation models
NASA Technical Reports Server (NTRS)
Heymsfield, Andrew J.; Donner, Leo J.
1989-01-01
A method for specifying ice water content in GCMs is developed, based on theory and in-cloud measurements. A theoretical development of the conceptual precipitation model is given, and the aircraft flights used to characterize the ice mass distribution in deep ice clouds are discussed. Ice water content values derived from the theoretical parameterization are compared with the measured values. The results demonstrate that a simple parameterization for atmospheric ice content can account for ice contents observed in several synoptic contexts.
Parameterized spectral distributions for meson production in proton-proton collisions
NASA Technical Reports Server (NTRS)
Schneider, John P.; Norbury, John W.; Cucinotta, Francis A.
1995-01-01
Accurate semiempirical parameterizations of the energy-differential cross sections for charged pion and kaon production from proton-proton collisions are presented at energies relevant to cosmic rays. The parameterizations, which depend on both the outgoing meson parallel momentum and the incident proton kinetic energy, can be reduced to very simple analytical formulas suitable for cosmic ray transport through spacecraft walls, interstellar space, the atmosphere, and meteorites.
Universal statistics of the scattering coefficient of chaotic microwave cavities
Hemmady, Sameer; Zheng, Xing; Antonsen, Thomas M. Jr.; Ott, Edward; Anlage, Steven M.
2005-05-01
We consider the statistics of the scattering coefficient S of a chaotic microwave cavity coupled to a single port. We remove the nonuniversal effects of the coupling from the experimental S data using the radiation impedance obtained directly from the experiments. We thus obtain the normalized scattering coefficient whose probability density function (PDF) is predicted to be universal in that it depends only on the loss (quality factor) of the cavity. We compare experimental PDFs of the normalized scattering coefficients with those obtained from random matrix theory (RMT), and find excellent agreement. The results apply to scattering measurements on any wave chaotic system.
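The normalization step this abstract describes can be sketched as a few lines of complex arithmetic. This is a sketch of the radiation-impedance method for a single port, under the assumption that `z_rad`, the measured port radiation impedance, is already normalized to the transmission-line impedance; the sample values in the test are illustrative.

```python
def normalized_scattering(s_meas, z_rad):
    """Remove nonuniversal port coupling from a measured scattering coefficient.

    Convert the measured s_meas to a (line-normalized) cavity impedance,
    strip the port's radiation reactance and scale by its radiation
    resistance, then convert the resulting universal impedance back to a
    scattering coefficient whose statistics depend only on cavity loss.
    """
    z_cav = (1 + s_meas) / (1 - s_meas)            # cavity impedance
    zeta = (z_cav - 1j * z_rad.imag) / z_rad.real  # universal impedance
    return (zeta - 1) / (zeta + 1)                 # normalized s
```

With a perfectly matched port (z_rad = 1) the transformation is the identity; a reactive, mismatched port shifts and rescales the measured data before the comparison with random matrix theory.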
Molecular graphene under the eye of scattering theory
NASA Astrophysics Data System (ADS)
Hammar, H.; Berggren, P.; Fransson, J.
2013-12-01
The recent experimental observations of designer Dirac fermions and topological phases in molecular graphene are addressed theoretically. Using scattering theory, we calculate the electronic structure of finite lattices of scattering centers dual to the honeycomb lattice. In good agreement with experimental observations, we obtain a V-shaped electron density of states around the Fermi energy. By varying the lattice parameter we simulate electron and hole doping of the structure, and by adding and removing scattering centers we simulate, respectively, vacancy and impurity defects. Specifically, for the vacancy defect we verify the emergence of a sharp resonance near the Fermi energy for increasing strength of the scattering potential.
NASA Astrophysics Data System (ADS)
Liu, Yixiong; Yang, Ce; Song, Xiancheng
2015-04-01
A new airfoil shape parameterization method is developed that extends the Bezier curve to a generalized form with adjustable shape parameters. Local control is enhanced at the airfoil leading- and trailing-edge regions, which have a significant effect on the aerodynamic performance of wind turbines. The results show that this improved parameterization method has advantages over three other common airfoil parameterization methods in fitting both geometric shape and aerodynamic performance. The new parameterization method is then applied to airfoil shape optimization for wind turbines using a Genetic Algorithm (GA), and the wind turbine special airfoil DU93-W-210 is optimized to achieve a favorable Cl/Cd at specified flow conditions. The aerodynamic characteristics of the optimum airfoil are obtained by solving the RANS equations with a computational fluid dynamics (CFD) method, and the optimization convergence curves show that the new parameterization method converges in fewer generations than the other methods. It is concluded that the new method not only has good controllability and completeness in airfoil shape representation and provides more flexibility in expressing airfoil geometry, but is also capable of finding efficient and optimal wind turbine airfoils. Additionally, it is shown that a suitable parameterization method helps improve the convergence rate of the optimization algorithm.
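The idea of a Bezier curve augmented with adjustable shape parameters can be illustrated with a weighted (rational) Bezier evaluation, where raising a weight pulls the curve toward its control point and so gives local control at, e.g., the leading and trailing edges. The control points and weights below are arbitrary examples, not the authors' formulation or the DU93-W-210 geometry.

```python
from math import comb

def rational_bezier(control_pts, weights, t):
    """Evaluate a weighted (rational) Bezier curve at parameter t in [0, 1].

    control_pts : list of (x, y) control points
    weights     : per-point shape parameters (weight > 1 attracts the
                  curve toward that control point)
    """
    n = len(control_pts) - 1
    num_x = num_y = den = 0.0
    for i, ((x, y), w) in enumerate(zip(control_pts, weights)):
        b = comb(n, i) * t**i * (1 - t) ** (n - i)  # Bernstein basis
        num_x += w * b * x
        num_y += w * b * y
        den += w * b
    return num_x / den, num_y / den
```

With all weights equal to 1 this reduces to an ordinary Bezier curve; the curve always interpolates the first and last control points, which is convenient for pinning the leading and trailing edges.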
Implementation of stochastic parameterization of surface variables in ensemble forecasts
NASA Astrophysics Data System (ADS)
Weidle, F.; Wang, Y.; Wittmann, C.; Tang, J.; Xia, F.
2012-04-01
To represent uncertainties in numerical weather prediction, various ensemble forecasting systems have recently been developed by weather services around the world. Most ensemble systems account for uncertainties in the initial conditions; for limited-area ensemble systems, errors in the lateral boundary conditions are also considered by using one or more global ensemble systems as the coupling model. To account for uncertainties in the model itself, a stochastic physics scheme is one possibility and is in use in some ensemble systems. In forecast models, surface processes are typically treated by parameterizations that represent the physical processes at the interface between the surface and the atmosphere. These processes can play a crucial role in the forecast quality of near-surface fields, but also in the formation of precipitation in the atmosphere. However, uncertainties in the forecast due to errors in the representation of surface processes, which can feed back to atmospheric processes, are not yet considered in any operational ensemble system. The aim of this work is to investigate whether representing uncertainties in the surface scheme of an ensemble system can improve the probabilistic forecast. Therefore, a stochastic physics scheme for surface processes, similar to the stochastic physics scheme that has been used for several years in the ECMWF-EPS, is implemented in ALADIN-LAEF, the operational limited-area ensemble model of the Austrian weather service. The stochastic physics method randomly perturbs tendencies of surface variables, for example soil moisture and surface temperature, in the surface scheme during the forecast. In the presentation, the implementation in ALADIN-LAEF is introduced, and simulations of high-impact weather situations as well as a three-month verification against forecasts without perturbed surface fields are presented. To verify the importance of the representation of uncertainties due to surface
Mantle Dynamics Studied with Parameterized Prescription From Mineral Physics Database
NASA Astrophysics Data System (ADS)
Tosi, N.; Yuen, D.; Wentzcovich, R.; deKoker, N.
2012-04-01
The incorporation of important thermodynamic and transport properties into mantle convection models has taken a long time for the community to appreciate, even though it was first spurred by the high-pressure experimental work at Mainz a quarter of a century ago and the experimental work at Bayreuth and St. Louis. The two quantities whose effects have yet to be widely appreciated are thermal expansivity α and thermal conductivity k, which are shown to impact mantle dynamics and thermal history in more ways than geoscientists have previously imagined. We have constructed simple parameterization schemes, cast analytically, for describing α and k over a wide range of temperatures and pressures corresponding to the Earth's mantle. This approach employs the thermodynamic data set drawn from the VLAB at the University of Minnesota based on first-principles density functional theory [1] and also recent laboratory data from the Bayreuth group [2]. Using analytical formulae to determine α and k increases the computational speed of the convection code with respect to employing pre-calculated look-up tables and allows us to sweep out a wide parameter space. Our results, which also incorporate temperature- and pressure-dependent viscosity, show the following prominent features: 1) The temperature dependence of α is important in the upper mantle. It strongly enhances the rising hot plumes and inhibits the cold downwellings, thus making subduction more difficult for young slabs. 2) The pressure dependence of α is dominant in the lower mantle. It focuses upwellings and speeds them up during their upward rise. 3) The temperature dependence of the thermal conductivity helps to homogenize the lateral thermal anomalies in cold downwellings and helps to maintain the heat in the upwellings, thus, in concert with α, helping to encourage fast hot plumes. 4) The lattice thermal conductivity of post-perovskite plays an important role in heat transfer in the lower mantle and
Rainfall droplet size distributions (DSD) parameterization: physics and sensibility
NASA Astrophysics Data System (ADS)
Cecchini, M. A.; Machado, L.
2014-12-01
The CHUVA project (Cloud processes of tHe main precipitation systems in Brazil: A contribUtion to cloud resolVing modeling and to the GPM (GlobAl Precipitation Measurement)) is a Brazilian experiment that aims to understand the cloud processes that occur in different precipitating regimes. At present, the CHUVA project has conducted six field campaigns, the most recent in Manaus jointly with GoAmazon, IARA, and ACRIDICON. The main focus of the present study is to bring into perspective the different characteristics of precipitation reaching the surface at several locations in Brazil. To do so, disdrometer data are analyzed in detail, employing a Gamma fit for each DSD measurement, which provides the respective parameters to be studied. These are placed in a 3D space, each axis corresponding to one parameter, and the patterns are analyzed. A correlation between the Gamma parameters is defined as a parametric surface that fits the observations with errors smaller than 10% and R2 greater than 0.95. In this way, one parameter can be estimated from the other two, reducing the degrees of freedom of the problem from 3 to 2. As the 3 parameters are defined over this surface, it is possible to obtain a surface representing integral DSD properties such as rainfall intensity (RI). Sensitivity tests are conducted on this estimation and also on other DSD characteristics such as total droplet concentration and mean mass-weighted diameter. It is shown that the DSD integral properties are generally very sensitive to the Gamma parameters. Nonetheless, the sensitivity varies over the surface, being higher in a region where the parameters are not balanced (i.e., a relatively high value in one parameter and low values in the other two). It is suggested that any study proposing parameterization/estimation of DSD properties should be aware of this region of high sensitivity. To further the collaboration with GoAmazon and ACRIDICON, the disdrometer results
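For a gamma DSD of the form N(D) = N0 · D^μ · exp(-λD), the integral properties discussed above (total droplet concentration and mass-weighted mean diameter) follow in closed form from the distribution's moments. The sketch below is illustrative only; the parameter values are hypothetical, not CHUVA fit results.

```python
from math import gamma

def dsd_properties(N0, mu, lam):
    """Closed-form integral properties of a gamma drop size distribution
    N(D) = N0 * D**mu * exp(-lam * D), with D in mm.

    Returns (Nt, Dm): total droplet concentration (0th moment) and the
    mass-weighted mean diameter Dm = M4 / M3 (4th over 3rd moment).
    """
    def moment(k):
        # k-th moment of the gamma DSD: N0 * Gamma(mu + k + 1) / lam**(mu + k + 1)
        return N0 * gamma(mu + k + 1) / lam ** (mu + k + 1)

    Nt = moment(0)
    Dm = moment(4) / moment(3)
    return Nt, Dm
```

The moment ratio simplifies to Dm = (μ + 4) / λ, which makes explicit why the integral properties respond sharply to unbalanced combinations of the three Gamma parameters.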
Selection and parameterization of cortical neurons for neuroprosthetic control
NASA Astrophysics Data System (ADS)
Wahnoun, Remy; He, Jiping; Helms Tillery, Stephen I.
2006-06-01
When designing neuroprosthetic interfaces for motor function, it is crucial to have a system that can extract reliable information from available neural signals and produce an output suitable for real life applications. Systems designed to date have relied on establishing a relationship between neural discharge patterns in motor cortical areas and limb movement, an approach not suitable for patients who require such implants but who are unable to provide proper motor behavior to initially tune the system. We describe here a method that allows rapid tuning of a population vector-based system for neural control without arm movements. We trained highly motivated primates to observe a 3D center-out task as the computer played it very slowly. Based on only 10-12 s of neuronal activity observed in M1 and PMd, we generated an initial mapping between neural activity and device motion that the animal could successfully use for neuroprosthetic control. Subsequent tunings of the parameters led to improvements in control, but the initial selection of neurons and estimated preferred direction for those cells remained stable throughout the remainder of the day. Using this system, we have observed that the contribution of individual neurons to the overall control of the system is very heterogeneous. We thus derived a novel measure of unit quality and an indexing scheme that allowed us to rate each neuron's contribution to the overall control. In offline tests, we found that fewer than half of the units made positive contributions to the performance. We tested this experimentally by having the animals control the neuroprosthetic system using only the 20 best neurons. We found that performance in this case was better than when the entire set of available neurons was used. Based on these results, we believe that, with careful task design, it is feasible to parameterize control systems without any overt behaviors and that subsequent control system design will be enhanced with
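A minimal sketch of the population-vector mapping described above, assuming each unit's preferred direction and baseline rate have already been estimated from the observation period; all numbers and variable names are hypothetical, not the authors' tuning procedure.

```python
import numpy as np

def population_vector(rates, baselines, preferred_dirs):
    """Population-vector estimate of intended movement direction.

    Each unit contributes its preferred direction weighted by its
    baseline-subtracted firing rate; the normalized vector sum points
    along the decoded movement direction.

    rates          : observed firing rates, shape (n_units,)
    baselines      : baseline firing rates, shape (n_units,)
    preferred_dirs : unit preferred directions, shape (n_units, 3)
    """
    rates = np.asarray(rates, dtype=float)
    baselines = np.asarray(baselines, dtype=float)
    pd = np.asarray(preferred_dirs, dtype=float)
    weights = rates - baselines
    v = (weights[:, None] * pd).sum(axis=0)  # weighted vector sum
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```

In this scheme a unit firing at baseline contributes nothing, which is one way to see how a poorly tuned unit can dilute, or even degrade, overall control quality.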
NASA Astrophysics Data System (ADS)
Berg, L. K.; Shrivastava, M.; Easter, R. C.; Fast, J. D.; Chapman, E. G.; Liu, Y.; Ferrare, R. A.
2015-02-01
A new treatment of cloud effects on aerosols and trace gases within parameterized shallow and deep convection, and of aerosol effects on cloud droplet number, has been implemented in the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) version 3.2.1; it can be used to better understand the aerosol life cycle over regional to synoptic scales. The modifications to the model include treatment of the cloud droplet number mixing ratio; key cloud microphysical and macrophysical parameters (including the updraft fractional area, updraft and downdraft mass fluxes, and entrainment) averaged over the population of shallow clouds, or over a single deep convective cloud; and vertical transport, activation/resuspension, aqueous chemistry, and wet removal of aerosols and trace gases in warm clouds. These changes have been implemented in both the WRF-Chem chemistry packages and the Kain-Fritsch (KF) cumulus parameterization, which has been modified to better represent shallow convective clouds. Testing of the modified WRF-Chem has been completed using observations from the Cumulus Humilis Aerosol Processing Study (CHAPS). The simulation results are used to investigate the impact of cloud-aerosol interactions on regional-scale transport of black carbon (BC), organic aerosol (OA), and sulfate aerosol. Based on the simulations presented here, changes in the column-integrated BC can be as large as -50% when cloud-aerosol interactions are considered (due largely to wet removal), or as large as +40% for sulfate under non-precipitating conditions due to sulfate production in the parameterized clouds. The modifications to WRF-Chem are found to account for changes in the cloud droplet number concentration (CDNC) and changes in the chemical composition of cloud droplet residuals in a way that is consistent with observations collected during CHAPS. Efforts are currently underway to port the changes described here to the latest version of WRF-Chem, and it is anticipated
NASA Astrophysics Data System (ADS)
Wróbel, Iwona; Piskozub, Jacek
2016-04-01
Wind speed plays a disproportionate role in shaping the climate, and it is an important element in calculating air-sea interactions, through which climate change can be studied. It influences mass, momentum, and energy fluxes, and the standard way of parameterizing those fluxes uses this variable. However, the functions used to calculate fluxes from winds have evolved over time and still differ substantially (especially in the case of aerosol source functions). As we showed at last year's EGU conference (PICO presentation EGU2015-11206-1) and in a recent article (OSD 12, C1262-C1264, 2015), there are many uncertainties in the case of air-sea CO2 fluxes. In this study we calculated regional and global mass and momentum fluxes based on several wind speed climatologies. To do this we used satellite wind speed data in the FluxEngine software created within the OceanFlux GHG Evolution project. Our main area of interest is the European Arctic, because of its interesting air-sea interaction physics (a six-month cycle, strong winds, and ice cover), but because of better data coverage we chose the North Atlantic as the study region, making it possible to compare the calculated fluxes with measured ones. An additional reason was the importance of the area for the Northern Hemisphere climate, and especially for Europe. The study is related to the ESA-funded OceanFlux GHG Evolution project and is meant to be part of a PhD thesis (of I.W.) funded by the Centre for Polar Studies "POLAR-KNOW" (a project of the Polish Ministry of Science). We used a modified version of FluxEngine, a tool created within an earlier ESA-funded project (OceanFlux Greenhouse Gases) for calculating trace gas fluxes, to derive two purely wind-driven (at least in the simplified form used in their parameterizations) fluxes. The modifications included removing the gas transfer velocity formula from the toolset and replacing it with the respective formulas for momentum transfer and mass (aerosol production
Algorithmic scatter correction in dual-energy digital mammography
Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.; Lau, Beverly A.; Chan, Suk-tak; Zhang, Lei
2013-11-15
Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered a promising technique to improve the detectability of calcifications since it can be used to suppress the contrast between the adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiation, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated within the DEDM calculation and then used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with a breast-tissue-equivalent phantom and a calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in the three DE calcification images: the image without scatter correction, the image with scatter correction using the pinhole-array interpolation method, and the image with scatter correction using the authors' algorithmic method. Results: The authors' results show that the background DE calcification signal can be reduced. The root-mean-square background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method. The range of
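The basic scatter-fraction correction underlying methods like this one can be sketched as follows. This is the generic relation between total, scattered, and primary signal, not the authors' full algorithmic estimator of the scatter fraction.

```python
def remove_scatter(I_measured, scatter_fraction):
    """Subtract an estimated scattered component from a measured signal.

    With scatter fraction SF = I_scatter / I_total, the primary
    (scatter-free) signal is I_primary = I_total * (1 - SF).
    """
    if not 0.0 <= scatter_fraction < 1.0:
        raise ValueError("scatter fraction must be in [0, 1)")
    return I_measured * (1.0 - scatter_fraction)
```

The whole difficulty in practice lies in estimating the scatter fraction itself, which is what the algorithmic method above does without the extra exposures required by pinhole-array interpolation.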
Gao, Weigang; Wesely, M.L.
1994-01-01
The removal of gaseous substances from the atmosphere by dry deposition represents an important sink in the atmospheric budget of many trace gases. The surface removal rate therefore needs to be described quantitatively when modeling atmospheric transport and chemistry with regional- and global-scale models. Because the uptake capability of a terrestrial surface is strongly influenced by the type and condition of its vegetation, the seasonal and spatial changes in vegetation should be described in considerable detail in large-scale models. The objective of the present study is to develop a model that links remote sensing data from satellites with the RADM dry deposition module to provide a parameterization of dry deposition over large scales with improved temporal and spatial coverage. This paper briefly discusses the modeling methods and initial results obtained by applying the improved dry deposition module to a tallgrass prairie, for which measurements of O3 dry deposition and simultaneously obtained satellite remote sensing data are available.
NASA Astrophysics Data System (ADS)
Wiston, Modise; McFiggans, Gordon; Schultz, David
2015-04-01
In this study, we perform a simulation of the spatial distributions of particle and gas concentrations from a large pollution event during the dry season in southern Africa, and of their interactions with cloud processes. The specific focus is the extent to which cloud-aerosol interactions are affected by various inputs (i.e., emissions), parameterizations, and feedback mechanisms in a coupled mesoscale chemistry-meteorology model, herein the Weather Research and Forecasting model with chemistry (WRF-Chem). The southern African dry season (May-Sep) is characterised by biomass burning (BB) pollution. During this period, BB particles are frequently observed over the subcontinent; at the same time, a persistent stratocumulus deck covers the southwest African coast, favouring long-range transport of aerosols above clouds over the Atlantic Ocean. While anthropogenic pollutants tend to spread over the entire domain, biomass pollutants are concentrated around the burning areas, especially the savannah and the tropical rainforest of the Congo Basin. BB is linked to agricultural practice at latitudes south of 10° N. During an intense burning event, there is a clear signal of strong interactions between aerosols and cloud microphysics. These species interfere with the radiative budget and directly affect the amount of solar radiation reflected and scattered back to space and partly absorbed by the atmosphere. Aerosols also affect cloud microphysics by acting as cloud condensation nuclei (CCN), modifying the precipitation pattern and the cloud albedo. A key aim is to understand the role of pollution in convective cloud processes and its impacts on cloud dynamics. The hypothesis is that an environment of potentially high pollution increases the probability of interactions between co-located aerosol and cloud layers. To investigate this hypothesis, we outline an approach that integrates three elements: i) focusing on regime(s) where there are strong indications of
A Fast Radiative Transfer Parameterization Under Cloudy Condition in Solar Spectral Region
NASA Astrophysics Data System (ADS)
Yang, Q.; Liu, X.; Yang, P.; Wang, C.
2014-12-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) system, which is proposed and developed by NASA, will directly measure the Earth's thermal infrared spectrum (IR), the spectrum of solar radiation reflected by the Earth and its atmosphere (RS), and radio occultation (RO). IR, RS, and RO measurements provide information on the most critical but least understood climate forcings, responses, and feedbacks associated with the vertical distribution of atmospheric temperature and water vapor, broadband reflected and emitted radiative fluxes, cloud properties, surface albedo, and surface skin temperature. To perform Observing System Simulation Experiments (OSSEs) for long-term climate observations, accurate and fast radiative transfer models are needed. The principal component-based radiative transfer model (PCRTM) is one of the efforts devoted to the development of fast radiative transfer models for simulating radiances and reflectances observed by various hyperspectral instruments. Retrieval algorithms based on the PCRTM forward model have been developed for AIRS, NAST, IASI, and CrIS. PCRTM is very fast and very accurate relative to the training radiative transfer model. In this work, we are extending PCRTM to the UV-VIS-near-IR spectral region. To implement faster cloudy radiative transfer calculations, we carefully investigated the radiative transfer process under cloudy conditions. The cloud bidirectional reflectance was parameterized based on off-line 36-stream multiple scattering calculations, while a few other lookup tables were generated to describe the effective transmittance and reflectance of the cloud-clear-sky coupling system in the solar spectral region. The bidirectional reflectance or the irradiance measured by satellite may be calculated using a simple fast radiative transfer model, provided the type of cloud (ice or water), the optical depth of the cloud, the optical depths of atmospheric trace gases above and below the cloud, and the particle size of the cloud, as well
NASA Astrophysics Data System (ADS)
Zhang, Jicai; Lu, Xianqing; Wang, Ping; Wang, Ya Ping
2011-04-01
A data assimilation technique (the adjoint method) is applied to study the similarities and differences between the Ekman (linear) and the Quadratic (nonlinear) bottom friction parameterizations for a two-dimensional tidal model. Two methods are used to treat the bottom friction coefficient (BFC). The first method assumes that the BFC is a constant over the entire computational domain, while the second applies spatially varying BFCs. The adjoint expressions for the linear and the nonlinear parameterizations and the optimization formulae for the two BFC methods are derived based on the typical Lagrangian multiplier method. By assimilating model-generated 'observations', identical twin experiments are performed to test and validate the inversion ability of the presented methodology. Four experiments, which employ the linear parameterization, the nonlinear parameterization, the constant BFC, and the spatially varying BFC, are carried out to simulate the M2 tide in the Bohai Sea and the Yellow Sea by assimilating TOPEX/Poseidon altimetry and tidal gauge data. After the assimilation, the misfit between model-produced and observed data is significantly decreased in the four experiments. The simulation results indicate that the nonlinear Quadratic parameterization is more accurate than the linear Ekman parameterization if the traditional constant BFC is used. However, when spatially varying BFCs are used, the differences between the Ekman and the Quadratic approaches diminish, the reason for which is analyzed from the viewpoint of the dissipation rate caused by bottom friction. Generally speaking, linear bottom friction parameterizations are often used in global tidal models. This study indicates that they are also applicable in regional ocean tidal models when combined with spatially varying parameters and the adjoint method.
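The two bottom friction parameterizations compared above can be written compactly as kinematic stresses (stress per unit density). The coefficient values below are typical textbook magnitudes, assumed for illustration, not the optimized BFCs from this study.

```python
def bottom_stress_linear(u, r=5e-4):
    """Linear (Ekman-type) bottom friction: stress proportional to the
    velocity, tau = r * u, with r a friction velocity scale in m/s."""
    return r * u

def bottom_stress_quadratic(u, Cd=2.5e-3):
    """Quadratic bottom friction: tau = Cd * |u| * u, with a
    dimensionless drag coefficient Cd. |u| keeps the stress opposing
    the flow for either sign of u."""
    return Cd * abs(u) * u
```

The quadratic form makes the effective drag grow with current speed, which is why a constant-coefficient linear scheme behaves differently until spatial variation of the BFC is allowed to compensate.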
NASA Astrophysics Data System (ADS)
Decloedt, Thomas; Luther, Douglas S.
2012-11-01
The spatial distributions of the diapycnal diffusivity predicted by two abyssal mixing schemes are compared to each other and to observational estimates based on microstructure surveys and large-scale hydrographic inversions. The parameterizations considered are the tidal mixing scheme by Jayne, St. Laurent and co-authors (JSL01) and the Roughness Diffusivity Model (RDM) by Decloedt and Luther. Comparison to microstructure surveys shows that both parameterizations are conservative in estimating the vertical extent to which bottom-intensified mixing penetrates into the stratified water column. In particular, the JSL01 exponential vertical structure function with fixed scale height decays to background values much nearer topography than observed. JSL01 and RDM yield dramatically different horizontal spatial distributions of diapycnal diffusivity, which would lead to quite different circulations in OGCMs, yet they produce similar basin-averaged diffusivity profiles. Both parameterizations are shown to yield smaller basin-mean diffusivity profiles than hydrographic inverse estimates for the major ocean basins, by factors ranging from 3 up to over an order of magnitude. The canonical 10⁻⁴ m² s⁻¹ abyssal diffusivity is reached by the parameterizations only at depths below 3 km. Power consumption by diapycnal mixing below 1 km of depth, between roughly 32°S and 48°N, for the RDM and JSL01 parameterizations is 0.40 TW and 0.28 TW, respectively. The results presented here suggest that present-day mixing parameterizations significantly underestimate abyssal mixing. In conjunction with other recently published studies, a plausible interpretation is that parameterizing the dissipation of bottom-generated internal waves is not sufficient to approximate the global spatial distribution of diapycnal mixing in the abyssal ocean.
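An exponential bottom-intensified vertical structure of the JSL01 type discussed above can be sketched as follows; the background and bottom diffusivities and the 500 m scale height are illustrative placeholders, not the scheme's fitted constants.

```python
from math import exp

def diffusivity_profile(height_above_bottom, kappa_bg=1e-5,
                        kappa_bot=1e-3, zeta=500.0):
    """JSL01-style vertical structure for diapycnal diffusivity
    (m^2/s): a weak background value plus a bottom-intensified part
    that decays exponentially with height above topography, with a
    fixed scale height zeta (m).
    """
    return kappa_bg + kappa_bot * exp(-height_above_bottom / zeta)
```

With a fixed scale height, the bottom-intensified term is negligible a few scale heights above topography, which illustrates the abstract's point that such a profile decays to background values much nearer the bottom than microstructure observations suggest.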
Development of embedded modulated scatterer technique: Single- and dual-loaded scatterers
NASA Astrophysics Data System (ADS)
Donnell, Kristen Marie
Health monitoring of infrastructure is an ongoing concern, so it is important that a cost-effective and practical method for evaluating complex composite structures be developed. A promising microwave-based embedded sensor technology is developed based on the Modulated Scatterer Technique (MST). MST is based on illuminating a probe, commonly a dipole antenna loaded with a PIN diode (also referred to as a single-loaded scatterer, or SLS), with an electromagnetic wave. This impinging wave induces a current along the scatterer length, which causes a scattered field to be reradiated. Modulating the PIN diode also modulates the signal scattered by the probe, resulting in two different probe states. By measuring this scattered field, information about the material in the vicinity of the probe may be determined. Using the ratio of the two probe states removes the dependency of MST on several measurement parameters. To separate the scattered signal from reflections from other targets present in the total detected signal, a swept-frequency measurement process and a subsequent Fourier transform (time-gate method) were incorporated into MST. Additionally, a full electromagnetic study of the SLS, as applied to MST, was conducted. The increased measurement complexity and data processing resulting from the time-gate method prompted the development of a novel dual-loaded scatterer (DLS) probe design with four possible modulation states. By taking a differential ratio, the reflections from other targets can be effectively removed while preserving the measurement-parameter independence of the SLS ratio. A full electromagnetic derivation and analysis of the capabilities of the DLS as applied to MST is included in this investigation, as well as representative measurements using the DLS probe.
Cloud Simulations in Response to Turbulence Parameterizations in the GISS Model E GCM
NASA Technical Reports Server (NTRS)
Yao, Mao-Sung; Cheng, Ye
2013-01-01
The response of cloud simulations to turbulence parameterizations is studied systematically using the GISS general circulation model (GCM) E2 employed in the Intergovernmental Panel on Climate Change's (IPCC) Fifth Assessment Report (AR5). Without the turbulence parameterization, the relative humidity (RH) and the low cloud cover peak unrealistically close to the surface; with the dry convection or with only the local turbulence parameterization, these two quantities improve their vertical structures, but the vertical transport of water vapor is still weak in the planetary boundary layers (PBLs); with both local and nonlocal turbulence parameterizations, the RH and low cloud cover have better vertical structures at all latitudes due to more significant vertical transport of water vapor in the PBL. The study also compares the cloud and radiation climatologies obtained from an experiment using a newer version of turbulence parameterization being developed at GISS with those obtained from the AR5 version. This newer scheme differs from the AR5 version in computing nonlocal transports, turbulent length scale, and PBL height and shows significant improvements in cloud and radiation simulations, especially over the subtropical eastern oceans and the southern oceans. The diagnosed PBL heights appear to correlate well with the low cloud distribution over oceans. This suggests that a cloud-producing scheme needs to be constructed in a framework that also takes the turbulence into consideration.
On parameterization of the inverse problem for estimating aquifer properties using tracer data
Kowalsky, M. B.; Finsterle, Stefan A.; Williams, Kenneth H.; Murray, Christopher J.; Commer, Michael; Newcomer, Darrell R.; Englert, Andreas L.; Steefel, Carl I.; Hubbard, Susan
2012-06-11
We consider a field-scale tracer experiment conducted in 2007 in a shallow uranium-contaminated aquifer at Rifle, Colorado. In developing a reliable approach for inferring hydrological properties at the site through inverse modeling of the tracer data, decisions made on how to parameterize heterogeneity (i.e., how to represent a heterogeneous distribution using a limited number of parameters that are amenable to estimation) are of paramount importance. We present an approach for hydrological inversion of the tracer data and explore, using a 2D synthetic example at first, how parameterization affects the solution, and how additional characterization data could be incorporated to reduce uncertainty. Specifically, we examine sensitivity of the results to the configuration of pilot points used in a geostatistical parameterization, and to the sampling frequency and measurement error of the concentration data. A reliable solution of the inverse problem is found when the pilot point configuration is carefully implemented. In addition, we examine the use of a zonation parameterization, in which the geometry of the geological facies is known (e.g., from geophysical data or core data), to reduce the non-uniqueness of the solution and the number of unknown parameters to be estimated. When zonation information is only available for a limited region, special treatment in the remainder of the model is necessary, such as using a geostatistical parameterization. Finally, inversion of the actual field data is performed using 2D and 3D models, and results are compared with slug test data.
Albedo of coastal landfast sea ice in Prydz Bay, Antarctica: Observations and parameterization
NASA Astrophysics Data System (ADS)
Yang, Qinghua; Liu, Jiping; Leppäranta, Matti; Sun, Qizhen; Li, Rongbin; Zhang, Lin; Jung, Thomas; Lei, Ruibo; Zhang, Zhanhai; Li, Ming; Zhao, Jiechen; Cheng, Jingjing
2016-05-01
The snow/sea-ice albedo was measured over coastal landfast sea ice in Prydz Bay, East Antarctica (off Zhongshan Station) during the austral spring and summer of 2010 and 2011. The variation of the observed albedo was a combination of a gradual seasonal transition from spring to summer and abrupt changes resulting from synoptic events, including snowfall, blowing snow, and overcast skies. The measured albedo ranged from 0.94 over thick fresh snow to 0.36 over melting sea ice. It was found that snow thickness was the most important factor influencing the albedo variation, while synoptic events and overcast skies could increase the albedo by about 0.18 and 0.06, respectively. The in-situ measured albedo and related physical parameters (e.g., snow thickness, ice thickness, surface temperature, and air temperature) were then used to evaluate four different snow/ice albedo parameterizations used in a variety of climate models. The parameterized albedos showed substantial discrepancies compared to the observed albedo, particularly during the summer melt period, even though more complex parameterizations yielded more realistic variations than simple ones. A modified parameterization was developed, which further considered synoptic events, cloud cover, and the local landfast sea-ice surface characteristics. The resulting parameterized albedo showed very good agreement with the observed albedo.
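As a toy illustration of the kind of snow/ice albedo parameterization this abstract evaluates, the sketch below blends bare-ice and snow albedos with the common exponential snow-depth transition and an additive overcast-sky increment. All constants are illustrative placeholders anchored loosely to the figures quoted in the abstract (0.36–0.94, ~0.06 cloud effect), not the paper's fitted values.

```python
import math

def surface_albedo(snow_depth_m, ice_albedo=0.36, snow_albedo=0.87,
                   e_fold_m=0.02, overcast=False, cloud_bonus=0.06):
    """Simple snow-depth blend between bare-ice and snow albedo.

    Uses the common exponential-transition form
        alpha = a_ice + (a_snow - a_ice) * (1 - exp(-h / h*))
    plus an additive overcast-sky increment, echoing the abstract's finding
    that snow thickness dominates the albedo variation and overcast skies
    add about 0.06. Constants here are hypothetical, not the fitted ones.
    """
    alpha = ice_albedo + (snow_albedo - ice_albedo) * (1.0 - math.exp(-snow_depth_m / e_fold_m))
    if overcast:
        alpha += cloud_bonus
    return min(alpha, 0.94)  # cap near the maximum observed over fresh snow
```

A more faithful scheme would also treat surface temperature and melt-pond effects, which the paper identifies as the main source of summer discrepancies.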
Parameterized signal calibration for NMR cryoporometry experiment without external standard
NASA Astrophysics Data System (ADS)
Stoch, Grzegorz; Krzyżak, Artur T.
2016-08-01
In cryoporometric experiments, non-linear effects associated with the sample and the probehead contribute unwanted signal to the total as the temperature changes. These influences are often eliminated with the help of an intermediate measurement of a separate liquid sample. In this paper we suggest an alternative approach that, under certain assumptions, is based solely on data from the target experiment. To obtain the calibration parameters, the method uses all of the raw data points, so its reliability is enhanced compared with methods based on a smaller number of points. The presented approach is automatically valid for the desired temperature range, the need for an intermediate measurement is removed, and the calibration parameters are naturally adapted to the individual sample-probehead combination.
Limitations in scatter propagation
NASA Astrophysics Data System (ADS)
Lampert, E. W.
1982-04-01
A short description of the main scatter propagation mechanisms is presented: troposcatter, meteor-burst communication, and chaff scatter. For these propagation modes, in particular for troposcatter, the important specific limitations discussed are: link budget and the resulting hardware consequences; diversity; mobility; information transfer, intermodulation, and intersymbol interference; frequency range and future extension in frequency range for troposcatter; and compatibility with other services (EMC).
Landry, Guillaume; Seco, Joao; Gaudreault, Mathieu; Verhaegen, Frank
2013-10-01
tissue substitutes were well fitted by the TSM with R(2) = 0.9930. Residuals on Zeff for the phantoms were similar between the TSM and spectral methods for Zeff < 8 while they were improved by the TSM for higher Zeff. The RTM fitted the reference tissue dataset well with R(2) = 0.9999. Comparing the Zeff extracted from TSM and the more complex RTM to the known values from the reference tissue dataset yielded errors of up to 0.3 and 0.15 units of Zeff respectively. The parameterization approach yielded standard deviations which were up to 0.3 units of Zeff higher than those observed with the spectral method for Zeff around 7.5. Procedures for the DECT estimation of Zeff removing the need for estimates of the CT scanner spectra have been presented. Both the TSM and the more complex RTM performed better than the spectral method. The RTM yielded the best results for the reference human tissue dataset reducing errors from up to 0.3 to 0.15 units of Zeff compared to the simpler TSM. Both TSM and RTM are simpler to implement than the spectral method which requires estimates of the CT scanner spectra.
Neutron scattering and models : molybdenum.
Smith, A.B.
1999-05-26
A comprehensive interpretation of the fast-neutron interaction with elemental and isotopic molybdenum at energies of ≤30 MeV is given. New experimental elemental-scattering information over the incident energy range 4.5 → 10 MeV is presented. Spherical, vibrational, and dispersive models are deduced and discussed, including isospin, energy-dependent, and mass effects. The vibrational models are consistent with the "Lane potential". The importance of dispersion effects is noted. Dichotomies that exist in the literature are removed. The models are vehicles for fundamental physical investigations and for the provision of data for applied purposes. A "regional" molybdenum model is proposed. Finally, recommendations for future work are made.
Aureolegraph internal scattering correction.
DeVore, John; Villanucci, Dennis; LePage, Andrew
2012-11-20
Two methods of determining instrumental scattering for correcting aureolegraph measurements of particulate solar scattering are presented. One involves subtracting measurements made with and without an external occluding ball and the other is a modification of the Langley Plot method and involves extrapolating aureolegraph measurements collected through a large range of solar zenith angles. Examples of internal scattering correction determinations using the latter method show similar power-law dependencies on scattering, but vary by roughly a factor of 8 and suggest that changing aerosol conditions during the determinations render this method problematic. Examples of corrections of scattering profiles using the former method are presented for a range of atmospheric particulate layers from aerosols to cumulus and cirrus clouds.
Kokhanovsky, A A; Nakajima, T; Zege, E P
1998-07-20
We propose a physically based parameterization of the radiative characteristics of liquid-water clouds as functions of the wavelength, effective radius, and refractive index of particles, liquid-water path, ground albedo, and solar and observation angles. The formulas obtained are based on the approximate analytical solutions of the radiative transfer equation for optically thick, weakly absorbing layers and the geometrical optics approximation for local optical characteristics of cloud media. The accuracy of the approximate formulas was studied with an exact radiative transfer code. The relative error of the approximate formula for the reflection function at nadir observations was less than 15% for an optical thickness larger than 10 and a single-scattering albedo larger than 0.95.
Organic Aerosol Volatility Parameterizations and Their Impact on Atmospheric Composition and Climate
NASA Technical Reports Server (NTRS)
Tsigaridis, Kostas; Bauer, Susanne E.
2015-01-01
Despite their importance and ubiquity in the atmosphere, organic aerosols are still very poorly parameterized in global models. This can be explained by two reasons: first, a very large number of unconstrained parameters are involved in accurate parameterizations, and second, a detailed description of semi-volatile organics is computationally very expensive. Even organic aerosol properties that are known to play a major role in the atmosphere, namely volatility and aging, are poorly resolved in global models, if at all. Studies with different models and different parameterizations have not been conclusive on whether the additional complexity improves model simulations, but the added diversity of the different host models used adds an unnecessary degree of variability in the evaluation of results that obscures solid conclusions. Aerosol microphysics do not significantly alter the mean OA vertical profile or comparison with surface measurements. This might not be the case for semi-volatile OA with microphysics.
A wave roughness Reynolds number parameterization of the sea spray source flux
NASA Astrophysics Data System (ADS)
Norris, Sarah J.; Brooks, Ian M.; Salisbury, Dominic J.
2013-08-01
of the sea spray aerosol source flux are derived as functions of wave roughness Reynolds numbers, RHa and RHw, for particles with radii between 0.176 and 6.61 µm at 80% relative humidity. These source functions account for up to twice the variance in the observations than does wind speed alone. This is the first such direct demonstration of the impact of wave state on the variability of sea spray aerosol production. Global European Centre for Medium-Range Weather Forecasts operational mode fields are used to drive the parameterizations. The source flux from the RH parameterizations varies from approximately 0.1 to 3 (RHa) and 5 (RHw) times that from a wind speed parameterization, derived from the same measurements, where the wave state is substantially underdeveloped or overdeveloped, respectively, compared to the equilibrium wave state at the local wind speed.
A second-order Budyko-type parameterization of land-surface hydrology
NASA Technical Reports Server (NTRS)
Andreou, S. A.; Eagleson, P. S.
1982-01-01
A simple, second-order parameterization of the water fluxes at a land surface was developed for use as the appropriate boundary condition in general circulation models of the global atmosphere. The derived parameterization incorporates the high nonlinearities in the relationship between the near-surface soil moisture and the evaporation, runoff, and percolation fluxes. Based on the one-dimensional statistical-dynamical derivation of the annual water balance, it makes the transition to short-term prediction of the moisture fluxes through a Taylor expansion around the average annual soil moisture. The suggested parameterization is compared with other existing techniques and available measurements. A thermodynamic coupling is applied in order to obtain estimates of the ground surface temperature.
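The core device in this abstract, a second-order Taylor expansion of a nonlinear flux relation around the mean soil moisture, can be sketched generically. The flux function below is an arbitrary smooth stand-in, not the study's actual water-balance relation, and the derivatives are estimated by central finite differences:

```python
def taylor2(f, s0, ds=1e-4):
    """Second-order Taylor surrogate of a flux function f(s) about s0.

    Derivatives are estimated with central finite differences, mirroring the
    abstract's idea of expanding an annual-mean flux relation around the
    average soil moisture. f can be any smooth one-variable function.
    """
    f0 = f(s0)
    f1 = (f(s0 + ds) - f(s0 - ds)) / (2 * ds)          # first derivative
    f2 = (f(s0 + ds) - 2 * f0 + f(s0 - ds)) / ds**2    # second derivative
    return lambda s: f0 + f1 * (s - s0) + 0.5 * f2 * (s - s0) ** 2

# Example: a cubic stand-in for a nonlinear moisture-flux relation,
# expanded about a mean soil-moisture value of 1.0.
flux = lambda s: s ** 3
surrogate = taylor2(flux, 1.0)
```

The quadratic surrogate stays accurate near the expansion point, which is exactly the short-term-prediction regime the abstract targets.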
Whys and Hows of the Parameterized Interval Analyses: A Guide for the Perplexed
NASA Astrophysics Data System (ADS)
Elishakoff, I.
2013-10-01
Novel elements of the parameterized interval analysis developed in [1, 2] are emphasized in this response to Professor E.D. Popova, or possibly to others who may be perplexed by the parameterized interval analysis. It is also shown that the overwhelming majority of the comments by Popova [3] are based on a misreading of our paper [1]. Partial responsibility for this misreading can be attributed to the fact that the explanations provided in [1] were laconic; they could have been more extensive in view of the novelty of our approach [1, 2]. It is our duty, therefore, to reiterate in this response the whys and hows of the parameterization of intervals, introduced in [1] to incorporate the possibly available information on dependencies between the various intervals describing the problem at hand. This possibility appears to have been discarded by standard interval analysis, which may, as a result, lead to overdesign and possibly to a divorce of engineers from the otherwise beautiful interval analysis.
A review of recent research on improvement of physical parameterizations in the GLA GCM
NASA Technical Reports Server (NTRS)
Sud, Y. C.; Walker, G. K.
1990-01-01
A systematic assessment of the effects of a series of improvements in the physical parameterizations of the Goddard Laboratory for Atmospheres (GLA) general circulation model (GCM) is summarized. The implementation of the Simple Biosphere Model (SiB) in the GCM is followed by a comparison of SiB-GCM simulations with the earlier slab-soil-hydrology GCM (SSH-GCM) simulations. In the Sahelian context, the biogeophysical component of desertification was analyzed for SiB-GCM simulations. Cumulus parameterization is found to be the primary determinant of the organization of the simulated tropical rainfall of the GLA GCM using the Arakawa-Schubert cumulus parameterization. A comparison of model simulations with station data revealed excessive shortwave radiation accompanied by excessive drying and heating of the land. Perpetual July simulations with and without interactive soil moisture show that 30- to 40-day oscillations may be a natural mode of the simulated earth-atmosphere system.
Exponential parameterization of neutrino mixing matrix with account of CP-violation data
NASA Astrophysics Data System (ADS)
Zhukovsky, Konstantin; Melazzini, Francisco
2016-08-01
The exponential parameterization of the Pontecorvo-Maki-Nakagawa-Sakata mixing matrix for neutrinos is discussed. The exponential form allows easy factorization and separate analysis of the CP-violating and Majorana terms. Based upon the recent experimental data on the neutrino mixing, the values for the exponential parameterization matrix for neutrinos are determined. The matrix entries for the pure rotational part in charge of the mixing without CP violation are derived. The complementarity hypothesis for quarks and neutrinos is demonstrated. A comparison of the results based on most recent and on old data is presented. The CP-violating parameter value is estimated, based on the so far imprecise experimental indications, regarding CP violation for neutrinos. The unitarity of the exponential parameterization and the CP-violating term transform is confirmed. The transform of the neutrino mass state vector by the exponential matrix with account of CP violation is shown.
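The key algebraic property behind the exponential parameterization discussed above is that the exponential of an antisymmetric (in general, anti-Hermitian) generator is automatically unitary, so the rotational part of the mixing matrix stays exactly unitary for any angle values. The sketch below demonstrates this with a real antisymmetric generator and placeholder angles, not the fitted neutrino parameters:

```python
import numpy as np

def exp_mixing_matrix(a12, a13, a23):
    """Exponential parameterization U = exp(A) with a real antisymmetric
    generator A. The resulting U is exactly orthogonal (unitary in the real
    case), illustrating the rotational factor of the PMNS factorization.
    Angle values passed in are placeholders, not measured mixing parameters.
    """
    A = np.array([[0.0,   a12,  a13],
                  [-a12,  0.0,  a23],
                  [-a13, -a23,  0.0]])
    # Matrix exponential via its power series (converges fast for small angles)
    U = np.eye(3)
    term = np.eye(3)
    for k in range(1, 20):
        term = term @ A / k   # accumulates A^k / k!
        U = U + term
    return U

U = exp_mixing_matrix(0.5, 0.1, 0.7)
```

For production use one would call `scipy.linalg.expm` instead of the truncated series, and add the CP-violating and Majorana factors the abstract separates out.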
Impact of a scale-aware cumulus parameterization in an operational NWP modeling system
NASA Astrophysics Data System (ADS)
Chen, Baode; Yang, Yuhua; Wang, Xiaofeng
2014-05-01
To better understand the behavior of convective schemes across the grey zone, we carried out a one-month (July 2013) real-time-like experiment with an operational NWP modeling system that includes the ADAS data assimilation scheme and the WRF forecast model. The Grell-Freitas cumulus parameterization scheme, a scale-aware convective parameterization developed to better handle the transition in behavior of the sub-grid-scale convective processes through the grey zone, was used in model set-ups of different resolutions (15 km, 9 km, and 3 km). Subjective and quantitative evaluations of the forecasts were conducted, and the skills of the different experimental forecasts relative to existing forecasting guidance were compared. A summary of the preliminary findings about the proportion of resolved vs. unresolved physical processes in the grey zone will be presented, along with a discussion of the potential operational impacts of the cumulus parameterization.
Developing a unified parameterization of diabatic heating for regional climate modeling simulations
NASA Astrophysics Data System (ADS)
Beltran-Przekurat, A. B.; Pielke, R. A., Sr.; Leoncini, G.; Gabriel, P.
2009-12-01
Conventionally, turbulence fluxes, short- and longwave radiative fluxes, and convective and stratiform cloud precipitation atmospheric processes are separately parameterized as a one-dimensional problem. Most of these physical effects occur at spatial scales too small to be explicitly resolved in the models. However, such a separation is not realistic as those processes are three-dimensional and interact with each other. Results from numerical weather prediction and climate models strongly suggest that subgrid-scale parameterizations represent a large source of model errors and sensitivities at a large computational cost. Improving the physical parameterizations and, in addition, reducing the fraction of the total computational time that they require is critical for improving the predictive skill of atmospheric models for both individual model realizations and for ensemble predictions. Our preliminary work presents a new methodology to incorporate parameterizations for use in atmospheric models. The effects of the parameterized physics on the diabatic heating and moistening/drying are incorporated into unified transfer functions, called Universal Look-Up Table (ULUT). The ULUT accepts as inputs the dependent variables and other information that are traditionally inserted into the parameterizations and produces the equivalent temperature and moisture changes that result from summing each parameterization. A similar concept using remotely-sensed data was proposed by Pielke Sr. et al. (2007) [Satellite-based model parameterization of diabatic heating. EOS, 88, 96-97]. The major goal is to create a ULUT for the diabatic heating that would be able to reproduce the meteorological fields with the same accuracy as in the original model configuration but at a fraction of the cost. This effort is similar, although much broader in scope, to that of Leoncini et al. (2008, From model based parameterizations to Lookup Tables: An EOF approach. Wea. Forecasting, 23, 1127
The Parameterization of Solid Metal-Liquid Metal Partitioning of Siderophile Elements
NASA Technical Reports Server (NTRS)
Chabot, N. L.; Jones, J. H.
2003-01-01
The composition of a metallic liquid can significantly affect the partitioning behavior of elements. For example, some experimental solid metal-liquid metal partition coefficients have been shown to increase by three orders of magnitude with increasing S-content of the metallic liquid. Along with S, the presence of other light elements, such as P and C, has also been demonstrated to affect trace element partitioning behavior. Understanding the effects of metallic composition on partitioning behavior is important for modeling the crystallization of magmatic iron meteorites and the chemical effects of planetary differentiation. It is thus useful to have a mathematical expression that parameterizes the partition coefficient as a function of the composition of the metal. Here we present a revised parameterization method, which builds on the theory of the current parameterization of Jones and Malvin and which better handles partitioning in multi-light-element systems.
A numerical method for parameterization of atmospheric chemistry - Computation of tropospheric OH
NASA Technical Reports Server (NTRS)
Spivakovsky, C. M.; Wofsy, S. C.; Prather, M. J.
1990-01-01
An efficient and stable computational scheme for parameterization of atmospheric chemistry is described. The 24-hour-average concentration of OH is represented as a set of high-order polynomials in variables such as temperature, densities of H2O, CO, O3, and NO(t) (defined as NO + NO2 + NO3 + 2N2O5 + HNO2 + HNO4) as well as variables determining solar irradiance: cloud cover, density of the overhead ozone column, surface albedo, latitude, and solar declination. This parameterization of OH chemistry was used in the three-dimensional study of global distribution of CH3CCl3. The proposed computational scheme can be used for parameterization of rates of chemical production and loss or of any other output of a full chemical model.
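The polynomial-surrogate idea in this abstract, replacing a full chemical model with high-order polynomials in its input variables, can be illustrated with a toy two-variable sketch. The "full model", the variable choices, and the coefficients below are all hypothetical, not the authors' actual OH fit:

```python
import numpy as np

def fit_poly_surrogate(x1, x2, y, order=2):
    """Least-squares fit of a two-variable polynomial surrogate.

    Builds a design matrix of all monomials x1^i * x2^j with i + j <= order,
    mimicking the idea of parameterizing a full chemical model's output as a
    polynomial in its inputs (here: two stand-in predictors).
    """
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([x1**i * x2**j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return terms, coef

def eval_poly_surrogate(terms, coef, x1, x2):
    return sum(c * x1**i * x2**j for (i, j), c in zip(terms, coef))

# Toy "full model": an OH proxy as a smooth function of temperature and
# a scaled H2O density (both ranges hypothetical).
rng = np.random.default_rng(0)
T = rng.uniform(250.0, 310.0, 200)
h2o = rng.uniform(0.1, 1.0, 200)
oh = 1.0 + 0.01 * (T - 280.0) + 0.5 * h2o + 0.002 * (T - 280.0) * h2o

terms, coef = fit_poly_surrogate(T - 280.0, h2o, oh, order=2)
pred = eval_poly_surrogate(terms, coef, T - 280.0, h2o)
max_err = float(np.max(np.abs(pred - oh)))
```

Once fitted offline, evaluating such a polynomial in a 3D transport model costs only a handful of multiplications per grid cell, which is the efficiency the abstract emphasizes.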
Parameterization of the evaporation of rainfall for use in general circulation models
Feingold, G. )
1993-10-15
A parameterization of evaporation losses below cloud base is presented for use in general circulation models to assist in the quantification of water content in the hydrological cycle. The scheme is based on detailed model calculations of the evolution of raindrop spectra below cloud base and includes the processes of collision-coalescence/breakup. Evaporation is expressed as a percentage decrease in the liquid-water mixing ratio, and the parameterization is formulated as an algebraic equation in (i) the cloud-base values of the mixing ratio and the drop concentration, (ii) the fall distance, and (iii) the lapse rate of temperature in the subcloud environment. Results show that when compared to the detailed model calculations, good estimates of evaporation (usually within 20% and often within 10%) are obtained for a wide range of conditions. An analysis of the errors in the evaporation calculations associated with errors in the parameterization variables is performed. 29 refs., 34 figs., 3 tabs.
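The abstract describes the parameterization only as "an algebraic equation" in cloud-base mixing ratio, drop concentration, fall distance, and lapse rate. A hypothetical power-law form with made-up exponents (not Feingold's fitted equation) shows what such a scheme looks like in code:

```python
def evap_fraction(q_cb, n_cb, fall_dist, lapse,
                  a=0.3, b=-0.4, c=0.15, d=0.6, e=0.2):
    """Hypothetical algebraic form for sub-cloud evaporation loss.

    Returns the fractional decrease (0-1) of the liquid-water mixing ratio.
    Qualitatively, evaporation grows with fall distance and subcloud lapse
    rate and shrinks for larger cloud-base mixing ratios (bigger drops
    survive longer). Inputs: q_cb in g/kg, n_cb in cm^-3, fall_dist in km,
    lapse in K/km. The power-law shape and all coefficients are illustrative.
    """
    frac = a * (q_cb ** b) * (n_cb ** c) * (fall_dist ** d) * (lapse ** e)
    return min(max(frac, 0.0), 1.0)   # clip to a physically meaningful fraction
```

In a GCM such a closed-form expression replaces the explicit spectral bin model used to derive it, at negligible cost per column.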
Parameterization of Forest Canopies with the PROSAIL Model
NASA Astrophysics Data System (ADS)
Austerberry, M. J.; Grigsby, S.; Ustin, S.
2013-12-01
Particularly in forested environments, arboreal characteristics such as Leaf Area Index (LAI) and Leaf Inclination Angle have a large impact on the spectral characteristics of reflected radiation. The reflected spectrum can be measured directly with satellites or airborne instruments, including the MASTER and AVIRIS instruments. This particular project dealt with spectral analysis of reflected light as measured by AVIRIS compared to tree measurements taken from the ground. Chemical properties of leaves including pigment concentrations and moisture levels were also measured. The leaf data was combined with the chemical properties of three separate trees, and served as input data for a sequence of simulations with the PROSAIL Model, a combination of PROSPECT and Scattering by Arbitrarily Inclined Leaves (SAIL) simulations. The output was a computed reflectivity spectrum, which corresponded to the spectra that were directly measured by AVIRIS for the three trees' exact locations within a 34-meter pixel resolution. The input data that produced the best-correlating spectral output was then cross-referenced with LAI values that had been obtained through two entirely separate methods, NDVI extraction and use of the Beer-Lambert law with airborne LiDAR. Examination with regressive techniques between the measured and modeled spectra then enabled a determination of the trees' probable structure and leaf parameters. Highly-correlated spectral output corresponded well to specific values of LAI and Leaf Inclination Angle. Interestingly, it appears that varying Leaf Angle Distribution has little or no noticeable effect on the PROSAIL model. Not only is the effectiveness and accuracy of the PROSAIL model evaluated, but this project is a precursor to direct measurement of vegetative indices exclusively from airborne or satellite observation.
Engelmann Spruce Site Index Models: A Comparison of Model Functions and Parameterizations
Nigh, Gordon
2015-01-01
Engelmann spruce (Picea engelmannii Parry ex Engelm.) is a high-elevation species found in western Canada and the western USA. As this species becomes increasingly targeted for harvesting, better height growth information is required for good management of this species. This project was initiated to fill this need. The objective of the project was threefold: develop a site index model for Engelmann spruce; compare the fits and modelling and application issues between three model formulations and four parameterizations; and more closely examine the grounded-Generalized Algebraic Difference Approach (g-GADA) model parameterization. The model fitting data consisted of 84 stem-analyzed Engelmann spruce site trees sampled across the Engelmann Spruce – Subalpine Fir biogeoclimatic zone. The fitted models were based on the Chapman-Richards function, a modified Hossfeld IV function, and the Schumacher function. The model parameterizations that were tested are indicator variables, mixed-effects, GADA, and g-GADA. Model evaluation was based on the finite-sample corrected version of Akaike's Information Criterion and the estimated variance. Model parameterization had more of an influence on the fit than did model formulation, with the indicator variable method providing the best fit, followed by the mixed-effects modelling (9% increase in the variance for the Chapman-Richards and Schumacher formulations over the indicator variable parameterization), g-GADA (optimal approach) (335% increase in the variance), and the GADA/g-GADA (with the GADA parameterization) (346% increase in the variance). Factors related to the application of the model must be considered when selecting the model for use, since the best-fitting methods have the most barriers to their application in terms of data and software requirements. PMID:25853472
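Of the three base functions compared in this study, the Chapman-Richards growth function has the well-known closed form H(t) = A·(1 − exp(−k·t))^c. The sketch below evaluates it with illustrative parameter values, not the fitted Engelmann spruce coefficients:

```python
import math

def chapman_richards(age, asymptote, rate, shape):
    """Chapman-Richards height-growth function H(t) = A * (1 - exp(-k t))^c.

    A is the asymptotic height, k the growth rate, c the shape parameter.
    One of the three base functions compared in the study; parameter values
    used below are illustrative only.
    """
    return asymptote * (1.0 - math.exp(-rate * age)) ** shape

# Illustrative site-index style use: height at age 50 on a hypothetical curve.
h50 = chapman_richards(50.0, asymptote=30.0, rate=0.04, shape=1.3)
```

Site index models then localize such a curve to a plot, e.g. by letting one parameter vary by site (the indicator-variable and mixed-effects parameterizations the abstract compares).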
NASA Astrophysics Data System (ADS)
Hu, Deyong; Xing, Liwei; Huang, Shengli; Deng, Lei; Xu, Yingjun
2014-01-01
Aerodynamic roughness length (z0) is one of the important parameters that influence energy exchange at the land-atmosphere interface in numerical models, so it is of significance to accurately parameterize the land surface. To parameterize the z0 values of China's land surface vegetation using remote sensing data, we parameterized the vegetation canopy area index using the leaf area index and land cover products of moderate resolution imaging spectroradiometer data. Then we mapped the z0 values of different land cover types based on canopy area index and vegetation canopy height data. Finally, we analyzed the intra-annual monthly z0 values. The conclusions are: (1) This approach has been developed to parameterize large scale regional z0 values from multisource remote sensing data, allowing one to better model the land-atmosphere flux exchange based on this feasible and operational scheme. (2) The variation of z0 values in the parametric model is affected by the vegetation canopy area index and its threshold had been calculated to quantify different vegetation types. In general, the z0 value will increase during the growing season. When the threshold in the dense vegetation area or in the growing season is exceeded, the z0 values will decrease but the zero-plane displacement heights will increase. This technical scheme to parameterize the z0 can be applied to large-scale regions at a spatial resolution of 1 km, and the dynamic products of z0 can be used in high resolution land or atmospheric models to provide a useful scheme for land surface parameterization.
Neutrons in proton pencil beam scanning: parameterization of energy, quality factors and RBE
NASA Astrophysics Data System (ADS)
Schneider, Uwe; Hälg, Roger A.; Baiocco, Giorgio; Lomax, Tony
2016-08-01
The biological effectiveness of neutrons produced during proton therapy in inducing cancer is unknown, but potentially large. In particular, since neutron biological effectiveness is energy dependent, it is necessary to estimate, besides the dose, also the energy spectra, in order to obtain quantities which could be a measure of the biological effectiveness and test current models and new approaches against epidemiological studies on cancer induction after proton therapy. For patients treated with proton pencil beam scanning, this work aims to predict the spatially localized neutron energies, the effective quality factor, the weighting factor according to ICRP, and two RBE values, the first obtained from the saturation corrected dose mean lineal energy and the second from DSB cluster induction. A proton pencil beam was Monte Carlo simulated using GEANT. Based on the simulated neutron spectra for three different proton beam energies a parameterization of energy, quality factors and RBE was calculated. The pencil beam algorithm used for treatment planning at PSI has been extended using the developed parameterizations in order to calculate the spatially localized neutron energy, quality factors and RBE for each treated patient. The parameterization represents the simple quantification of neutron energy in two energy bins and the quality factors and RBE with a satisfying precision up to 85 cm away from the proton pencil beam when compared to the results based on 3D Monte Carlo simulations. The root mean square error of the energy estimate between Monte Carlo simulation based results and the parameterization is 3.9%. For the quality factors and RBE estimates it is smaller than 0.9%. The model was successfully integrated into the PSI treatment planning system. It was found that the parameterizations for neutron energy, quality factors and RBE were independent of proton energy in the investigated energy range of interest for proton therapy. The pencil beam algorithm has
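The 3.9% figure quoted for the neutron-energy estimate is a root-mean-square relative error between the parameterization and the Monte Carlo reference. As a minimal sketch of that figure of merit (the check values below are made up, not the paper's data):

```python
import math

def relative_rmse(param_values, mc_values):
    """Root-mean-square relative error between parameterized quantities and
    reference Monte Carlo results, the figure of merit quoted in the abstract
    (3.9% for neutron energy, <0.9% for quality factors and RBE)."""
    errs = [(p - m) / m for p, m in zip(param_values, mc_values)]
    return math.sqrt(sum(e * e for e in errs) / len(errs))

# Hypothetical check values: parameterized vs. Monte Carlo estimates.
rrmse = relative_rmse([1.02, 0.97, 1.01], [1.0, 1.0, 1.0])
```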
NASA Astrophysics Data System (ADS)
Cooke, William; Donner, Leo
2015-04-01
Microphysical and aerosol processes determine the magnitude of climate forcing by aerosol-cloud interactions, are central aspects of cloud-climate feedback, and are important elements in weather systems for which accurate forecasting is a major goal of numerical weather prediction. Realistic simulation of these processes demands not only accurate microphysical and aerosol process representations but also realistic simulation of the vertical motions in which the aerosols and microphysics act. Aerosol activation, for example, is a strong function of vertical velocity. Cumulus parameterizations for climate and numerical weather prediction models have recently begun to include vertical velocities among the statistics they predict, but these vertical velocities have been subject to only limited evaluation against observations. Deployments of multi-Doppler radars and dual-frequency profilers in recent field campaigns have substantially increased the observational base of cumulus vertical velocities, which for decades had been restricted mostly to GATE observations. Observations from TWP-ICE (Darwin, Australia) and MC3E (central United States) provide previously unavailable information on the vertical structure of cumulus vertical velocities, in synoptic contexts differing from those available in the past. They also provide an opportunity to independently evaluate cumulus parameterizations whose vertical velocities were tuned to the earlier GATE observations. This presentation compares vertical velocities observed in TWP-ICE and MC3E with cumulus vertical velocities from the parameterization in the GFDL CM3 climate model. Single-column results indicate that parameterized vertical velocities are frequently greater than observed. Errors in parameterized vertical velocities exhibit similarities to those in vertical velocities explicitly simulated by cloud-system-resolving models, and underlying issues in the treatment of microphysics may be important for both. The
Neutrons in proton pencil beam scanning: parameterization of energy, quality factors and RBE.
Schneider, Uwe; Hälg, Roger A; Baiocco, Giorgio; Lomax, Tony
2016-08-21
The biological effectiveness of neutrons produced during proton therapy in inducing cancer is unknown but potentially large. In particular, since neutron biological effectiveness is energy dependent, it is necessary to estimate not only the dose but also the energy spectra, in order to obtain quantities that can serve as a measure of biological effectiveness and to test current models and new approaches against epidemiological studies of cancer induction after proton therapy. For patients treated with proton pencil beam scanning, this work aims to predict the spatially localized neutron energies, the effective quality factor, the weighting factor according to ICRP, and two RBE values, the first obtained from the saturation-corrected dose mean lineal energy and the second from DSB cluster induction. A proton pencil beam was simulated with the GEANT Monte Carlo code. Based on the simulated neutron spectra for three different proton beam energies, a parameterization of energy, quality factors and RBE was calculated. The pencil beam algorithm used for treatment planning at PSI has been extended with the developed parameterizations in order to calculate the spatially localized neutron energy, quality factors and RBE for each treated patient. The parameterization provides a simple quantification of neutron energy in two energy bins, and of the quality factors and RBE, with satisfactory precision up to 85 cm away from the proton pencil beam when compared to results based on 3D Monte Carlo simulations. The root mean square error of the energy estimate between the Monte Carlo simulation based results and the parameterization is 3.9%; for the quality factor and RBE estimates it is smaller than 0.9%. The model was successfully integrated into the PSI treatment planning system. The parameterizations for neutron energy, quality factors and RBE were found to be independent of proton energy in the investigated energy range of interest for proton therapy. The pencil beam algorithm has
NASA Astrophysics Data System (ADS)
Titos, G.; Cazorla, A.; Zieger, P.; Andrews, E.; Lyamani, H.; Granados-Muñoz, M. J.; Olmo, F. J.; Alados-Arboledas, L.
2016-09-01
Knowledge of the scattering enhancement factor, f(RH), is important for an accurate description of direct aerosol radiative forcing. This factor is defined as the ratio between the scattering coefficient at enhanced relative humidity, RH, to a reference (dry) scattering coefficient. Here, we review the different experimental designs used to measure the scattering coefficient at dry and humidified conditions as well as the procedures followed to analyze the measurements. Several empirical parameterizations for the relationship between f(RH) and RH have been proposed in the literature. These parameterizations have been reviewed and tested using experimental data representative of different hygroscopic growth behavior and a new parameterization is presented. The potential sources of error in f(RH) are discussed. A Monte Carlo method is used to investigate the overall measurement uncertainty, which is found to be around 20-40% for moderately hygroscopic aerosols. The main factors contributing to this uncertainty are the uncertainty in RH measurement, the dry reference state and the nephelometer uncertainty. A literature survey of nephelometry-based f(RH) measurements is presented as a function of aerosol type. In general, the highest f(RH) values were measured in clean marine environments, with pollution having a major influence on f(RH). Dust aerosol tended to have the lowest reported hygroscopicity of any of the aerosol types studied. Major open questions and suggestions for future research priorities are outlined.
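Several of the reviewed parameterizations take the single-parameter gamma power-law form; as a minimal, stdlib-only sketch (the dry reference at RH = 0 and the synthetic data are assumptions for illustration):

```python
import math

def f_rh(rh, gamma):
    """Scattering enhancement factor from the widely used gamma power law,
    f(RH) = (1 - RH) ** (-gamma), with a dry reference at RH = 0.
    rh is fractional (0-1); larger gamma means a more hygroscopic aerosol."""
    return (1.0 - rh) ** (-gamma)

def fit_gamma(rh_values, f_measured):
    """Least-squares fit of gamma in log space, where the power law is
    linear: ln f = -gamma * ln(1 - RH)."""
    x = [math.log(1.0 - r) for r in rh_values]
    y = [math.log(f) for f in f_measured]
    return -sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

# Synthetic, noise-free data for a moderately hygroscopic aerosol
rh_values = [0.30, 0.50, 0.70, 0.85]
f_measured = [f_rh(r, 0.5) for r in rh_values]
gamma_hat = fit_gamma(rh_values, f_measured)  # recovers gamma = 0.5
```

In practice the fit would be applied to humidograph data with measurement noise, and the sensitivity of gamma to the assumed dry reference state is one of the error sources discussed above.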
Matching Pursuit with Asymmetric Functions for Signal Decomposition and Parameterization
Spustek, Tomasz; Jedrzejczak, Wiesław Wiktor; Blinowska, Katarzyna Joanna
2015-01-01
The method of adaptive approximations by Matching Pursuit makes it possible to decompose signals into basic components (called atoms). The approach relies on fitting, in an iterative way, functions from a large predefined set (called a dictionary) to an analyzed signal. Usually, symmetric functions from the Gabor family (sine-modulated Gaussians) are used. However, Gabor functions may not be optimal for describing waveforms present in physiological and medical signals. Many biomedical signals contain asymmetric components, usually with a steep rise and slower decay. For the decomposition of this kind of signal we introduce a dictionary of functions of various degrees of asymmetry, from symmetric Gabor atoms to highly asymmetric waveforms. The application of this enriched dictionary to otoacoustic emissions and steady-state visually evoked potentials demonstrated the advantages of the proposed method. The approach provides a sparser representation, allows correct determination of the latencies of the components, and removes the "energy leakage" effect generated by symmetric waveforms that do not sufficiently match the structures of the analyzed signal. Additionally, we introduce a time-frequency-amplitude distribution that is more adequate for representing asymmetric atoms than the conventional time-frequency-energy distribution. PMID:26115480
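The greedy decomposition itself is compact; a stdlib-only sketch with a toy dictionary spanning symmetric to asymmetric atoms (the envelope widths, frequencies, and atom construction below are illustrative assumptions, not the paper's dictionary):

```python
import math

def atom(n, center, rise, decay, freq):
    """Asymmetric atom of length n: a Gaussian-like envelope whose width is
    `rise` before the peak and `decay` after it, modulated by a cosine,
    then normalized to unit energy."""
    w = []
    for t in range(n):
        width = rise if t < center else decay
        env = math.exp(-((t - center) ** 2) / (2.0 * width * width))
        w.append(env * math.cos(2.0 * math.pi * freq * (t - center)))
    norm = math.sqrt(sum(v * v for v in w))
    return [v / norm for v in w]

def matching_pursuit(signal, dictionary, n_iter):
    """Greedy decomposition: repeatedly pick the atom with the largest
    inner product with the residual and subtract its projection."""
    residual = list(signal)
    picks = []
    for _ in range(n_iter):
        best_i, best_c = 0, 0.0
        for i, a in enumerate(dictionary):
            c = sum(r * v for r, v in zip(residual, a))
            if abs(c) > abs(best_c):
                best_i, best_c = i, c
        picks.append((best_i, best_c))
        residual = [r - best_c * v
                    for r, v in zip(residual, dictionary[best_i])]
    return picks, residual

dictionary = [atom(64, 20, 3.0, 9.0, 0.10),   # steep rise, slow decay
              atom(64, 20, 6.0, 6.0, 0.10),   # symmetric (Gabor-like)
              atom(64, 40, 5.0, 5.0, 0.20)]
signal = [2.0 * v for v in dictionary[0]]      # one asymmetric component
picks, residual = matching_pursuit(signal, dictionary, 1)
```

Because the signal is exactly one asymmetric atom, a single iteration selects it and leaves a near-zero residual; a symmetric-only dictionary would instead need several atoms, producing the "energy leakage" the paper describes.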
RedMDStream: Parameterization and Simulation Toolbox for Coarse-Grained Molecular Dynamics Models
Leonarski, Filip; Trylska, Joanna
2015-01-01
Coarse-grained (CG) models in molecular dynamics (MD) are powerful tools to simulate the dynamics of large biomolecular systems on micro- to millisecond timescales. However, the CG model, potential energy terms, and parameters are typically not transferable between different molecules and problems, so parameterizing CG force fields, which is both tedious and time-consuming, is often necessary. We present RedMDStream, a software package for developing, testing, and simulating biomolecules with CG MD models. Development includes an automatic procedure for the optimization of potential energy parameters based on metaheuristic methods. As an example we describe the parameterization of a simple CG MD model of an RNA hairpin. PMID:25902423
NASA Technical Reports Server (NTRS)
Glaessgen, Edward H.; Saether, Erik; Phillips, Dawn R.; Yamakov, Vesselin
2006-01-01
A multiscale modeling strategy is developed to study grain boundary fracture in polycrystalline aluminum. Atomistic simulation is used to model fundamental nanoscale deformation and fracture mechanisms and to develop a constitutive relationship for separation along a grain boundary interface. The nanoscale constitutive relationship is then parameterized within a cohesive zone model to represent variations in grain boundary properties. These variations arise from the presence of vacancies, interstitials, and other defects in addition to deviations in grain boundary angle from the baseline configuration considered in the molecular dynamics simulation. The parameterized cohesive zone models are then used to model grain boundaries within finite element analyses of aluminum polycrystals.
Simple parameterized coordinate transformation method for deep- and smooth-profile gratings.
Xu, Xihong; Li, Lifeng
2014-12-01
A simple variable transformation that consists of two joined straight-line segments per grating period is proposed for the parameterized coordinate transformation method (the C method). With this bilinear parameterization, the C method can produce convergent numerical results for gratings of deep and smooth profiles with a groove depth-to-period ratio as high as 10, which to date has been far out of reach of the C method. The danger of getting divergent results due to inadvertently using an overly large truncation number is also practically eliminated.
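The proposed change of variable is simple to write down; a sketch of such a two-segment (bilinear) profile over one grating period (the apex position and dimensions below are illustrative, not the paper's test cases):

```python
def bilinear_profile(x, period, depth, apex_frac=0.5):
    """Grating profile built from two joined straight-line segments per
    period: a linear rise to `depth` at apex_frac * period, then a linear
    fall back to zero at the end of the period."""
    t = x % period
    x_apex = apex_frac * period
    if t <= x_apex:
        return depth * t / x_apex
    return depth * (period - t) / (period - x_apex)

# A deep profile of the kind now reachable by the C method:
# groove depth-to-period ratio of 10
profile = [bilinear_profile(i / 100.0, 1.0, 10.0) for i in range(100)]
```

In the C method this piecewise-linear function would serve as the parameterized coordinate transformation matching the grating surface; the smooth physical profile is what the transformed problem resolves.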
Calculates Thermal Neutron Scattering Kernel.
1989-11-10
Version 00 THRUSH computes the thermal neutron scattering kernel by the phonon expansion method for both coherent and incoherent scattering processes. The calculation of the coherent part is suitable only for calculating the scattering kernel for heavy water.
Parameterization and analysis of 3-D radiative transfer in clouds
Varnai, Tamas
2012-03-16
This report provides a summary of major accomplishments from the project. The project examines the impact of radiative interactions between neighboring atmospheric columns, for example clouds scattering extra sunlight toward nearby clear areas. While most current cloud models do not consider these interactions and instead treat sunlight in each atmospheric column separately, the resulting uncertainties have remained unknown. This project has provided the first estimates of the way average solar heating is affected by interactions between nearby columns. These estimates have been obtained by combining several years of cloud observations at three DOE Atmospheric Radiation Measurement (ARM) Climate Research Facility sites (in Alaska, Oklahoma, and Papua New Guinea) with simulations of solar radiation around the observed clouds. The importance of radiative interactions between atmospheric columns was evaluated by contrasting simulations that included the interactions with those that did not. This study provides lower-bound estimates for radiative interactions: it cannot consider interactions in the cross-wind direction, because it uses two-dimensional vertical cross-sections through clouds that were observed by instruments looking straight up as clouds drifted aloft. Data from new DOE scanning radars will allow future radiative studies to consider the full three-dimensional nature of radiative processes. The results reveal that two-dimensional radiative interactions increase overall day-and-night average solar heating by about 0.3, 1.2, and 4.1 watts per square meter at the three sites, respectively. This increase grows further if one considers that most large-domain cloud simulations have resolutions that cannot specify small-scale cloud variability. For example, the increases in solar heating mentioned above roughly double for a fairly typical model resolution of 1 km. The study also examined the factors that shape radiative interactions between atmospheric columns and
NASA Astrophysics Data System (ADS)
Piskozub, Jacek; Wróbel, Iwona
2016-04-01
The North Atlantic is a crucial region for both ocean circulation and the carbon cycle. Most of the ocean's deep waters are produced in the basin, making it a large CO2 sink. The region, close to the major oceanographic centres, has been well covered by cruises. For this reason we have performed a study of the dependence of the net CO2 flux upon the choice of gas transfer velocity (k) parameterization for this region: the North Atlantic, including the European Arctic seas. The study has been part of the ESA-funded OceanFlux GHG Evolution project and, at the same time, of a PhD thesis (of I.W.) funded by the Centre of Polar Studies "POLAR-KNOW" (a project of the Polish Ministry of Science). Early results were presented last year at EGU 2015 as PICO presentation EGU2015-11206-1. We used FluxEngine, a tool created within an earlier ESA-funded project (OceanFlux Greenhouse Gases), to calculate the North Atlantic and global fluxes with different gas transfer velocity formulas. During the processing of the data, we noticed that the North Atlantic results for different k formulas are more similar (in the sense of relative error) than the global ones. This was true both for parameterizations using the same power of wind speed and when comparing wind-squared with wind-cubed parameterizations. This result was interesting because North Atlantic winds are stronger than the global average. Was the similarity of the flux results caused by the fact that the parameterizations were tuned to the North Atlantic area, where many of the early cruises measuring CO2 fugacities were performed? A closer look at the parameterizations and their history showed that not all of them were based on North Atlantic data. Some of them were tuned to the Southern Ocean, with even stronger winds, while some were based on global budgets of 14C. However, we have found two reasons, not reported before in the literature, for North Atlantic fluxes being more similar than global ones for different gas transfer velocity parameterizations
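The wind-speed dependence at the heart of this comparison can be sketched directly. The two forms below follow the commonly quoted quadratic (Wanninkhof 1992-style, k = 0.31 u²) and cubic (Wanninkhof and McGillis 1999-style, k = 0.0283 u³) parameterizations in cm/h; Schmidt-number corrections and the flux calculation itself are omitted for brevity:

```python
def k_quadratic(u10):
    """Quadratic wind-speed dependence of gas transfer velocity (cm/h)."""
    return 0.31 * u10 ** 2

def k_cubic(u10):
    """Cubic wind-speed dependence of gas transfer velocity (cm/h)."""
    return 0.0283 * u10 ** 3

def relative_spread(u10):
    """Relative disagreement between the two forms at wind speed u10 (m/s)."""
    kq, kc = k_quadratic(u10), k_cubic(u10)
    return abs(kq - kc) / (0.5 * (kq + kc))

# The two forms cross near u10 = 0.31 / 0.0283 ~ 11 m/s, close to typical
# North Atlantic wind speeds, so fluxes computed with either form agree
# better there than at weaker, closer-to-global-average winds (~7 m/s).
spread_atlantic = relative_spread(11.0)
spread_global = relative_spread(7.0)
```

This toy comparison only illustrates how two specific k formulas can converge at strong winds; the abstract's actual explanation involves two further reasons not captured here.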
Parameterizations of Cloud Microphysics and Indirect Aerosol Effects
Tao, Wei-Kuo
2014-05-19
/hail. Each type is described by a special size distribution function containing 33 categories (bins). Atmospheric aerosols are also described using number density size-distribution functions (containing 33 bins). Droplet nucleation (activation) is derived from the analytical calculation of supersaturation, which is used to determine the sizes of aerosol particles to be activated and the corresponding sizes of nucleated droplets. Primary nucleation of each type of ice crystal takes place within certain temperature ranges. A detailed description of these explicitly parameterized processes can be found in Khain and Sednev (1996) and Khain et al. (1999, 2001).
2.3 Case Studies
Three cases will be used to examine the impact of aerosols on deep, precipitating systems: a tropical oceanic squall system observed during TOGA COARE (Tropical Ocean and Global Atmosphere Coupled Ocean-Atmosphere Response Experiment, which occurred over the Pacific Ocean warm pool from November 1992 to February 1993); a midlatitude continental squall system observed during PRESTORM (Preliminary Regional Experiment for STORM-Central, which occurred in Kansas and Oklahoma during May-June 1985); and mid-afternoon convection observed during CRYSTAL-FACE (Cirrus Regional Study of Tropical Anvils and Cirrus Layers - Florida Area Cumulus Experiment, which occurred in Florida during July 2002).
3. SUMMARY of RESULTS
• For all three cases, higher CCN produces smaller cloud droplets and a narrower spectrum. Dirty (high-CCN) conditions delay rain formation, increase latent heat release above the freezing level, and enhance vertical velocities at higher altitude for all cases. Stronger updrafts, deeper mixed-phase regions, and more ice particles are simulated with higher CCN, in good agreement with observations.
• In all cases, rain reaches the ground earlier with lower CCN. Rain suppression is also evident in all three cases with high CCN, in good agreement with observations (Rosenfeld, 1999, 2000 and others). Rain
Anthony Prenni; Kreidenweis, Sonia M.
2012-09-28
Clouds play an important role in weather and climate. In addition to their key role in the hydrologic cycle, clouds scatter incoming solar radiation and trap infrared radiation from the surface and lower atmosphere. Despite their importance, feedbacks involving clouds remain one of the largest sources of uncertainty in climate models. Better simulation of cloud processes requires better characterization of cloud microphysical processes, which can affect the spatial extent, optical depth, and lifetime of clouds. To this end, we developed a new parameterization for numerical models that describes how the number concentrations of ice nuclei (IN) active in forming ice crystals under mixed-phase cloud conditions (water droplets and ice crystals coexisting) depend on existing aerosol properties and temperature. The parameterization is based on data collected using the Colorado State University continuous-flow diffusion chamber in aircraft and ground-based campaigns over a 14-year period, including data from the DOE-supported Mixed-Phase Arctic Cloud Experiment. The resulting relationship is shown to represent the variability of ice nuclei distributions in the atmosphere more accurately than currently used parameterizations based on temperature alone. When implemented in one global climate model, the new parameterization predicted more realistic annually averaged cloud water and ice distributions, and cloud radiative properties, especially for sensitive higher-latitude mixed-phase cloud regions. As a test of the new global IN scheme, it was compared to independent data collected during the 2008 DOE-sponsored Indirect and Semi-Direct Aerosol Campaign (ISDAC). Good agreement with this new data set suggests the broad applicability of the new scheme for describing general (non-chemically specific) aerosol influences on IN number concentrations feeding mixed-phase Arctic stratus clouds. Finally, the parameterization was implemented into a regional
Riley, David G.; Gill, Clare A.; Herring, Andy D.; Riggs, Penny K.; Sawyer, Jason E.; Sanders, James O.
2014-01-01
Gestation length, birth weight, and weaning weight of F2 Nelore-Angus calves (n = 737) with designed extensive full-sibling and half-sibling relatedness were evaluated for association with 34,957 SNP markers. In analyses of birth weight, random relatedness was modeled three ways: 1) none, 2) random animal, pedigree-based relationship matrix, or 3) random animal, genomic relationship matrix. Detected birth weight-SNP associations were 1,200, 735, and 31 for those parameterizations respectively; each additional model refinement removed associations that apparently were a result of the built-in stratification by relatedness. Subsequent analyses of gestation length and weaning weight modeled genomic relatedness; there were 40 and 26 trait-marker associations detected for those traits, respectively. Birth weight associations were on BTA14 except for a single marker on BTA5. Gestation length associations included 37 SNP on BTA21, 2 on BTA27 and one on BTA3. Weaning weight associations were on BTA14 except for a single marker on BTA10. Twenty-one SNP markers on BTA14 were detected in both birth and weaning weight analyses. PMID:25249774
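The genomic relationship matrix in the third parameterization is conventionally built with the VanRaden (2008) construction; a stdlib-only sketch (the 0/1/2 genotype coding and the toy data are illustrative, not the study's markers):

```python
def genomic_relationship_matrix(genotypes):
    """VanRaden-style G = Z Z' / (2 * sum_j p_j (1 - p_j)), where each row
    of `genotypes` is one animal coded 0/1/2 copies of the reference allele
    per marker, and Z centers each marker by twice its allele frequency."""
    n = len(genotypes)            # animals
    m = len(genotypes[0])         # markers
    p = [sum(g[j] for g in genotypes) / (2.0 * n) for j in range(m)]
    z = [[g[j] - 2.0 * p[j] for j in range(m)] for g in genotypes]
    denom = 2.0 * sum(pj * (1.0 - pj) for pj in p)
    return [[sum(z[i][j] * z[k][j] for j in range(m)) / denom
             for k in range(n)] for i in range(n)]

# Four animals, three markers (toy data)
genotypes = [[0, 1, 2], [1, 1, 0], [2, 0, 1], [1, 2, 1]]
G = genomic_relationship_matrix(genotypes)
```

Modeling random animal effects with G rather than a pedigree-based matrix is what absorbed the built-in stratification by relatedness and pruned the spurious associations described above.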
Cosmic Ray Scattering Radiography
NASA Astrophysics Data System (ADS)
Morris, C. L.
2015-12-01
Cosmic ray muons are ubiquitous, are highly penetrating, and can be used to measure material densities by either measuring the stopping rate or by measuring the scattering of transmitted muons. The Los Alamos team has studied scattering radiography for a number of applications. Some results will be shown of scattering imaging for a range of practical applications, and estimates will be made of the utility of scattering radiography for nondestructive assessments of large structures and for geological surveying. Results of imaging the core of the Toshiba Nuclear Critical Assembly (NCA) Reactor in Kawasaki, Japan and simulations of imaging the damaged cores of the Fukushima nuclear reactors will be presented. Below is an image made using muons of a core configuration for the NCA reactor.
NASA Astrophysics Data System (ADS)
Piwinski, A.
Intra-beam scattering is analysed and the rise times or damping times of the beam dimensions are derived. The theoretical results are compared with experimental values obtained on the CERN AA and SPS machines.
Environment scattering in GADRAS.
Thoreson, Gregory G.; Mitchell, Dean J; Theisen, Lisa Anne; Harding, Lee T.
2013-09-01
Radiation transport calculations were performed to compute the angular tallies for scattered gamma rays as a function of distance, height, and environment. Green's functions were then used to encapsulate the results in a reusable transformation function. The calculations represent the transport of photons through the scattering surfaces that surround sources and detectors, such as the ground and walls. Utilization of these calculations in GADRAS (Gamma Detector Response and Analysis Software) enables accurate computation of environmental scattering for a variety of environments and source configurations. This capability, which agrees well with numerous experimental benchmark measurements, is now deployed with GADRAS Version 18.2 as the basis for the computation of scattered radiation.
Rayleigh Scattering Diagnostics Workshop
NASA Technical Reports Server (NTRS)
Seasholtz, Richard (Compiler)
1996-01-01
The Rayleigh Scattering Diagnostics Workshop was held July 25-26, 1995 at the NASA Lewis Research Center in Cleveland, Ohio. The purpose of the workshop was to foster timely exchange of information and expertise acquired by researchers and users of laser based Rayleigh scattering diagnostics for aerospace flow facilities and other applications. This Conference Publication includes the 12 technical presentations and transcriptions of the two panel discussions. The first panel was made up of 'users' of optical diagnostics, mainly in aerospace test facilities, and its purpose was to assess areas of potential applications of Rayleigh scattering diagnostics. The second panel was made up of active researchers in Rayleigh scattering diagnostics, and its purpose was to discuss the direction of future work.
CONTINUOUS ROTATION SCATTERING CHAMBER
Verba, J.W.; Hawrylak, R.A.
1963-08-01
An evacuated scattering chamber for use in observing nuclear reaction products produced therein over a wide range of scattering angles from an incoming horizontal beam that bombards a target in the chamber is described. A helically moving member that couples the chamber to a detector permits a rapid and broad change of observation angles without breaching the vacuum in the chamber. Also, small inlet and outlet openings are provided whose size remains substantially constant. (auth)
NASA Technical Reports Server (NTRS)
Mceachran, R. P.; Horbatsch, M.; Stauffer, A. D.
1990-01-01
A 5-state close-coupling calculation (5s-5p-4d-6s-6p) was carried out for positron-Rb scattering in the energy range 3.7 to 28.0 eV. In contrast to the results of similar close-coupling calculations for positron-Na and positron-K scattering, the (effective) total integrated cross section has an energy dependence that is contrary to recent experimental measurements.
Parameterization of unresolved obstacles in wave modelling: A source term approach
NASA Astrophysics Data System (ADS)
Mentaschi, L.; Pérez, J.; Besio, G.; Mendez, F. J.; Menendez, M.
2015-12-01
In the present work we introduce two source terms for the parameterization of energy dissipation due to unresolved obstacles in spectral wave models. The proposed approach differs from the classical one based on spatial propagation schemes because it provides a local representation of phenomena such as unresolved wave energy dissipation. This source-term-based approach has the advantage of decoupling the unresolved-obstacle parameterization from the spatial propagation scheme, avoiding the need to reformulate, reimplement, and revalidate the parameterization of unresolved obstacles for each propagation scheme. Furthermore, it opens the way to parameterizations of other unresolved sheltering effects, such as rotation and redistribution of wave energy over frequencies. The proposed source terms estimate, respectively, the local energy dissipation and the shadow effect due to unresolved obstacles. The source terms were validated through synthetic case studies, showing their ability to reproduce wave dynamics comparable to those of high-resolution models. The analysis of high-resolution stationary wave simulations may help to better diagnose and study the effects of unresolved obstacles, providing estimates of transparency coefficients for each spectral component and allowing unresolved effects of rotation and redistribution of wave energy over frequencies to be understood and modeled.
Regularized kernel PCA for the efficient parameterization of complex geological models
NASA Astrophysics Data System (ADS)
Vo, Hai X.; Durlofsky, Louis J.
2016-10-01
The use of geological parameterization procedures enables high-fidelity geomodels to be represented in terms of relatively few variables. Such parameterizations are particularly useful when the subspace representation is constructed to implicitly capture the key geological features that appear in prior geostatistical realizations. In this case, the parameterization can be used very effectively within a data assimilation framework. In this paper, we extend and apply geological parameterization techniques based on kernel principal component analysis (KPCA) for the representation of complex geomodels characterized by non-Gaussian spatial statistics. KPCA involves the application of PCA in a high-dimensional feature space and the subsequent reverse mapping of the feature-space model back to physical space. This reverse mapping, referred to as the pre-image problem, can be challenging because it (formally) involves a nonlinear minimization. In this work, a new explicit pre-image procedure, which avoids many of the problems with existing approaches, is introduced. To achieve (ensemble-level) flow responses in close agreement with those from reference geostatistical realizations, a bound-constrained, regularized version of KPCA, referred to as R-KPCA, is also introduced. R-KPCA can be viewed as a post-processing of realizations generated using KPCA. The R-KPCA representation is incorporated into an adjoint-gradient-based data assimilation procedure, and its use for history matching a complex deltaic fan system is demonstrated. Matlab code for the KPCA and R-KPCA procedures is provided online as Supplementary Material.
IMPLEMENTATION OF AN URBAN CANOPY PARAMETERIZATION FOR FINE-SCALE SIMULATIONS
The Pennsylvania State University/National Center for Atmospheric Research Mesoscale Model (MM5) (Grell et al. 1994) has been modified to include an urban canopy parameterization (UCP) for fine-scale urban simulations (1-km horizontal grid spacing). The UCP accounts for dr...
Parameterizing the Transport Pathways for Cell Invasion in Complex Scaffold Architectures.
Ashworth, Jennifer C; Mehr, Marco; Buxton, Paul G; Best, Serena M; Cameron, Ruth E
2016-05-01
Interconnecting pathways through porous tissue engineering scaffolds play a vital role in determining nutrient supply, cell invasion, and tissue ingrowth. However, the global use of the term "interconnectivity" often fails to describe the transport characteristics of these pathways, giving no clear indication of their potential to support tissue synthesis. This article uses new experimental data to provide a critical analysis of reported methods for the description of scaffold transport pathways, ranging from qualitative image analysis to thorough structural parameterization using X-ray Micro-Computed Tomography. In the collagen scaffolds tested in this study, it was found that the proportion of pore space perceived to be accessible dramatically changed depending on the chosen method of analysis. Measurements of % interconnectivity as defined in this manner varied as a function of direction and connection size, and also showed a dependence on measurement length scale. As an alternative, a method for transport pathway parameterization was investigated, using percolation theory to calculate the diameter of the largest sphere that can travel to infinite distance through a scaffold in a specified direction. As proof of principle, this approach was used to investigate the invasion behavior of primary fibroblasts in response to independent changes in pore wall alignment and pore space accessibility, parameterized using the percolation diameter. The result was that both properties played a distinct role in determining fibroblast invasion efficiency. This example therefore demonstrates the potential of the percolation diameter as a method of transport pathway parameterization, to provide key structural criteria for application-based scaffold design.
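The percolation-diameter idea can be sketched compactly. The 2D grid, 4-connectivity, and toy pore-radius map below are simplifying assumptions (the study works on 3D micro-CT data, and the local radius map would come from a distance transform):

```python
from collections import deque

def percolates(open_cells, rows, cols):
    """Breadth-first search from the top row; True if any bottom-row cell
    is reachable through 4-connected open cells."""
    queue = deque((0, c) for c in range(cols) if (0, c) in open_cells)
    seen = set(queue)
    while queue:
        r, c = queue.popleft()
        if r == rows - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if nxt in open_cells and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

def percolation_diameter(radius_map):
    """Diameter of the largest sphere that can travel across the whole map
    in one direction: sweep candidate radii from largest to smallest and
    return the first that still yields a spanning open cluster."""
    rows, cols = len(radius_map), len(radius_map[0])
    for r in sorted({v for row in radius_map for v in row}, reverse=True):
        open_cells = {(i, j) for i in range(rows) for j in range(cols)
                      if radius_map[i][j] >= r}
        if percolates(open_cells, rows, cols):
            return 2 * r
    return 0

# A single wide channel down the middle column; the narrowest throat
# (radius 4) limits the percolation diameter
pore_radii = [[1, 5, 1],
              [1, 4, 1],
              [1, 5, 1]]
d = percolation_diameter(pore_radii)
```

Because the sweep is directional, repeating it along each axis captures the anisotropy that the article reports for % interconnectivity measurements.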
Evans, J.L.; Frank, W.M.; Young, G.S.
1996-04-01
Successful simulations of the global circulation and climate require accurate representation of the properties of shallow and deep convective clouds, stable-layer clouds, and the interactions between various cloud types, the boundary layer, and the radiative fluxes. Each of these phenomena plays an important role in the global energy balance, and each must be parameterized in a global climate model. These processes are highly interactive. One major problem limiting the accuracy of parameterizations of clouds and other processes in general circulation models (GCMs) is that most of the parameterization packages do not share a common physical basis. Further, these schemes have not, in general, been rigorously verified against observations adequate to the task of resolving subgrid-scale effects. To address these problems, we are designing a new Integrated Cumulus Ensemble and Turbulence (ICET) parameterization scheme, installing it in a climate model (CCM2), and evaluating the performance of the new scheme using data from Atmospheric Radiation Measurement (ARM) Program Cloud and Radiation Testbed (CART) sites.
SPACS: A semi-empirical parameterization for isotopic spallation cross sections
NASA Astrophysics Data System (ADS)
Schmitt, C.; Schmidt, K.-H.; Kelić-Heil, A.
2014-12-01
A new semi-empirical parameterization for residue cross sections in spallation reactions is presented. The prescription, named SPACS for spallation cross sections, permits calculating fragment production in proton- and neutron-induced collisions with light through heavy non-fissile partners, from the Fermi regime to ultra-relativistic energies. The model is fully analytical, based on a new parameterization of the mass yields that accounts for the dependence on bombarding energy. The formalism for the isobaric distribution consists of a commonly used functional form, borrowed from the empirical parameterization of fragmentation cross sections EPAX, with suitable adjustments for spallation, and extended to the charge-pickup channel. Structural and even-odd staggering related to the last stage of the primary-residue deexcitation process is additionally introduced explicitly with a new prescription. Calculations are benchmarked against recent data collected at GSI, Darmstadt, as well as against previous measurements employing various techniques. The dependences observed experimentally on collision energy, reaction-partner mass, and proton-neutron asymmetry are well described. A fast analytical parameterization such as SPACS is well suited for implementation in the complex simulations used for practical applications at nuclear facilities and plants. Its predictive power also makes it useful for cross-section estimates in astrophysics and biophysics.
Technology Transfer Automated Retrieval System (TEKTRAN)
Hydrological models have become essential tools for environmental assessments. This study’s objective was to evaluate a best professional judgment (BPJ) parameterization of the Agricultural Policy and Environmental eXtender (APEX) model with soil-survey data against the calibrated model with either ...
Creating a parameterized model of a CMOS transistor with a gate of enclosed layout
NASA Astrophysics Data System (ADS)
Vinogradov, S. M.; Atkin, E. V.; Ivanov, P. Y.
2016-02-01
The method of creating a parameterized SPICE model of an N-channel transistor with an enclosed-layout gate is considered. Formulas and examples of engineering calculations for using the models in the Cadence Virtuoso computer-aided design environment are presented. Calculations are made for the UMC 180 nm CMOS technology design rules.
A Dynamically Computed Convective Time Scale for the Kain–Fritsch Convective Parameterization Scheme
Many convective parameterization schemes define a convective adjustment time scale τ as the time allowed for dissipation of convective available potential energy (CAPE). The Kain–Fritsch scheme defines τ based on an estimate of the advective time period for deep con...
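The advective estimate described above can be caricatured in a few lines. This is an illustrative sketch only: the grid-length-over-mean-wind form and the 30-60 minute clipping window are common textbook choices assumed here, not the scheme's exact published formulation.

```python
# Illustrative sketch of a Kain-Fritsch-style convective adjustment time
# scale: tau is estimated as the advective residence time of air over a
# grid cell (grid length / mean cloud-layer wind) and clipped to a fixed
# window. The 1800-3600 s bounds are assumptions for illustration.

def convective_time_scale(grid_length_m, mean_wind_ms,
                          tau_min_s=1800.0, tau_max_s=3600.0):
    """Advective estimate of the CAPE-dissipation time scale tau (s)."""
    if mean_wind_ms <= 0.0:
        return tau_max_s          # stagnant flow: fall back to the upper bound
    tau = grid_length_m / mean_wind_ms
    return max(tau_min_s, min(tau, tau_max_s))

# A 25 km grid cell with a 10 m/s cloud-layer wind gives tau = 2500 s.
print(convective_time_scale(25_000.0, 10.0))  # → 2500.0
```

Weak winds saturate at the upper bound and strong winds at the lower bound, so the adjustment time stays physically plausible across grid spacings.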
A configurable B-spline parameterization method for structural optimization of wing boxes
NASA Astrophysics Data System (ADS)
Yu, Alan Tao
2009-12-01
This dissertation presents a synthesis of methods for structural optimization of aircraft wing boxes. The optimization problem considered herein is the minimization of structural weight with respect to component sizes, subject to stress constraints. Different aspects of structural optimization methods representing the current state-of-the-art are discussed, including sequential quadratic programming, sensitivity analysis, parameterization of design variables, constraint handling, and multiple load treatment. Shortcomings of the current techniques are identified and a B-spline parameterization representing the structural sizes is proposed to address them. A new configurable B-spline parameterization method for structural optimization of wing boxes is developed that makes it possible to flexibly explore design spaces. An automatic scheme using different levels of B-spline parameterization configurations is also proposed, along with a constraint aggregation method in order to reduce the computational effort. Numerical results are compared to evaluate the effectiveness of the B-spline approach and the constraint aggregation method. To evaluate the new formulations and explore design spaces, the wing box of an airliner is optimized for the minimum weight subject to stress constraints under multiple load conditions. The new approaches are shown to significantly reduce the computational time required to perform structural optimization and to yield designs that are more realistic than existing methods.
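To illustrate how a handful of B-spline design variables can smoothly represent a sizing distribution over a wing box, here is a minimal Cox-de Boor evaluation. The degree, knot vector, and thickness values are invented for the example and are not taken from the dissertation.

```python
def bspline_basis(i, k, t, knots):
    # Cox-de Boor recursion for the i-th B-spline basis function of degree k.
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    out = 0.0
    d1 = knots[i + k] - knots[i]
    if d1 > 0.0:
        out += (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k + 1] - knots[i + 1]
    if d2 > 0.0:
        out += (knots[i + k + 1] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return out

def thickness(t, control, degree=2, knots=(0, 0, 0, 0.5, 1, 1, 1)):
    # Panel thickness along the normalized span [0, 1) as a B-spline of a
    # few design variables (the control values).
    return sum(c * bspline_basis(i, degree, t, knots)
               for i, c in enumerate(control))

# Four design variables smoothly define the thickness over the whole span:
print(thickness(0.25, [2.0, 1.5, 1.2, 1.0]))  # → 1.5875
```

Because the basis functions form a partition of unity, every evaluated thickness is a convex combination of the control values, which is what makes the parameterization well behaved in an optimizer.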
Parameterization of the GPFARM-Range model for simulating rangeland productivity
Technology Transfer Automated Retrieval System (TEKTRAN)
One of the major limitations to rangeland model usage is the lack of parameter values appropriate for reliable simulations at different locations and times. In this chapter we seek to show how the GPFARM-Range, a rangeland model, which has been previously parameterized, tested and validated for the ...
NASA Astrophysics Data System (ADS)
Dremin, I. M.
2013-01-01
Colliding high-energy hadrons either produce new particles or scatter elastically, with their quantum numbers conserved and no other particles produced. We consider the latter case here. Although inelastic processes dominate at high energies, elastic scattering contributes considerably (18-25%) to the total cross section. Its share first decreases and then increases with energy. Small-angle scattering prevails at all energies. Some characteristic features can be seen that provide information on the geometrical structure of the colliding particles and the relevant dynamical mechanisms. The steep Gaussian peak at small angles is followed by the exponential (Orear) regime with some shoulders and dips, and then by a power-law decrease. Results from various theoretical approaches are compared with experimental data. Phenomenological models claiming to describe this process are reviewed. The unitarity condition predicts an exponential falloff of the differential cross section, with additional substructure appearing between the diffraction cone at low momentum transfer and the power-law regime of hard parton scattering at high momentum transfer. Data on the interference of the Coulomb and nuclear parts of the amplitudes at extremely small angles provide the value of the real part of the forward scattering amplitude. The real part of the elastic scattering amplitude and the contribution of inelastic processes to the imaginary part of this amplitude (the so-called overlap function) are also discussed. Problems related to the scaling behavior of the differential cross section are considered. The power-law regime at the highest momentum transfers is briefly described.
Effective Tree Scattering at L-Band
NASA Technical Reports Server (NTRS)
Kurum, Mehmet; ONeill, Peggy E.; Lang, Roger H.; Joseph, Alicia T.; Cosh, Michael H.; Jackson, Thomas J.
2011-01-01
For routine microwave Soil Moisture (SM) retrieval through vegetation, the tau-omega [1] model [zero-order Radiative Transfer (RT) solution] is attractive due to its simplicity and ease of inversion and implementation. It is the model used in baseline retrieval algorithms for several planned microwave space missions, such as ESA's Soil Moisture Ocean Salinity (SMOS) mission (launched November 2009) and NASA's Soil Moisture Active Passive (SMAP) mission (to be launched 2014/2015) [2, 3]. These approaches are adapted for vegetated landscapes with effective vegetation parameters tau and omega obtained by fitting experimental data or the simulation outputs of a multiple scattering model [4-7]. The model has been validated over grasslands, agricultural crops, and generally light to moderate vegetation. As the density of vegetation increases, sensitivity to the underlying SM begins to degrade significantly and errors in the retrieved SM increase accordingly. The zero-order model also loses its validity when dense vegetation (i.e., forest, mature corn, etc.) includes scatterers, such as branches and trunks (or stalks in the case of corn), which are large with respect to the wavelength. The tau-omega model (when applied over moderately to densely vegetated landscapes) will need modification (in terms of form or effective parameterization) to enable accurate characterization of vegetation parameters with respect to specific tree types, anisotropic canopy structure, and the presence of leaves and/or understory. More scattering terms (at least up to first order at L-band) should be included in the RT solutions for forest canopies [8]. Although not strictly suitable for forests, a zero-order tau-omega model might be applied to such vegetation canopies with large scatterers, but equivalent or effective parameters would have to be used [4]. This requires that the effective values (vegetation opacity and single scattering albedo) be evaluated (compared) with theoretical definitions of
Technology Transfer Automated Retrieval System (TEKTRAN)
Simulation models can be used to make management decisions when properly parameterized. This study aimed to parameterize the ALMANAC (Agricultural Land Management Alternatives with Numerical Assessment Criteria) crop simulation model for dry bean in the semi-arid temperate areas of Mexico. The par...
NASA Astrophysics Data System (ADS)
Basarab, B.; Fuchs, B.; Rutledge, S. A.
2013-12-01
Predicting lightning activity in thunderstorms is important in order to accurately quantify the production of nitrogen oxides (NOx = NO + NO2) by lightning (LNOx). Lightning is an important global source of NOx, and since NOx is a chemical precursor to ozone, the climatological impacts of LNOx could be significant. Many cloud-resolving models rely on parameterizations to predict lightning and LNOx since the processes leading to charge separation and lightning discharge are not yet fully understood. This study evaluates predicted flash rates based on existing lightning parameterizations against flash rates observed for Colorado storms during the Deep Convective Clouds and Chemistry Experiment (DC3). Evaluating lightning parameterizations against storm observations is a useful way to possibly improve the prediction of flash rates and LNOx in models. Additionally, since convective storms that form in the eastern plains of Colorado can be different thermodynamically and electrically from storms in other regions, it is useful to test existing parameterizations against observations from these storms. We present an analysis of the dynamics, microphysics, and lightning characteristics of two case studies, severe storms that developed on 6 and 7 June 2012. This analysis includes dual-Doppler derived horizontal and vertical velocities, a hydrometeor identification based on polarimetric radar variables using the CSU-CHILL radar, and insight into the charge structure using observations from the northern Colorado Lightning Mapping Array (LMA). Flash rates were inferred from the LMA data using a flash counting algorithm. We have calculated various microphysical and dynamical parameters for these storms that have been used in empirical flash rate parameterizations. In particular, maximum vertical velocity has been used to predict flash rates in some cloud-resolving chemistry simulations. We diagnose flash rates for the 6 and 7 June storms using this parameterization and compare
Sensitivity of the recent methane budget to LMDz sub-grid-scale physical parameterizations
NASA Astrophysics Data System (ADS)
Locatelli, R.; Bousquet, P.; Saunois, M.; Chevallier, F.; Cressot, C.
2015-09-01
With the densification of surface observing networks and the development of remote sensing of greenhouse gases from space, estimations of methane (CH4) sources and sinks by inverse modeling are gaining additional constraining data but facing new challenges. The chemical transport model (CTM) linking the flux space to the methane mixing ratio space must be able to represent these different types of atmospheric constraints in order to provide consistent flux estimates. Here we quantify the impact of sub-grid-scale physical parameterization errors on the global methane budget inferred by inverse modeling. We use the same inversion setup but different physical parameterizations within one CTM. Two different schemes for vertical diffusion, two others for deep convection, and one additional scheme for thermals in the planetary boundary layer (PBL) are tested. Different atmospheric methane data sets are used as constraints (surface observations or satellite retrievals). At the global scale, methane emissions differ, on average, by 4.1 Tg CH4 per year due to the use of different sub-grid-scale parameterizations. Inversions using satellite total-column mixing ratios retrieved by GOSAT are less impacted, at the global scale, by errors in physical parameterizations. Focusing on large-scale atmospheric transport, we show that inversions using the deep convection scheme of Emanuel (1991) derive smaller interhemispheric gradients in methane emissions, indicating a slower interhemispheric exchange. At regional scale, the use of different sub-grid-scale parameterizations induces uncertainties ranging from 1.2 % (2.7 %) to 9.4 % (14.2 %) of methane emissions when using only surface measurements from a background (or an extended) surface network. Moreover, the spatial distribution of methane emissions at regional scale can be very different, depending on both the physical parameterizations used for the modeling of the atmospheric transport and the observation data sets used to constrain the inverse
NASA Astrophysics Data System (ADS)
Cariolle, D.; Caro, D.; Paoli, R.; Hauglustaine, D. A.; CuéNot, B.; Cozic, A.; Paugam, R.
2009-10-01
A method is presented to parameterize, in large-scale models, the impact of the nonlinear chemical reactions occurring in the plume generated by concentrated NOx sources. The resulting plume parameterization is implemented into global models and used to evaluate the impact of aircraft emissions on atmospheric chemistry. Compared to previous approaches that rely on corrected emissions or corrective factors to account for the nonlinear chemical effects, the present parameterization is based on the representation of the plume effects via a fuel tracer and a characteristic lifetime during which the nonlinear interactions between species are important, and operates via rates of conversion for the NOx species and an effective reaction rate for O3. The implementation of this parameterization ensures mass conservation and allows the transport of emissions at high concentrations in plume form by the model dynamics. Results from the model simulations of the impact on atmospheric ozone of aircraft NOx emissions are in rather good agreement with previous work. It is found that ozone production is decreased by 10 to 25% in the Northern Hemisphere, with the largest effects in the north Atlantic flight corridor, when the plume effects on the global-scale chemistry are taken into account. These figures are consistent with evaluations made with corrected emissions, but regional differences are noticeable owing to the possibility offered by this parameterization to transport emitted species in plume form prior to their dilution at large scale. This method could be further improved by making the parameters used by the parameterization functions of the local temperature, humidity and turbulence properties diagnosed by the large-scale model. Further extensions of the method can also be considered to account for multistep dilution regimes during the plume dissipation. Furthermore, the present parameterization can be adapted to other types of point-source NOx emissions that have to be
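The fuel-tracer idea can be caricatured in a few lines: emitted NOx stays in "plume form" and is handed over to the grid-scale chemistry at a rate set by a characteristic plume lifetime, with an effective conversion factor standing in for the nonlinear in-plume chemistry. The lifetime and conversion values below are illustrative assumptions, not the paper's fitted parameters.

```python
import math

# Toy sketch of a plume-tracer parameterization: in-plume NOx decays with
# a characteristic lifetime; the decayed fraction is released to the
# large-scale chemistry, scaled by an effective conversion factor that
# mimics nonlinear in-plume transformation of NOx. Values are illustrative.

def plume_step(plume_nox, grid_nox, dt, lifetime=3600.0, conversion=0.9):
    released = plume_nox * (1.0 - math.exp(-dt / lifetime))
    return plume_nox - released, grid_nox + conversion * released

plume, grid = 100.0, 0.0          # arbitrary mass units of emitted NOx
for _ in range(24):               # 24 one-hour steps with a 1 h lifetime
    plume, grid = plume_step(plume, grid, 3600.0)
print(plume, grid)                # plume is exhausted; ~90% reaches grid scale
```

Because the released mass is computed analytically from the exponential decay, the bookkeeping stays exact regardless of the time step, which is the mass-conservation property the abstract emphasizes.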
NASA Astrophysics Data System (ADS)
Rothenberg, D. A.; Wang, C.
2013-12-01
An important source contributing to uncertainty in simulations with global climate models arises from the influence of aerosols on cloud properties. These so-called aerosol indirect effects arise from a single coupling in the model, representing how aerosols activate and serve as cloud condensation nuclei and ultimately cloud droplets. While it is possible to build explicit numerical models which describe this process in detail, this class of tools is untenable for use in global climate models due to its complexity. Instead, physically or empirically based parameterizations of activation are used in their place to efficiently approximate cloud droplet nucleation as a function of a few meteorological and aerosol physical/chemical properties. As global climate models are outfitted with more complex, size- and mixing-state-resolving aerosol models, activation parameterizations are increasingly called upon to handle aerosol populations against which their performance has not been explicitly benchmarked. Here, a simple scheme is proposed to evaluate the performance of activation parameterizations against a spectrum of mixing states, and two schemes commonly used in global models are studied using this framework. It is shown that each scheme exhibits systematic biases when a complex mixing state is present. To help resolve these issues, a new scheme is derived using Polynomial Chaos Expansion to build meta-models representing a full-complexity parcel model. The meta-models are shown to accurately handle activation in both single-mode and mixture cases. In addition, a global sensitivity analysis is applied to benchmark the performance of the meta-models and the activation parameterizations against a detailed parcel model, and it is shown that the meta-models tend to more accurately attribute variability in activation dynamics to each input parameter and their interactions with others when compared to the physically-based parameterizations. A variety of experiments
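The meta-model idea can be sketched in miniature: fit a short series of orthogonal polynomials to samples of an expensive response, then evaluate the cheap surrogate instead. The one-input "activation fraction" below is a stand-in function invented for illustration, not the parcel model used in the work.

```python
import numpy as np

# Minimal 1-D polynomial-chaos surrogate: a toy response f(x) of a single
# standardized input x ~ N(0, 1) is expanded in probabilists' Hermite
# polynomials, with coefficients fitted by least squares on random samples.
# The sigmoid response is a stand-in, not the actual parcel model.

def hermite_design(x, order=3):
    He = [np.ones_like(x), x]                 # He0, He1
    for n in range(1, order):
        He.append(x * He[n] - n * He[n - 1])  # He_{n+1} = x*He_n - n*He_{n-1}
    return np.column_stack(He)

rng = np.random.default_rng(0)
x_train = rng.standard_normal(500)
f = lambda x: 1.0 / (1.0 + np.exp(-x))        # stand-in "activation" response
coeffs, *_ = np.linalg.lstsq(hermite_design(x_train), f(x_train), rcond=None)

# The surrogate is cheap to evaluate and close to f on typical inputs.
x_test = np.linspace(-2.0, 2.0, 9)
err = np.max(np.abs(hermite_design(x_test) @ coeffs - f(x_test)))
print(err)  # small: a cubic chaos captures a smooth sigmoid well
```

The leading coefficient of a Hermite chaos approximates the mean of the response under the Gaussian input, which is one reason these surrogates pair naturally with the global sensitivity analysis mentioned above.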
Dam removal increases American eel abundance in distant headwater streams
Hitt, Nathaniel P.; Eyler, Sheila; Wofford, John E.B.
2012-01-01
American eel Anguilla rostrata abundances have undergone significant declines over the last 50 years, and migration barriers have been recognized as a contributing cause. We evaluated eel abundances in headwater streams of Shenandoah National Park, Virginia, to compare sites before and after the removal of a large downstream dam in 2004 (Embrey Dam, Rappahannock River). Eel abundances in headwater streams increased significantly after the removal of Embrey Dam. Observed eel abundances after dam removal exceeded predictions derived from autoregressive models parameterized with data prior to dam removal. Mann–Kendall analyses also revealed consistent increases in eel abundances from 2004 to 2010 but inconsistent temporal trends before dam removal. Increasing eel numbers could not be attributed to changes in local physical habitat (i.e., mean stream depth or substrate size) or regional population dynamics (i.e., abundances in Maryland streams or Virginia estuaries). Dam removal was associated with decreasing minimum eel lengths in headwater streams, suggesting that the dam previously impeded migration of many small-bodied individuals (<300 mm TL). We hypothesize that restoring connectivity to headwater streams could increase eel population growth rates by increasing female eel numbers and fecundity. This study demonstrated that dams may influence eel abundances in headwater streams up to 150 river kilometers distant, and that dam removal may provide benefits for eel management and conservation at the landscape scale.
Effect of physical parameterization schemes on track and intensity of cyclone LAILA using WRF model
NASA Astrophysics Data System (ADS)
Kanase, Radhika D.; Salvekar, P. S.
2015-08-01
The objective of the present study is to investigate in detail the sensitivity of the numerical simulation of severe cyclone LAILA over the Bay of Bengal to cumulus parameterization (CP), planetary boundary layer (PBL) parameterization, and microphysics parameterization (MP), using the Weather Research & Forecasting (WRF) model. The initial and boundary conditions are supplied from GFS data at 1° × 1° resolution, and the model is integrated in three 'two-way' interactive nested domains at resolutions of 60 km, 20 km and 6.6 km. Four sets of experiments are performed: the first examines the sensitivity to CP schemes, the second and third examine the sensitivity to different PBL and MP schemes, and the fourth contains initial-condition sensitivity experiments. For the first three sets of experiments, 0000 UTC 17 May 2010 is used as the initial condition. In the CP sensitivity experiments, the track and intensity are well simulated by the Betts-Miller-Janjic (BMJ) scheme. The track and intensity of LAILA are very sensitive to the representation of the large-scale environmental flow in the CP scheme as well as to the initial vertical wind shear values. The intensity of the cyclone is well simulated by the YSU scheme and depends upon the mixing treatment in and above the PBL. The concentration of frozen hydrometeors, such as graupel in the WSM6 MP scheme, and the latent heat released during auto-conversion of hydrometeors may be responsible for storm intensity. An additional set of experiments with different initial vortex intensities shows that small differences in the initial wind fields have a profound impact on both the track and intensity of the cyclone. The representation of mid-tropospheric heating in WSM6 is mainly controlled by the amount of graupel and thus might be one of the possible causes modulating the storm's intensity.
Parameterized reduced order models from a single mesh using hyper-dual numbers
NASA Astrophysics Data System (ADS)
Brake, M. R. W.; Fike, J. A.; Topping, S. D.
2016-06-01
In order to assess the predicted performance of a manufactured system, analysts must consider random variations (both geometric and material) in the development of a model, instead of a single deterministic model of an idealized geometry with idealized material properties. The incorporation of random geometric variations, however, could necessitate the development of thousands of nearly identical solid geometries that must be meshed and separately analyzed, which would require an impractical number of man-hours to complete. This research advances a recent approach to uncertainty quantification by developing parameterized reduced order models. These parameterizations are based upon Taylor series expansions of the system's matrices about the ideal geometry, and a component mode synthesis representation for each linear substructure is used to form an efficient basis with which to study the system. The numerical derivatives required for the Taylor series expansions are obtained via hyper-dual numbers, and are compared to parameterized models constructed with finite difference formulations. The advantage of using hyper-dual numbers is two-fold: accuracy of the derivatives to machine precision, and the need to generate only a single mesh of the system of interest. The theory is applied to a stepped beam system in order to demonstrate proof of concept. The results demonstrate that the hyper-dual-number multivariate parameterization of geometric variations, which are largely neglected in the literature, is accurate for both sensitivity and optimization studies. As model and mesh generation can constitute the greatest expense of time in analyzing a system, the foundation to create a parameterized reduced order model based on a single mesh is expected to dramatically reduce the time needed to analyze multiple realizations of a component's possible geometry.
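The machine-precision derivative claim is the defining property of hyper-dual arithmetic and is easy to demonstrate on a scalar function. The quantities below are a minimal sketch (a polynomial stand-in for a system response), not the beam model of the paper.

```python
class HyperDual:
    """Minimal hyper-dual number a + b*e1 + c*e2 + d*e1*e2 with
    e1**2 = e2**2 = 0. Seeding b = c = 1 makes the e1 part of f(x) its
    exact first derivative and the e1*e2 part its exact second
    derivative, from a single evaluation and with no truncation error."""
    def __init__(self, a, b=0.0, c=0.0, d=0.0):
        self.a, self.b, self.c, self.d = a, b, c, d
    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a + o.a, self.b + o.b, self.c + o.c, self.d + o.d)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a * o.a,
                         self.a * o.b + self.b * o.a,
                         self.a * o.c + self.c * o.a,
                         self.a * o.d + self.b * o.c + self.c * o.b + self.d * o.a)
    __rmul__ = __mul__

def f(x):                 # stand-in response: f(x) = x**3 + 2x
    return x * x * x + 2.0 * x

x = HyperDual(2.0, 1.0, 1.0, 0.0)   # evaluate at x0 = 2 with derivative seeds
y = f(x)
print(y.a, y.b, y.d)     # f(2) = 12, f'(2) = 14, f''(2) = 12 -- all exact
```

Unlike finite differences, there is no step size to tune and no subtractive cancellation, which is exactly why the derivatives are exact to machine precision.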
Influence of ground scattering on satellite auroral observations.
Hays, P B; Anger, C D
1978-06-15
Satellite observations of the optical emission features in the aurora and nighttime airglow are usually contaminated by scattering from clouds and snow. It is shown here that this contamination can easily be removed when the emission layer is viewed against a surface of known albedo. The effects of the earth's curvature, parallax, and varying image angle are found to be significant but can be removed from the observation.
Migration of scattered teleseismic body waves
NASA Astrophysics Data System (ADS)
Bostock, M. G.; Rondenay, S.
1999-06-01
The retrieval of near-receiver mantle structure from scattered waves associated with teleseismic P and S and recorded on three-component, linear seismic arrays is considered in the context of inverse scattering theory. A Ray + Born formulation is proposed which admits linearization of the forward problem and economy in the computation of the elastic wave Green's function. The high-frequency approximation further simplifies the problem by enabling (1) the use of an earth-flattened, 1-D reference model, (2) a reduction in computations to 2-D through the assumption of 2.5-D experimental geometry, and (3) band-diagonalization of the Hessian matrix in the inverse formulation. The final expressions are in a form reminiscent of the classical diffraction stack of seismic migration. Implementation of this procedure demands an accurate estimate of the scattered wave contribution to the impulse response, and thus requires the removal of both the reference wavefield and the source time signature from the raw record sections. An approximate separation of direct and scattered waves is achieved through application of the inverse free-surface transfer operator to individual station records and a Karhunen-Loeve transform to the resulting record sections. This procedure takes the full displacement field to a wave vector space wherein the first principal component of the incident wave-type section is identified with the direct wave and is used as an estimate of the source time function. The scattered displacement field is reconstituted from the remaining principal components using the forward free-surface transfer operator, and may be reduced to a scattering impulse response upon deconvolution of the source estimate. An example employing pseudo-spectral synthetic seismograms demonstrates an application of the methodology.
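The Karhunen-Loeve separation step can be caricatured numerically: stack the station records into a matrix, take the SVD, identify the first principal component with the coherent direct arrival, and treat the remainder as the scattered field. The waveforms and array size below are synthetic stand-ins, not teleseismic data.

```python
import numpy as np

# Toy sketch of the KL-transform step: records sharing a dominant direct
# arrival are stacked into a matrix; the first principal component (via
# SVD) serves as the direct-wave/source estimate, and the residual
# reconstructs the scattered field. All waveforms are synthetic.

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
direct = np.exp(-((t - 0.3) / 0.05) ** 2)            # shared direct pulse
records = np.array([direct + 0.1 * rng.standard_normal(t.size)
                    for _ in range(20)])              # 20 stations

U, s, Vt = np.linalg.svd(records, full_matrices=False)
source_est = Vt[0]                                    # 1st principal component
scattered = records - np.outer(U[:, 0] * s[0], Vt[0])  # residual wavefield

# The first component carries most of the energy of the coherent arrival.
print(s[0] ** 2 / np.sum(s ** 2))
```

In the actual method this separation is applied after the inverse free-surface transfer operator, and the source estimate is then deconvolved from the scattered field; the SVD step above is only the principal-component core of that procedure.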
NASA Astrophysics Data System (ADS)
Schaetzel, Klaus
1989-08-01
Since the development of laser light sources and fast digital electronics for signal processing, the classical discipline of light scattering on liquid systems has experienced a strong revival and an enormous expansion, mainly due to new dynamic light scattering techniques. While a large number of liquid systems can be investigated, ranging from pure liquids to multicomponent microemulsions, this review is largely restricted to applications on Brownian particles, typically in the submicron range. Static light scattering, the careful recording of the angular dependence of scattered light, is a valuable tool for the analysis of particle size and shape, or of their spatial ordering due to mutual interactions. Dynamic techniques, most notably photon correlation spectroscopy, give direct access to particle motion. This may be Brownian motion, which allows the determination of particle size, or some collective motion, e.g., electrophoresis, which yields particle mobility data. Suitable optical systems as well as the necessary data processing schemes are presented in some detail. Special attention is devoted to topics of current interest, like correlation over very large lag time ranges or multiple scattering.
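In the simplest dilute, monodisperse, single-scattering limit, the particle sizing that photon correlation spectroscopy provides reduces to two standard formulas: the scattering vector q = (4*pi*n/lambda)*sin(theta/2) relates the measured correlation decay rate Gamma = D*q**2 to the diffusion coefficient D, and the Stokes-Einstein relation converts D to a hydrodynamic radius. The numbers below are illustrative.

```python
import math

# Back-of-the-envelope photon-correlation sizing under the usual
# assumptions (dilute, monodisperse spheres, single scattering).

kB = 1.380649e-23                 # Boltzmann constant, J/K

def scattering_vector(n, wavelength_m, theta_rad):
    """q = (4*pi*n/lambda) * sin(theta/2), in 1/m."""
    return 4.0 * math.pi * n / wavelength_m * math.sin(theta_rad / 2.0)

def hydrodynamic_radius(gamma_s, q, T=293.15, eta=1.0e-3):
    """Stokes-Einstein radius from the field-correlation decay rate Gamma."""
    D = gamma_s / q ** 2                      # diffusion coefficient, m^2/s
    return kB * T / (6.0 * math.pi * eta * D)

# He-Ne laser (633 nm), water (n = 1.33), 90 degree scattering angle,
# Gamma = 500 1/s gives a radius of roughly 1.5e-7 m (~0.15 micron).
q = scattering_vector(1.33, 633e-9, math.radians(90.0))
print(hydrodynamic_radius(500.0, q))
```

This single-exponential picture is exactly what breaks down in the multiple-scattering and large-lag-time regimes the review highlights.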
Fiber optic probe for light scattering measurements
Nave, Stanley E.; Livingston, Ronald R.; Prather, William S.
1995-01-01
A fiber optic probe and a method for using the probe for light scattering analyses of a sample. The probe includes a probe body with an inlet for admitting a sample into an interior sample chamber, a first optical fiber for transmitting light from a source into the chamber, and a second optical fiber for transmitting light to a detector such as a spectrophotometer. The interior surface of the probe carries a coating that substantially prevents non-scattered light from reaching the second fiber. The probe is placed in a region where the presence and concentration of an analyte of interest are to be detected, and a sample is admitted into the chamber. Exciting light is transmitted into the sample chamber by the first fiber, where the light interacts with the sample to produce Raman-scattered light. At least some of the Raman-scattered light is received by the second fiber and transmitted to the detector for analysis. Two Raman spectra are measured, at different pressures. The first spectrum is subtracted from the second to remove background effects, and the resulting sample Raman spectrum is compared to a set of stored library spectra to determine the presence and concentration of the analyte.
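The two analysis steps the patent describes, differencing spectra taken at two pressures to cancel background and then matching the result against a stored library, can be sketched with toy data. The spectra, channel counts, and library entries below are invented for illustration.

```python
# Sketch of the probe's analysis: a spectrum at one pressure is subtracted
# from one at a higher pressure to cancel background that does not scale
# with sample density, and the difference is matched against stored
# library spectra by normalized correlation. All spectra are synthetic.

def subtract(spec_hi, spec_lo):
    return [h - l for h, l in zip(spec_hi, spec_lo)]

def correlation(a, b):
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

background = [5.0] * 8                            # pressure-independent
analyte    = [0, 0, 3.0, 9.0, 3.0, 0, 0, 0]       # Raman peak near channel 3
low_p  = [b + 0.2 * a for b, a in zip(background, analyte)]
high_p = [b + 1.0 * a for b, a in zip(background, analyte)]

sample = subtract(high_p, low_p)                  # background cancels
library = {"analyte": analyte, "other": [0, 4.0, 0, 0, 0, 4.0, 0, 0]}
best = max(library, key=lambda k: correlation(sample, library[k]))
print(best)  # → analyte
```

The peak height in the difference spectrum scales with the density change between the two pressures, which is what lets the same comparison also estimate concentration.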
Light Scattering on the High Temperature Superconductors
NASA Astrophysics Data System (ADS)
Slakey, Francis
The high temperature superconductors have been examined by the technique of Raman scattering in several limits: the insulating phase, the normal and superconducting states of the superconducting phase, and an optically induced metastable phase. In all cases, the analysis and proposed phenomenological models involved either an examination of the inelastic background scattering or the phonon excitation spectrum. Specifically, the character, temperature dependence, critical temperature dependence and copper-oxygen covalency dependence of the inelastic background scattering have been studied in all three phases. Analysis of the superconducting phase reveals a marginal Fermi-liquid-like character of the electronic polarizability, and a decidedly non-traditional shift of the scattering intensity of the electronic excitations at low temperature. On removing oxygen, the system passes through a metal-insulator transition and the inelastic background becomes dominantly magnetic in origin. Examinations of the 'allowed' Raman-active phonons in the superconducting phase expose a strong coupling of two modes to the background electronic excitation spectrum, and a dramatic renormalization of these modes below T _{rm c}. Further, two sharply resonant Raman 'forbidden' modes can be bleached out of the spectrum at low temperature with a sufficiently high laser dosage. A transition from this optically induced metastable state to the normal state occurs on warming the crystal back to room temperature. On reducing the oxygen concentration, the coupling strength of the two asymmetric phonons diminishes rapidly, the renormalization effects vanish, and the compound no longer exhibits metastability.
Fiber optic probe for light scattering measurements
Nave, S.E.; Livingston, R.R.; Prather, W.S.
1993-01-01
This invention comprises a fiber optic probe and a method for using the probe for light scattering analyses of a sample. The probe includes a probe body with an inlet for admitting a sample into an interior sample chamber, a first optical fiber for transmitting light from a source into the chamber, and a second optical fiber for transmitting light to a detector such as a spectrophotometer. The interior surface of the probe carries a coating that substantially prevents non-scattered light from reaching the second fiber. The probe is placed in a region where the presence and concentration of an analyte of interest are to be detected, and a sample is admitted into the chamber. Exciting light is transmitted into the sample chamber by the first fiber, where the light interacts with the sample to produce Raman-scattered light. At least some of the Raman-scattered light is received by the second fiber and transmitted to the detector for analysis. Two Raman spectra are measured, at different pressures. The first spectrum is subtracted from the second to remove background effects, and the resulting sample Raman spectrum is compared to a set of stored library spectra to determine the presence and concentration of the analyte.
NASA Astrophysics Data System (ADS)
Xia, X.; Che, H.; Zhu, J.; Chen, H.; Cong, Z.; Deng, X.; Fan, X.; Fu, Y.; Goloub, P.; Jiang, H.; Liu, Q.; Mai, B.; Wang, P.; Wu, Y.; Zhang, J.; Zhang, R.; Zhang, X.
2016-01-01
Spatio-temporal variations of aerosol optical properties and aerosol direct radiative effects (ADRE) are studied based on high quality aerosol data at 21 sunphotometer stations, each with at least 4 months of measurements, in mainland China and Hong Kong. A parameterization is proposed to describe the relationship of ADREs to aerosol optical depth at 550 nm (AOD) and single scattering albedo at 550 nm (SSA). In middle-east and south China, the maximum AOD is always observed in the burning season, indicating a significant contribution of biomass burning to AOD. Dust aerosols contribute to AOD significantly in spring, and their influence decreases from the source regions to the downwind regions. The occurrence frequencies of background-level AOD (AOD < 0.10) in middle-east, south and northwest China are very limited (0.4%, 1.3% and 2.8%, respectively); in north China, however, it is 15.7%. The atmosphere is pristine over the Tibetan Plateau, where 92.0% of AODs are <0.10. Regional mean SSAs at 550 nm are 0.89-0.90, although SSAs show substantial site and season dependence. ADREs at the top and bottom of the atmosphere for solar zenith angles of 60 ± 5° are -16 to -37 W m-2 and -66 to -111 W m-2, respectively. The ADRE efficiency shows slight regional dependence. AOD and SSA together account for more than 94% and 87% of the ADRE variability at the bottom and top of the atmosphere, respectively. The overall picture of ADRE in China is that aerosols cool the climate system, reduce surface solar radiation and heat the atmosphere.
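The abstract does not give the functional form of the proposed ADRE parameterization. Purely as an illustration of relating ADRE to AOD and SSA, the sketch below fits a hypothetical two-predictor form to synthetic data by least squares; the form, coefficients, and data are all assumptions.

```python
import numpy as np

# Hypothetical illustration: fit ADRE = a + b*AOD + c*AOD*(1 - SSA),
# a form chosen here only to show a two-predictor fit; the paper's
# actual parameterization may differ.
rng = np.random.default_rng(0)
aod = rng.uniform(0.05, 1.5, 200)
ssa = rng.uniform(0.80, 0.95, 200)
true = -5.0 - 40.0 * aod + 60.0 * aod * (1.0 - ssa)   # synthetic "truth"
adre = true + rng.normal(0.0, 0.5, 200)               # add measurement noise

X = np.column_stack([np.ones_like(aod), aod, aod * (1.0 - ssa)])
coef, *_ = np.linalg.lstsq(X, adre, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((adre - pred) ** 2) / np.sum((adre - adre.mean()) ** 2)
```

The high explained variance mirrors the abstract's finding that AOD and SSA together account for most of the ADRE variability.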
A new parameterization for ice cloud optical properties used in BCC-RAD and its radiative impact
NASA Astrophysics Data System (ADS)
Zhang, Hua; Chen, Qi; Xie, Bing
2015-01-01
A new parameterization of the solar and infrared optical properties of ice clouds that considers the multiple habits of ice particles was developed on the basis of a prescribed dataset. First, fitting formulae for the bulk extinction coefficient, single-scatter albedo, asymmetry factor, and δ-function forward-peak factor at 65 given wavelengths as functions of effective radius were created for common scenarios; these consider a greater number of wavelengths and are more accurate than those used previously. Then, the band-averaged volume extinction and absorption coefficients, asymmetry factor and forward-peak factor of ice cloud were derived for the BCC-RAD (Beijing Climate Center radiative transfer model) using a parameter reference table. Finally, the newly developed scheme, the original scheme, and the commonly used Fu scheme for ice clouds were all applied in the BCC-RAD. Their influences on radiation calculations were compared using the mid-latitude summer atmospheric profile with ice clouds under no-aerosol conditions, producing a maximum difference of approximately 30.0 W/m2 in the radiative flux and 4.0 K/d in the heating rate. Additionally, a sensitivity test was performed to investigate the impact of the ice crystal density on radiation calculations using the three schemes. The results showed that the maximum difference was 68.1 W/m2 for the shortwave downward radiative flux (for the case of perpendicular solar insolation), and 4.2 K/d for the longwave heating rate, indicating that the ice crystal density exerts a significant effect on radiation calculations for a cloudy atmosphere.
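As an illustration of the kind of fitting formula involved, the sketch below fits a mass extinction coefficient as linear in the inverse effective radius, a common functional form in ice-cloud optics parameterizations; the coefficients and the synthetic "dataset" are hypothetical, not the paper's.

```python
import numpy as np

# Hedged sketch: fit the mass extinction coefficient at one band as a
# polynomial in 1/r_e (here first order), then evaluate the fit for any
# effective radius, as a band-averaged lookup would.
r_e = np.linspace(10.0, 90.0, 9)        # effective radius, microns
beta = 0.02 + 1.5 / r_e                 # synthetic "dataset" values, m^2 g^-1
coeffs = np.polyfit(1.0 / r_e, beta, 1) # linear fit in 1/r_e

def mass_ext(r_e_um):
    """Band-averaged mass extinction coefficient from the fitted formula."""
    return np.polyval(coeffs, 1.0 / r_e_um)
```

Smaller crystals extinguish more per unit mass, which is why such fits are naturally expressed in 1/r_e.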
Phenol removal pretreatment process
Hames, Bonnie R.
2004-04-13
A process for removing phenols from an aqueous solution is provided, which comprises the steps of contacting a mixture comprising the solution and a metal oxide, forming a phenol metal oxide complex, and removing the complex from the mixture.
Krawiec, Donald F.; Kraf, Robert J.; Houser, Robert J.
1988-01-01
An apparatus for removing debris from a turbomachine. The apparatus includes a housing and remotely operable viewing and grappling mechanisms for locating and removing debris lodged between adjacent blades in a turbomachine.
Exploring Mechanisms of Biofilm Removal
Sahni, Karan; Khashai, Fatemeh; Forghany, Ali; Krasieva, Tatiana; Wilder-Smith, Petra
2016-01-01
Biofilm exposed to air/water spray alone showed some disruption of the biofilm, leaving residual patches of biofilm that varied considerably in size. Test agent dip treatment followed by air/water spray broke up the continuous layer of biofilm, leaving only very small, thin scattered islands of biofilm. Finally, the dynamic test agent spray followed by air/water spray removed the biofilm almost entirely, with evidence of only very few small, thin residual biofilm islands. Conclusion: These studies demonstrate that the desiccant effect of the test agent alone causes some disruption of dental biofilm. Additional dynamic rinsing is needed to achieve complete removal of dental biofilm. PMID:27413588
Electromagnetic scattering theory
NASA Technical Reports Server (NTRS)
Bird, J. F.; Farrell, R. A.
1986-01-01
Electromagnetic scattering theory is discussed with emphasis on the general stochastic variational principle (SVP) and its applications. The stochastic version of the Schwinger-type variational principle is presented, and explicit expressions for its integrals are considered. Results are summarized for scalar wave scattering from a classic rough-surface model and for vector wave scattering from a random dielectric-body model. Also considered are the selection of trial functions and the variational improvement of the Kirchhoff short-wave approximation appropriate to large size-parameters. Other applications of vector field theory discussed include a general vision theory and the analysis of hydromagnetism induced by ocean motion across the geomagnetic field. Levitational force-torque in the magnetic suspension of the disturbance compensation system (DISCOS), now deployed in NOVA satellites, is also analyzed using the developed theory.
ZALIZNYAK,I.A.; LEE,S.H.
2004-07-30
Much of our understanding of the atomic-scale magnetic structure and the dynamical properties of solids and liquids was gained from neutron-scattering studies. Elastic and inelastic neutron spectroscopy provided physicists with an unprecedented, detailed access to spin structures, magnetic-excitation spectra, soft modes and critical dynamics at magnetic-phase transitions, which is unrivaled by other experimental techniques. Because the neutron has no electric charge, it is an ideal weakly interacting and highly penetrating probe of matter's inner structure and dynamics. Unlike techniques using photon electric fields or charged particles (e.g., electrons, muons) that significantly modify the local electronic environment, neutron spectroscopy allows determination of a material's intrinsic, unperturbed physical properties. The method is not sensitive to extraneous charges, electric fields, or the imperfection of surface layers. Because the neutron is a highly penetrating and non-destructive probe, neutron spectroscopy can probe the microscopic properties of bulk materials (not just their surface layers) and study samples embedded in complex environments, such as cryostats, magnets, and pressure cells, which are essential for understanding the physical origins of magnetic phenomena. Neutron scattering is arguably the most powerful and versatile experimental tool for studying the microscopic properties of magnetic materials. The magnitude of the cross-section of neutron magnetic scattering is similar to that of nuclear scattering by short-range nuclear forces, and is large enough to provide measurable scattering by ordered magnetic structures and electron spin fluctuations. In the half-century or so that has passed since neutron beams with sufficient intensity for scattering applications became available with the advent of nuclear reactors, they have become indispensable tools for studying a variety of important areas of modern science.
Interstellar Dust Scattering Properties
NASA Astrophysics Data System (ADS)
Gordon, K. D.
2004-05-01
Studies of dust scattering properties in astrophysical objects with Milky Way interstellar dust are reviewed. Such objects are reflection nebulae, dark clouds, and the Diffuse Galactic Light (DGL). To ensure their basic quality, studies had to satisfy four basic criteria to be included in this review. These four criteria significantly reduced the scatter in dust properties measurements, especially in the case of the DGL. Determinations of dust scattering properties were found to be internally consistent for each object type as well as consistent between object types. The 2175 Å bump is seen as an absorption feature. Comparisons with dust grain models find general agreement with significant disagreements at particular wavelengths (especially in the far-ultraviolet). Finally, unanswered questions and future directions are enumerated.
NASA Astrophysics Data System (ADS)
Bahadur, Birendra
The following sections are included: * INTRODUCTION * CELL DESIGNING * EXPERIMENTAL OBSERVATIONS IN NEMATICS RELATED WITH DYNAMIC SCATTERING * Experimental Observations at D.C. Field and Electrode Effects * Experimental Observation at Low Frequency A.C. Fields * Homogeneously Aligned Nematic Regime * Williams Domains * Dynamic Scattering * Experimental Observation at High Frequency A.C. Field * Other Experimental Observations * THEORETICAL INTERPRETATIONS * Felici Model * Carr-Helfrich Model * D.C. Excitation * Dubois-Violette, de Gennes and Parodi Model * Low Frequency or Conductive Regime * High Frequency or Dielectric Regime * DYNAMIC SCATTERING IN SMECTIC A PHASE * ELECTRO-OPTICAL CHARACTERISTICS AND LIMITATIONS * Contrast Ratio vs. Voltage, Viewing Angle, Cell Gap, Wavelength and Temperature * Display Current vs. Voltage, Cell Gap and Temperature * Switching Time * Effect of Alignment * Effect of Conductivity, Temperature and Frequency * Addressing of DSM LCDs * Limitations of DSM LCDs * ACKNOWLEDGEMENTS * REFERENCES
Quaglioni, S; Navratil, P; Roth, R
2009-12-15
The exact treatment of nuclei starting from the constituent nucleons and the fundamental interactions among them has been a long-standing goal in nuclear physics. Above all, nuclear scattering and reactions, which require the solution of the many-body quantum-mechanical problem in the continuum, represent an extraordinary theoretical as well as computational challenge for ab initio approaches. We present a new ab initio many-body approach which derives from the combination of the ab initio no-core shell model with the resonating-group method [4]. By complementing a microscopic cluster technique with the use of realistic interactions, and a microscopic and consistent description of the nucleon clusters, this approach is capable of describing simultaneously both bound and scattering states in light nuclei. We will discuss applications to neutron and proton scattering on s- and light p-shell nuclei using realistic nucleon-nucleon potentials, and outline the progress toward the treatment of more complex reactions.
Impact of model structure and parameterization on Penman-Monteith type evaporation models
NASA Astrophysics Data System (ADS)
Ershadi, A.; McCabe, M. F.; Evans, J. P.; Wood, E. F.
2015-06-01
The impact of model structure and parameterization on the estimation of evaporation is investigated across a range of Penman-Monteith type models. To examine the role of model structure on flux retrievals, three different retrieval schemes are compared. The schemes include a traditional single-source Penman-Monteith model (Monteith, 1965), a two-layer model based on Shuttleworth and Wallace (1985) and a three-source model based on Mu et al. (2011). To assess the impact of parameterization choice on model performance, a number of commonly used formulations for aerodynamic and surface resistances were substituted into the different model structures. Model response to these changes was evaluated against data from twenty globally distributed FLUXNET towers, representing a cross-section of biomes that include grassland, cropland, shrubland, evergreen needleleaf forest and deciduous broadleaf forest. Scenarios based on 14 different combinations of model structure and parameterization were ranked based on their mean value of Nash-Sutcliffe Efficiency. Results illustrated considerable variability in model performance both within and between biome types. Indeed, no single model consistently outperformed any other when considered across all biomes. For instance, in grassland and shrubland sites, the single-source Penman-Monteith model performed the best. In croplands it was the three-source Mu model, while for evergreen needleleaf and deciduous broadleaf forests, the Shuttleworth-Wallace model rated highest. Interestingly, these top ranked scenarios all shared the simple lookup-table based surface resistance parameterization of Mu et al. (2011), while a more complex Jarvis multiplicative method for surface resistance produced lower ranked simulations. The highly ranked scenarios mostly employed a version of the Thom (1975) formulation for aerodynamic resistance that incorporated dynamic values of roughness parameters. This was true for all cases except over deciduous broadleaf forest sites.
NASA Astrophysics Data System (ADS)
Li, F.; Zeng, X. D.; Levis, S.
2012-07-01
A process-based fire parameterization of intermediate complexity has been developed for global simulations in the framework of a Dynamic Global Vegetation Model (DGVM) in an Earth System Model (ESM). Burned area in a grid cell is estimated by the product of fire counts and the average burned area of a fire. The scheme comprises three parts: fire occurrence, fire spread, and fire impact. In the fire occurrence part, fire counts rather than fire occurrence probability are calculated in order to capture the observed high burned area fraction in areas of high fire frequency and to allow parameter calibration based on the MODIS fire counts product. In the fire spread part, the post-fire region of a fire is assumed to be elliptical in shape. Mathematical properties of ellipses and some mathematical derivations are applied to improve the equations and assumptions of an existing fire spread parameterization. In the fire impact part, trace gas and aerosol emissions due to biomass burning are estimated, which offers an interface with atmospheric chemistry and aerosol models in ESMs. In addition, a flexible time-step length makes the new fire parameterization easily applicable to various DGVMs. Global performance of the new fire parameterization is assessed by using an improved version of the Community Land Model version 3 with the Dynamic Global Vegetation Model (CLM-DGVM). Simulations are compared against the latest satellite-based Global Fire Emission Database version 3 (GFED3) for 1997-2004. Results show that simulated global totals and spatial patterns of burned area and fire carbon emissions, regional totals and spreads of burned area, global annual burned area fractions for various vegetation types, and interannual variability of burned area are reasonable, and closer to GFED3 than CLM-DGVM simulations with the commonly used Glob-FIRM fire parameterization and the old fire module of CLM-DGVM. Furthermore, the average error of simulated trace gas and aerosol emissions due to biomass burning
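The burned-area bookkeeping described above (burned area = fire counts × average elliptical burn area) can be sketched as follows; the spread rate, duration, and length-to-breadth ratio below are hypothetical simplifications of the scheme's full wind-dependent equations.

```python
import math

def avg_burned_area(spread_rate, duration, length_to_breadth):
    """Area of an elliptical burn: downwind axis L = spread_rate * duration,
    breadth W = L / length_to_breadth, area = pi * L * W / 4.
    (Hypothetical simplification; the full scheme also includes an upwind
    spread component and a wind-dependent length-to-breadth ratio.)"""
    length = spread_rate * duration
    breadth = length / length_to_breadth
    return math.pi * length * breadth / 4.0

def burned_fraction(fire_counts, spread_rate, duration, lb, cell_area):
    """Grid-cell burned fraction = fire counts * per-fire area / cell area."""
    return fire_counts * avg_burned_area(spread_rate, duration, lb) / cell_area
```

For example, 10 fires spreading at 0.5 m/s for an hour with a length-to-breadth ratio of 2 burn a small fraction of a 1000 km^2 grid cell.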
Berg, L. K.; Shrivastava, M.; Easter, R. C.; Fast, J. D.; Chapman, E. G.; Liu, Y.; Ferrare, R. A.
2015-02-24
A new treatment of cloud effects on aerosol and trace gases within parameterized shallow and deep convection, and aerosol effects on cloud droplet number, has been implemented in the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) version 3.2.1 that can be used to better understand the aerosol life cycle over regional to synoptic scales. The modifications to the model include treatment of the cloud droplet number mixing ratio; key cloud microphysical and macrophysical parameters (including the updraft fractional area, updraft and downdraft mass fluxes, and entrainment) averaged over the population of shallow clouds, or a single deep convective cloud; and vertical transport, activation/resuspension, aqueous chemistry, and wet removal of aerosol and trace gases in warm clouds. These changes have been implemented in both the WRF-Chem chemistry packages as well as the Kain–Fritsch (KF) cumulus parameterization that has been modified to better represent shallow convective clouds. Testing of the modified WRF-Chem has been completed using observations from the Cumulus Humilis Aerosol Processing Study (CHAPS). The simulation results are used to investigate the impact of cloud–aerosol interactions on regional-scale transport of black carbon (BC), organic aerosol (OA), and sulfate aerosol. Based on the simulations presented here, changes in the column-integrated BC can be as large as –50% when cloud–aerosol interactions are considered (due largely to wet removal), or as large as +40% for sulfate under non-precipitating conditions due to sulfate production in the parameterized clouds. The modifications to WRF-Chem are found to account for changes in the cloud droplet number concentration (CDNC) and changes in the chemical composition of cloud droplet residuals in a way that is consistent with observations collected during CHAPS. Efforts are currently underway to port the changes described here to the latest version of WRF-Chem, and it
Spatially resolved scattering polarimeter.
Kohlgraf-Owens, Thomas; Dogariu, Aristide
2009-05-01
We demonstrate a compact, spatially resolved polarimeter based on a coherent optical fiber bundle coupled with a thin layer of scattering centers. The use of scattering for polarization encoding allows the polarimeter to work across broad angular and spectral domains. Optical fiber bundles provide high spatial resolution of the incident field. Because neighboring elements of the bundle interact with the incident field differently, only a single interaction of the fiber bundle with the unknown field is needed to perform the measurement. Experimental results are shown to demonstrate the capability to perform imaging polarimetry. PMID:19412259
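A common way to realize such single-snapshot polarimetry is to invert a calibrated instrument matrix that maps the incident Stokes vector to the intensities seen by the differently responding elements. The matrix values below are hypothetical, chosen only so the system is well conditioned; they do not describe the actual fiber-bundle calibration.

```python
import numpy as np

# Each element i reports I_i = A[i] . S for the incident Stokes vector
# S = (S0, S1, S2, S3); with a calibrated instrument matrix A, S is
# recovered by least squares from a single snapshot of intensities.
A = 0.5 * np.array([[1.0,  1.0,  0.0,  0.0],
                    [1.0, -1.0,  0.0,  0.0],
                    [1.0,  0.0,  1.0,  0.0],
                    [1.0,  0.0,  0.0,  1.0],
                    [1.0,  0.0, -1.0,  0.0],
                    [1.0,  0.0,  0.0, -1.0]])

S_true = np.array([1.0, 0.3, -0.2, 0.1])   # partially polarized input
intensities = A @ S_true                    # one measurement per element
S_est, *_ = np.linalg.lstsq(A, intensities, rcond=None)
```

Because A has full column rank, a single intensity snapshot suffices to determine all four Stokes parameters, mirroring the single-interaction measurement described in the abstract.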
FINAL REPORT (DE-FG02-97ER62338): Single-column modeling, GCM parameterizations, and ARM data
Richard C. J. Somerville
2009-02-27
Our overall goal is the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have compared SCM (single-column model) output with ARM observations at the SGP, NSA and TWP sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art three-dimensional atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.
NASA Technical Reports Server (NTRS)
Stauffer, David R.; Seaman, Nelson L.; Munoz, Ricardo C.
2000-01-01
The objective of this investigation was to study the role of shallow convection on the regional water cycle of the Mississippi and Little Washita Basins using a 3-D mesoscale model, the PSU/NCAR MM5. The underlying premise of the project was that current modeling of regional-scale climate and moisture cycles over the continents is deficient without adequate treatment of shallow convection. It was hypothesized that an improved treatment of the regional water cycle can be achieved by using a 3-D mesoscale numerical model having a detailed land-surface parameterization, an advanced boundary-layer parameterization, and a more complete shallow convection parameterization than are available in most current models. The methodology was based on the application in the MM5 of new or recently improved parameterizations covering these three physical processes. Therefore, the work plan focused on integrating, improving, and testing these parameterizations in the MM5 and applying them to study water-cycle processes over the Southern Great Plains (SGP): (1) the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE) described by Wetzel and Boone; (2) the 1.5-order turbulent kinetic energy (TKE)-predicting scheme of Shafran et al.; and (3) the hybrid-closure sub-grid shallow convection parameterization of Deng. Each of these schemes has been tested extensively through this study and the latter two have been improved significantly to extend their capabilities.
NASA Astrophysics Data System (ADS)
Hayes, W. W.
In the initial portion of this dissertation, studies of Ar scattering from Ru(0001) at thermal and hyperthermal energies are compared to calculations with classical scattering theory. These data exhibited a number of characteristics that are unusual in comparison to other systems for which atomic beam experiments have been carried out under similar conditions. The measured energy losses were unusually small. Some of the angular distributions exhibited an anomalous shoulder feature in addition to a broad peak near the specular direction, and quantum mechanical diffraction was observed under conditions for which it was not expected. Many of the unusual features observed in the measurements are explained, but only upon using an effective surface mass of 2.3 Ru atomic masses, which implies collective effects in the Ru crystal. The large effective mass, because it leads to substantially larger Debye-Waller factors, explains and confirms the observations of diffraction features. It also leads to the interesting conclusion that Ru is a metal for which atomic beam scattering measurements in the purely quantum mechanical regime, where diffraction and single-phonon creation are dominant, should be possible not only with He atoms, but with many other atomic species with larger masses. A useful theoretical expression for interpreting and analyzing observed scattering intensity spectra for atomic and molecular collisions with surfaces is the differential reflection coefficient for a smooth, vibrating surface. This differential reflection coefficient depends on a parameter, usually expressed in dimensions of velocity, that arises due to correlated motions of neighboring regions of the surface and can be evaluated if the polarization vectors of the phonons near the surface are known. As a part of this dissertation, experimental conditions are suggested under which this velocity parameter may be more precisely measured than it has been in the past. Experimental data for scattering
Graphitic packing removal tool
Meyers, K.E.; Kolsun, G.J.
1996-12-31
Graphitic packing removal tools are described for removal of the seal rings in one piece from valves and pumps. The packing removal tool has a cylindrical base ring the same size as the packing ring with a surface finish, perforations, knurling or threads for adhesion to the seal ring. Elongated leg shanks are mounted axially along the circumferential center. A slit or slits permit insertion around shafts. A removal tool follower stabilizes the upper portion of the legs to allow a spanner wrench to be used for insertion and removal.
Graphitic packing removal tool
Meyers, K.E.; Kolsun, G.J.
1997-11-11
Graphitic packing removal tools for removal of the seal rings in one piece are disclosed. The packing removal tool has a cylindrical base ring the same size as the packing ring with a surface finish, perforations, knurling or threads for adhesion to the seal ring. Elongated leg shanks are mounted axially along the circumferential center. A slit or slits permit insertion around shafts. A removal tool follower stabilizes the upper portion of the legs to allow a spanner wrench to be used for insertion and removal. 5 figs.
Graphitic packing removal tool
Meyers, Kurt Edward; Kolsun, George J.
1997-01-01
Graphitic packing removal tools for removal of the seal rings in one piece. The packing removal tool has a cylindrical base ring the same size as the packing ring with a surface finish, perforations, knurling or threads for adhesion to the seal ring. Elongated leg shanks are mounted axially along the circumferential center. A slit or slits permit insertion around shafts. A removal tool follower stabilizes the upper portion of the legs to allow a spanner wrench to be used for insertion and removal.
Liu, Ping; Li, Guodong; Liu, Xinggao
2015-09-01
Control vector parameterization (CVP) is an important approach to engineering optimization for industrial dynamic processes. However, its major defect, the low optimization efficiency caused by repeatedly solving the relevant differential equations in the generated nonlinear programming (NLP) problem, limits its wide application in engineering optimization for industrial dynamic processes. A novel, highly effective control parameterization approach, fast-CVP, is first proposed to improve optimization efficiency for industrial dynamic processes; the costate gradient formula is employed and a fast approximate scheme is presented to solve the differential equations in dynamic process simulation. Three well-known engineering optimization benchmark problems for industrial dynamic processes are demonstrated as illustrations. The research results show that the proposed fast approach performs well: at least 90% of the computation time can be saved in contrast to the traditional CVP method, which reveals the effectiveness of the proposed fast engineering optimization approach for industrial dynamic processes.
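A minimal sketch of classical CVP (the baseline that fast-CVP improves on): the control is parameterized as piecewise-constant segments, the state equation is integrated numerically inside the cost function, and the resulting NLP is solved by an optimizer. The toy linear-quadratic problem and the crude finite-difference descent below are illustrative assumptions, not the paper's algorithm or benchmarks.

```python
import numpy as np

# Toy problem: minimize J = integral_0^1 (x^2 + u^2) dt
# subject to dx/dt = u, x(0) = 1, with u piecewise-constant on N segments.
# The repeated ODE solves inside cost() are exactly the expense that
# fast-CVP's approximate simulation scheme targets.
N, steps = 5, 200
dt = 1.0 / steps

def cost(u_params):
    x, J = 1.0, 0.0
    for k in range(steps):
        u = u_params[k * N // steps]   # control segment active at step k
        J += (x * x + u * u) * dt      # accumulate the running cost
        x += u * dt                    # forward Euler state update
    return J

u = np.zeros(N)
for _ in range(500):                   # crude finite-difference descent
    c0 = cost(u)
    grad = np.array([(cost(u + 1e-6 * e) - c0) / 1e-6 for e in np.eye(N)])
    u = u - 0.5 * grad
```

The optimal control drives the state toward zero, so all segment values end up negative and the cost drops well below the do-nothing value of 1.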
Veron, Dana E
2009-03-12
This project had two primary goals: 1) development of stochastic radiative transfer as a parameterization that could be employed in an AGCM environment, and 2) exploration of the stochastic approach as a means for representing shortwave radiative transfer through mixed-phase layer clouds. To achieve these goals, an analysis of the performance of the stochastic approach was performed, a simple stochastic cloud-radiation parameterization for an AGCM was developed and tested, a statistical description of Arctic mixed phase clouds was developed and the appropriateness of stochastic approach for representing radiative transfer through mixed-phase clouds was assessed. Significant progress has been made in all of these areas and is detailed below.
Dana E. Veron
2012-04-09
This project had two primary goals: (1) development of stochastic radiative transfer as a parameterization that could be employed in an AGCM environment, and (2) exploration of the stochastic approach as a means for representing shortwave radiative transfer through mixed-phase layer clouds. To achieve these goals, climatology of cloud properties was developed at the ARM CART sites, an analysis of the performance of the stochastic approach was performed, a simple stochastic cloud-radiation parameterization for an AGCM was developed and tested, a statistical description of Arctic mixed phase clouds was developed and the appropriateness of stochastic approach for representing radiative transfer through mixed-phase clouds was assessed. Significant progress has been made in all of these areas and is detailed in the final report.
Application of a semi-spectral cloud water parameterization to cooling tower plumes simulations
NASA Astrophysics Data System (ADS)
Bouzereau, Emmanuel; Musson Genon, Luc; Carissimo, Bertrand
2008-10-01
In order to simulate the plumes produced by large natural draft cooling towers, a semi-spectral warm cloud parameterization has been implemented in an anelastic, non-hydrostatic 3D micro-scale meteorological code. The model results are compared to observations from a detailed field experiment carried out in 1980 at Bugey (the site of a nuclear power plant in the Rhône valley in east-central France), including airborne dynamical and microphysical measurements. Although we observe a slight overestimation of the liquid-water content, the results are satisfactory for all 15 cases simulated, which cover meteorological conditions ranging from low wind speed and convective conditions in clear sky to high wind and very cloudy conditions. Such a parameterization, which includes a semi-spectral determination of the droplet spectra, seems promising for describing plume interaction with the atmosphere, especially for aerosols and cloud droplets.
The interpretation of remotely sensed cloud properties from a model parameterization perspective
1995-09-01
The goals of ISCCP and FIRE are, broadly speaking, to provide methods for the retrieval of cloud properties from satellites, and to improve cloud radiation models and the parameterization of clouds in GCMs. This study suggests a direction for GCM cloud parameterizations based on analysis of Landsat and ISCCP satellite data. For low-level single-layer clouds it is found that the mean retrieved liquid water path in cloudy pixels is essentially invariant to the cloud fraction, at least in the range 0.2-0.8. This result is very important since it allows the cloud fraction to be estimated if the mean liquid water path of cloud in a general circulation model gridcell is known. 3 figs.
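The diagnostic suggested by this invariance can be sketched in one line: if the in-cloud liquid water path is roughly constant, cloud fraction follows from the grid-mean value. The numbers below are illustrative only, not values from the study.

```python
# Hedged sketch: cloud fraction from grid-mean and in-cloud liquid water
# path (LWP), assuming the in-cloud LWP is roughly invariant to cloud
# fraction, as the Landsat/ISCCP analysis suggests.
def cloud_fraction(grid_mean_lwp, in_cloud_lwp):
    """Cloud fraction = grid-mean LWP / in-cloud LWP, clipped to [0, 1]."""
    return min(max(grid_mean_lwp / in_cloud_lwp, 0.0), 1.0)

# e.g. a grid-mean LWP of 30 g m^-2 with an in-cloud LWP of 60 g m^-2
frac = cloud_fraction(30.0, 60.0)
```

The clipping simply guards against grid-mean values exceeding the assumed in-cloud value.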
The meridional variation of the eddy heat fluxes by baroclinic waves and their parameterization
NASA Technical Reports Server (NTRS)
Stone, P. H.
1974-01-01
The meridional and vertical eddy fluxes of sensible heat produced by small-amplitude growing baroclinic waves are calculated using solutions to the two-level model with horizontal shear in the mean flow. The results show that the fluxes are primarily dependent on the local baroclinicity, i.e., the local value of the isentropic slopes in the mean state. Where the slope exceeds the critical value, the transports are poleward and upward; where the slope is less than the critical value, the transports are equatorward and downward. These results are used to improve an earlier parameterization of the tropospheric eddy fluxes of sensible heat based on Eady's model. Comparisons with observations show that the improved parameterization reproduces the observed magnitude and sign of the eddy fluxes and their vertical variations and seasonal changes, but the maximum in the poleward flux is too near the equator.
NASA Technical Reports Server (NTRS)
Fritsch, J. Michael; Kain, John S.
1997-01-01
Research efforts during the second year have centered on improving the manner in which convective stabilization is achieved in the Penn State/NCAR mesoscale model MM5. Ways of improving this stabilization have been investigated by (1) refining the partitioning between the Kain-Fritsch convective parameterization scheme and the grid scale by introducing a form of moist convective adjustment; (2) using radar data to define locations of subgrid-scale convection during a dynamic initialization period; and (3) parameterizing deep-convective feedbacks as subgrid-scale sources and sinks of mass. These investigations were conducted by simulating a long-lived convectively-generated mesoscale vortex that occurred during 14-18 Jul. 1982 and the 10-11 Jun. 1985 squall line that occurred over the Kansas-Oklahoma region during the PRE-STORM experiment. The long-lived vortex tracked across the central Plains states and was responsible for multiple convective outbreaks during its lifetime.
Application of a planetary wave breaking parameterization to stratospheric circulation statistics
NASA Technical Reports Server (NTRS)
Randel, William J.; Garcia, Rolando R.
1994-01-01
The planetary wave parameterization scheme developed recently by Garcia is applied to stratospheric circulation statistics derived from 12 years of National Meteorological Center operational stratospheric analyses. From the data a planetary wave breaking criterion (based on the ratio of the eddy to zonal mean meridional potential vorticity (PV) gradients), a wave damping rate, and a meridional diffusion coefficient are calculated. The equatorward flank of the polar night jet during winter is identified as a wave breaking region from the observed PV gradients; the region moves poleward with season, covering all high latitudes in spring. Derived damping rates maximize in the subtropical upper stratosphere (the 'surf zone'), with damping time scales of 3-4 days. Maximum diffusion coefficients follow the spatial patterns of the wave breaking criterion, with magnitudes comparable to prior published estimates. Overall, the observed results agree well with the parameterized calculations of Garcia.
A cumulus parameterization scheme designed for nested grid meso-β scale models
Weissbluth, M.J.; Cotton, W.R.
1991-12-31
A generalized cumulus parameterization based upon higher order turbulence closure has been incorporated into one dimensional simulations. The scheme consists of a level 2.5w turbulence closure scheme mated with a convective adjustment scheme. The convective adjustment scheme includes a gradient term which can be interpreted as either a subsidence term when the scheme is used in large scale models or a mesoscale compensation term when the scheme is used in mesoscale models. The scheme also includes a convective adjustment term which is interpreted as a detrainment term in large scale models. In mesoscale models, the mesoscale compensation term and the advection by the mean vertical motions combine to yield no net advection which is desirable since the convective moistening and heating is now wholly accomplished by the convective adjustment term; double counting is then explicitly eliminated. One dimensional simulations indicate satisfactory performance of the cumulus parameterization scheme for a non-entraining updraft.
Mesoscale model parameterizations for radiation and turbulent fluxes at the lower boundary
NASA Astrophysics Data System (ADS)
Somieski, Franz
1988-11-01
A radiation parameterization scheme for use in mesoscale models with orography and clouds was developed. Broadband parameterizations are presented for the solar and the terrestrial spectral ranges. They account for clear, turbid, or cloudy atmospheres. The scheme is one-dimensional in the atmosphere, but the effects of mountains (inclination, shading, elevated horizon) are taken into account at the surface. In the terrestrial band, gray and black clouds are considered. The calculation of turbulent fluxes of sensible and latent heat and momentum at an inclined lower model boundary is described. Surface-layer similarity and the surface energy budget are used to evaluate the ground surface temperature. The total scheme is part of the mesoscale model MESOSCOP.
NASA Technical Reports Server (NTRS)
Boers, R.; Eloranta, E. W.; Coulter, R. L.
1984-01-01
Ground based lidar measurements of the atmospheric mixed layer depth, the entrainment zone depth and the wind speed and wind direction were used to test various parameterized entrainment models of mixed layer growth rate. Six case studies under clear air convective conditions over flat terrain in central Illinois are presented. It is shown that surface heating alone accounts for a major portion of the rise of the mixed layer on all days. A new set of entrainment model constants was determined which optimized height predictions for the dataset. Under convective conditions, the shape of the mixed layer height prediction curves closely resembled the observed shapes. Under conditions when significant wind shear was present, the shape of the height prediction curve departed from the data suggesting deficiencies in the parameterization of shear production. Development of small cumulus clouds on top of the layer is shown to affect mixed layer depths in the afternoon growth phase.
Study of NWP parameterizations on extreme precipitation events over Basque Country
NASA Astrophysics Data System (ADS)
Gelpi, Iván R.; Gaztelumendi, Santiago; Carreño, Sheila; Hernández, Roberto; Egaña, Joseba
2016-08-01
The Weather Research and Forecasting (WRF) model, like other numerical models, can make use of several parameterization schemes. The purpose of this study is to determine how the cumulus parameterization (CP) and microphysics (MP) schemes available in the WRF model simulate extreme precipitation events in the Basque Country. Possible combinations of two CP schemes (Kain-Fritsch and Betts-Miller-Janjic) and five MP schemes (WSM3, Lin, WSM6, new Thompson, and WDM6) were tested. A set of simulations corresponding to 21st-century extreme precipitation events that caused significant flood episodes was compared with point observational data from the Basque Country Automatic Weather Station Mesonetwork. Configurations with the Kain-Fritsch CP scheme produce better quantitative precipitation forecasts (QPF) than the Betts-Miller-Janjic configurations. Depending on the severity level and the river basin analysed, different MP schemes perform best, demonstrating that no single configuration resolves all the studied events equally well.
Fluorescence and Light Scattering
ERIC Educational Resources Information Center
Clarke, Ronald J.; Oprysa, Anna
2004-01-01
The described experiment aims to help students develop tactics for distinguishing between signals originating from fluorescence and from light scattering. It also gives students a deeper understanding of the physicochemical basis of each phenomenon and shows that the two techniques are in fact related.
Nanowire electron scattering spectroscopy
NASA Technical Reports Server (NTRS)
Hunt, Brian D. (Inventor); Bronikowski, Michael (Inventor); Wong, Eric W. (Inventor); von Allmen, Paul (Inventor); Oyafuso, Fabiano A. (Inventor)
2009-01-01
Methods and devices for spectroscopic identification of molecules using nanoscale wires are disclosed. According to one of the methods, nanoscale wires are provided, electrons are injected into the nanoscale wire; and inelastic electron scattering is measured via excitation of low-lying vibrational energy levels of molecules bound to the nanoscale wire.
Critical fluid light scattering
NASA Technical Reports Server (NTRS)
Gammon, Robert W.
1988-01-01
The objective is to measure the decay rates of critical density fluctuations in a simple fluid (xenon) very near its liquid-vapor critical point using laser light scattering and photon correlation spectroscopy. Such experiments have been severely limited on Earth by gravity, which causes large density gradients in the sample as the compressibility diverges approaching the critical point. The goal is to measure fluctuation decay rates at least two decades closer to the critical point than is possible on Earth, with a resolution of 3 microK. This will require loading the sample to 0.1 percent of the critical density and taking data as close as 100 microK to the critical temperature. The minimum mission time of 100 hours will allow a complete range of temperature points to be covered, limited by the thermal response of the sample. Other technical problems must also be addressed, such as multiple scattering and the effect of wetting layers. The experiment entails measuring the scattering intensity fluctuation decay rate at two angles for each temperature while simultaneously recording the scattering intensities and sample turbidity (from the transmission). The analyzed intensity and turbidity data give the correlation length at each temperature and locate the critical temperature. The fluctuation decay rate data from these measurements will provide a severe test of the generalized hydrodynamic theories of transport coefficients in the critical region. When compared with equivalent data from binary liquid critical mixtures, they will test the universality of critical dynamics.
Inelastic Scattering Form Factors
1992-01-01
ATHENA-IV computes form factors for inelastic scattering calculations, using single-particle wave functions that are eigenstates of motion in either a Woods-Saxon potential well or a harmonic oscillator well. Two-body forces of Gauss, Coulomb, Yukawa, and a sum of cut-off Yukawa radial dependences are available.
Elastic scattering using an artificial confining potential.
Mitroy, J; Zhang, J Y; Varga, K
2008-09-19
The discrete energies of a scattering Hamiltonian calculated under the influence of an artificial confining potential of almost arbitrary functional form can be used to determine its phase shifts. The method exploits the result that two short-range Hamiltonians having the same energy will have the same phase shifts upon removal of the confining potential. An initial verification is performed on a simple model problem. Then the stochastic variational method is used to determine the energies of the confined e⁻-He(²Sᵉ) system and thus determine the low-energy phase shifts.
Parameterizing the Transport Pathways for Cell Invasion in Complex Scaffold Architectures
Ashworth, Jennifer C.; Mehr, Marco; Buxton, Paul G.; Best, Serena M.
2016-01-01
Interconnecting pathways through porous tissue engineering scaffolds play a vital role in determining nutrient supply, cell invasion, and tissue ingrowth. However, the global use of the term “interconnectivity” often fails to describe the transport characteristics of these pathways, giving no clear indication of their potential to support tissue synthesis. This article uses new experimental data to provide a critical analysis of reported methods for the description of scaffold transport pathways, ranging from qualitative image analysis to thorough structural parameterization using X-ray Micro-Computed Tomography. In the collagen scaffolds tested in this study, it was found that the proportion of pore space perceived to be accessible dramatically changed depending on the chosen method of analysis. Measurements of % interconnectivity as defined in this manner varied as a function of direction and connection size, and also showed a dependence on measurement length scale. As an alternative, a method for transport pathway parameterization was investigated, using percolation theory to calculate the diameter of the largest sphere that can travel to infinite distance through a scaffold in a specified direction. As proof of principle, this approach was used to investigate the invasion behavior of primary fibroblasts in response to independent changes in pore wall alignment and pore space accessibility, parameterized using the percolation diameter. The result was that both properties played a distinct role in determining fibroblast invasion efficiency. This example therefore demonstrates the potential of the percolation diameter as a method of transport pathway parameterization, to provide key structural criteria for application-based scaffold design. PMID:26888449
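The percolation-diameter idea described above can be sketched on a simplified 2D grid: compute each pore cell's clearance to the nearest solid cell, then find the largest clearance threshold at which a connected path of pore cells still spans the sample in the invasion direction. This is an illustrative stand-in for the 3D Micro-CT analysis in the article; the grid geometry, 4-connectivity, and all function names are invented for the example:

```python
from collections import deque

def clearance_map(grid):
    """Multi-source BFS distance (4-connectivity) from each open cell (0)
    to the nearest solid cell (1); a crude proxy for the radius of the
    largest sphere that fits at that cell."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    q = deque()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:  # solid
                dist[r][c] = 0
                q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

def percolates(grid, dist, min_clearance):
    """True if a path of open cells with clearance >= min_clearance
    connects the top row to the bottom row (the invasion direction)."""
    rows, cols = len(grid), len(grid[0])
    ok = lambda r, c: grid[r][c] == 0 and dist[r][c] >= min_clearance
    q = deque((0, c) for c in range(cols) if ok(0, c))
    seen = set(q)
    while q:
        r, c = q.popleft()
        if r == rows - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in seen and ok(nr, nc)):
                seen.add((nr, nc))
                q.append((nr, nc))
    return False

def percolation_diameter(grid):
    """Largest clearance threshold (in grid cells) at which the pore
    space still percolates through the sample."""
    dist = clearance_map(grid)
    d = 0
    while percolates(grid, dist, d + 1):
        d += 1
    return d
```

In a real scaffold analysis the same logic would run on a segmented 3D tomography volume with a Euclidean distance transform; this sketch only conveys why the percolation diameter is direction-dependent and scale-aware, unlike a scalar "% interconnectivity".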
NASA Technical Reports Server (NTRS)
Entekhabi, D.; Eagleson, P. S.
1989-01-01
Parameterizations are developed for the representation of subgrid hydrologic processes in atmospheric general circulation models. Reasonable a priori probability density functions of the spatial variability of soil moisture and of precipitation are introduced. These are used in conjunction with the deterministic equations describing basic soil moisture physics to derive expressions for the hydrologic processes that include subgrid scale variation in parameters. The major model sensitivities to soil type and to climatic forcing are explored.
Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration
Doherty, John E.; Hunt, Randall J.
2010-01-01
Highly parameterized groundwater models can create calibration difficulties. Regularized inversion-the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation-is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to parameters used to model that system. Though commonly used in other industries, regularized inversion is somewhat imperfectly understood in the groundwater field. There is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite-a frequently used tool for highly parameterized model calibration and one that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with use of pilot points as a parameterization device and processing/grouping observations to form multicomponent objective functions. A description of potential parameter solution methodologies and resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
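The guide above names Tikhonov regularization among PEST's mathematical regularization schemes. As background, here is a minimal pure-Python sketch of the Tikhonov idea itself (the normal-equations form, with a hypothetical preferred-value vector `m0`), not of PEST's implementation:

```python
def tikhonov_solve(G, d, lam, m0):
    """Solve the Tikhonov-regularized least-squares problem
        minimize ||G m - d||^2 + lam^2 * ||m - m0||^2
    via the normal equations (G^T G + lam^2 I) m = G^T d + lam^2 m0.
    Pure-Python Gaussian elimination; adequate for tiny demo problems."""
    rows, n = len(G), len(G[0])
    # Build A = G^T G + lam^2 I and b = G^T d + lam^2 m0.
    A = [[sum(G[k][i] * G[k][j] for k in range(rows))
          + (lam**2 if i == j else 0.0) for j in range(n)] for i in range(n)]
    b = [sum(G[k][i] * d[k] for k in range(rows)) + lam**2 * m0[i]
         for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    m = [0.0] * n
    for i in range(n - 1, -1, -1):
        m[i] = (b[i] - sum(A[i][j] * m[j] for j in range(i + 1, n))) / A[i][i]
    return m
```

With `lam = 0` this reduces to ordinary least squares; for an underdetermined problem such as `G = [[1, 1]]`, the penalty term selects the unique solution closest to `m0`, which is the essence of stabilizing a highly parameterized inversion.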
NASA Astrophysics Data System (ADS)
Laurent, A.; Fennel, K.; Wilson, R.; Lehrter, J.; Devereux, R.
2016-01-01
Diagenetic processes are important drivers of water column biogeochemistry in coastal areas. For example, sediment oxygen consumption can be a significant contributor to oxygen depletion in hypoxic systems, and sediment-water nutrient fluxes support primary productivity in the overlying water column. Moreover, nonlinearities develop between bottom water conditions and sediment-water fluxes due to loss of oxygen-dependent processes in the sediment as oxygen becomes depleted in bottom waters. Yet, sediment-water fluxes of chemical species are often parameterized crudely in coupled physical-biogeochemical models, using simple linear parameterizations that are only poorly constrained by observations. Diagenetic models that represent sediment biogeochemistry are available, but rarely are coupled to water column biogeochemical models because they are computationally expensive. Here, we apply a method that efficiently parameterizes sediment-water fluxes of oxygen, nitrate and ammonium by combining in situ measurements, a diagenetic model and a parameter optimization method. As a proof of concept, we apply this method to the Louisiana Shelf where high primary production, stimulated by excessive nutrient loads from the Mississippi-Atchafalaya River system, promotes the development of hypoxic bottom waters in summer. The parameterized sediment-water fluxes represent nonlinear feedbacks between water column and sediment processes at low bottom water oxygen concentrations, which may persist for long periods (weeks to months) in hypoxic systems such as the Louisiana Shelf. This method can be applied to other systems and is particularly relevant for shallow coastal and estuarine waters where the interaction between sediment and water column is strong and hypoxia is prone to occur due to land-based nutrient loads.
Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds
NASA Astrophysics Data System (ADS)
Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail
2011-01-01
Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling multispecies processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds on linear correlation coefficients are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are populated here using a "cSigma" parameterization that we introduce based on the aforementioned bounds on correlations. The method has three advantages: (1) the computational expense is tolerable; (2) the correlations are, by construction, guaranteed to be consistent with each other; and (3) the methodology is fairly general and hence may be applicable to other problems. The method is tested noninteractively using simulations of three Arctic mixed-phase cloud cases from two field experiments: the Indirect and Semi-Direct Aerosol Campaign and the Mixed-Phase Arctic Cloud Experiment. Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
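The spherical parameterization of Pinheiro and Bates (1996) cited above can be illustrated compactly: each row of the Cholesky factor is written in spherical coordinates so that the resulting matrix automatically has unit diagonal and is positive semidefinite. A sketch of that construction only; the paper's "cSigma" rule for choosing the angle values from correlation bounds is not reproduced here:

```python
import math

def correlation_from_angles(theta):
    """Build an n x n correlation matrix from n(n-1)/2 angles using the
    spherical parameterization of the Cholesky factor (Pinheiro & Bates
    1996): each row of L is a point on the unit sphere, so C = L L^T has
    unit diagonal and is positive semidefinite by construction."""
    # Infer n from len(theta) = n(n-1)/2.
    n = (1 + math.isqrt(1 + 8 * len(theta))) // 2
    L = [[0.0] * n for _ in range(n)]
    it = iter(theta)
    L[0][0] = 1.0
    for i in range(1, n):
        prod = 1.0
        for j in range(i):
            t = next(it)
            L[i][j] = prod * math.cos(t)
            prod *= math.sin(t)
        L[i][i] = prod  # remainder keeps the row on the unit sphere
    # C = L L^T
    return [[sum(L[i][k] * L[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

# For n = 2 there is one angle, and C[0][1] = cos(theta).
C = correlation_from_angles([math.pi / 3])
print(C[0][1])  # 0.5 (up to floating-point rounding)
```

The attraction for hydrometeor correlations is the guarantee mentioned in the abstract: any angle values whatsoever yield a mutually consistent (valid) correlation matrix, so the parameterization can never produce an impossible set of correlations.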
A parameterization of respiration in frozen soils based on substrate availability
NASA Astrophysics Data System (ADS)
Schaefer, Kevin; Jafarov, Elchin
2016-04-01
Respiration in frozen soils is limited to thawed substrate within the thin water films surrounding soil particles. As temperatures decrease and the films become thinner, the available substrate also decreases, with respiration effectively ceasing at -8 °C. Traditional exponential scaling factors to model this effect do not account for substrate availability and do not work at the century to millennial timescales required to model the fate of the nearly 1100 Gt of carbon in permafrost regions. The exponential scaling factor produces a false, continuous loss of simulated permafrost carbon in the 20th century and biases in estimates of potential emissions as permafrost thaws in the future. Here we describe a new frozen biogeochemistry parameterization that separates the simulated carbon into frozen and thawed pools to represent the effects of substrate availability. We parameterized the liquid water fraction as a function of temperature based on observations and use this to transfer carbon between frozen pools and thawed carbon in the thin water films. The simulated volumetric water content (VWC) as a function of temperature is consistent with observed values and the simulated respiration fluxes as a function of temperature are consistent with results from incubation experiments. The amount of organic matter was the single largest influence on simulated VWC and respiration fluxes. Future versions of the parameterization should account for additional, non-linear effects of substrate diffusion in thin water films on simulated respiration. Controlling respiration in frozen soils based on substrate availability allows us to maintain a realistic permafrost carbon pool by eliminating the continuous loss caused by the original exponential scaling factors. The frozen biogeochemistry parameterization is a useful way to represent the effects of substrate availability on soil respiration in model applications that focus on century to millennial timescales in permafrost regions.
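The pool-partitioning idea in the abstract can be sketched in a few lines: a temperature-dependent liquid water fraction determines how much carbon sits in thawed thin-film substrate, and respiration is limited to that pool. The linear ramp below is an invented stand-in for the observationally fitted liquid-water curve, and both function names are hypothetical:

```python
def liquid_water_fraction(temp_c, t_min=-8.0):
    """Illustrative liquid (thawed) water fraction in frozen soil: 1 at or
    above 0 C, declining linearly to 0 at t_min (about -8 C, where
    respiration effectively ceases). The actual parameterization is fit to
    observed soil volumetric water content; this ramp is a stand-in."""
    if temp_c >= 0.0:
        return 1.0
    if temp_c <= t_min:
        return 0.0
    return 1.0 - temp_c / t_min

def frozen_soil_respiration(total_carbon, temp_c, base_rate):
    """Respiration limited by substrate availability: only the carbon
    residing in thawed thin water films contributes to the flux."""
    thawed_c = total_carbon * liquid_water_fraction(temp_c)
    return base_rate * thawed_c
```

Because the frozen pool contributes nothing below `t_min`, simulated permafrost carbon is conserved over long periods, unlike an exponential temperature scaling that leaks carbon continuously at all sub-freezing temperatures.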
Subgrid-scale physical parameterization in atmospheric modeling: How can we make it consistent?
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi
2016-07-01
Approaches to subgrid-scale physical parameterization in atmospheric modeling are reviewed, taking turbulent combustion flow research as a point of reference. Three general approaches are considered for its consistent development: moment expansion, the distribution density function (DDF), and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows in both geophysics and engineering. The DDF (commonly called PDF) approach is intuitively appealing, as it deals with the distribution of variables at subgrid scale in a more direct manner. Mode decomposition was originally applied by Aubry et al (1988, J. Fluid Mech. 192, 115-73) in the context of wall boundary-layer turbulence. It is specifically designed to represent coherencies in a compact manner by a low-dimensional dynamical system. Their original proposal adopts the proper orthogonal decomposition (empirical orthogonal functions) as the mode-decomposition basis, but the methodology can easily be generalized to any decomposition basis; among the alternatives, wavelets are particularly attractive. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, adopting segmentally constant modes as the expansion basis. This perspective identifies a very basic but general geometrical constraint imposed on the mass-flux formulation: the segmentally constant approximation. Mode decomposition can, furthermore, be understood by analogy with a Galerkin method in numerical modeling. This analogy suggests that subgrid parameterization may be reinterpreted as a type of mesh refinement in numerical modeling. A link between the subgrid parameterization and downscaling problems is also pointed out.
Global Simulations from CAM with a Unified Convection Parameterization using CLUBB and Subcolumns
NASA Astrophysics Data System (ADS)
Thayer-Calder, K.; Gettelman, A.; Craig, C.; Goldhaber, S.; Bogenschutz, P.; Chen, C. C.; Morrison, H.; Hoft, J.; Raut, E.; Griffin, B. M.; Weber, J. K.; Larson, V. E.; Wyant, M. C.; Wang, M.; Ghan, S.; Guo, Z.
2015-12-01
The newest version of the Community Atmosphere Model (CAM) will support subcolumns as a method to better couple subgrid-scale convective and microphysical processes. We utilize this feature and samples from a PDF-based moist turbulence parameterization to produce a version of CAM where all convection (shallow, stratiform, and deep) is simulated with a single set of dynamic and microphysical equations. We call this version of the model CAM-CLUBB-SILHS, where CLUBB (Cloud Layers Unified By Binormals) is our higher-order closure convection and turbulence parameterization and SILHS (Subgrid Importance Latin Hypercube Sampler) is our sampler and the basis for our subcolumn generation. At each physics timestep, the CLUBB parameterization runs to calculate convective tendencies. In order to close the higher-order moments, CLUBB calculates a new multivariate PDF describing the subgrid distribution of moisture and temperature at each level. SILHS samples from that PDF and creates profiles of vapor, temperature, vertical velocity, cloud water and ice, and cloud water and ice number concentration. The microphysics scheme runs on each subcolumn separately, and the resulting tendencies are averaged together and returned to the model as a grid-mean tendency. This use of subcolumns allows us to explicitly represent subgrid-scale clouds and moisture distributions for microphysical calculations. Using this framework and no other convective parameterizations, we are able to produce stable, realistic, global atmospheric simulations in CAM. This study will present results from long-term atmospheric simulations, discuss the impact of subcolumns on the model, and show improvements in the model's tropical wave simulation.
Limitations of one-dimensional mesoscale PBL parameterizations in reproducing mountain-wave flows
Munoz-Esparza, Domingo; Sauer, Jeremy A.; Linn, Rodman R.; Kosovic, Branko
2015-12-08
In this study, mesoscale models are considered to be the state of the art in modeling mountain-wave flows. Herein, we investigate the role and accuracy of planetary boundary layer (PBL) parameterizations in handling the interaction between large-scale mountain waves and the atmospheric boundary layer. To that end, we use recent large-eddy simulation (LES) results of mountain waves over a symmetric two-dimensional bell-shaped hill [Sauer et al., J. Atmos. Sci. (2015)], and compare them to four commonly used PBL schemes. We find that one-dimensional PBL parameterizations produce reasonable agreement with the LES results in terms of vertical wavelength, amplitude of velocity and turbulent kinetic energy distribution in the downhill shooting flow region. However, the assumption of horizontal homogeneity in PBL parameterizations does not hold in the context of these complex flow configurations. This inappropriate modeling assumption results in a vertical wavelength shift producing errors of ≈ 10 m s–1 at downstream locations due to the presence of a coherent trapped lee wave that does not mix with the atmospheric boundary layer. In contrast, horizontally-integrated momentum flux derived from these PBL schemes displays a realistic pattern. Therefore results from mesoscale models using ensembles of one-dimensional PBL schemes can still potentially be used to parameterize drag effects in general circulation models. Nonetheless, three-dimensional PBL schemes must be developed in order for mesoscale models to accurately represent complex-terrain and other types of flows where one-dimensional PBL assumptions are violated.
Sensitivity of Tropical Cyclones to Parameterized Convection in the NASA GEOS5 Model
NASA Technical Reports Server (NTRS)
Lim, Young-Kwon; Schubert, Siegfried D.; Reale, Oreste; Lee, Myong-In; Molod, Andrea M.; Suarez, Max J.
2014-01-01
The sensitivity of tropical cyclones (TCs) to changes in parameterized convection is investigated to improve the simulation of TCs in the North Atlantic. Specifically, the impact of reducing the influence of the Relaxed Arakawa-Schubert (RAS) scheme-based parameterized convection is explored using the Goddard Earth Observing System version 5 (GEOS-5) model at 0.25° horizontal resolution. The years 2005 and 2006, characterized by very active and inactive hurricane seasons, respectively, are selected for simulation. A reduction in parameterized deep convection results in an increase in TC activity (e.g., TC number and longer life cycle) to more realistic levels compared to the baseline control configuration. The vertical and horizontal structure of the strongest simulated hurricane shows a maximum lower-level (850-950 hPa) wind speed greater than 60 m/s and a minimum sea level pressure reaching 940 mb, corresponding to a category 4 hurricane, a category never achieved by the control configuration. The radius of maximum wind of 50 km, the location of the warm core exceeding 10 °C, and the horizontal compactness of the hurricane center are all quite realistic, without negatively affecting the atmospheric mean state. This study reveals that an increase in the threshold of minimum entrainment suppresses parameterized deep convection by entraining more dry air into the typical plume. This leads to cooling and drying in the mid- to upper troposphere, along with positive latent heat flux and moistening in the lower troposphere. The resulting increase in conditional instability provides an environment that is more conducive to TC vortex development and upward moisture flux convergence by dynamically resolved moist convection, thereby increasing TC activity.
Impact of Parameterized Lee Wave Drag on the Energy Budget of an Eddying Global Ocean Model
NASA Astrophysics Data System (ADS)
Trossman, D. S.; Arbic, B. K.; Garner, S.; Goff, J. A.; Jayne, S. R.; Metzger, E.; Wallcraft, A.
2012-12-01
We examine the impact of a lee wave drag parameterization on an eddying global ocean model. The wave drag parameterization represents the momentum transfer associated with the generation of lee waves arising from geostrophic flow impinging upon rough topography. It is included in the online model, thus ensuring that abyssal currents and stratification in the simulation are affected by the presence of the wave drag. The model utilized here is the nominally 1/12th degree Hybrid Coordinate Ocean Model (HYCOM) forced by winds and air-sea buoyancy fluxes. An energy budget including the parameterized wave drag, quadratic bottom boundary layer drag, vertical eddy viscosity, and horizontal eddy viscosity is diagnosed during the model runs and compared with the wind power input and buoyancy fluxes. Wave drag and vertical viscosity are the largest of the mechanical energy dissipation rate terms, each more than half a terawatt when globally integrated. The sum of all four dissipative terms approximately balances the rate of energy input by the winds and buoyancy fluxes into the ocean. An ad hoc global enhancement of the bottom drag at each grid point by a constant factor cannot serve as a perfect substitute for wave drag, particularly where there is little wave drag. Eddy length scales at the surface, sea surface height variance, surface kinetic energy, and positions of intensified jets in the model are compared with those inferred from altimetric observations. Vertical profiles of kinetic energy from the model are compared with mooring observations to investigate whether the model is improved when wave drag is inserted. Figure: the drag and viscosity terms in our energy budget [log₁₀(W m⁻²)]: (a) quadratic bottom boundary layer drag, (b) parameterized internal lee wave drag, (c) vertical viscosity, and (d) "horizontal" viscosity; shown is an average of inline estimates over one year of the spin-up phase with wave drag.
Large eddy simulation for evaluating scale-aware subgrid cloud parameterizations
NASA Astrophysics Data System (ADS)
Huang, Wei; Chen, Baode; Bao, Jian-Wen
2016-04-01
We present results from an ongoing project that uses a Large-Eddy Simulation (LES) model to simulate deep organized convection in the extratropics for the purpose of evaluating scale-aware subgrid convective parameterizations. The simulation is carried out for a classical idealized supercell thunderstorm (Weisman and Klemp, 1982), using a total of 1201 × 1201 × 200 grid points at 100 m spacing in both the horizontal and vertical directions. The characteristics of the simulated clouds exhibit a multi-mode vertical distribution ranging from deep to shallow clouds, similar to that observed in the real world. To use the LES dataset for evaluating scale-aware subgrid cloud parameterizations, the same case is also run with progressively larger grid sizes of 200 m, 400 m, 600 m, 1 km and 3 km. These simulations show reasonable agreement with the benchmark LES in statistics such as convective available potential energy, convective inhibition, cloud fraction and precipitation rates. They provide useful information about the effect of horizontal grid resolution on the subgrid convective parameterizations. All these simulations reveal a similar multi-mode cloud distribution in the vertical direction. However, there are differences in the updraft-core cloud statistics, and convergence of statistical properties is found only between the LES benchmark and the simulations with grid size smaller than 400 m. Analysis of the LES results indicates that (1) the average subgrid mass flux increases as the horizontal grid size increases; (2) the vertical scale of subgrid transport varies spatially, suggesting a system dependence; and (3) even at 1 km, subgrid convective transport is still large enough to need to be accounted for through parameterization.
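The coarse-graining step behind such an evaluation can be sketched as follows: given a fine-grid (LES-like) field, the total flux in each coarse cell splits into a part resolved by the coarse-grid means and a subgrid remainder. This is a hypothetical 1-D toy, not the study's diagnostic code.

```python
import numpy as np

def coarse_grain_flux(w, q, block):
    """Split the vertical moisture flux w*q on a fine grid into the part
    resolved on a coarser grid of size `block` and the subgrid remainder:
    <w q> = <w><q> + <w' q'> within each coarse cell."""
    n = len(w) // block
    w = w[: n * block].reshape(n, block)
    q = q[: n * block].reshape(n, block)
    total = (w * q).mean(axis=1)                  # <w q> per coarse cell
    resolved = w.mean(axis=1) * q.mean(axis=1)    # <w><q>
    return resolved, total - resolved             # resolved flux, subgrid flux

rng = np.random.default_rng(1)
w = rng.normal(0.0, 1.0, 1024)                    # vertical velocity proxy
q = 0.5 * w + rng.normal(0.0, 0.2, 1024)          # moisture correlated with w
resolved, subgrid = coarse_grain_flux(w, q, block=64)
```

Because w and q are positively correlated at the fine scale, the subgrid covariance term stays positive even after coarse averaging, which is the transport a scheme must still parameterize.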
Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds
Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail
2011-08-16
Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds (inequalities) on linear correlation coefficients provide useful guidance, but these bounds are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that is based on a blend of theory and empiricism. The method begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are parameterized here using a cosine row-wise formula that is inspired by the aforementioned bounds on correlations. The method has three advantages: 1) the computational expense is tolerable; 2) the correlations are, by construction, guaranteed to be consistent with each other; and 3) the methodology is fairly general and hence may be applicable to other problems. The method is tested non-interactively using simulations of three Arctic mixed-phase cloud cases from two different field experiments: the Indirect and Semi-Direct Aerosol Campaign (ISDAC) and the Mixed-Phase Arctic Cloud Experiment (M-PACE). Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
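The Cholesky-based construction can be sketched generically: parameterizing the rows of the Cholesky factor by angles (the spherical parameterization of Pinheiro and Bates) guarantees a valid correlation matrix for any angle values. The angles below are illustrative placeholders, not the paper's fitted cosine formula.

```python
import numpy as np

def corr_from_angles(theta):
    """Build a correlation matrix from angles via the spherical
    (Pinheiro-Bates) parameterization of its Cholesky factor.

    Only the strict lower triangle of `theta` (angles in (0, pi)) is used.
    Each row of L has unit norm by construction, so C = L @ L.T is a
    positive-semidefinite matrix with unit diagonal for ANY angles --
    this is the internal-consistency guarantee the abstract mentions.
    """
    n = theta.shape[0]
    L = np.zeros((n, n))
    for i in range(n):
        prod = 1.0
        for j in range(i):
            L[i, j] = np.cos(theta[i, j]) * prod
            prod *= np.sin(theta[i, j])
        L[i, i] = prod  # remaining "radius" completes the unit row norm
    return L @ L.T

# Example: three hydrometeor species with assumed pairwise angles
theta = np.zeros((3, 3))
theta[1, 0] = np.pi / 4
theta[2, 0] = np.pi / 3
theta[2, 1] = np.pi / 2.5
C = corr_from_angles(theta)
```

The design point is that consistency is structural: no post-hoc check that the correlation matrix is positive semidefinite is ever needed.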
NASA Astrophysics Data System (ADS)
Qin, Jun; Tang, Wenjun; Yang, Kun; Lu, Ning; Niu, Xiaolei; Liang, Shunlin
2015-05-01
Surface solar irradiance (SSI) is required in a wide range of scientific research and practical applications. Many parameterization schemes have been developed to estimate it from routinely measured meteorological variables, since SSI is directly measured at only a very limited number of stations. Even so, meteorological stations are still sparse, especially in remote areas. Remote sensing can be used to map spatiotemporally continuous SSI. Considering the huge amount of satellite data, coarse-resolution SSI has been estimated to reduce the computational burden when the estimation is based on a complex radiative transfer model. On the other hand, many empirical relationships are used to enhance retrieval efficiency, but their accuracy cannot be guaranteed outside the regions where they are locally calibrated. In this study, an efficient physically based parameterization is proposed to balance computational efficiency and retrieval accuracy for SSI estimation. In this parameterization, the transmittances for gases, aerosols, and clouds are all handled in full-band form, and the multiple reflections between the atmosphere and surface are explicitly taken into account. The newly proposed parameterization is applied to estimate SSI with both Moderate Resolution Imaging Spectroradiometer (MODIS) atmospheric and land products as inputs. These retrievals are validated against in situ measurements at the Surface Radiation Budget Network and on the North China Plain on an instantaneous basis; moreover, they are validated and compared with Global Energy and Water Exchanges-Surface Radiation Budget and International Satellite Cloud Climatology Project-flux data SSI estimates at radiation stations of the China Meteorological Administration on a daily mean basis. The estimation results indicate that the newly proposed SSI estimation scheme can effectively retrieve SSI from MODIS products, with mean root-mean-square errors of about 100 W m⁻² and 35 W m⁻² on an instantaneous and daily mean basis, respectively
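A full-band scheme of this kind multiplies broadband transmittances and closes the surface-atmosphere multiple-reflection loop with a geometric series. The sketch below is a hypothetical minimal form; the function name, parameter names, and numeric values are assumptions for illustration, not the paper's scheme.

```python
def surface_solar_irradiance(s0, mu0, t_gas, t_aerosol, t_cloud,
                             surface_albedo, atmos_backscatter):
    """Toy full-band SSI estimate (W m^-2).

    Broadband transmittances for gases, aerosols and clouds are
    multiplied, and multiple reflections between surface and atmosphere
    are summed as the geometric series 1 / (1 - r_s * r_a).
    """
    t_total = t_gas * t_aerosol * t_cloud
    multiple_reflection = 1.0 / (1.0 - surface_albedo * atmos_backscatter)
    return s0 * mu0 * t_total * multiple_reflection

# Illustrative inputs: solar constant, cosine of zenith angle, and
# assumed broadband transmittances / reflectances
ssi = surface_solar_irradiance(1361.0, 0.8, 0.9, 0.85, 0.7,
                               surface_albedo=0.2, atmos_backscatter=0.25)
```

The multiple-reflection factor is what distinguishes a physically closed scheme from a purely empirical regression: it responds correctly when surface albedo changes (e.g., over snow) without recalibration.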
Modeling late rectal toxicities based on a parameterized representation of the 3D dose distribution
NASA Astrophysics Data System (ADS)
Buettner, Florian; Gulliford, Sarah L.; Webb, Steve; Partridge, Mike
2011-04-01
Many models exist for predicting toxicities based on dose-volume histograms (DVHs) or dose-surface histograms (DSHs). This approach has several drawbacks: firstly, the reduction of the dose distribution to a histogram results in the loss of spatial information, and secondly, the bins of the histograms are highly correlated with each other. Furthermore, some of the complex nonlinear models proposed in the past lack a direct physical interpretation and the ability to predict probabilities rather than binary outcomes. We propose a parameterized representation of the 3D distribution of the dose to the rectal wall which explicitly includes geometrical information in the form of the eccentricity of the dose distribution as well as its lateral and longitudinal extent. We use a nonlinear kernel-based probabilistic model to predict late rectal toxicity based on the parameterized dose distribution and assess its predictive power using data from the MRC RT01 trial (ISRCTN 47772397). The endpoints under consideration were rectal bleeding, loose stools, and a global toxicity score. We extract simple rules identifying 3D dose patterns related to a specifically low risk of complication. Normal tissue complication probability (NTCP) models based on parameterized representations of geometrical and volumetric measures resulted in areas under the curve (AUCs) of 0.66, 0.63 and 0.67 for predicting rectal bleeding, loose stools and global toxicity, respectively. In comparison, NTCP models based on standard DVHs performed worse and resulted in AUCs of 0.59 for all three endpoints. In conclusion, we have presented low-dimensional, interpretable and nonlinear NTCP models based on the parameterized representation of the dose to the rectal wall. These models had a higher predictive power than models based on standard DVHs and their low dimensionality allowed for the identification of 3D dose patterns related to a low risk of complication.
Alonso, Rocío; Elvira, Susana; Sanz, María J; Emberson, Lisa; Gimeno, Benjamín S
2007-01-01
An ozone (O3) deposition model (DO3SE) is currently used in Europe to define the areas where O3 concentrations lead to absorbed O3 doses that exceed the flux-based critical levels above which phytotoxic effects would likely be recorded. This mapping exercise relies mostly on the accurate estimation of O3 flux through plant stomata. However, the present parameterization of the modulation of stomatal conductance (g(s)) behavior by different environmental variables needs further adjustment if O3 phytotoxicity is to be assessed accurately at regional or continental scales. A new parameterization of the model is proposed for Holm oak (Quercus ilex), a tree species that has been selected as a surrogate for all Mediterranean evergreen broadleaf species. This parameterization was based on a literature review, and was calibrated and validated using experimentally measured data of g(s) and several atmospheric and soil parameters recorded at three sites of the Iberian Peninsula experiencing long summer drought, and either very cold and dry winter air (El Pardo and Miraflores) or milder conditions (Tietar). A fairly good agreement was found between modeled and measured data (R2 = 0.64) at Tietar. However, a reasonable performance (R2 = 0.47-0.62) of the model was only achieved at the most continental sites when g(s) and soil moisture deficit relationships were considered. The influence of root depth on g(s) estimation is discussed and recommendations are made to build up separate parameterizations for continental and marine-influenced Holm oak sites in the future.
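Models of this family typically compute stomatal conductance as a maximum value scaled by multiplicative environmental modifiers, each in [0, 1]. The sketch below shows that multiplicative form generically; the floor value and all numbers are illustrative assumptions, not the published Holm oak parameterization.

```python
def stomatal_conductance(gmax, f_phen, f_light, f_temp, f_vpd, f_swp,
                         fmin=0.02):
    """Multiplicative stomatal-conductance model (DO3SE-style sketch):

        gs = gmax * f_phen * f_light * max(fmin, f_temp * f_vpd * f_swp)

    gmax is the species maximum conductance; the f_* modifiers (phenology,
    light, temperature, vapour pressure deficit, soil water potential)
    each lie in [0, 1], and fmin keeps a small residual conductance.
    """
    limiting = max(fmin, f_temp * f_vpd * f_swp)
    return gmax * f_phen * f_light * limiting

# Illustrative midsummer comparison: ample vs depleted soil water
gs_moist = stomatal_conductance(200.0, 1.0, 0.9, 0.8, 0.7, f_swp=1.0)
gs_dry = stomatal_conductance(200.0, 1.0, 0.9, 0.8, 0.7, f_swp=0.1)
```

This structure is why the abstract stresses the g(s)-soil moisture deficit relationship: at drought-stressed continental sites the soil-water modifier dominates the product and controls the modeled O3 flux.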
NASA Astrophysics Data System (ADS)
Charles, T. K.; Paganin, D. M.; Dowd, R. T.
2016-08-01
Intrinsic emittance is often the limiting factor for brightness in fourth-generation light sources, and as such, a good understanding of the factors affecting intrinsic emittance is essential in order to be able to decrease it. Here we present a parameterization model describing the proportional increase in emittance induced by cathode surface roughness. One major benefit of the parameterization approach presented here is that it takes the complexity of a Monte Carlo model and reduces the results to a straightforward empirical model. The resulting models describe the proportional increase in transverse momentum introduced by surface roughness, and are applicable to various metal types, photon wavelengths, applied electric fields, and cathode surface terrains. The analysis includes the increase in emittance due to changes in the electric field induced by roughness, as well as the increase in transverse momentum resulting from the spatially varying surface normal. We also compare the results of the Parameterization Model to an Analytical Model which employs various approximations to produce a more compact expression at the cost of a reduction in accuracy.
Comparison and validation of physical wave parameterizations in spectral wave models
NASA Astrophysics Data System (ADS)
Stopa, Justin E.; Ardhuin, Fabrice; Babanin, Alexander; Zieger, Stefan
2016-07-01
Recent developments in the physical parameterizations available in spectral wave models have already been validated, but there is little information on their relative performance, especially with regard to the higher-order spectral moments and wave partitions. This study concentrates on documenting their strengths and limitations using satellite measurements, buoy spectra, and a comparison between the different models. It is confirmed that all models perform well in terms of significant wave heights; however, higher-order moments have larger errors. The partitioned wave quantities perform well in terms of direction and frequency, but the magnitude and directional spread typically have larger discrepancies. The high-frequency tail is examined through the mean square slope using satellites and buoys. From this analysis it is clear that some models behave better than others, suggesting their parameterizations match the physical processes reasonably well. However, none of the models is entirely satisfactory, pointing to poorly constrained parameterizations or missing physical processes. The major space-time differences between the models are related to the swell field, which stresses the importance of describing its evolution. An example swell field confirms that the wave heights can be notably different between model configurations while the directional distributions remain similar. It is clear that all models have difficulty describing the directional spread. Therefore, knowledge of the source-term directional distributions is paramount to improving the wave model physics in the future.
Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.
Kamesh, Reddi; Rani, K Yamuna
2016-09-01
A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application for optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from input to output domain. The fuzzy model is employed to formulate an optimal control problem for single rate as well as multi-rate systems. Simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach is capable of capturing the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately, and the results of operating trajectory optimization using the proposed model are found to be comparable to the results obtained using the exact first principles model, and are also found to be comparable to or better than parameterized data-driven artificial neural network model based optimization results.
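The orthonormal trajectory parameterization at the heart of such a model can be sketched as projecting a sampled trajectory onto an orthonormal basis and keeping the leading coefficients. The discrete Fourier basis below is a generic illustration; the basis choice, mode count, and helper names are assumptions, not the PDDF model's actual construction.

```python
import numpy as np

def fourier_coeffs(y, n_modes):
    """Project a trajectory y(t), sampled uniformly on t in [0, 1), onto an
    orthonormal Fourier basis and return (basis, coefficients).  The
    coefficient vector is a compact parameterization of the trajectory."""
    t = np.linspace(0.0, 1.0, len(y), endpoint=False)
    basis = [np.ones_like(t)]
    for k in range(1, n_modes + 1):
        basis.append(np.sqrt(2.0) * np.cos(2 * np.pi * k * t))
        basis.append(np.sqrt(2.0) * np.sin(2 * np.pi * k * t))
    B = np.stack(basis)                    # (2*n_modes + 1, N)
    return B, (B @ y) / len(y)             # discrete inner products

def reconstruct(B, coeffs):
    """Rebuild the trajectory from its Fourier coefficients."""
    return coeffs @ B

# A smooth "input trajectory" that lies within the basis span
t = np.linspace(0.0, 1.0, 256, endpoint=False)
y = 1.0 + 0.5 * np.sin(2 * np.pi * t) - 0.3 * np.cos(4 * np.pi * t)
B, c = fourier_coeffs(y, n_modes=3)
y_hat = reconstruct(B, c)
```

The payoff for optimal control is dimensionality: an optimizer manipulates the handful of coefficients in `c` instead of the full 256-point trajectory.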
NASA Astrophysics Data System (ADS)
Martins, Luís F. Lages; Rebordão, José Manuel N. V.; Ribeiro, Álvaro Silva
2015-01-01
We aim at the intrinsic parameterization of a computational optical system applied to long-distance displacement measurement of large-scale structures. In this structural-monitoring scenario, the observation distance between the digital camera and the reference targets, which together compose the computational optical system, can range from 100 m up to 1000 m, requiring the use of long-focal-length lenses in order to obtain a suitable sensitivity for the three-dimensional displacement measurement of the observed structure, whose displacements can be of small magnitude. Intrinsic parameterization of long-focal-length cameras is an emergent issue, since conventional approaches applied to shorter-focal-length cameras are not suitable, mainly due to ill-conditioned matrices in least-squares estimation procedures. We describe the intrinsic parameterization of a long-focal-length camera (600 mm) by the diffractive optical element method and present the obtained estimates and measurement uncertainties, discussing their contribution to the system's validation by calibration field tests and displacement measurement campaigns on a long-span suspension bridge.
NASA Technical Reports Server (NTRS)
Natarajan, Murali; Fairlie, T. Duncan; Dwyer Cianciolo, Alicia; Smith, Michael D.
2015-01-01
We use the mesoscale modeling capability of the Mars Weather Research and Forecasting (MarsWRF) model to study the sensitivity of the simulated Martian lower atmosphere to differences in the parameterization of the planetary boundary layer (PBL). Characterization of the Martian atmosphere and realistic representation of processes such as mixing of tracers like dust depend on how well the model reproduces the evolution of the PBL structure. MarsWRF is based on the NCAR WRF model and it retains some of the PBL schemes available in the Earth version. Published studies have examined the performance of different PBL schemes in NCAR WRF with the help of observations. Currently such assessments are not feasible for Martian atmospheric models due to a lack of observations. It is of interest, though, to study the sensitivity of the model to PBL parameterization. Typically, for standard Martian atmospheric simulations, we have used the Medium Range Forecast (MRF) PBL scheme, which considers a correction term to the vertical gradients to incorporate nonlocal effects. For this study, we have also used two other parameterizations: a non-local closure scheme called the Yonsei University (YSU) PBL scheme and a turbulent kinetic energy closure scheme called the Mellor-Yamada-Janjic (MYJ) PBL scheme. We will present intercomparisons of the near-surface temperature profiles, boundary layer heights, and winds obtained from the different simulations. We plan to use available temperature observations from the Mini-TES instrument onboard the rovers Spirit and Opportunity in evaluating the model results.
Lu, Chunsong; Liu, Yangang; Zhang, Guang J.; Wu, Xianghua; Endo, Satoshi; Cao, Le; Li, Yueqing; Guo, Xiaohao
2016-02-01
This work examines the relationships of entrainment rate to vertical velocity, buoyancy, and turbulent dissipation rate by applying stepwise principal component regression to observational data from shallow cumulus clouds collected during the Routine AAF [Atmospheric Radiation Measurement (ARM) Aerial Facility] Clouds with Low Optical Water Depths (CLOWD) Optical Radiative Observations (RACORO) field campaign over the ARM Southern Great Plains (SGP) site near Lamont, Oklahoma. The cumulus clouds during the RACORO campaign simulated using a large eddy simulation (LES) model are also examined with the same approach. The analysis shows that a combination of multiple variables can better represent entrainment rate in both the observations and LES than any single-variable fitting. Three commonly used parameterizations are also tested on the individual cloud scale. A new parameterization is therefore presented that relates entrainment rate to vertical velocity, buoyancy and dissipation rate; the effects of treating clouds as ensembles and humid shells surrounding cumulus clouds on the new parameterization are discussed. Physical mechanisms underlying the relationships of entrainment rate to vertical velocity, buoyancy and dissipation rate are also explored.
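The core finding, that a multi-variable fit represents entrainment rate better than any single-variable fit, can be illustrated with an ordinary least-squares comparison on synthetic data. The "true" relation and all coefficients below are invented purely for the sketch; the study itself uses stepwise principal component regression on field and LES data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cloud samples: vertical velocity w (m/s), buoyancy B (m/s^2),
# dissipation rate eps (m^2/s^3), with an assumed entrainment relation.
n = 500
w = rng.uniform(0.5, 5.0, n)
B = rng.uniform(-0.02, 0.05, n)
eps = rng.uniform(1e-4, 1e-2, n)
lam = 1e-3 / w + 5.0 * B + 20.0 * eps + rng.normal(0.0, 1e-4, n)

def r_squared(y, yhat):
    """Coefficient of determination."""
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

# Multi-variable fit: entrainment vs (1/w, B, eps)
X = np.column_stack([1.0 / w, B, eps, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, lam, rcond=None)
r2_multi = r_squared(lam, X @ coef)

# Single-variable fit for comparison: 1/w only
X1 = np.column_stack([1.0 / w, np.ones(n)])
c1, *_ = np.linalg.lstsq(X1, lam, rcond=None)
r2_single = r_squared(lam, X1 @ c1)
```

With three predictors jointly active, the single-predictor regression necessarily leaves most of the variance unexplained, mirroring the abstract's conclusion.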
Lui, Lok Ming; Thiruvenkadam, Sheshadri; Wang, Yalin; Chan, Tony; Thompson, Paul
2008-01-01
In this work, we find meaningful parameterizations of cortical surfaces utilizing prior anatomical information in the form of anatomical landmarks (sulcal curves) on the surfaces. Specifically, we generate close-to-conformal parameterizations that also give a shape-based correspondence between the landmark curves. We propose a variational energy that measures the harmonic energy of the parameterization maps and the shape dissimilarity between mapped points on the landmark curves. The novelty is that the computed maps are guaranteed to give a shape-based diffeomorphism between the landmark curves. We achieve this by intrinsically modelling our search space of maps as flows of smooth vector fields that do not flow across the landmark curves, and by using the local surface geometry on the curves to define a shape measure. Such parameterizations ensure consistent correspondence between anatomical features, ensuring correct averaging and comparison of data across subjects. The utility of our model is demonstrated in experiments on cortical surfaces with landmarks delineated, which show that our computed maps give a shape-based alignment of the sulcal curves without significantly impairing conformality. PMID:18979783
NASA Technical Reports Server (NTRS)
Gary, G. A.
1998-01-01
The reconstruction of the coronal magnetic field is carried out using a perturbation procedure. A set of magnetic field lines generated from magnetogram data is parameterized and then deformed by varying the parameterized values. The coronal fluxtubes associated with this field are adjusted until the correlation between the field lines and the observed coronal loops is maximized. A mathematical formulation is described which ensures (1) that the normal component of the photospheric field remains unchanged, (2) that the field is given in the entire corona, (3) that the field remains divergence free, and (4) that electrical currents are introduced into the field. It is demonstrated that a simple radial parameterization of a potential field, comprising a radial stretching of the field, can provide a match for a simple bipolar active region, AR 7999, which crossed the central meridian on 1996 Nov 26. At a coronal height of 30 km, the resulting magnetic field is a non-force-free magnetic field with the maximum Lorentz force being on the order of 2.6 × 10⁻⁹ dyn, resulting from an electric current density of 0.13 μA m⁻². This scheme is an important tool in generating a magnetic field solution consistent with the coronal flux tube observations and the observed photospheric magnetic field.
Refinement, Validation and Application of Cloud-Radiation Parameterization in a GCM
Dr. Graeme L. Stephens
2009-04-30
The research performed under this award was conducted along 3 related fronts: (1) refinement and assessment of parameterizations of sub-grid scale radiative transport in GCMs; (2) diagnostic studies that use ARM observations of clouds and convection in an effort to understand the effects of moist convection on its environment, including how convection influences clouds and radiation — this aspect focuses on developing and testing methodologies designed to use ARM data more effectively in atmospheric models, both at the cloud resolving model scale and the global climate model scale; and (3) the use of (1) and (2) in combination with both models and observations of varying complexity to study key radiation feedbacks. Our work toward these objectives thus involved three corresponding efforts. First, novel diagnostic techniques were developed and applied to ARM observations to understand and characterize the effects of moist convection on the dynamical and thermodynamical environment in which it occurs. Second, an in-house GCM radiative transfer algorithm (BUGSrad) was employed along with an optimal estimation cloud retrieval algorithm to evaluate the ability to reproduce cloudy-sky radiative flux observations. Assessments using a range of GCMs with various moist convective parameterizations, to evaluate the fidelity with which the parameterizations reproduce key observable features of the environment, were also started in the final year of this award. The third study area involved cloud radiation feedbacks, which we examined in both cloud resolving and global climate models.
Size-resolved parameterization of primary organic carbon in fresh marine aerosols
Long, Michael S; Keene, William C; Erickson III, David J
2009-12-01
Marine aerosols produced by the bursting of artificially generated bubbles in natural seawater are highly enriched (2 to 3 orders of magnitude based on bulk composition) in marine-derived organic carbon (OC). Production of size-resolved particulate OC was parameterized based on a Langmuir kinetics-type association of OC with bubble plumes in seawater and the resulting aerosol, as constrained by measurements of aerosol produced from highly productive and oligotrophic seawater. This novel approach is the first to account for the influence of adsorption on the size-resolved association between marine aerosols and OC. Production fluxes were simulated globally with an eight aerosol-size-bin version of the NCAR Community Atmosphere Model (CAM v3.5.07). Simulated number and inorganic sea-salt mass production fell within the range of published estimates based on observationally constrained parameterizations. Because the parameterization does not consider contributions from spume drops, the simulated global mass flux (1.5 × 10³ Tg y⁻¹) is near the lower limit of published estimates. The simulated production of aerosol number (2.1 × 10⁶ cm⁻² s⁻¹) and OC (49 Tg C y⁻¹) fall near the upper limits of published estimates and suggest that primary marine aerosols may have greater influences on the physiochemical evolution of the troposphere, radiative transfer and climate, and associated feedbacks on the surface ocean than suggested by previous model studies.
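A Langmuir kinetics-type association means the organic fraction saturates as the dissolved-organic supply grows, rather than increasing without bound. The sketch below illustrates that saturating form with an assumed size weighting; the function, constants, and the chlorophyll proxy are all hypothetical, not the published fit.

```python
def oc_mass_fraction(chl, size_um, k_langmuir=0.2, f_max=0.8):
    """Illustrative Langmuir-type organic carbon mass fraction of sea-spray
    aerosol.

    chl      : seawater productivity proxy (chlorophyll-a, mg m^-3)
    size_um  : dry particle diameter (micrometres)

    The Langmuir term k*C / (1 + k*C) saturates with increasing organic
    supply; the size weight (an assumption here) pushes the organic
    enrichment toward smaller particles, as observed for fresh marine
    aerosol.
    """
    langmuir = (k_langmuir * chl) / (1.0 + k_langmuir * chl)
    size_weight = 1.0 / (1.0 + size_um)
    return f_max * langmuir * size_weight

# Productive vs oligotrophic seawater, same submicron particle size
productive = oc_mass_fraction(chl=5.0, size_um=0.1)
oligotrophic = oc_mass_fraction(chl=0.05, size_um=0.1)
```

The saturation is the physically important behaviour: doubling chlorophyll in already-productive water barely raises the organic fraction, while in oligotrophic water the response is nearly linear.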
Analysis and parameterization of the combined coalescence, breakup, and evaporation processes
Brown, P.S. Jr.
1993-09-01
A parameterization of raindrop coalescence and breakup has been extended to include evaporation. The parameterization is developed through analysis of accurate numerical solutions of the coalescence/breakup/evaporation equation. Modeled drop size distributions are found to evolve first toward a trimodal form characteristic of the equilibrium distribution that occurs when only collisional processes are at work. With sustained evaporation, the trimodality disappears and a unimodal-type drop size distribution emerges. The results imply that the trimodal form occurs when collisional processes are dominant but that a unimodal distribution prevails as the water mass is reduced. The mass reduction causes collisions to become infrequent and allows evaporation to deplete the small-sized raindrop population. When subjected to continued evaporation, the coalescence/breakup equilibrium itself undergoes a transition from trimodal form, and it is this evolving form toward which all other drop size distributions converge. In the transition, the liquid water content decreases exponentially with a time constant of 300/S s, where S is the saturation deficit; furthermore, the shape of the evaporating distribution is determined by the ratio of liquid water content to the saturation deficit. The parameterization procedure makes use of the analysis results in order to describe system behavior.
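The quantitative result above, liquid water content decaying exponentially with time constant 300/S seconds, translates directly into a one-line formula. The sketch below encodes it; units of the saturation deficit S are whatever makes 300/S a time in seconds, as in the abstract.

```python
import math

def liquid_water_content(w0, t_seconds, saturation_deficit):
    """Liquid water content under sustained evaporation, following the
    abstract's result: W(t) = W0 * exp(-t / tau) with tau = 300 / S,
    where S is the saturation deficit."""
    tau = 300.0 / saturation_deficit
    return w0 * math.exp(-t_seconds / tau)

# Example: W0 = 1 g m^-3 and S = 2 give tau = 150 s, so after one time
# constant the content has dropped by a factor of e.
w_after_tau = liquid_water_content(1.0, 150.0, 2.0)
```

The same relation also implies the shape result quoted in the abstract: since W decays at a rate set by S, the ratio W/S is the single parameter controlling the evolving distribution shape.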
X-ray scatter correction in breast tomosynthesis with a precomputed scatter map library
Feng, Steve Si Jia; D’Orsi, Carl J.; Newell, Mary S.; Seidel, Rebecca L.; Patel, Bhavika; Sechopoulos, Ioannis
2014-01-01
Purpose: To develop and evaluate the impact on lesion conspicuity of a software-based x-ray scatter correction algorithm for digital breast tomosynthesis (DBT) imaging into which a precomputed library of x-ray scatter maps is incorporated. Methods: A previously developed model of compressed breast shapes undergoing mammography based on principal component analysis (PCA) was used to assemble 540 simulated breast volumes, of different shapes and sizes, undergoing DBT. A Monte Carlo (MC) simulation was used to generate the cranio-caudal (CC) view DBT x-ray scatter maps of these volumes, which were then assembled into a library. This library was incorporated into a previously developed software-based x-ray scatter correction, and the performance of this improved algorithm was evaluated with an observer study of 40 patient cases previously classified as BI-RADS® 4 or 5, evenly divided between mass and microcalcification cases. Observers were presented with both the original images and the scatter corrected (SC) images side by side and asked to indicate their preference, on a scale from −5 to +5, in terms of lesion conspicuity and quality of diagnostic features. Scores were normalized such that a negative score indicates a preference for the original images, and a positive score indicates a preference for the SC images. Results: The scatter map library removes the time-intensive MC simulation from the application of the scatter correction algorithm. While only one in four observers preferred the SC DBT images as a whole (combined mean score = 0.169 ± 0.37, p > 0.39), all observers exhibited a preference for the SC images when the lesion examined was a mass (1.06 ± 0.45, p < 0.0001). When the lesion examined consisted of microcalcification clusters, the observers exhibited a preference for the uncorrected images (−0.725 ± 0.51, p < 0.009). Conclusions: The incorporation of the x-ray scatter map library into the scatter correction algorithm improves the efficiency
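At its core, a library-based correction replaces a per-patient Monte Carlo run with a lookup: pick the precomputed scatter map for the matching breast shape and subtract it from the measured projection. The sketch below shows only that subtraction step; real implementations also match the PCA shape model and rescale the map to the acquisition technique, which is omitted here.

```python
import numpy as np

def scatter_correct(projection, scatter_map):
    """Subtract a precomputed x-ray scatter estimate from a measured DBT
    projection, clipping negative pixels to zero.  A minimal sketch of
    the subtraction step of library-based scatter correction."""
    corrected = projection - scatter_map
    return np.clip(corrected, 0.0, None)

# Toy 2x2 projection (detected counts) and its library scatter estimate
measured = np.array([[100.0, 120.0],
                     [90.0, 30.0]])
scatter = np.array([[40.0, 45.0],
                    [38.0, 35.0]])
primary = scatter_correct(measured, scatter)
```

The clip matters in practice: wherever the library map overestimates scatter (as in the low-count pixel above), naive subtraction would produce unphysical negative primary signal.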
Small angle neutron scattering
NASA Astrophysics Data System (ADS)
Cousin, Fabrice
2015-10-01
Small Angle Neutron Scattering (SANS) is a technique that makes it possible to probe the 3-D structure of materials on a typical size range from ~1 nm up to ~a few 100 nm, the information obtained being statistically averaged over a sample whose volume is ~1 cm³. This very rich technique enables a full structural characterization of a given object of nanometric dimensions (radius of gyration, shape, volume or mass, fractal dimension, specific area…) through the determination of the form factor, as well as a description of the way objects are organized within a continuous medium, and therefore of the interactions between them, through the determination of the structure factor. The specific properties of neutrons (the possibility of tuning the scattering intensity by isotopic substitution, sensitivity to magnetism, negligible absorption, low energy of the incident neutrons) make the technique particularly interesting in the fields of soft matter, biophysics, magnetic materials and metallurgy. In particular, the contrast variation methods allow the extraction of information that cannot be obtained by any other experimental technique. This course is divided into two parts. The first is devoted to the description of the principles of SANS: basics (formalism, coherent scattering/incoherent scattering, notion of elementary scatterer), form factor analysis (I(q→0), Guinier regime, intermediate regime, Porod regime, polydisperse systems), structure factor analysis (2nd virial coefficient, integral equations, characterization of aggregates), and contrast variation methods (how to create contrast in a homogeneous system, matching in ternary systems, extrapolation to zero concentration, Zero Averaged Contrast). It is illustrated by some representative examples. The second describes the experimental aspects of SANS to guide users in their future experiments: description of a SANS spectrometer, resolution of the spectrometer, optimization of spectrometer
NASA Astrophysics Data System (ADS)
Kolomiets, Sergey; Gorelik, Andrey
This report is devoted to a discussion of the applicability limits of Rayleigh's scattering model. Implicitly, Rayleigh's ideas are used in a wide range of remote sensing applications. To begin with, it must be noted that most techniques developed to date for measurement by active remote-sensing instruments, when the target is a set of distributed moving scatterers, express only the hope of measurement per se. The problem is that almost all such techniques use a priori information about the microstructure of the object of interest during the whole measurement session. As one can find in the literature, this approach may happily be applied to systems of identical particles. However, that is not the case for scattering targets consisting of particles of different kinds or having a particle size distribution. It must be especially noted that the microstructure of most such targets changes significantly in time and/or space. Therefore, true measurement techniques designed to be applicable in such conditions must not only be adaptable, so as to take into account a variety of echo-interpretation models, but must also have a well-developed set of clear-cut applicability criteria and exact means of accuracy estimation. Such techniques will require many more parameters to be measured. Although there is still room for improvement within classical models and approaches, the multiwavelength approach may be seen as the most promising way to obtain an adequate set of measured parameters for true measurement techniques. At the same time, according to the currently dominant point of view, Rayleigh scattering is invariant with respect to a change of wavelength. In the light of this idea, the synergy between multiwavelength measurements may be achieved - to a certain extent - by means of the synchronous usage of Rayleigh's and
Integrated Raman and angular scattering of single biological cells
NASA Astrophysics Data System (ADS)
Smith, Zachary J.
2009-12-01
Raman, or inelastic, scattering and angle-resolved elastic scattering are two optical processes that have found wide use in the study of biological systems. Raman scattering quantitatively reports on the chemical composition of a sample by probing molecular vibrations, while elastic scattering reports on the morphology of a sample by detecting structure-induced coherent interference between incident and scattered light. We present the construction of a multimodal microscope platform capable of gathering both elastically and inelastically scattered light from a 38 μm² region in both epi- and trans-illumination geometries. Simultaneous monitoring of elastic and inelastic scattering from a microscopic region allows noninvasive characterization of a living sample without the need for exogenous dyes or labels. A sample is illuminated either from above or below with a focused 785 nm TEM00 mode laser beam, with elastic and inelastic scattering collected by two separate measurement arms. The measurements may be made either simultaneously, if identical illumination geometries are used, or sequentially, if the two modalities utilize opposing illumination paths. In the inelastic arm, Stokes-shifted light is dispersed by a spectrograph onto a CCD array. In the elastic scattering collection arm, a relay system images the microscope's back aperture onto a CCD detector array to yield an angle-resolved elastic scattering pattern. Post-processing of the inelastic scattering to remove fluorescence signals yields high quality Raman spectra that report on the sample's chemical makeup. Comparison of the elastically scattered pupil images to generalized Lorenz-Mie theory yields estimated size distributions of scatterers within the sample. In this thesis we will present validations of the IRAM instrument through measurements performed on single beads of a few microns in size, as well as on ensembles of sub-micron particles of known size distributions. The benefits and drawbacks of the
Angle resolved scatter measurement of bulk scattering in transparent ceramics
NASA Astrophysics Data System (ADS)
Sharma, Saurabh; Miller, J. Keith; Shori, Ramesh K.; Goorsky, Mark S.
2015-02-01
Bulk scattering in polycrystalline laser materials (PLMs), due to non-uniform refractive index across the bulk, is regarded as the primary loss mechanism leading to degraded laser performance, with higher threshold and lower output power. This motivates the need for characterization techniques to identify bulk scatter and assess optical quality. To date, assessment of optical quality and identification of bulk scatter have relied on simple visual inspection of thin PLM samples, making the measurements highly subjective and inaccurate. Angle-resolved scatter (ARS) measurement allows spatial mapping of scattered light at all possible angles about a sample, mapping the intensity in both the forward-scatter and back-scatter regions. The cumulative scattered-light intensity in the forward-scatter direction, away from the specular beam, is used to compare bulk scattering between samples. The technique detects scattered light at all angles away from the specular beam direction and represents it as a 2-D polar map. The high sensitivity of the ARS technique allows us to compare bulk scattering in different PLM samples which otherwise have similar transmitted-beam wavefront distortions.
Thermodynamic parameterization
NASA Astrophysics Data System (ADS)
Gorban, Alexander N.; Karlin, Iliya V.
1992-12-01
A new method of successive construction of a solution is developed for problems of strongly nonequilibrium Boltzmann kinetics beyond normal solutions. First, the method provides dynamic equations for any manifold of distributions where one looks for an approximate solution. Second, it gives a successive procedure for obtaining corrections to these approximations. The method requires neither small parameters nor strong restrictions upon the initial approximation; it involves solutions of linear problems. It is concordant with the H-theorem at every step. In particular, for the Tamm-Mott-Smith approximation, dynamic equations are obtained, an expansion for the strong shock is introduced, and a linear equation for the first correction is found.
Parameterization and comparative evaluation of the CCN number concentration on Mt. Huang, China
NASA Astrophysics Data System (ADS)
Fang, Shasha; Han, Yongxiang; Chen, Kui; Lu, Chunsong; Yin, Yan; Tan, Haobo; Wang, Jin
2016-11-01
Quantifying regional CCN concentration is important for reliable estimation of aerosol indirect effects. Based on observational data for the number concentrations of total aerosol (NCN) and cloud condensation nuclei (NCCN), the particle number size distribution (PNSD), and the size-resolved activation ratio (SRAR) obtained on Mt. Huang in southeast China from September 19 to October 11, 2012, seven parameterization schemes are used to calculate NCCN, employing CCN spectra, the bulk activation ratio, the cut-off diameter, and the SRAR. The calculations and observations are compared and analyzed at four supersaturations (S) from 0.109% to 0.67%. Results show that (1) the parameterization using the average cut-off diameter Dm, derived from the various measured PNSD and NCCN, provides the best estimate of NCCN, with coefficient of determination R2 = 0.70-0.90 and NCCN,cal/NCCN,obs = 0.92-1.11, followed by the method combining an average size-resolved activation curve with the PNSD, with R2 = 0.71-0.91 and NCCN,cal/NCCN,obs = 0.71-0.91; the average D50 together with the PNSD also provides a rational scheme for NCCN prediction, with NCCN,cal/NCCN,obs = 0.86-0.94 and R2 = 0.70-0.89; (2) the method of parameterizing CCN spectra, though straightforward, has limits under polluted conditions, and reasonable NCCN estimates could only be obtained at high S (R2 ≥ 0.85 at S = 0.39% and 0.67%); (3) for the method employing the bulk activation ratio ARB(S), NCCN is substantially overestimated when using the total mode-based ARB(S) (NCCN,cal/NCCN,obs = 0.94-1.39, R2 = 0.17-0.67), while applying the ammonium sulfate-based ARB(S) yields improved CCN predictions (NCCN,cal/NCCN,obs = 0.91-1.11, R2 = 0.70-0.91). In southern China, when choosing parameterization schemes for climate models, it is recommended first to use the average cut-off diameter or SRAR method, with the measured PNSD, to predict NCCN. In addition, the method using the ammonium sulfate-based ARB(S) and parameterizing CCN spectra
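The cut-off diameter scheme described above amounts to integrating the measured PNSD over all diameters larger than the cut-off D50(S). A minimal sketch, with a hypothetical lognormal size distribution standing in for the measured PNSD (all numbers are illustrative, not the Mt. Huang data):

```python
import numpy as np

def nccn_from_cutoff(diam_nm, dndlogd, d_cut_nm):
    """Cut-off diameter scheme: predict N_CCN(S) by integrating the particle
    number size distribution (dN/dlog10 D) over diameters above D_cut(S)."""
    mask = diam_nm >= d_cut_nm
    x, y = np.log10(diam_nm[mask]), dndlogd[mask]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))  # trapezoid rule

# Hypothetical lognormal PNSD: N_tot = 1000 cm^-3, Dg = 100 nm, sigma_g = 1.6
d = np.logspace(1, 3, 400)                       # 10 nm .. 1000 nm
sig = np.log10(1.6)
dndlogd = (1000.0 / (np.sqrt(2.0 * np.pi) * sig)
           * np.exp(-np.log10(d / 100.0) ** 2 / (2.0 * sig ** 2)))
# A cut-off at the median diameter activates roughly half the particles:
print(round(nccn_from_cutoff(d, dndlogd, 100.0)))
```

In the actual scheme, D_cut(S) would be derived from measured NCCN and PNSD pairs at each supersaturation rather than chosen by hand.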
Govindasamy, B; Duffy, P
2002-04-12
Typical state-of-the-art atmospheric general circulation models used in climate change studies have horizontal resolution of approximately 300 km. As computing power increases, many climate modeling groups are working toward enhancing the resolution of global models. An important issue that arises when the resolution of a model is changed is whether cloud and convective parameterizations, which were developed for use at coarser resolutions, will need to be reformulated or re-tuned. We propose to investigate this issue, and specifically cloud statistics, using ARM data. The data streams produced by highly instrumented sections of the Cloud and Radiation Testbeds (CART) of the ARM program will provide a significant aid in the evaluation of cloud and convection parameterization in high-resolution models. Recently, we have performed multiyear global-climate simulations at T170 and T239 resolutions, corresponding to grid cell sizes of 0.7° and 0.5° respectively, using the NCAR Community Climate Model. We have also performed a climate change simulation at T170. On the scales of a T42 grid cell (300 km) and larger, nearly all quantities we examined in the T170 simulation agree better with observations in terms of spatial patterns than do results in a comparable simulation at T42. Increasing the resolution to T239 brings significant further improvement. At T239, the high-resolution model grid cells approach the dimensions of the highly instrumented sections of the ARM Cloud and Radiation Testbed (CART) sites. We propose to form a cloud climatology using ARM data for its CART sites and evaluate cloud statistics of the NCAR Community Atmosphere Model (CAM) at higher resolutions over those sites using this ARM cloud climatology. We will then modify the physical parameterizations of CAM for better agreement with ARM data. We will work closely with NCAR in modifying the parameters in cloud and convection parameterizations for the high-resolution model. Our proposal to evaluate the cloud
NASA Astrophysics Data System (ADS)
Argüeso, D.; Hidalgo-Muñoz, J. M.; Gámiz-Fortis, S. R.; Esteban-Parra, M. J.; Castro-Díez, Y.
2009-04-01
An evaluation of MM5 mesoscale model sensitivity to different parameterization schemes is presented in terms of temperature and precipitation for high-resolution integrations over Andalusia (south of Spain). ERA-40 reanalysis data are used as initial and boundary conditions. Two domains were used: a coarse one of 55 by 60 grid points with 30 km spacing, and a nested domain of 48 by 72 grid points with 10 km spacing. The coarse domain fully covers the Iberian Peninsula, and Andalusia fits loosely within the finer one. In addition to the parameterization tests, two dynamical downscaling techniques have been applied in order to examine the influence of initial conditions on RCM long-term studies. Regional climate studies usually employ continuous integration for the period under survey, initializing atmospheric fields only at the starting point and feeding in boundary conditions regularly. An alternative approach is based on frequent re-initialization of the atmospheric fields; the simulation is then divided into several independent integrations. Altogether, 20 simulations have been performed using varying physics options, of which 4 applied the re-initialization technique. Surface temperature and accumulated precipitation (daily and monthly scale) were analyzed for a 5-year period from 1990 to 1994. Results have been compared with daily observational data series from 110 stations for temperature and 95 for precipitation. Both daily and monthly average temperatures are generally well represented by the model. Conversely, daily precipitation results present larger deviations from the observational data. However, noticeable accuracy is gained when comparing with monthly precipitation observations. There are some especially problematic subregions where precipitation is poorly captured, such as the southeast of the Iberian Peninsula, mainly due to its extremely convective nature. Regarding the performance of the parameterization schemes, every set provides very
NASA Astrophysics Data System (ADS)
Breil, Marcus; Schädler, Gerd
2016-04-01
The aim of the German research program MiKlip II is the development of an operational climate prediction system that can provide reliable forecasts on a decadal time scale. One goal of MiKlip II is to investigate the feasibility of regional climate predictions. Results of recent studies indicate that the regional climate is significantly affected by the interactions between the soil, the vegetation and the atmosphere. Thus, within the framework of MiKlip II a work package was established to assess the impact of these interactions on regional decadal climate predictability. In a Regional Climate Model (RCM) the soil-vegetation-atmosphere interactions are represented by a Land Surface Model (LSM). The LSM describes the current state of the land surface by calculating the soil temperature, the soil water content and the turbulent heat fluxes, serving the RCM as its lower boundary condition. To solve the corresponding equations, soil and vegetation processes are parameterized within the LSM. Such parameterizations are mainly derived from observations. But in most cases observations are temporally and spatially limited, and consequently not able to represent the diversity of nature completely. Thus, soil and vegetation parameterizations always exhibit a certain degree of uncertainty. In the presented study, the uncertainties within an LSM are assessed by stochastic variations of the relevant parameterizations in VEG3D, an LSM developed at the Karlsruhe Institute of Technology (KIT). In a first step, stand-alone simulations of VEG3D are run with varying soil and vegetation parameters to identify sensitive model parameters. In a second step, VEG3D is coupled to the RCM COSMO-CLM. With this new model system, regional decadal hindcast simulations, driven by global simulations of the Max Planck Institute for Meteorology Earth System Model (MPI-ESM), are performed for the CORDEX-EU domain at a resolution of 0.22°. The identified sensitive model
A new parameterization of the post-fire snow albedo effect
NASA Astrophysics Data System (ADS)
Gleason, K. E.; Nolin, A. W.
2013-12-01
Mountain snowpack serves as an important natural reservoir of water: recharging aquifers, sustaining streams, and providing important ecosystem services. Reduced snowpacks and earlier snowmelt have been shown to affect fire size, frequency, and severity in the western United States. In turn, wildfire disturbance affects patterns of snow accumulation and ablation by reducing canopy interception, increasing turbulent fluxes, and modifying the surface radiation balance. Recent work shows that after a high-severity forest fire, approximately 60% more solar radiation reaches the snow surface due to the reduction in canopy density. Also, significant amounts of pyrogenic carbon particles and larger burned woody debris (BWD) are shed from standing charred trees; these concentrate on the snowpack, darken its surface, and reduce snow albedo by 50% during ablation. Although the post-fire forest environment drives a substantial increase in net shortwave radiation at the snowpack surface, driving earlier and more rapid melt, hydrologic models do not explicitly incorporate the effects of forest fire disturbance on snowpack dynamics. The objective of this study was to parameterize the post-fire snow albedo effect due to BWD deposition on snow, to better represent forest fire disturbance in modeling of snow-dominated hydrologic regimes. Based on empirical results from winter experiments, in-situ snow monitoring, and remote sensing data from a recent forest fire in the Oregon High Cascades, we characterized the post-fire snow albedo effect and developed a simple parameterization of snowpack albedo decay in the post-fire forest environment. We modified the recession coefficient in the algorithm α = α0 + K exp(-nr), where α = snowpack albedo, α0 = minimum snowpack albedo (≈0.4), K = constant (≈0.44), n = number of days since the last major snowfall, and r = recession coefficient [Rohrer and Braun, 1994]. Our parameterization quantified BWD deposition and snow albedo decay rates and
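The albedo-decay recession given above can be evaluated directly. A minimal sketch, using the α0 ≈ 0.4 and K ≈ 0.44 values quoted in the abstract; the recession coefficient r used here is purely illustrative, not one of the fitted post-fire values:

```python
import math

def snow_albedo(n_days, r, alpha0=0.4, K=0.44):
    """Post-fire snow albedo decay, alpha = alpha0 + K * exp(-n * r), with
    n the number of days since the last major snowfall and r the recession
    coefficient [Rohrer and Braun, 1994]."""
    return alpha0 + K * math.exp(-n_days * r)

# Fresh snow (n = 0) starts at alpha0 + K = 0.84 and decays toward alpha0;
# r = 0.2 here is illustrative only, not a fitted coefficient.
print(round(snow_albedo(0, 0.2), 2), round(snow_albedo(10, 0.2), 2))  # 0.84 0.46
```

A larger r, as fitted for burned forest, makes the albedo drop toward its minimum within days rather than weeks.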
Further Evaluation of an Urban Canopy Parameterization using VTMX and Urban 2000 Data
Chin, H S; Leach, M J
2004-06-04
Almost two-thirds of the U.S. population live in urbanized areas occupying less than 2% of the landmass. Similar urbanization statistics exist in other parts of the world. With the rapid growth of the world population, urbanization has become an important environmental and health issue. As a result, the interaction between urban regions and atmospheric processes becomes a very complex problem. Further understanding of this interaction via the surface and/or atmosphere is important for improving weather forecasts and for minimizing losses caused by weather-related events, or even by chemical-biological threats. To this end, Brown and Williams (1998) first developed an urban canopy scheme to parameterize the effect of urban infrastructure. This parameterization accounts for the effects of drag, turbulent production, radiation balance, and anthropogenic and rooftop heating. Further modifications were made and tested in our recent sensitivity study for an idealized case using a mesoscale model. Results indicated that the addition of the rooftop surface energy equation enables this parameterization to simulate the urban infrastructure impact more realistically (Chin et al., 2000). To further improve the representation of the urban effect in the mesoscale model, USGS land-use data at two resolutions (200 and 30 meters) are adopted to derive the urban parameters via a look-up table approach (Leone et al., 2002; Chin et al., 2004). This approach provides us with the key parameters for urban infrastructure and urban surface characteristics to drive the urban canopy parameterization with geographic and temporal dependence. These urban characteristics include urban fraction, roof fraction, building height, anthropogenic heating, surface albedo, surface wetness, and surface roughness. The objective of this study is to evaluate the modified urban canopy parameterization (UCP) against observed measurements. Another objective is to
Effects of cloud parameterization on the simulation of climate changes in the GISS GCM
Yao, M.S.; Del Genio, A.D.
1999-03-01
Climate changes obtained from five doubled CO2 experiments with different parameterizations of large-scale clouds and moist convection are studied by use of the Goddard Institute for Space Studies (GISS) GCM at 4° lat x 5° long resolution. The baseline for the experiments is GISS Model II, which uses a diagnostic cloud scheme with fixed optical properties and a convection scheme with fixed cumulus mass fluxes and no downdrafts. The global and annual mean surface air temperature change (ΔTs) of 4.2 C obtained by Hansen et al. using the Model II physics at 8° lat x 10° long resolution is reduced to 3.55 C at the finer resolution. This is due to a significant reduction of tropical cirrus clouds in the warmer climate when a finer resolution is used, despite the fact that the relative humidity increases there with a doubling of CO2. When the new moist convection parameterization of Del Genio and Yao and prognostic large-scale cloud parameterization of Del Genio et al. are used, ΔTs is reduced to 3.09 C from 3.55 C. This is the net result of the inclusion of the feedback of cloud optical thickness and phase change of cloud water, and the presence of areally extensive cumulus anvil clouds. Without the optical thickness feedback, ΔTs is further reduced to 2.74 C, suggesting that this feedback is positive overall. Without anvil clouds, ΔTs is increased from 3.09 to 3.7 C, suggesting that anvil clouds of large optical thickness reduce the climate sensitivity. The net effect of using the new large-scale cloud parameterization without including the detrainment of convective cloud water is a slight increase of ΔTs from 3.56 to 3.7 C. The net effect of using the new moist convection parameterization without anvil clouds is insignificant.
NASA Astrophysics Data System (ADS)
Serva, Federico; Cagnazzo, Chiara; Riccio, Angelo
2016-04-01
The effects of the propagation and breaking of atmospheric gravity waves have long been considered crucial for their impact on the circulation, especially in the stratosphere and mesosphere, between heights of 10 and 110 km. These waves, which in the Earth's atmosphere originate from surface orography (OGWs) or from transient (nonorographic) phenomena such as fronts and convective processes (NOGWs), have horizontal wavelengths between 10 and 1000 km, vertical wavelengths of several km, and frequencies spanning from minutes to hours. Orographic and nonorographic GWs must be accounted for in climate models to obtain a realistic simulation of the stratosphere in both hemispheres, since they can have a substantial impact on circulation and temperature, and hence an important role in ozone chemistry for chemistry-climate models. Several types of parameterization are currently employed in models, differing in formulation and in the values assigned to parameters, but the common aim is to quantify the effect of wave breaking on large-scale wind and temperature patterns. In the last decade, both global observations from satellite-borne instruments and the outputs of very high resolution climate models have provided insight into the variability and properties of the gravity wave field, and these results can be used to constrain some of the empirical parameters present in most parameterization schemes. A feature of the NOGW forcing that clearly emerges is its intermittency, linked with the nature of the sources: this property is absent in the majority of models, in which NOGW parameterizations are uncoupled from other atmospheric phenomena, leading to results which display lower variability than observations. In this work, we analyze the climate simulated in AMIP runs of the MAECHAM5 model, which uses the Hines NOGW parameterization and a fine vertical resolution suitable to capture the effects of wave-mean flow interaction. We compare the results obtained with two
Proton Nucleus Elastic Scattering Data.
1993-08-18
Version 00 The Proton Nucleus Elastic Scattering Data file PNESD contains the numerical data and the related bibliography for the differential elastic cross sections, polarization and integral nonelastic cross sections for elastic proton-nucleus scattering.
Interface scattering in polycrystalline thermoelectrics
Popescu, Adrian; Haney, Paul M.
2014-03-28
We study the effect of electron and phonon interface scattering on the thermoelectric properties of disordered, polycrystalline materials (with grain sizes larger than the electron and phonon mean free paths). Interface scattering of electrons is treated with a Landauer approach, while that of phonons is treated with the diffuse mismatch model. The interface scattering is embedded within a diffusive model of bulk transport, and we show that, for randomly arranged interfaces, the overall system is well described by effective medium theory. Using bulk parameters similar to those of PbTe and a square barrier potential for the interface electron scattering, we identify the interface scattering parameters for which the figure of merit ZT is increased. We find that electronic interface scattering is generally detrimental due to the reduction in electrical conductivity; however, for sufficiently weak electronic interface scattering, ZT is enhanced due to phonon interface scattering.
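The trade-off described above follows from the definition of the figure of merit, ZT = S²σT/κ: interface scattering helps only if it suppresses the thermal conductivity proportionally more than the electrical conductivity. A minimal sketch with made-up numbers (not the PbTe parameters of the paper):

```python
def figure_of_merit(seebeck, sigma, kappa, temperature):
    """Thermoelectric figure of merit ZT = S**2 * sigma * T / kappa
    (S in V/K, sigma in S/m, kappa in W/(m K), T in K)."""
    return seebeck ** 2 * sigma * temperature / kappa

# Hypothetical bulk values loosely in the range of good thermoelectrics:
zt_bulk = figure_of_merit(200e-6, 1.0e5, 2.0, 600.0)
# Illustrative interface-scattering scenario: phonons are scattered more
# strongly than electrons, so kappa drops 40% while sigma drops only 10%.
zt_poly = figure_of_merit(200e-6, 0.9e5, 0.6 * 2.0, 600.0)
print(round(zt_bulk, 2), round(zt_poly, 2))  # 1.2 1.8
```

Reversing the ratio (strong electron scattering, weak phonon scattering) lowers ZT, which is the detrimental regime the paper identifies.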
Barone, Vincenzo; Cacelli, Ivo; De Mitri, Nicola; Licari, Daniele; Monti, Susanna; Prampolini, Giacomo
2013-03-21
The Joyce program is augmented with several new features, including the user-friendly Ulysses GUI, the possibility of complete excited-state parameterization, and a more flexible treatment of the force field electrostatic terms. A first validation is achieved by successfully comparing results obtained with Joyce2.0 to literature results for the same set of benchmark molecules. The parameterization protocol is also applied to two larger molecules, namely nicotine and a coumarin-based dye. In the former case, the parameterized force field is employed in molecular dynamics simulations of solvated nicotine, and the solute conformational distribution at room temperature is discussed. Force fields parameterized with Joyce2.0, for both the dye's ground and first excited electronic states, are validated through the calculation of absorption and emission vertical energies with molecular mechanics optimized structures. Finally, the newly implemented procedure to handle polarizable force fields is discussed and applied to the pyrimidine molecule as a test case. PMID:23389748
An intermediate process-based fire parameterization in Dynamic Global Vegetation Model
NASA Astrophysics Data System (ADS)
Li, F.; Zeng, X.
2011-12-01
An intermediate process-based fire parameterization has been developed for global fire simulation. It fits the framework of the Dynamic Global Vegetation Model (DGVM), which has become a pivotal component of Earth System Models (ESMs). The fire parameterization comprises three parts: fire occurrence, fire spread, and fire impact. In the first part, the number of fires is determined by ignition counts due to anthropogenic and natural causes and three constraints: fuel load, fuel moisture, and human suppression. Human-caused ignition and suppression are explicitly considered as nonlinear functions of population density. Fire counts, rather than fire occurrence probability, are estimated to avoid underestimating the observed high burned-area fraction in tropical savannas, where fire occurs frequently. In the second part, the post-fire region is assumed to be elliptical in shape, with the wind direction along the major axis and the point of ignition at one of the foci. Burned area is determined by fire spread rate, fire duration, and fire counts. Mathematical characteristics of the ellipse and some mathematical derivations are used to avoid redundant and unreasonable equations and assumptions in CTEM-FIRE and to make the parameterization equations self-consistent. In the third part, the impacts of fire on vegetation composition and structure, the carbon cycle, and trace gas and aerosol emissions are taken into account. The new estimates of trace gas and aerosol emissions due to biomass burning offer an interface with the aerosol and atmospheric chemistry models in ESMs. Furthermore, in the new fire parameterization, the fire occurrence and fire spread parts can be updated hourly or daily, and the fire impact part can be updated daily, monthly, or annually. Its flexibility in the selection of time-step length makes it easily applied to various DGVMs. The improved Community Land Model 3.0 Dynamic Global Vegetation Model (CLM-DGVM) is used as the model platform to assess the global performance of the new
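The elliptical fire-spread geometry described above can be sketched as follows. This is a generic ellipse-area calculation under assumed spread rates and an assumed length-to-breadth ratio, not the actual parameterization equations of the paper:

```python
import math

def burned_area_ellipse(rate_forward, rate_backward, duration, lb_ratio):
    """Generic sketch of an elliptical burn scar: the major axis is the
    total spread distance along the wind direction (downwind plus upwind),
    and the minor axis follows from the length-to-breadth ratio LB."""
    major = (rate_forward + rate_backward) * duration  # downwind + upwind
    minor = major / lb_ratio
    return math.pi * (major / 2.0) * (minor / 2.0)     # ellipse area

# e.g. 0.5 km/h downwind spread, 0.05 km/h upwind, 24 h duration, LB = 3
print(round(burned_area_ellipse(0.5, 0.05, 24.0, 3.0), 1), "km^2")  # 45.6 km^2
```

In the parameterization itself, spread rate and duration depend on wind, fuel moisture, and suppression, and the total burned area further scales with the fire counts.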
Construction of an integrated Raman- and angular-scattering microscope.
Smith, Zachary J; Berger, Andrew J
2009-04-01
We report on the construction of a multimodal microscope platform capable of gathering both elastically and inelastically scattered light from a 38 μm² region in both epi- and transillumination geometries. Simultaneous monitoring of elastic and inelastic scattering from a microscopic region allows noninvasive characterization of the chemistry and morphology of a living sample without the need for exogenous dyes or labels, thus allowing measurements to be made longitudinally in time on the same sample as it evolves naturally. A sample is illuminated either from above or below with a focused 785 nm TEM00 mode laser beam, with elastic and inelastic scattering collected by two separate measurement arms. The measurements may be made either simultaneously, if identical illumination geometries are used, or sequentially, if the two modalities utilize opposing illumination paths. In the inelastic arm, Stokes-shifted light is dispersed by a spectrograph onto a charge-coupled device (CCD) array. In the elastic scattering collection arm, a relay system images the microscope's back aperture onto a CCD array. Postprocessing of the inelastic scattering to remove fluorescence signals yields high quality Raman spectra that report on the sample's chemical makeup. Comparison of the elastically scattered pupil images to generalized Lorenz-Mie theory yields estimated size distributions of scatterers within the sample. PMID:19405678
NASA Technical Reports Server (NTRS)
Bretherton, Christopher S.
1998-01-01
The goal of this project was to compare observations of marine and arctic boundary layers with (i) parameterization systems used in climate and weather forecast models, and (ii) two and three dimensional eddy resolving (LES) models for turbulent fluid flow. Based on this comparison, we hoped to better understand, predict, and parameterize the boundary layer structure and cloud amount, type and thickness as functions of large scale conditions that are predicted by global climate models.
Predicting X-ray diffuse scattering from translation–libration–screw structural ensembles
Van Benschoten, Andrew H.; Afonine, Pavel V.; Terwilliger, Thomas C.; Wall, Michael E.; Jackson, Colin J.; Sauter, Nicholas K.; Adams, Paul D.; Urzhumtsev, Alexandre; Fraser, James S.
2015-07-28
Identifying the intramolecular motions of proteins and nucleic acids is a major challenge in macromolecular X-ray crystallography. Because Bragg diffraction describes the average positional distribution of crystalline atoms with imperfect precision, the resulting electron density can be compatible with multiple models of motion. Diffuse X-ray scattering can reduce this degeneracy by reporting on correlated atomic displacements. Although recent technological advances are increasing the potential to accurately measure diffuse scattering, computational modeling and validation tools are still needed to quantify the agreement between experimental data and different parameterizations of crystalline disorder. A new tool, phenix.diffuse, addresses this need by employing Guinier's equation to calculate diffuse scattering from Protein Data Bank (PDB)-formatted structural ensembles. As an example case, phenix.diffuse is applied to translation–libration–screw (TLS) refinement, which models rigid-body displacement for segments of the macromolecule. To enable the calculation of diffuse scattering from TLS-refined structures, phenix.tls_as_xyz builds multi-model PDB files that sample the underlying T, L and S tensors. In the glycerophosphodiesterase GpdQ, alternative TLS-group partitioning and different motional correlations between groups yield markedly dissimilar diffuse scattering maps with distinct implications for molecular mechanism and allostery. These methods demonstrate how, in principle, X-ray diffuse scattering could extend macromolecular structural refinement, validation and analysis.
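The ensemble calculation described above rests on Guinier's equation for diffuse scattering, I_diffuse(q) = ⟨|F(q)|²⟩ − |⟨F(q)⟩|², i.e. the variance of the structure factors across the ensemble. A minimal numerical sketch (the toy structure factors are invented; this is not the phenix.diffuse implementation):

```python
import numpy as np

def diffuse_intensity(structure_factors):
    """Guinier's equation for diffuse scattering from a structural ensemble:
    I_diffuse(q) = <|F(q)|^2> - |<F(q)>|^2, the variance of the complex
    structure factors over the ensemble of models (axis 0)."""
    F = np.asarray(structure_factors)            # shape (n_models, n_q)
    return np.mean(np.abs(F) ** 2, axis=0) - np.abs(np.mean(F, axis=0)) ** 2

# Toy ensemble: identical models produce no diffuse scattering;
# models with varying (disordered) structure factors do.
rng = np.random.default_rng(0)
F_static = np.ones((10, 5), dtype=complex)
F_disordered = F_static + 0.1 * (rng.normal(size=(10, 5))
                                 + 1j * rng.normal(size=(10, 5)))
print(np.allclose(diffuse_intensity(F_static), 0.0))   # True
print(bool(np.all(diffuse_intensity(F_disordered) > 0.0)))  # True
```

The identity I_diffuse = ⟨|F − ⟨F⟩|²⟩ guarantees the result is nonnegative, vanishing only when every model contributes the same structure factor.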
NASA Astrophysics Data System (ADS)
Werneth, Charles; Maung Maung, Khin; Norbury, John
2012-10-01
Non-relativistic multiple scattering theories (NRMST) are formulated by separating the unperturbed Hamiltonian from the interaction and writing the Lippmann-Schwinger equation as an infinite series in the multiple sums of pseudo two-body operators, known as the Watson tau-operators. The advantage of using the multiple scattering theory (MST) is that the pseudo two-body operators are often well approximated by free two-body nucleon-nucleon operators, which are obtained from parameterizations of experimental data. Relativistic theories are needed to properly describe the production of new particles, such as pions, from nucleus-nucleus collisions. Relativistic multiple scattering theories (RMST) have been developed for nucleon-nucleus scattering; however, no RMST for nucleus-nucleus scattering has yet been derived [Maung K M, Norbury J W and Coleman T 2007 J. Phys. G 34 1861]. The purpose of this research is to derive an RMST for nucleus-nucleus scattering and to include delta degrees of freedom in the interaction, the minimum requirement for pion production.
Scattering fidelity in elastodynamics
NASA Astrophysics Data System (ADS)
Gorin, T.; Seligman, T. H.; Weaver, R. L.
2006-01-01
The recent introduction of the concept of scattering fidelity causes us to revisit the experiment by Lobkis and Weaver [Phys. Rev. Lett. 90, 254302 (2003)]. There, the "distortion" of the coda of an acoustic signal is measured under temperature changes. This quantity is, in fact, the negative logarithm of scattering fidelity. We reanalyze their experimental data for two samples, and we find good agreement with random matrix predictions for the standard fidelity. Usually, one may expect such an agreement for chaotic systems only. While the first sample may indeed be assumed chaotic, for the second sample, a perfect cuboid, such an agreement is surprising. For the first sample, the random matrix analysis yields perturbation strengths compatible with semiclassical predictions. For the cuboid, the measured perturbation strengths are too large by a common factor of 5/3. Apart from that, the experimental curves for the distortion are well reproduced.
Coherent Scatter Imaging Measurements
NASA Astrophysics Data System (ADS)
Ur Rehman, Mahboob
In conventional radiography, anatomical information about the patient can be obtained by distinguishing different tissue types, e.g. bone and soft tissue. However, it is difficult to obtain appreciable contrast between two different types of soft tissue. Instead, coherent x-ray scattering can be utilized to obtain images that differentiate between normal and cancerous breast tissue. An x-ray system using a conventional source and simple slot apertures was tested. Materials with scatter signatures that mimic breast cancer were buried in layers of fat of increasing thickness and imaged. The results showed that the contrast and signal-to-noise ratio (SNR) remained high even with added fat layers and short scan times.
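The contrast and SNR figures of merit reported above are commonly computed from signal and background regions of interest; the abstract does not give the exact definitions used, so the conventions below are a common assumption, not the authors' own.

```python
import numpy as np

def contrast_and_snr(signal_roi, background_roi):
    """Contrast and signal-to-noise ratio of a signal region of interest
    against a background region, under common imaging conventions:
    contrast = (S - B) / B and SNR = (S - B) / sigma_B."""
    s = np.mean(signal_roi)
    b = np.mean(background_roi)
    contrast = (s - b) / b
    snr = (s - b) / np.std(background_roi)
    return contrast, snr
```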
Scattering problems in elastodynamics
NASA Astrophysics Data System (ADS)
Diatta, Andre; Kadic, Muamer; Wegener, Martin; Guenneau, Sebastien
2016-09-01
In electromagnetism, acoustics, and quantum mechanics, scattering problems can routinely be solved numerically by virtue of perfectly matched layers (PMLs) at simulation domain boundaries. Unfortunately, the same has not been possible for general elastodynamic wave problems in continuum mechanics. In this Rapid Communication, we introduce a corresponding scattered-field formulation for the Navier equation. We derive PMLs based on complex-valued coordinate transformations leading to Cosserat elasticity-tensor distributions not obeying the minor symmetries. These layers are shown to work in two dimensions, for all polarizations, and all directions. By adaptive choice of the decay length, the deep subwavelength PMLs can be used all the way to the quasistatic regime. As demanding examples, we study the effectiveness of cylindrical elastodynamic cloaks of the Cosserat type and approximations thereof.
Syzygies probing scattering amplitudes
NASA Astrophysics Data System (ADS)
Chen, Gang; Liu, Junyu; Xie, Ruofei; Zhang, Hao; Zhou, Yehao
2016-09-01
We propose a new efficient algorithm to obtain the locally minimal generating set of the syzygies for an ideal, i.e. a generating set whose proper subsets cannot be generating sets. Syzygies are a concept widely used in the current study of scattering amplitudes. This new algorithm can deal with more syzygies effectively because a new generation of syzygies is obtained in each step and the irreducibility of this generation is also verified in the process. The algorithm can also be applied to obtain the syzygies of modules. We also show a typical example to illustrate the potential application of this method to scattering amplitudes, especially the integration-by-parts (IBP) relations of the characteristic two-loop diagrams in Yang-Mills theory.
Acoustic bubble removal method
NASA Technical Reports Server (NTRS)
Trinh, E. H.; Elleman, D. D.; Wang, T. G. (Inventor)
1983-01-01
A method is described for removing bubbles from a liquid bath such as a bath of molten glass to be used for optical elements. Larger bubbles are first removed by applying acoustic energy resonant to a bath dimension to drive the larger bubbles toward a pressure well where the bubbles can coalesce and then be more easily removed. Thereafter, submillimeter bubbles are removed by applying acoustic energy of frequencies resonant to the small bubbles to oscillate them and thereby stir liquid immediately about the bubbles to facilitate their breakup and absorption into the liquid.
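The bubble-resonant frequencies exploited above can be estimated with the standard Minnaert relation for a gas bubble in a liquid, f0 = (1/2πR)·sqrt(3γP0/ρ). This is textbook acoustics, not a formula given in the patent, and the water-like property values below are purely illustrative (molten glass would differ substantially).

```python
import math

def minnaert_frequency(radius_m, gamma=1.4, p0=101325.0, rho=1000.0):
    """Minnaert resonance frequency (Hz) of a gas bubble in a liquid.
    gamma: polytropic exponent of the gas, p0: ambient pressure (Pa),
    rho: liquid density (kg/m^3), radius_m: bubble radius (m)."""
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)
```

For a 1 mm radius air bubble in water this gives roughly 3.3 kHz, which is why submillimeter bubbles require correspondingly higher drive frequencies.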
Wang, J C; Krazmien, R J; Dahlheim, C E; Patel, B
1986-11-01
Results of an anthralin stain removal study on white 65% polyester/35% cotton, white 100% polyester, white 100% cotton, a white shower curtain, white tile with crevice, and white ceramic shower tile are reported. An optimum stain removal technic was developed by using a 10-minute soak in full-strength chlorine bleach (Good Measure or Clorox) followed by a water rinse and air drying. This technic completely removed all stains of 24-hour duration from the test fabrics. The stain removal test on shower curtains, floor tiles, and ceramic shower tiles was also discussed.
Vernon, M.F.
1983-07-01
The molecular-beam technique has been used in three different experimental arrangements to study a wide range of inter-atomic and molecular forces. Chapter 1 reports results of a low-energy (0.2 kcal/mole) elastic-scattering study of the He-Ar pair potential. The purpose of the study was to accurately characterize the shape of the potential in the well region, by scattering slow He atoms produced by expanding a mixture of He in N₂ from a cooled nozzle. Chapter 2 contains measurements of the vibrational predissociation spectra and product translational energy for clusters of water, benzene, and ammonia. The experiments show that most of the product energy remains in the internal molecular motions. Chapter 3 presents measurements of the reaction Na + HCl → NaCl + H at collision energies of 5.38 and 19.4 kcal/mole. This is the first study to resolve both scattering angle and velocity for the reaction of a short lived (16 nsec) electronic excited state. Descriptions are given of computer programs written to analyze molecular-beam expansions to extract information characterizing their velocity distributions, and to calculate accurate laboratory elastic-scattering differential cross sections accounting for the finite apparatus resolution. Experimental results which attempted to determine the efficiency of optically pumping the Li(2²P₃/₂) and Na(3²P₃/₂) excited states are given. A simple three-level model for predicting the steady-state fraction of atoms in the excited state is included.
Dynamic light scattering microscopy
NASA Astrophysics Data System (ADS)
Dzakpasu, Rhonda
An optical microscope technique, dynamic light scattering microscopy (DLSM), that images dynamically scattered light fluctuation decay rates is introduced. Using physical optics we show theoretically that within the optical resolution of the microscope, relative motions between scattering centers are sufficient to produce significant phase variations resulting in interference intensity fluctuations in the image plane. The time scale for these intensity fluctuations is predicted. The spatial coherence distance defining the average distance between constructive and destructive interference in the image plane is calculated and compared with the pixel size. We experimentally tested DLSM on polystyrene latex nanospheres and living macrophage cells. In order to record these rapid fluctuations on a slow progressive scan CCD camera, we used a thin laser line of illumination on the sample such that only a single column of pixels in the CCD camera is illuminated. This allowed the use of the rate of the column-by-column readout transfer process as the acquisition rate of the camera. This manipulation increased the data acquisition rate by at least an order of magnitude in comparison to conventional CCD camera rates defined in frames/s. Analysis of the observed fluctuations provides information regarding the rates of motion of the scattering centers. These rates, acquired from each position on the sample, are used to create a spatial map of the fluctuation decay rates. Our experiments show that with this technique we are able to achieve a good signal-to-noise ratio and can monitor fast intensity fluctuations, on the order of milliseconds. DLSM appears to provide dynamic information about fast motions within cells at a sub-optical resolution scale and provides a new kind of spatial contrast.
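A fluctuation decay rate of the kind mapped by DLSM can be estimated from an intensity trace by fitting an exponential to its short-lag autocovariance. The sketch below assumes a single-exponential decay and is illustrative only, not the authors' analysis code.

```python
import numpy as np

def decay_rate(intensity, dt, nlags=5):
    """Estimate a fluctuation decay rate Gamma from an intensity trace by
    fitting ln C(k*dt) = -Gamma * k * dt to the normalized short-lag
    autocovariance C (single-exponential assumption)."""
    I = np.asarray(intensity, dtype=float)
    dI = I - I.mean()
    var = dI.var()
    lags = np.arange(1, nlags + 1)
    # autocovariance at each lag, normalized by the variance
    C = np.array([np.mean(dI[:-k] * dI[k:]) for k in lags]) / var
    slope = np.polyfit(lags * dt, np.log(C), 1)[0]  # slope = -Gamma
    return -slope
```

Applied pixel by pixel, such an estimator yields the spatial map of decay rates described above.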
Calculating scattering amplitudes efficiently
Dixon, L.
1996-01-01
We review techniques for more efficient computation of perturbative scattering amplitudes in gauge theory, in particular tree and one- loop multi-parton amplitudes in QCD. We emphasize the advantages of (1) using color and helicity information to decompose amplitudes into smaller gauge-invariant pieces, and (2) exploiting the analytic properties of these pieces, namely their cuts and poles. Other useful tools include recursion relations, special gauges and supersymmetric rearrangements. 46 refs., 11 figs.
Concurrent electromagnetic scattering analysis
NASA Technical Reports Server (NTRS)
Patterson, Jean E.; Cwik, Tom; Ferraro, Robert D.; Jacobi, Nathan; Liewer, Paulett C.; Lockhart, Thomas G.; Lyzenga, Gregory A.; Parker, Jay
1989-01-01
The computational power of the hypercube parallel computing architecture is applied to the solution of large-scale electromagnetic scattering and radiation problems. Three analysis codes have been implemented. A Hypercube Electromagnetic Interactive Analysis Workstation was developed to aid in the design and analysis of metallic structures such as antennas and to facilitate the use of these analysis codes. The workstation provides a general user environment for specification of the structure to be analyzed and graphical representations of the results.
Neutron scattering in Australia
Knott, R.B.
1994-12-31
Neutron scattering techniques have been part of the Australian scientific research community for the past three decades. The High Flux Australian Reactor (HIFAR) is a multi-use facility of modest performance that provides the only neutron source in the country suitable for neutron scattering. The limitations of HIFAR have been recognized and recently a Government initiated inquiry sought to evaluate the future needs of a neutron source. In essence, the inquiry suggested that a delay of several years would enable a number of key issues to be resolved, and therefore a more appropriate decision made. In the meantime, use of the present source is being optimized, and where necessary research is being undertaken at major overseas neutron facilities either on a formal or informal basis. Australia has, at present, a formal agreement with the Rutherford Appleton Laboratory (UK) for access to the spallation source ISIS. Various aspects of neutron scattering have been implemented on HIFAR, including investigations of the structure of biologically relevant molecules. One aspect of these investigations will be presented. Preliminary results from a study of the interaction of the immunosuppressant drug, cyclosporin-A, with reconstituted membranes suggest that the hydrophobic drug interdigitated with lipid chains.
NASA Astrophysics Data System (ADS)
Mkrtchyan, Arthur; Albayrak, Ibrahim; Horn, Tanja; Nadel-Turonski, Pawel
2015-04-01
Deeply Virtual Compton Scattering (DVCS) is deemed the simplest and cleanest way to access the Generalized Parton Distributions (GPDs) of the nucleon. The DVCS process interferes with the Bethe-Heitler process, allowing one to access the DVCS amplitudes. The imaginary part of the Compton amplitude is now relatively well understood, primarily through measurements of DVCS. However, much less is known about the real part of the amplitude. Time-like Compton Scattering (TCS) is the inverse process of DVCS and provides a new and promising way of probing the real part of the amplitude, and so constraining GPDs. Comparing data from Time-like Compton Scattering and the space-like DVCS process will also allow for testing the universality of GPDs. First studies of TCS using real tagged and quasi-real untagged photons were carried out at Jefferson Lab 6 GeV. In this talk, preliminary results on asymmetries and extraction of the real part of the Compton form factor (CFF) using photoproduction data and a comparison to electroproduction data will be presented. We will also discuss future plans for dilepton production at Jefferson Lab 12 GeV. Supported in part by NSF Grant PHY-1306227.
Nanowire Electron Scattering Spectroscopy
NASA Technical Reports Server (NTRS)
Hunt, Brian; Bronikowsky, Michael; Wong, Eric; VonAllmen, Paul; Oyafuso, Fablano
2009-01-01
Nanowire electron scattering spectroscopy (NESS) has been proposed as the basis of a class of ultra-small, ultralow-power sensors that could be used to detect and identify chemical compounds present in extremely small quantities. State-of-the-art nanowire chemical sensors have already been demonstrated to be capable of detecting a variety of compounds in femtomolar quantities. However, to date, chemically specific sensing of molecules using these sensors has required the use of chemically functionalized nanowires with receptors tailored to individual molecules of interest. While potentially effective, this functionalization requires labor-intensive treatment of many nanowires to sense a broad spectrum of molecules. In contrast, NESS would eliminate the need for chemical functionalization of nanowires and would enable the use of the same sensor to detect and identify multiple compounds. NESS is analogous to Raman spectroscopy, the main difference being that in NESS, one would utilize inelastic scattering of electrons instead of photons to determine molecular vibrational energy levels. More specifically, in NESS, one would exploit inelastic scattering of electrons by low-lying vibrational quantum states of molecules attached to a nanowire or nanotube.
NASA Astrophysics Data System (ADS)
Xie, Ya-Ming; Ji, Xia
Nowadays, with the development of technology, particles at the nanoscale have been synthesized in experiments. Anisotropy is an unavoidable problem in the production of nanospheres, and nonspherical nanoparticles have also been used extensively in experiments. Compared with a spherical model, a spheroidal model gives a better description of the characteristics of nonspherical particles. Thus the study of analytical solutions for light scattering by spheroidal particles has practical implications. By expanding the incident, scattered, and transmitted electromagnetic fields in terms of appropriate vector spheroidal wave functions, an analytic solution is obtained to the problem of light scattering by spheroids. The unknown field expansion coefficients can be determined by combining the boundary conditions with rotational-translational addition theorems for vector spheroidal wave functions. Based on this theoretical derivation, a Fortran code has been developed to calculate the extinction cross section and field distribution, whose results agree well with those obtained by FDTD simulation. This research is supported by the National Natural Science Foundation of China No. 91230203.
NASA Technical Reports Server (NTRS)
Choi, Hyun-Joo; Chun, Hye-Yeong; Gong, Jie; Wu, Dong L.
2012-01-01
The realism of ray-based spectral parameterization of convective gravity wave drag, which considers the updated moving speed of the convective source and multiple wave propagation directions, is tested against the Atmospheric Infrared Sounder (AIRS) onboard the Aqua satellite. Offline parameterization calculations are performed using the global reanalysis data for January and July 2005, and gravity wave temperature variances (GWTVs) are calculated at z = 2.5 hPa (unfiltered GWTV). AIRS-filtered GWTV, which is directly compared with AIRS, is calculated by applying the AIRS visibility function to the unfiltered GWTV. A comparison between the parameterization calculations and AIRS observations shows that the spatial distribution of the AIRS-filtered GWTV agrees well with that of the AIRS GWTV. However, the magnitude of the AIRS-filtered GWTV is smaller than that of the AIRS GWTV. When an additional cloud top gravity wave momentum flux spectrum with longer horizontal wavelength components that were obtained from the mesoscale simulations is included in the parameterization, both the magnitude and spatial distribution of the AIRS-filtered GWTVs from the parameterization are in good agreement with those of the AIRS GWTVs. The AIRS GWTV can be reproduced reasonably well by the parameterization not only with multiple wave propagation directions but also with two wave propagation directions of 45 degrees (northeast-southwest) and 135 degrees (northwest-southeast), which are optimally chosen for computational efficiency.
Optical parametrically gated microscopy in scattering media
Zhao, Youbo; Adie, Steven G.; Tu, Haohua; Liu, Yuan; Graf, Benedikt W.; Chaney, Eric J.; Marjanovic, Marina; Boppart, Stephen A.
2014-01-01
High-resolution imaging in turbid media has been limited by the intrinsic compromise between the gating efficiency (removal of multiply-scattered light background) and signal strength in the existing optical gating techniques. This leads to shallow depths due to the weak ballistic signal, and/or degraded resolution due to the strong multiply-scattering background – the well-known trade-off between resolution and imaging depth in scattering samples. In this work, we employ a nonlinear optics based optical parametric amplifier (OPA) to address this challenge. We demonstrate that both the imaging depth and the spatial resolution in turbid media can be enhanced simultaneously by the OPA, which provides a high level of signal gain as well as an inherent nonlinear optical gate. This technology shifts the nonlinear interaction to an optical crystal placed in the detection arm (image plane), rather than in the sample, which can be used to exploit the benefits given by the high-order parametric process and the use of an intense laser field. The coherent process makes the OPA potentially useful as a general-purpose optical amplifier applicable to a wide range of optical imaging techniques. PMID:25321724
Quirk, Thomas, J., IV
2004-08-01
The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross sections (which assume a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
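Sampling scattered photon energies from the free-electron Klein-Nishina distribution, the baseline that the incoherent scattering function and impulse approximation then correct, is classically done with Kahn's composition-rejection method. The sketch below is that textbook technique, not the ITS implementation.

```python
import random

def sample_compton(alpha, rng=random.random):
    """Sample the energy ratio E'/E of a Compton-scattered photon from
    the Klein-Nishina distribution via Kahn's rejection method.
    alpha = E / (m_e c^2), the incident photon energy in electron
    rest-mass units. Returns E'/E in [1/(1+2*alpha), 1]."""
    while True:
        r1, r2, r3 = rng(), rng(), rng()
        if r1 <= (1 + 2 * alpha) / (9 + 2 * alpha):
            x = 1 + 2 * alpha * r2                 # x = E/E'
            if r3 <= 4 * (1 / x - 1 / x**2):
                break
        else:
            x = (1 + 2 * alpha) / (1 + 2 * alpha * r2)
            mu = 1 - (x - 1) / alpha               # scattering cosine
            if r3 <= 0.5 * (mu**2 + 1 / x):
                break
    return 1.0 / x
```

The scattering cosine follows from the sampled energy through the Compton relation, after which two-body kinematics fixes the recoil electron, as the abstract describes.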
Rutherford scattering of electron vortices
NASA Astrophysics Data System (ADS)
Van Boxem, Ruben; Partoens, Bart; Verbeeck, Johan
2014-03-01
By considering a cylindrically symmetric generalization of a plane wave, the first-order Born approximation of screened Coulomb scattering unfolds two new dimensions in the scattering problem: transverse momentum and orbital angular momentum of the incoming beam. In this paper, the elastic Coulomb scattering amplitude is calculated analytically for incoming Bessel beams. This reveals novel features occurring for wide-angle scattering and quantitative insights for small-angle vortex scattering. The result successfully generalizes the well-known Rutherford formula, incorporating transverse and orbital angular momentum into the formalism.
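For comparison with the vortex-beam generalization discussed above, the plane-wave Rutherford cross section itself has the familiar 1/sin⁴(θ/2) angular shape. The sketch below returns it in arbitrary units (only the angular dependence and the (Z/E)² scaling are meaningful; the default parameter values are illustrative).

```python
import math

def rutherford_dcs(theta, Z=79, E_keV=100.0):
    """Classic Rutherford differential cross section (plane-wave limit),
    dsigma/dOmega proportional to (Z/E)^2 / sin^4(theta/2).
    Units are arbitrary; only the angular shape is of interest here."""
    return (Z / E_keV) ** 2 / math.sin(theta / 2.0) ** 4
```

For example, halving sin(θ/2) (going from 90° to 60°) raises the cross section by a factor of 16/4 = 4 relative to the 90° value... the steep forward peaking that any generalization must reproduce in the plane-wave limit.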
Rainbow scattering in nuclear collisions
Berezhnoi, Y.A.; Kuznichenko, A.V.; Onishchenko, G.M.; Pilipenko, V.V.
1987-03-01
The evolution of ideas about the rainbow phenomenon resulting from the refraction and reflection of light in water drops is briefly reviewed. The rainbow scattering of particles in quantum mechanics is treated on the basis of the semiclassical approximation, and the nuclear and Coulomb "rainbows" are discussed. Rainbow scattering of light ions by nuclei at energies E ≳ 25-30 MeV/nucleon is considered. The results of theoretical analysis of experimental data on rainbow scattering are presented. The behavior of the nuclear part of the scattering phase shift deduced from experiment is discussed. The manifestation of rainbow scattering in quasielastic nuclear processes is considered.
Device for removing blackheads
Berkovich, Tamara
1995-03-07
A device for removing blackheads from pores in the skin, having an elongated handle with a spoon-shaped portion mounted on one end thereof, the spoon having multiple small holes piercing therethrough. Also covered is a method for using the device to remove blackheads.